Abstract
Modern datasets often exhibit heavy-tailed behavior, while quantization is inevitable in digital signal processing and many machine learning problems. This paper studies the quantization of heavy-tailed data in several fundamental statistical estimation problems where the underlying distributions have bounded moments of some order (no greater than 4). We propose to truncate and properly dither the data prior to uniform quantization. Our main message is that (near) minimax rates of estimation error can be achieved by computationally tractable estimators based on the quantized data produced by the proposed scheme. In particular, we work out concrete results for covariance estimation, compressed sensing (also interpreted as sparse linear regression), and matrix completion, all showing that quantization only slightly worsens the multiplicative factor in the error rate. Moreover, while prior results focused on quantizing the responses (i.e., the measurements), we study compressed sensing where the covariates (i.e., the sensing vectors) are also quantized; in this case, although our recovery program is non-convex (because our covariance matrix estimator lacks positive semi-definiteness), we prove that all of its local minimizers enjoy near-optimal estimation error. Furthermore, using a concentration inequality for product processes and a covering argument, we establish a near-minimax uniform recovery guarantee for quantized compressed sensing with heavy-tailed noise. Finally, numerical simulations corroborate our theoretical results.
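To make the scheme concrete, here is a minimal NumPy sketch of the truncate-dither-quantize pipeline described in the abstract. The truncation threshold `tau`, the resolution `delta`, and the choice of a uniform dither are illustrative assumptions, not the paper's exact parameters; for some problems (e.g., covariance estimation) the paper may employ a different dither distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate_dither_quantize(x, tau, delta, rng=rng):
    """Sketch of the scheme in the abstract: truncate each entry to
    [-tau, tau], add a random dither, then apply a uniform quantizer
    with resolution delta. The uniform dither on [-delta/2, delta/2]
    is an assumption made for illustration."""
    x_trunc = np.clip(x, -tau, tau)  # truncation tames the heavy tails
    dither = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((x_trunc + dither) / delta)  # uniform quantizer

# Toy example: heavy-tailed samples (Student's t with 3 degrees of
# freedom, so only moments of order < 3 are finite).
x = rng.standard_t(df=3, size=10_000)
q = truncate_dither_quantize(x, tau=5.0, delta=0.5)
print(np.mean(x), np.mean(q))
```

The point of the dither is that, conditioned on the truncated value, the quantizer output is unbiased, which is what allows estimators built on the quantized data to retain (near) minimax rates.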
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1 |
| Number of pages | 1 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 70 |
| Issue number | 3 |
| DOIs | |
| State | Accepted/In press - 2023 |
Keywords
- Compressed sensing
- Covariance matrices
- Estimation
- Heavy-tailed distribution
- Quantization (signal)
- Sensors
- Sparse matrices
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Library and Information Sciences