TY - GEN

T1 - LASSO with non-linear measurements is equivalent to one with linear measurements

AU - Thrampoulidis, Christos

AU - Abbasi, Ehsan

AU - Hassibi, Babak

N1 - KAUST Repository Item: Exported on 2022-06-28
Acknowledgements: This work was supported in part by the National Science Foundation under grants CNS-0932428, CCF-1018927, CCF-1423663 and CCF-1409204, by a grant from Qualcomm Inc., by NASA's Jet Propulsion Laboratory through the President and Directors Fund, by King Abdulaziz University, and by King Abdullah University of Science and Technology.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.

PY - 2015/1/1

Y1 - 2015/1/1

N2 - Consider estimating an unknown but structured (e.g. sparse, low-rank, etc.) signal x0 ∈ ℝn from a vector y ∈ ℝm of measurements of the form yi = gi(aiT x0), where the ai's are the rows of a known measurement matrix A, and g(·) is a (potentially unknown) nonlinear and random link function. Such measurement functions could arise in applications where the measurement device has nonlinearities and uncertainties. They could also arise by design; e.g., gi(x) = sign(x + zi) corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger and the more recent work of Plan and Vershynin, we estimate x0 by solving the Generalized LASSO, i.e., x̂ := arg minx ||y - Ax||2 + λf(x), for some regularization parameter λ > 0 and some (typically non-smooth) convex regularizer f(·) that promotes the structure of x0, e.g. the ℓ1-norm, the nuclear norm, etc. While this approach seems to naively ignore the nonlinear function g(·), both Brillinger (in the unconstrained case) and Plan and Vershynin have shown that, when the entries of A are iid standard normal, this is a good estimator of x0 up to a constant of proportionality μ, which depends only on g(·). In this work, we considerably strengthen these results by obtaining explicit expressions for ||x̂ - μx0||2, for the regularized Generalized LASSO, that are asymptotically precise when m and n grow large. A main result is that the estimation performance of the Generalized LASSO with nonlinear measurements is asymptotically the same as that of one whose measurements are linear, yi = μaiT x0 + σzi, with μ = E[γ g(γ)] and σ2 = E[(g(γ) - μγ)2], where γ is standard normal. To the best of our knowledge, the derived expressions for the estimation performance are the first precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements, i.e., the one minimizing the estimation error of the Generalized LASSO, is the celebrated Lloyd-Max quantizer.

AB - Consider estimating an unknown but structured (e.g. sparse, low-rank, etc.) signal x0 ∈ ℝn from a vector y ∈ ℝm of measurements of the form yi = gi(aiT x0), where the ai's are the rows of a known measurement matrix A, and g(·) is a (potentially unknown) nonlinear and random link function. Such measurement functions could arise in applications where the measurement device has nonlinearities and uncertainties. They could also arise by design; e.g., gi(x) = sign(x + zi) corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger and the more recent work of Plan and Vershynin, we estimate x0 by solving the Generalized LASSO, i.e., x̂ := arg minx ||y - Ax||2 + λf(x), for some regularization parameter λ > 0 and some (typically non-smooth) convex regularizer f(·) that promotes the structure of x0, e.g. the ℓ1-norm, the nuclear norm, etc. While this approach seems to naively ignore the nonlinear function g(·), both Brillinger (in the unconstrained case) and Plan and Vershynin have shown that, when the entries of A are iid standard normal, this is a good estimator of x0 up to a constant of proportionality μ, which depends only on g(·). In this work, we considerably strengthen these results by obtaining explicit expressions for ||x̂ - μx0||2, for the regularized Generalized LASSO, that are asymptotically precise when m and n grow large. A main result is that the estimation performance of the Generalized LASSO with nonlinear measurements is asymptotically the same as that of one whose measurements are linear, yi = μaiT x0 + σzi, with μ = E[γ g(γ)] and σ2 = E[(g(γ) - μγ)2], where γ is standard normal. To the best of our knowledge, the derived expressions for the estimation performance are the first precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements, i.e., the one minimizing the estimation error of the Generalized LASSO, is the celebrated Lloyd-Max quantizer.

UR - http://hdl.handle.net/10754/679376

UR - http://www.scopus.com/inward/record.url?scp=84965170219&partnerID=8YFLogxK

M3 - Conference contribution

SP - 3420

EP - 3428

BT - 29th Annual Conference on Neural Information Processing Systems, NIPS 2015

PB - Neural Information Processing Systems Foundation

ER -