TY - GEN
T1 - Learning without the phase: Regularized PhaseMax achieves optimal sample complexity
AU - Salehi, Fariborz
AU - Abbasi, Ehsan
AU - Hassibi, Babak
N1 - KAUST Repository Item: Exported on 2022-06-30
Acknowledgements: This work was supported in part by the National Science Foundation under grants CNS-0932428, CCF-1018927, CCF-1423663 and CCF-1409204, by a grant from Qualcomm Inc., by NASA's Jet Propulsion Laboratory through the President and Director's Fund, and by King Abdullah University of Science and Technology.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.
PY - 2018/1/1
Y1 - 2018/1/1
N2 - The problem of estimating an unknown signal, x0 ∈ R^n, from a vector y ∈ R^m consisting of m magnitude-only measurements of the form yi = |ai x0|, where the ai's are the rows of a known measurement matrix A, is a classical problem known as phase retrieval. This problem arises when measuring the phase is costly or altogether infeasible. In many applications in machine learning, signal processing, statistics, etc., the underlying signal has certain structure (sparse, low-rank, finite alphabet, etc.), opening up the possibility of recovering x0 from a number of measurements smaller than the ambient dimension, i.e., m < n. Ideally, one would like to recover the signal from a number of phaseless measurements that is on the order of the "degrees of freedom" of the structured x0. To this end, inspired by the PhaseMax algorithm, we formulate a convex optimization problem whose objective function relies on an initial estimate of the true signal and also includes an additive regularization term to encourage structure. The new formulation is referred to as regularized PhaseMax. We analyze the performance of regularized PhaseMax to find the minimum number of phaseless measurements required for perfect signal recovery. The results are asymptotic and are in terms of the geometrical properties (such as the Gaussian width) of certain convex cones. When the measurement matrix has i.i.d. Gaussian entries, we show that our proposed method is indeed order-wise optimal, allowing perfect recovery from a number of phaseless measurements that is only a constant factor away from the optimal number of measurements required when phase information is available. We explicitly compute this constant factor, in terms of the quality of the initial estimate, by deriving the exact phase transition. The theory matches empirical results from numerical simulations well.
UR - http://hdl.handle.net/10754/679460
UR - http://www.scopus.com/inward/record.url?scp=85064827838&partnerID=8YFLogxK
M3 - Conference contribution
SP - 8641
EP - 8652
BT - 32nd Conference on Neural Information Processing Systems, NeurIPS 2018
PB - Neural Information Processing Systems Foundation
ER -