TY - JOUR
T1 - Randomized projection methods for convex feasibility: Conditioning and convergence rates
AU - Necoara, Ion
AU - Richtarik, Peter
AU - Patrascu, Andrei
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: The work of the first and third authors was supported by the Executive Agency for Higher Education, Research and Innovation Funding (UEFISCDI), Romania, PNIII-P4-PCE-2016-0731, project ScaleFreeNet, 39/2017.
PY - 2019/11/7
Y1 - 2019/11/7
N2 - In this paper we develop a family of randomized projection methods (RPM) for solving the convex feasibility problem. Our approach is based on several new ideas and tools, including stochastic approximation of convex sets, stochastic reformulations, and conditioning of the convex feasibility problem. In particular, we propose four equivalent stochastic reformulations: stochastic smooth and nonsmooth optimization problems, the stochastic fixed point problem, and the stochastic feasibility problem. The last reformulation asks for a point which belongs to a certain random set with probability one. In this case, RPM can be interpreted as follows: we sample a batch of random sets in an independently and identically distributed fashion, perform projections of the current iterate onto all sampled sets, and then combine these projections by averaging with a carefully designed extrapolation step. We prove that under stochastic linear regularity, RPM converges linearly, with a rate that has a natural interpretation as a condition number of the stochastic optimization reformulation and that depends explicitly on the number of sets sampled. In doing so, we extend the concept of condition number to general convex feasibility problems. This condition number depends on the linear regularity constant and an additional key constant which can be interpreted as a Lipschitz constant of the gradient of the stochastic optimization reformulation. Besides providing a general framework for the design and analysis of randomized projection schemes, our results resolve an open problem in the literature related to the theoretical understanding of the observed practical efficiency of extrapolated parallel projection methods. In addition, we prove that our method converges sublinearly when the stochastic linear regularity condition is not satisfied. Preliminary numerical results also show better performance of our extrapolated step-size scheme over its constant step-size counterpart.
UR - http://hdl.handle.net/10754/660992
UR - https://epubs.siam.org/doi/10.1137/18M1167061
UR - http://www.scopus.com/inward/record.url?scp=85076243885&partnerID=8YFLogxK
U2 - 10.1137/18M1167061
DO - 10.1137/18M1167061
M3 - Article
SN - 1052-6234
VL - 29
SP - 2814
EP - 2852
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 4
ER -