TY - JOUR
T1 - Stochastic distributed learning with gradient quantization and double-variance reduction
AU - Horvath, Samuel
AU - Kovalev, Dmitry
AU - Mishchenko, Konstantin
AU - Richtarik, Peter
AU - Stich, Sebastian
N1 - Acknowledgements: The authors would like to thank Xun Qian for the careful checking of the proofs and for spotting several typos in the analysis.
PY - 2022/09/27
Y1 - 2022/09/27
AB - We consider distributed optimization over several devices, each sending incremental model updates to a central server. This setting is considered, for instance, in federated learning. Various schemes have been designed to compress the model updates in order to reduce the overall communication cost. However, existing methods suffer from a significant slowdown due to the additional variance ω>0 introduced by the compression operator and, as a result, only converge sublinearly. What is needed is a variance reduction technique for taming the variance introduced by compression. We propose the first methods that achieve linear convergence for arbitrary compression operators. For strongly convex functions with condition number κ, distributed among n machines with a finite-sum structure, each worker having fewer than m components, we also (i) give analysis for the weakly convex and the non-convex cases and (ii) verify in experiments that our novel variance-reduced schemes are more efficient than the baselines. Moreover, we show theoretically that as the number of devices increases, higher compression levels become possible without affecting the overall number of communications in comparison with methods that do not perform any compression. This leads to a significant reduction in communication cost. Our general analysis allows one to pick the most suitable compression for each problem, finding the right balance between additional variance and communication savings. Finally, we also (iii) give analysis for arbitrary quantized updates.
UR - http://hdl.handle.net/10754/653103
UR - https://www.tandfonline.com/doi/full/10.1080/10556788.2022.2117355
U2 - 10.1080/10556788.2022.2117355
DO - 10.1080/10556788.2022.2117355
M3 - Article
SN - 1055-6788
SP - 1
EP - 16
JO - Optimization Methods and Software
JF - Optimization Methods and Software
ER -