TY - GEN
T1 - Optimal and practical algorithms for smooth and strongly convex decentralized optimization
AU - Kovalev, Dmitry
AU - Salim, Adil
AU - Richtárik, Peter
N1 - KAUST Repository Item: Exported on 2021-07-29
PY - 2020/1/1
Y1 - 2020/1/1
N2 - We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network. For this problem, lower bounds on the number of gradient computations and the number of communication rounds required to achieve ε accuracy have recently been proven. We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees. We show that our first method is optimal both in terms of the number of communication rounds and in terms of the number of gradient computations. Unlike existing optimal algorithms, our algorithm does not rely on the expensive evaluation of dual gradients. Our second algorithm is optimal in terms of the number of communication rounds, without a logarithmic factor. Our approach relies on viewing the two proposed algorithms as accelerated variants of the Forward-Backward algorithm to solve monotone inclusions associated with the decentralized optimization problem. We also verify the efficacy of our methods against state-of-the-art algorithms through numerical experiments.
UR - http://hdl.handle.net/10754/666022
UR - https://arxiv.org/pdf/2006.11773
UR - http://www.scopus.com/inward/record.url?scp=85108406715&partnerID=8YFLogxK
M3 - Conference contribution
BT - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
PB - Neural Information Processing Systems Foundation
ER -