TY - GEN
T1 - Stochastic Subspace Cubic Newton Method
AU - Hanzely, Filip
AU - Doikov, Nikita
AU - Richtarik, Peter
AU - Nesterov, Yurii
N1 - KAUST Repository Item: Exported on 2021-09-16
Acknowledgements: The work of the second and the fourth author was supported by ERC Advanced Grant 788368.
PY - 2020
Y1 - 2020
N2 - In this paper, we propose a new randomized second-order optimization algorithm, Stochastic Subspace Cubic Newton (SSCN), for minimizing a high-dimensional convex function f. Our method can be seen both as a stochastic extension of the cubically regularized Newton method of Nesterov and Polyak (2006) and as a second-order enhancement of the stochastic subspace descent method of Kozak et al. (2019). We prove that as we vary the minibatch size, the global convergence rate of SSCN interpolates between the rate of stochastic coordinate descent (CD) and the rate of cubically regularized Newton, thus giving new insights into the connection between first- and second-order methods. Remarkably, the local convergence rate of SSCN matches the rate of stochastic subspace descent applied to the problem of minimizing the quadratic function 1/2 (x - x*)^T ∇²f(x*)(x - x*), where x* is the minimizer of f, and hence depends only on the properties of f at the optimum. Our numerical experiments show that SSCN outperforms non-accelerated first-order CD algorithms while being competitive with their accelerated variants.
UR - http://hdl.handle.net/10754/666020
UR - https://arxiv.org/pdf/2002.09526
M3 - Conference contribution
BT - International Conference on Machine Learning (ICML)
PB - arXiv
ER -