TY - GEN
T1 - Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information
AU - Jahani, Majid
AU - Rusakov, Sergey
AU - Shi, Zheng
AU - Richtárik, Peter
AU - Mahoney, Michael W.
AU - Takáč, Martin
N1 - KAUST Repository Item: Exported on 2023-03-28
Acknowledgements: MT was partially supported by the NSF, under award numbers CCF:1618717/CCF:1740796. PR was supported by the KAUST Baseline Research Funding Scheme. MM would like to acknowledge the US NSF and ONR via its BRC on RandNLA for providing partial support of this work. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred.
PY - 2022/01/29
Y1 - 2022/01/29
N2 - We present a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with a low-cost estimate of local curvature and Lipschitz smoothness, our method dynamically adapts the search direction and step-size. The search direction contains gradient information preconditioned by a well-scaled diagonal preconditioning matrix that captures the local curvature information. Our methodology does not require the tedious task of learning rate tuning, as the learning rate is updated automatically without adding an extra hyperparameter. We provide convergence guarantees on a comprehensive collection of optimization problems, including convex, strongly convex, and nonconvex problems, in both deterministic and stochastic regimes. We also conduct an extensive empirical evaluation on standard machine learning problems, justifying our algorithm's versatility and demonstrating its strong performance compared to other state-of-the-art first-order and second-order methods.
AB - We present a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with a low-cost estimate of local curvature and Lipschitz smoothness, our method dynamically adapts the search direction and step-size. The search direction contains gradient information preconditioned by a well-scaled diagonal preconditioning matrix that captures the local curvature information. Our methodology does not require the tedious task of learning rate tuning, as the learning rate is updated automatically without adding an extra hyperparameter. We provide convergence guarantees on a comprehensive collection of optimization problems, including convex, strongly convex, and nonconvex problems, in both deterministic and stochastic regimes. We also conduct an extensive empirical evaluation on standard machine learning problems, justifying our algorithm's versatility and demonstrating its strong performance compared to other state-of-the-art first-order and second-order methods.
UR - http://hdl.handle.net/10754/671214
UR - https://openreview.net/forum?id=HCelXXcSEuH
UR - http://www.scopus.com/inward/record.url?scp=85150338070&partnerID=8YFLogxK
M3 - Conference contribution
BT - 10th International Conference on Learning Representations, ICLR 2022
PB - International Conference on Learning Representations, ICLR
ER -