Abstract
Online learning is a framework for the design and analysis of algorithms that build predictive models by processing data one at a time. Besides being computationally efficient, online algorithms enjoy theoretical performance guarantees that do not rely on statistical assumptions about the data source. In this review, we describe some of the most important algorithmic ideas behind online learning and explain the main mathematical tools for their analysis. Our reference framework is online convex optimization, a sequential version of convex optimization within which most online algorithms are formulated. More specifically, we provide an in-depth description of online mirror descent and follow the regularized leader, two of the most fundamental algorithms in online learning. As parameter tuning is typically a difficult task in sequential data analysis, in the last part of the review we focus on coin betting, an information-theoretic approach to the design of parameter-free online algorithms with good theoretical guarantees.
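To make the setting concrete, here is a minimal sketch of online gradient descent, the Euclidean special case of the online mirror descent algorithm mentioned in the abstract. All concrete choices in this sketch (squared losses, the step size eta/sqrt(t), the ball radius, and the comparator `w_star`) are illustrative assumptions, not details taken from the review itself.

```python
import numpy as np

# Illustrative sketch: online gradient descent on a stream of squared-loss
# examples, with Euclidean projection onto an L2 ball. All parameter choices
# here are assumptions for demonstration purposes.
rng = np.random.default_rng(0)
d, T, radius, eta = 5, 1000, 1.0, 0.1

w_star = rng.normal(size=d)
w_star *= radius / np.linalg.norm(w_star)  # fixed comparator inside the ball

w = np.zeros(d)                 # learner's initial prediction vector
cumulative_loss = 0.0
comparator_loss = 0.0

for t in range(1, T + 1):
    x_t = rng.normal(size=d)    # in the adversarial model, the data need
    y_t = w_star @ x_t + 0.1 * rng.normal()  # not come from a distribution
    cumulative_loss += 0.5 * (w @ x_t - y_t) ** 2
    comparator_loss += 0.5 * (w_star @ x_t - y_t) ** 2
    grad = (w @ x_t - y_t) * x_t          # gradient of the loss at w
    w -= eta / np.sqrt(t) * grad          # time-varying step size eta/sqrt(t)
    norm = np.linalg.norm(w)
    if norm > radius:                     # project back onto the L2 ball
        w *= radius / norm

# Regret: cumulative loss of the learner minus that of the best fixed point.
print(f"regret against w_star: {cumulative_loss - comparator_loss:.2f}")
```

The guarantee alluded to in the abstract is of this form: with the eta/sqrt(t) step size, the regret against any fixed comparator in the ball grows only on the order of sqrt(T), with no statistical assumptions on the sequence of examples.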
Original language | English (US)
---|---
Pages (from-to) | 165-190
Number of pages | 26
Journal | Annual Review of Statistics and Its Application
Volume | 8
DOIs |
State | Published - Mar 7 2021
Externally published | Yes
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty