Open problem: Parameter-free and scale-free online algorithms

Francesco Orabona, Dávid Pál

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Existing vanilla algorithms for online linear optimization have O((ηR(u) + 1/η)√T) regret with respect to any competitor u, where R(u) is a 1-strongly convex regularizer and η > 0 is a tuning parameter of the algorithm. For certain decision sets and regularizers, the so-called parameter-free algorithms have Õ(√(R(u)T)) regret with respect to any competitor u. A vanilla algorithm can achieve the same bound only for a fixed competitor u known ahead of time, by setting η = 1/√(R(u)). A drawback of both vanilla and parameter-free algorithms is that they assume that the norm of the loss vectors is bounded by a constant known to the algorithm. There exist scale-free algorithms that have O((ηR(u) + 1/η)√T max_{1≤t≤T} ‖ℓ_t‖) regret with respect to any competitor u and for any sequence of loss vectors ℓ_1, …, ℓ_T. A parameter-free analogue of scale-free algorithms has never been designed. Is it possible to design algorithms that are simultaneously parameter-free and scale-free?
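To make the η tradeoff in the vanilla bound concrete, here is a minimal sketch (not from the paper; function names are hypothetical) of online gradient descent on the unit Euclidean ball, the canonical "vanilla" algorithm with R(u) = ‖u‖²/2. Its regret against any u in the ball is at most ‖u‖²/(2η) + (η/2)Σ_t ‖ℓ_t‖², which the test below checks numerically.

```python
import numpy as np

def vanilla_online_gd(losses, eta, radius=1.0):
    """Online gradient descent for online linear optimization.

    Illustrative sketch: plays points in the Euclidean ball of the given
    radius, using a fixed step size eta -- the tuning parameter the
    abstract refers to. `losses` is a (T, d) array of loss vectors.
    """
    d = losses.shape[1]
    x = np.zeros(d)
    plays = []
    for g in losses:
        plays.append(x.copy())
        x = x - eta * g                  # gradient step on the linear loss <g, x>
        norm = np.linalg.norm(x)
        if norm > radius:                # project back onto the ball
            x = x * (radius / norm)
    return np.array(plays)

def regret(losses, plays, u):
    """Regret of the played sequence against a fixed competitor u."""
    return float(np.sum(losses * plays) - losses.sum(axis=0).dot(u))
```

Setting η ∝ 1/√T recovers the O(√T) rate, but only if the scale of the losses is known in advance, which is exactly the assumption the scale-free algorithms remove.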
Original language: English (US)
Title of host publication: Journal of Machine Learning Research
Publisher: Microtome Publishing
Pages: 1659-1664
Number of pages: 6
State: Published - Jun 6 2016
Externally published: Yes

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Statistics and Probability
  • Control and Systems Engineering
