Tighter PAC-Bayes Bounds Through Coin-Betting

Kyoungseok Jang*, Kwang Sung Jun, Ilja Kuzborskij, Francesco Orabona

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

4 Scopus citations

Abstract

We consider the problem of estimating the mean of a sequence of random elements f(θ, X1), ..., f(θ, Xn), where f is a fixed scalar function, S = (X1, ..., Xn) is a collection of independent random variables, and θ is a possibly S-dependent parameter. An example of such a problem is estimating the generalization error of a neural network trained on n examples, where f is a loss function. Classically, this problem is approached through concentration inequalities that hold uniformly over compact parameter sets of functions f, as in Rademacher- or VC-type analyses. However, in many problems such inequalities yield numerically vacuous estimates. Recently, the PAC-Bayes framework has been proposed as a better alternative for this class of problems, owing to its ability to often give numerically non-vacuous bounds. In this paper, we show that we can do even better: we refine the proof strategy of PAC-Bayes bounds and achieve even tighter guarantees. Our approach is based on the coin-betting framework, which derives the numerically tightest known time-uniform concentration inequalities from the regret guarantees of online gambling algorithms. In particular, we derive the first PAC-Bayes concentration inequality based on the coin-betting approach that holds simultaneously for all sample sizes. We demonstrate its tightness by showing that relaxing it recovers, in closed form, a number of previous results, including the Bernoulli-KL and empirical Bernstein inequalities. Finally, we propose an efficient algorithm to numerically compute confidence sequences from our bound; unlike state-of-the-art PAC-Bayes bounds, it often yields non-vacuous confidence bounds even with a single sample.
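As an illustrative sketch of the coin-betting route from wealth to confidence sequences (not the paper's algorithm): for each candidate mean m, a bettor's wealth against m forms a nonnegative martingale when m is the true mean, so by Ville's inequality a wealth threshold of 1/δ yields a time-uniform confidence set. The betting fraction, its scaling, and the clipping below are ad-hoc simplifications chosen for readability, not the potential-based strategy analyzed in the paper.

```python
def betting_confidence_set(xs, delta=0.05, grid_size=101):
    """Time-uniform (1 - delta) confidence set for the mean of
    [0, 1]-valued observations, via a toy coin-betting test.

    For each candidate mean m, the bettor's wealth evolves as
        W_t = W_{t-1} * (1 + lam_t * (x_t - m)),
    where the bet lam_t is chosen before seeing x_t.  If m is the true
    mean, (W_t) is a nonnegative martingale with W_0 = 1, so by Ville's
    inequality P(sup_t W_t >= 1/delta) <= delta.  Rejecting every m
    whose wealth ever crosses 1/delta leaves a confidence set that is
    valid simultaneously at all sample sizes.
    """
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    kept = []
    for m in grid:
        wealth = 1.0
        total, count = 0.5, 1  # one pseudo-observation at 0.5 seeds the bet
        rejected = False
        for x in xs:
            # Bet toward the running empirical mean; the factor 2.0 and
            # the clipping are ad-hoc choices, not the paper's strategy.
            lam = 2.0 * (total / count - m)
            lo = -0.5 / (1.0 - m) if m < 1.0 else float("-inf")
            hi = 0.5 / m if m > 0.0 else float("inf")
            lam = max(lo, min(hi, lam))  # keeps each wealth factor >= 0.5
            wealth *= 1.0 + lam * (x - m)
            if wealth >= 1.0 / delta:
                rejected = True
                break
            total, count = total + x, count + 1  # update after betting
        if not rejected:
            kept.append(m)
    return kept
```

Because the bet is placed before each observation, the wealth process is a valid test martingale, and the resulting set can only shrink as more data arrive — the "one sample" behavior noted in the abstract corresponds to the fact that the wealth test is well-defined from the very first observation.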

Original language: English (US)
Pages: 2240-2264
Number of pages: 25
State: Published - 2023
Event: 36th Annual Conference on Learning Theory, COLT 2023 - Bangalore, India
Duration: Jul 12, 2023 - Jul 15, 2023

Keywords

  • Coin-betting
  • Concentration inequalities
  • Confidence sequences
  • PAC-Bayes

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

