Hyperparameter Transfer Learning with Adaptive Complexity

Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Bayesian optimization (BO) is a data-efficient approach to automatically tune the hyperparameters of machine learning models. In practice, one frequently has to solve similar hyperparameter tuning problems sequentially. For example, one might have to tune the same type of neural network trained on a series of different classification problems. Recent work on multi-task BO exploits knowledge gained from previous hyperparameter tuning tasks to speed up a new tuning task. However, previous approaches do not account for the fact that BO is a sequential decision-making procedure. Hence, there is in general a mismatch between the number of evaluations collected in the current tuning task and the number of evaluations accumulated in all previously completed tasks. In this work, we enable multi-task BO to compensate for this mismatch, such that the transfer learning procedure can handle different data regimes in a principled way. We propose a new multi-task BO method that learns a set of ordered, non-linear basis functions of increasing complexity via nested dropout and automatic relevance determination. Experiments on a variety of hyperparameter tuning problems show that our method improves the sample efficiency of recently published multi-task BO methods.
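
To illustrate the general idea the abstract describes (this is a minimal sketch, not the authors' implementation), the following Python snippet shows how nested dropout can impose an ordering on a set of basis functions feeding a Bayesian linear regression surrogate: a truncation index drawn from a geometric distribution keeps only a prefix of the ordered features, so the effective model complexity adapts to the amount of data. The specific basis functions, prior precision, noise level, and geometric parameter are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def basis(x, num_features=8):
    # Ordered non-linear basis functions of increasing complexity:
    # lower indices correspond to lower-frequency (simpler) features.
    freqs = np.arange(1, num_features + 1)
    return np.sin(np.outer(x, freqs))            # shape (len(x), num_features)

def nested_dropout_mask(num_features, p=0.3):
    # Nested dropout: draw a truncation index b from a geometric distribution
    # and keep only the first b features, zeroing out the rest.
    b = min(rng.geometric(p), num_features)
    mask = np.zeros(num_features)
    mask[:b] = 1.0
    return mask

def bayesian_linear_posterior(Phi, y, alpha=1.0, noise=0.1):
    # Posterior mean and covariance of the linear weights on the masked
    # features (standard Bayesian linear regression, prior precision alpha).
    A = alpha * np.eye(Phi.shape[1]) + Phi.T @ Phi / noise**2
    mean = np.linalg.solve(A, Phi.T @ y) / noise**2
    cov = np.linalg.inv(A)
    return mean, cov

# Toy data standing in for hyperparameter evaluations of one tuning task.
x_train = rng.uniform(0.0, 1.0, size=50)
y_train = np.sin(3.0 * x_train) + 0.1 * rng.standard_normal(50)

Phi = basis(x_train)
mask = nested_dropout_mask(Phi.shape[1])         # adaptive complexity: keep a prefix
mean, cov = bayesian_linear_posterior(Phi * mask, y_train)

# Predictive mean and variance at new hyperparameter configurations.
x_test = np.linspace(0.0, 1.0, 5)
Phi_test = basis(x_test) * mask
pred_mean = Phi_test @ mean
pred_var = np.sum((Phi_test @ cov) * Phi_test, axis=1)
print(pred_mean.round(3), pred_var.round(3))

In the transfer-learning setting the abstract describes, one would expect a surrogate of this kind to keep a long prefix of features when fit on the many evaluations accumulated across previous tasks, and a shorter prefix on the few evaluations of the current task; this is only an interpretation of the stated idea, not the paper's algorithm.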
Original language: English (US)
Title of host publication: 24th International Conference on Artificial Intelligence and Statistics (AISTATS)
Publisher: MLResearchPress
State: Published - 2021

