Performance-estimation properties of cross-validation-based protocols with simultaneous hyper-parameter optimization

Ioannis Tsamardinos, Amin Rakhshani, Vincenzo Lagani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

33 Scopus citations

Abstract

In a typical supervised data analysis task, one needs to perform the following two tasks: (a) select the best combination of learning methods (e.g., for variable selection and classification) and tune their hyper-parameters (e.g., K in K-NN), also called model selection, and (b) provide an estimate of the performance of the final, reported model. Combining the two tasks is not trivial: when one selects the set of hyper-parameters that seems to provide the best estimated performance, this estimate is optimistic (biased/overfitted) due to performing multiple statistical comparisons. In this paper, we confirm that simple Cross-Validation with model selection is indeed optimistic (overestimates performance) in small-sample scenarios. In comparison, Nested Cross-Validation and the method by Tibshirani and Tibshirani provide conservative estimates, with the latter protocol being more computationally efficient. The role of stratification of samples is also examined, and stratification is shown to be beneficial. © 2014 Springer International Publishing.
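To make the first protocol concrete, below is a minimal sketch of Nested Cross-Validation using scikit-learn. It is not the authors' exact experimental setup: the K-NN grid, fold counts, and synthetic data are illustrative assumptions. The inner loop tunes K for K-NN, while the outer loop estimates the performance of the entire tuning procedure; stratified folds are used, in line with the paper's finding that stratification is beneficial.

```python
# Minimal nested cross-validation sketch (illustrative, not the paper's exact protocol).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_informative=5, random_state=0)

param_grid = {"n_neighbors": [1, 3, 5, 7, 9]}                      # hyper-parameter K of K-NN
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # model selection
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)  # performance estimation

# The inner CV picks K; the outer CV scores the *entire* selection procedure,
# so the final estimate is not biased by the hyper-parameter search itself.
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=inner)
nested_scores = cross_val_score(search, X, y, cv=outer)
print(f"Nested CV accuracy: {nested_scores.mean():.3f} +/- {nested_scores.std():.3f}")
```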
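The Tibshirani and Tibshirani (TT) method avoids the extra loop that makes Nested Cross-Validation expensive: it runs a single cross-validation over the hyper-parameter grid and then adds an estimated bias term to the minimum CV error. Since the bias term is non-negative by construction, the corrected estimate is conservative. A sketch under the same illustrative assumptions (0/1 error per fold, the same K-NN grid):

```python
# Tibshirani & Tibshirani (TT) bias-correction sketch (same illustrative assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_informative=5, random_state=0)
ks = [1, 3, 5, 7, 9]
folds = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y))

# err[i, j] = 0/1 error of K = ks[j] on held-out fold i
err = np.empty((len(folds), len(ks)))
for i, (train, test) in enumerate(folds):
    for j, k in enumerate(ks):
        model = KNeighborsClassifier(n_neighbors=k).fit(X[train], y[train])
        err[i, j] = 1.0 - model.score(X[test], y[test])

best = err.mean(axis=0).argmin()   # hyper-parameter chosen by plain CV
cv_min = err.mean(axis=0)[best]    # optimistic minimum CV error
# TT bias term: per fold, the chosen model's error minus that fold's best error (>= 0).
bias = np.mean(err[:, best] - err.min(axis=1))
print(f"plain CV estimate: {cv_min:.3f}, TT-corrected: {cv_min + bias:.3f}")
```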
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Pages: 1-14
Number of pages: 14
ISBN (Print): 9783319070636
DOIs
State: Published - Jan 1 2014
Externally published: Yes
