On how to avoid exacerbating spurious correlations when models are overparameterized

Tina Behnia, Ke Wang, Christos Thrampoulidis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation

Abstract

Overparameterized learning architectures fail to generalize well in the presence of data imbalance, even when combined with traditional techniques for mitigating it. This paper focuses on imbalanced classification datasets in which a small subset of the population (a minority) may contain features that correlate spuriously with the class label. For a parametric family of cross-entropy loss modifications and a representative Gaussian mixture model, we derive non-asymptotic generalization bounds on the worst-group error that shed light on the role of different hyper-parameters. Specifically, we prove that, when appropriately tuned, the recently proposed VS-loss learns a model that is fair towards minorities even when spurious features are strong. On the other hand, alternative heuristics, such as the weighted CE and the LA-loss, can fail dramatically. Compared to previous works, our bounds hold for more general models, they are non-asymptotic, and they apply even in scenarios of extreme imbalance.
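
For concreteness, the parametric loss family the abstract refers to can be sketched in PyTorch as below. This is a minimal, illustrative implementation of a vector-scaling (VS) style loss, assuming per-class multiplicative parameters Delta and additive parameters iota; setting Delta = 1 and iota = 0 recovers plain cross-entropy, adding per-class weights omega gives weighted CE, and Delta = 1 with a log-prior iota corresponds to an LA-style logit adjustment. The parameter names and the prior-based settings below are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def vs_loss(logits, targets, Delta, iota, omega=None):
        # Cross-entropy on per-class scaled and shifted logits:
        # adjusted_c = Delta_c * logit_c + iota_c
        # (Delta, iota, omega are illustrative names, not from the paper)
        adjusted = Delta * logits + iota
        per_sample = F.cross_entropy(adjusted, targets, reduction="none")
        if omega is not None:
            # Optional per-class weights recover weighted cross-entropy
            per_sample = omega[targets] * per_sample
        return per_sample.mean()

    # Illustrative hyper-parameter choices for a 2-class imbalanced problem
    priors = torch.tensor([0.95, 0.05])   # assumed class frequencies
    Delta = priors ** 0.3                 # multiplicative scaling (assumed exponent)
    iota = torch.log(priors)              # additive, LA-style adjustment
    loss = vs_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)), Delta, iota)
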
Original language: English (US)
Title of host publication: 2022 IEEE International Symposium on Information Theory (ISIT)
Publisher: IEEE
Pages: 121-126
Number of pages: 6
ISBN (Print): 9781665421591
DOIs
State: Published - Jun 26 2022
Externally published: Yes

