A Large Dimensional Study of Regularized Discriminant Analysis

Khalil Elkhalil, Abla Kammoun, Romain Couillet, Tareq Y. Al-Naffouri, Mohamed-Slim Alouini

Research output: Contribution to journal › Article › peer-review

17 Scopus citations

Abstract

In this paper, we conduct a large dimensional study of regularized discriminant analysis classifiers in their two popular variants, regularized LDA and regularized QDA. The analysis assumes that the data samples are drawn from a Gaussian mixture model with different means and covariances, and it relies on tools from random matrix theory (RMT). We consider the regime in which both the data dimension and the training size within each class tend to infinity with a fixed ratio. Under mild assumptions, we show that the probability of misclassification converges to a deterministic quantity that describes in closed form the performance of these classifiers in terms of the class statistics and the problem dimension. This result allows for a better understanding of the underlying classification algorithms in practical, large but finite dimensions. Further exploitation of the results allows the regularization parameter to be tuned optimally so as to minimize the probability of misclassification. The analysis is validated with numerical results on synthetic data as well as real data from the USPS dataset, yielding high accuracy in predicting the performance and hence making an interesting connection between theory and practice.
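To make the regularized LDA setting concrete, the sketch below classifies synthetic two-class Gaussian data with a ridge-regularized pooled sample covariance. All specifics here (dimension, mean shift, AR(1) covariance, the regularization value gamma = 0.1, and the ridge form S + gamma*I) are illustrative assumptions, not the paper's exact model or its optimal tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-class Gaussian model (assumed, not the paper's setup):
# p-dimensional data, n training samples per class, shared AR(1) covariance.
p, n = 50, 200
mu0 = np.zeros(p)
mu1 = np.full(p, 0.6)
idx = np.arange(p)
cov = 0.4 ** np.abs(np.subtract.outer(idx, idx))  # AR(1) Toeplitz covariance

X0 = rng.multivariate_normal(mu0, cov, size=n)
X1 = rng.multivariate_normal(mu1, cov, size=n)
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Pooled sample covariance from the centered training data.
S = ((X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)) / (2 * n - 2)

def rlda_predict(X, gamma):
    """Regularized LDA with a ridge-regularized pooled covariance (a sketch)."""
    S_gamma = S + gamma * np.eye(p)        # regularization keeps the inverse stable
    w = np.linalg.solve(S_gamma, m1 - m0)  # discriminant direction
    b = (m0 + m1) / 2 @ w                  # midpoint threshold (equal priors assumed)
    return (X @ w > b).astype(int)

# Held-out test set to estimate the misclassification probability empirically.
Xt = np.vstack([rng.multivariate_normal(mu0, cov, size=500),
                rng.multivariate_normal(mu1, cov, size=500)])
yt = np.r_[np.zeros(500), np.ones(500)]
acc = (rlda_predict(Xt, gamma=0.1) == yt).mean()
```

In this spirit, sweeping `gamma` and picking the value with the lowest error rate mimics empirically what the paper does analytically: its closed-form limit of the misclassification probability lets the regularization parameter be tuned without a held-out set.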
Original language: English (US)
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Transactions on Signal Processing
Volume: 68
DOIs
State: Published - 2020

