TY - GEN
T1 - Regularized Discriminant Analysis
T2 - 2018 IEEE International Symposium on Information Theory, ISIT 2018
AU - Yang, Xiaoke
AU - Elkhalil, Khalil
AU - Kammoun, Abla
AU - Al-Naffouri, Tareq Y.
AU - Alouini, Mohamed-Slim
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/15
Y1 - 2018/8/15
N2 - This paper studies the performance of general regularized discriminant analysis (RDA) classifiers based on the Gaussian mixture model with distinct class means and covariances. RDA offers a rich class of regularization options, covering the regularized linear discriminant analysis (RLDA) and regularized quadratic discriminant analysis (RQDA) classifiers as special cases. Using fundamental results from random matrix theory, we analyze RDA in the double asymptotic regime, where the data dimension and the training size grow proportionally. Under this regime and some mild assumptions, we show that the classification error converges to a deterministic quantity that depends only on the data statistical parameters and dimensions. This result can be leveraged to select the regularization parameters that minimize the classification error, thus yielding the optimal classifier. Numerical results on synthetic data validate our theoretical findings and demonstrate the high accuracy of our derivations.
UR - http://www.scopus.com/inward/record.url?scp=85052472693&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2018.8437875
DO - 10.1109/ISIT.2018.8437875
M3 - Conference contribution
AN - SCOPUS:85052472693
SN - 9781538647806
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 536
EP - 540
BT - 2018 IEEE International Symposium on Information Theory, ISIT 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 June 2018 through 22 June 2018
ER -