TY - JOUR
T1 - Regularized maximum correntropy machine
AU - Wang, Jim Jing-Yan
AU - Wang, Yunji
AU - Jing, Bing-Yi
AU - Gao, Xin
N1 - KAUST Repository Item: Exported on 2020-10-01
PY - 2015/2/12
Y1 - 2015/2/12
N2 - In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function that considers the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
AB - In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function that considers the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
UR - http://hdl.handle.net/10754/577311
UR - http://linkinghub.elsevier.com/retrieve/pii/S0925231215001150
UR - http://www.scopus.com/inward/record.url?scp=84927968712&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2014.09.080
DO - 10.1016/j.neucom.2014.09.080
M3 - Article
SN - 0925-2312
VL - 160
SP - 85
EP - 92
JO - Neurocomputing
JF - Neurocomputing
ER -