TY - JOUR
T1 - Feature selection and multi-kernel learning for sparse representation on a manifold
AU - Wang, Jim Jing-Yan
AU - Bensmail, Halima
AU - Gao, Xin
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: The study was supported by grants from Chongqing Key Laboratory of Computational Intelligence, China (Grant No. CQ-LCI-2013-02), Tianjin Key Laboratory of Cognitive Computing and Application, China, 2011 Qatar Annual Research Forum Award (Grant no. ARF2011), and King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
PY - 2014/3
Y1 - 2014/3
N2 - Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.
AB - Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.
UR - http://hdl.handle.net/10754/575708
UR - https://linkinghub.elsevier.com/retrieve/pii/S0893608013002736
UR - http://www.scopus.com/inward/record.url?scp=84890219380&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2013.11.009
DO - 10.1016/j.neunet.2013.11.009
M3 - Article
C2 - 24333479
SN - 0893-6080
VL - 51
SP - 9
EP - 16
JO - Neural Networks
JF - Neural Networks
ER -