TY - GEN
T1 - Supervised Cross-Modal Factor Analysis for Multiple Modal Data Classification
AU - Wang, Jingbin
AU - Zhou, Yihua
AU - Duan, Kanghong
AU - Wang, Jim Jing-Yan
AU - Bensmail, Halima
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: The research reported in this publication was supported by competitive research funding from King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
PY - 2016/1/15
Y1 - 2016/1/15
N2 - In this paper, we study the problem of learning from multi-modal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities of data to a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both the image and text modalities of documents. We project both image and text data to a shared data space by factor analysis, and then train a class label predictor in the shared space to use the class label information. The factor analysis parameters and the predictor parameters are learned jointly by solving a single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projections measured by the hinge loss function. The objective function is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two different multi-modal document data sets show the advantage of the proposed algorithm over other CFA methods.
AB - In this paper, we study the problem of learning from multi-modal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two modalities of data to a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both the image and text modalities of documents. We project both image and text data to a shared data space by factor analysis, and then train a class label predictor in the shared space to use the class label information. The factor analysis parameters and the predictor parameters are learned jointly by solving a single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projections measured by the hinge loss function. The objective function is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two different multi-modal document data sets show the advantage of the proposed algorithm over other CFA methods.
UR - http://hdl.handle.net/10754/609037
UR - http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7379461
UR - http://www.scopus.com/inward/record.url?scp=84964444011&partnerID=8YFLogxK
U2 - 10.1109/SMC.2015.329
DO - 10.1109/SMC.2015.329
M3 - Conference contribution
SN - 9781479986972
SP - 1882
EP - 1888
BT - 2015 IEEE International Conference on Systems, Man, and Cybernetics
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -