TY - GEN
T1 - Semantic Compositional Networks for Visual Captioning
AU - Gan, Zhe
AU - Gan, Chuang
AU - He, Xiaodong
AU - Pu, Yunchen
AU - Tran, Kenneth
AU - Gao, Jianfeng
AU - Carin, Lawrence
AU - Deng, Li
N1 - Generated from Scopus record by KAUST IRTS on 2021-02-09
PY - 2017/11/6
Y1 - 2017/11/6
AB - A Semantic Compositional Network (SCN) is developed for image captioning, in which semantic concepts (i.e., tags) are detected from the image, and the probability of each tag is used to compose the parameters in a long short-term memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an ensemble of tag-dependent weight matrices. The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag. In addition to captioning images, we also extend the SCN to generate captions for video clips. We qualitatively analyze semantic composition in SCNs, and quantitatively evaluate the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text. Experimental results show that the proposed method significantly outperforms prior state-of-the-art approaches, across multiple evaluation metrics.
UR - http://ieeexplore.ieee.org/document/8099610/
UR - http://www.scopus.com/inward/record.url?scp=85021786108&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2017.127
DO - 10.1109/CVPR.2017.127
M3 - Conference contribution
SN - 9781538604571
SP - 1141
EP - 1150
BT - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
PB - Institute of Electrical and Electronics Engineers Inc.
ER -