Adaptive feature abstraction for translating video to text

Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features. However, the variable, context-dependent semantics in a video may make it more appropriate to adaptively select features from multiple CNN layers. We propose a new approach to generating adaptive spatiotemporal representations of videos for the captioning task. A novel attention mechanism is developed that adaptively and sequentially focuses on different layers of CNN features (levels of feature “abstraction”), as well as on local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on three benchmark datasets: YouTube2Text, M-VAD and MSR-VTT. In addition to visualizations of the results and of how the model works, the experiments quantitatively demonstrate the effectiveness of the proposed adaptive spatiotemporal feature abstraction for translating videos to sentences with rich semantics.
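The abstract describes attention applied along two axes: over spatial locations within each CNN layer's feature map, and over the layers themselves. The sketch below illustrates that general idea only; it is not the authors' implementation. The module name, dimensions, and the additive (tanh) scoring function are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of attention over multiple CNN layers
# and over spatial positions within each layer, conditioned on a decoder state.
# All names, dimensions, and the scoring function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveLayerAttention(nn.Module):
    def __init__(self, layer_dims, hidden_dim, attn_dim=256):
        super().__init__()
        # One projection per CNN layer, mapping its channels to a shared space.
        self.projs = nn.ModuleList([nn.Linear(d, attn_dim) for d in layer_dims])
        self.query = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feature_maps, h):
        # feature_maps: list of tensors, each (batch, channels_l, H_l, W_l)
        # h: decoder hidden state, (batch, hidden_dim)
        q = self.query(h)                                       # (batch, attn_dim)
        contexts, layer_logits = [], []
        for proj, fmap in zip(self.projs, feature_maps):
            keys = proj(fmap.flatten(2).transpose(1, 2))        # (batch, H*W, attn_dim)
            # Spatial attention within this layer.
            e = self.score(torch.tanh(keys + q.unsqueeze(1))).squeeze(-1)
            a = F.softmax(e, dim=1)                             # (batch, H*W)
            ctx = torch.bmm(a.unsqueeze(1), keys).squeeze(1)    # (batch, attn_dim)
            contexts.append(ctx)
            layer_logits.append(self.score(torch.tanh(ctx + q)))
        # Attention over layers, i.e. over levels of feature "abstraction".
        beta = F.softmax(torch.cat(layer_logits, dim=1), dim=1)     # (batch, n_layers)
        stacked = torch.stack(contexts, dim=1)                  # (batch, n_layers, attn_dim)
        return torch.bmm(beta.unsqueeze(1), stacked).squeeze(1)     # (batch, attn_dim)


if __name__ == "__main__":
    # Toy example: features from two hypothetical CNN layers for a batch of 4 frames.
    attn = AdaptiveLayerAttention(layer_dims=[512, 1024], hidden_dim=256)
    maps = [torch.randn(4, 512, 14, 14), torch.randn(4, 1024, 7, 7)]
    h = torch.randn(4, 256)
    print(attn(maps, h).shape)  # torch.Size([4, 256])
```

In this sketch the same scoring network is reused for both the spatial and the layer-level weights for brevity; the paper's actual parameterization and its temporal handling may differ.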
Original language: English (US)
Title of host publication: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Publisher: AAAI Press
Pages: 7284-7291
Number of pages: 8
ISBN (Print): 9781577358008
State: Published - Jan 1 2018
Externally published: Yes
