Adaptive Convolutional Filter Generation for Natural Language Understanding

Dinghan Shen, Martin Renqiang Min, Yitong Li, Lawrence Carin

Research output: Contribution to journal › Article › peer-review

Abstract

Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP are not expressive enough, in the sense that all input sentences share the same learned (and static) set of filters. Motivated by this problem, we propose an adaptive convolutional filter generation framework for natural language understanding, leveraging a meta network to generate input-aware filters. We further generalize our framework to model question-answer sentence pairs and propose an adaptive question answering (AdaQA) model; a novel two-way feature abstraction mechanism is introduced to encapsulate co-dependent sentence representations. We investigate the effectiveness of our framework on document categorization and answer sentence selection tasks, achieving state-of-the-art performance on several benchmark datasets.
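The core idea of input-aware filter generation can be illustrated with a minimal sketch: a meta network (here, a single linear layer with tanh activation, a simplifying assumption; all shapes and names below are hypothetical, not the paper's actual architecture) maps a summary of the input sentence to the weights of the convolutional filters, which are then applied back to that same sentence.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_conv(sent_emb, meta_W, meta_b, x, k=3):
    """Generate input-aware filters from a sentence summary, then convolve.

    sent_emb : (d,)  summary of the sentence (here, mean of word vectors)
    meta_W   : (n_filters * k * d, d)  meta-network weights (hypothetical)
    meta_b   : (n_filters * k * d,)    meta-network bias
    x        : (T, d)  word embeddings of the sentence
    """
    d = x.shape[1]
    n_filters = meta_W.shape[0] // (k * d)
    # Meta network: map the sentence summary to a full set of filter weights,
    # so each input sentence gets its own filters instead of a shared static set.
    filters = np.tanh(meta_W @ sent_emb + meta_b).reshape(n_filters, k, d)
    T = x.shape[0]
    out = np.empty((T - k + 1, n_filters))
    for t in range(T - k + 1):
        window = x[t:t + k]  # (k, d) sliding window over the sentence
        # ReLU over the inner product of each generated filter with the window
        out[t] = np.maximum(0.0, np.einsum('fkd,kd->f', filters, window))
    return out

# Toy run: 5 words, 4-dim embeddings, 2 generated filters of width 3
T, d, k, nf = 5, 4, 3, 2
x = rng.standard_normal((T, d))
sent_emb = x.mean(axis=0)
meta_W = rng.standard_normal((nf * k * d, d)) * 0.1
meta_b = np.zeros(nf * k * d)
feats = adaptive_conv(sent_emb, meta_W, meta_b, x, k)
print(feats.shape)  # (3, 2): one feature vector per window position
```

Because the filters are a function of the sentence itself, two different inputs are convolved with two different filter banks, which is the expressiveness the static-filter CNNs described above lack.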
Original language: English (US)
Journal: ArXiv
State: Published - 2017
Externally published: Yes
