TY - GEN
T1 - Does Head Label Help for Long-Tailed Multi-Label Text Classification
AU - Xiao, Lin
AU - Zhang, Xiangliang
AU - Jing, Liping
AU - Huang, Chi
AU - Song, Mingyang
N1 - KAUST Repository Item: Exported on 2021-09-21
Acknowledged KAUST grant number(s): FCC/1/1976-19-01
Acknowledgements: This work was supported in part by the National Natural Science Foundation of China under Grants 61822601, 61773050, 61632004 and 61828302; the Beijing Natural Science Foundation under Grant Z180006; the National Key Research and Development Program of China under Grants 2020AAA0106800 and 2017YFC1703506; the Fundamental Research Funds for the Central Universities (2019JBZ110); and King Abdullah University of Science & Technology, under award number FCC/1/1976-19-01.
PY - 2021
Y1 - 2021
AB - Multi-label text classification (MLTC) aims to annotate documents with the most relevant labels from a number of candidate labels. In real applications, the distribution of label frequency often exhibits a long tail, i.e., a few labels are associated with a large number of documents (a.k.a. head labels), while a large fraction of labels are associated with a small number of documents (a.k.a. tail labels). To address the challenge of insufficient training data on tail label classification, we propose a Head-to-Tail Network (HTTN) to transfer the meta-knowledge from the data-rich head labels to data-poor tail labels. The meta-knowledge is the mapping from few-shot network parameters to many-shot network parameters, which aims to promote the generalizability of tail classifiers. Extensive experimental results on three benchmark datasets demonstrate that HTTN consistently outperforms the state-of-the-art methods. The code and hyper-parameter settings are released for reproducibility.
UR - http://hdl.handle.net/10754/667224
UR - https://arxiv.org/pdf/2101.09704
M3 - Conference contribution
SP - 14103
EP - 14111
BT - Proceedings of the 35th AAAI Conference on Artificial Intelligence / 33rd Conference on Innovative Applications of Artificial Intelligence / 11th Symposium on Educational Advances in Artificial Intelligence
PB - AAAI Press
ER -