Abstract:
Multi-Label Text Classification (MLTC) is the task of assigning a document to one or more class labels. Most recent approaches are based on deep learning, yet they still fall short in capturing fine-grained information in the text and in modeling the dependencies between labels. Neural network models have made great progress in recent years; among them, the BERT language model captures bidirectional contextual information well but cannot capture label-specific semantic information. We therefore propose LAGAT, a new multi-label text classification framework that combines an attention mechanism for feature learning. The framework mines inter-label dependencies by constructing graphs of label-specific text semantic information and using a graph attention network (GAT) to learn the relative importance between labels through updated edge weights. Further experiments show that, at moderate label-set sizes, a GAT that aggregates neighborhood information is weaker than the self-attention mechanism at mining label relations, so we propose an improved framework, LA-Trans, which likewise combines label association relations with text semantic information, focuses on the target region while ignoring irrelevant information, and achieves state-of-the-art performance. © COPYRIGHT SPIE.
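The GAT mechanism the abstract refers to (learning relative importance between label nodes via attention-weighted edges) can be sketched as below. This is a minimal illustrative example, not the paper's implementation: the fully connected label graph, the dimensions, and the single-head layer are assumptions made for the sketch.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def gat_layer(H, W, a):
    """One single-head graph-attention layer over a fully connected label graph.

    H: (n, in_dim)  label-specific text features (hypothetical inputs)
    W: (in_dim, out_dim) shared linear transform
    a: (2 * out_dim,) attention vector
    Returns updated features (n, out_dim) and attention weights alpha (n, n).
    """
    Z = H @ W
    out_dim = Z.shape[1]
    # e[i, j] = LeakyReLU(a^T [z_i || z_j]) splits into two dot products,
    # so the full score matrix is an outer sum of two (n,) vectors.
    e = leaky_relu(np.add.outer(Z @ a[:out_dim], Z @ a[out_dim:]))
    # Row-wise softmax turns raw scores into normalized edge weights.
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    # Each label node aggregates its neighbors, weighted by alpha.
    return alpha @ Z, alpha

# Toy run with made-up sizes: 4 label nodes, 8-d inputs, 6-d outputs.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 6))
a = rng.normal(size=(12,))
H_new, alpha = gat_layer(H, W, a)
```

Each row of `alpha` sums to 1, so a label's updated representation is a convex combination of the transformed label features, with the learned weights expressing relative inter-label importance.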
ISSN: 0277-786X
Year: 2022
Volume: 12346
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0
30 Days PV: 3