Abstract:
The Transformer has strong feature extraction ability and has achieved good performance on various NLP tasks such as sentence classification, machine translation, and reading comprehension, but it does not perform well in named entity recognition (NER) tasks. According to recent research, the Long Short-Term Memory (LSTM) network usually performs better than the Transformer in NER. The LSTM is a variant of the Recurrent Neural Network (RNN); because of its natural chain structure, it can learn forward and backward dependencies between words well, which makes it well suited to processing text sequences. In this paper, a BiLSTM network is embedded into the Transformer Encoder, and a new network structure, BiLSTM-IN-TRANS, is proposed, which combines the sequential feature extraction capability of the BiLSTM with the powerful global feature extraction capability of the Transformer Encoder. The experiments show that a model based on BiLSTM-IN-TRANS works better in the NER task than a model using either the LSTM or the Transformer alone. © 2022 SPIE.
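The abstract only states that a BiLSTM is embedded into the Transformer Encoder; it does not give dimensions or the exact insertion point. The following is a minimal, hypothetical PyTorch sketch of one possible encoder layer of this kind (class name, hyperparameters, and the placement of the BiLSTM between the attention and feed-forward sub-layers are illustrative assumptions, not the authors' released code):

# Hypothetical sketch of a "BiLSTM inside a Transformer encoder layer".
# Placement and sizes are assumptions; the abstract gives only the high-level idea.
import torch
import torch.nn as nn

class BiLSTMInTransLayer(nn.Module):
    """One encoder layer: self-attention -> BiLSTM -> feed-forward."""

    def __init__(self, d_model: int = 256, nhead: int = 4, dim_ff: int = 512):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        # Bidirectional LSTM whose concatenated hidden states match d_model.
        self.bilstm = nn.LSTM(d_model, d_model // 2, batch_first=True,
                              bidirectional=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, dim_ff), nn.ReLU(), nn.Linear(dim_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global context via multi-head self-attention (residual + norm).
        attn_out, _ = self.self_attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Sequential left-to-right / right-to-left context via the BiLSTM.
        lstm_out, _ = self.bilstm(x)
        x = self.norm2(x + lstm_out)
        # Position-wise feed-forward network.
        return self.norm3(x + self.ffn(x))

if __name__ == "__main__":
    layer = BiLSTMInTransLayer()
    tokens = torch.randn(2, 16, 256)   # (batch, seq_len, d_model)
    print(layer(tokens).shape)         # torch.Size([2, 16, 256])

In this sketch the BiLSTM hidden size is set to d_model // 2 so that the concatenated forward and backward states keep the model dimension, allowing the residual connections around each sub-layer to be reused unchanged.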
ISSN: 0277-786X
Year: 2022
Volume: 12305
Language: English