Abstract:
The volume of text on the Internet is large and still growing. Tools that automatically extract key information from massive amounts of text and condense it into high-quality summaries can therefore save readers considerable time. Simple extractive summarization models built on pre-trained language models use only sentence-level information, ignoring document-level features and the relationships among sentences. An improved extractive summarization model is proposed: on top of the RoBERTa model, a Transformer self-attention mechanism is additionally applied to retain document-level information. Comparative experiments show that the model achieves better results than the basic BERT-based extractive model. © 2023 IEEE.
Year: 2023
Page: 1392-1395
Language: English
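The record does not include the paper's implementation. Purely as an illustration of the architecture the abstract describes (a RoBERTa sentence encoder with an additional Transformer self-attention layer over sentence representations to capture document-level context before scoring sentences for extraction), the following is a minimal, hypothetical PyTorch sketch. The class name, the choice of roberta-base, the number of document-level layers, and the use of a leading <s> token per sentence are all assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a RoBERTa-based extractive summarizer with a
# document-level Transformer layer; not the authors' released code.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer


class ExtractiveSummarizer(nn.Module):
    """Sentence extractor: RoBERTa token encoder + document-level self-attention."""

    def __init__(self, model_name: str = "roberta-base", num_doc_layers: int = 2):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Document-level Transformer: sentence vectors attend to one another,
        # so each extraction score can reflect the whole document, not a single sentence.
        doc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers=num_doc_layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, sent_positions):
        # input_ids: (batch, seq_len) with a <s> token inserted before every sentence.
        # sent_positions: (batch, num_sentences) indices of those <s> tokens.
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(input_ids.size(0)).unsqueeze(-1)
        sents = tokens[batch_idx, sent_positions]      # (batch, num_sent, hidden)
        sents = self.doc_encoder(sents)                # document-level interaction
        return torch.sigmoid(self.scorer(sents)).squeeze(-1)  # extraction probabilities


if __name__ == "__main__":
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = ExtractiveSummarizer()
    sentences = [
        "The number of texts on the Internet is large and growing.",
        "Extractive summarization selects the most salient sentences.",
    ]
    # Prepend <s> to each sentence so every sentence gets its own representation.
    text = "".join(tokenizer.bos_token + " " + s for s in sentences)
    enc = tokenizer(text, return_tensors="pt", add_special_tokens=False)
    sent_positions = (enc["input_ids"][0] == tokenizer.bos_token_id).nonzero().T
    with torch.no_grad():
        scores = model(enc["input_ids"], enc["attention_mask"], sent_positions)
    print(scores)  # one score per sentence; the highest-scoring sentences form the summary
```

In this kind of design, the document-level self-attention layer is what distinguishes the model from a plain sentence classifier: each sentence's score depends on the other sentences in the document rather than on that sentence alone.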