
Author:

Yu, W. | Li, S. | Chen, Z.

Indexed by:

EI; Scopus

Abstract:

The number of texts on the Internet is large and growing. Using technical means to quickly extract key information from massive texts and refine it into high-quality summaries can effectively help people save time. Simple extractive summarization models based on pre-trained language models extract only sentence-level information, ignoring document-level features and the correlations between sentences. Therefore, an improved extractive summarization model is proposed: on top of the RoBERTa model, a Transformer self-attention mechanism is further applied to retain document-level information. Finally, a comparative experiment is carried out. The experimental results show that the model achieves better results than the baseline extractive model based on BERT. © 2023 IEEE.
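The architecture described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes PyTorch and the Hugging Face transformers library, and the class name, argument names, and the choice of reading one sentence vector from each sentence's leading <s> token are illustrative assumptions. It stacks a small Transformer encoder over RoBERTa sentence embeddings so that each sentence's extraction score reflects document-level context rather than the sentence alone.

import torch
import torch.nn as nn
from transformers import RobertaModel

class RobertaExtractiveSummarizer(nn.Module):
    def __init__(self, model_name="roberta-base", num_doc_layers=2):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Inter-sentence self-attention over sentence vectors, so each
        # sentence representation can attend to the rest of the document.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.doc_transformer = nn.TransformerEncoder(layer, num_layers=num_doc_layers)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, cls_positions):
        # input_ids / attention_mask: (batch, seq_len), with one <s> token
        # inserted before every sentence; cls_positions: (batch, n_sents)
        # holds the index of each sentence's <s> token.
        tokens = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(input_ids.size(0)).unsqueeze(-1)
        sents = tokens[batch_idx, cls_positions]        # (batch, n_sents, hidden)
        sents = self.doc_transformer(sents)             # document-level interaction
        return self.scorer(sents).squeeze(-1)           # one extraction score per sentence

At inference time, the sentences with the highest scores would be selected (e.g. the top three) to form the extractive summary; this selection step is a common convention for such models, not a detail given in the abstract.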

Keyword:

Natural Language Processing; Neural Network; RoBERTa; Extractive Summarization

Author Community:

  • [ 1 ] [Yu W.]Beijing University of Technology, Faculty of Information Technology, Beijing, China
  • [ 2 ] [Li S.]Beijing University of Technology, Faculty of Information Technology, Beijing, China
  • [ 3 ] [Chen Z.]Beijing University of Technology, Faculty of Information Technology, Beijing, China

Year: 2023

Page: 1392-1395

Language: English

Cited Count:

WoS CC Cited Count: 0

ESI Highly Cited Papers on the List: 0
