Indexed by:
Abstract:
Factual consistency in text summarization means that the information in the summary agrees with the information in the source document. Recent research has shown that the outputs of abstractive summarization models contain a large number of factual inconsistencies, so it is important to design a method that can detect and evaluate such errors. Most existing methods based on natural language inference lack the ability to extract the key content of the source document and reason over that content. This paper improves the accuracy of a factual consistency assessment model for summarization through a multi-attention mechanism. First, key sentences are selected with Sentence-BERT, which is fine-tuned on a pre-trained language model. Each evidence sentence and the claim are formed into a separate sentence pair and fed into a BERT encoder to obtain sentence-pair representation vectors; evidence reasoning is then performed by combining these vectors with ESIM. Finally, a graph attention network aggregates the inference information to produce the factual consistency assessment result. Experimental results comparing this algorithm with several typical algorithms on common datasets in the field verify its feasibility and effectiveness. © 2023 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
Keyword:
Reprint Author's Address:
Email:
Source:
Computer Engineering and Applications
ISSN: 1002-8331
Year: 2023
Issue: 7
Volume: 59
Page: 163-170
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count:
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
30 Days PV: 0
Affiliated Colleges: