Abstract:
The title of an academic paper encapsulates its core contribution in succinct form, while the abstract provides a brief yet comprehensive summary of the work. Crafting an effective title is essential for the dissemination and retrieval of scholarly research. To address this need, we propose an automatic title generation model based on abstractive summarization, aimed at assisting researchers in drafting titles. This study introduces a novel model for generating titles for Chinese academic papers, combining the Transformer architecture with the pre-trained BERT model. The challenges inherent in Chinese word segmentation are explored, and the effects of character-based versus word-based tokenization on model performance are examined in detail. A robust evaluation framework incorporating both quantitative metrics and human judgments is used to assess the proposed approach. Experimental results demonstrate that the pre-trained Transformer model outperforms established baselines, achieving a ROUGE-L score of 0.441. Further analysis shows that the generated titles accurately capture the essential content of the papers, supporting researchers' decisions when drafting titles. The method shows considerable potential for improving the efficiency and quality of academic writing in Chinese. © China Computer Federation (CCF) 2025.
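The abstract evaluates generated titles with ROUGE-L, the F-measure built on the longest common subsequence (LCS) between a candidate and a reference. As an illustrative sketch only (not the authors' implementation), the score can be computed at the character level, which sidesteps Chinese word segmentation; the `beta` weighting here is an assumption, since the paper does not state it:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of sequences a and b (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l(candidate, reference, beta=1.0):
    """ROUGE-L F-measure between a candidate and a reference title.

    Passing Chinese strings directly treats each character as a token,
    matching a character-based tokenization scheme.
    """
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate)
    recall = lcs / len(reference)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```

For example, `rouge_l("模型标题生成", "模型标题生成")` returns 1.0, and scores fall toward 0 as the candidate shares fewer characters, in order, with the reference.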
Source:
CCF Transactions on Pervasive Computing and Interaction
ISSN: 2524-521X
Year: 2025
Issue: 1
Volume: 7
Page: 87-96
ESI Highly Cited Papers on the List: 0