Abstract:
Prompt-based pre-trained language models (PLMs) have demonstrated superior performance on a wide variety of downstream tasks. In particular, prompt tuning significantly outperforms traditional fine-tuning in zero-shot and few-shot learning scenarios. The core idea of prompt tuning is to convert different downstream tasks into masked language modeling problems through prompts, which bridges the gap between pre-training tasks and downstream tasks for better results. The verbalizer, an important component of prompt tuning, largely determines the final performance of the model, yet verbalizer design for Chinese has not been fully explored. In this paper, we propose a method to expand the verbalizer by extracting knowledge from the training set for Chinese text classification tasks. In brief, we first segment the Chinese training set, then filter for words that express the semantics of the labels by semantic similarity, and finally add them to the verbalizer. Extensive experimental results on multiple text classification datasets show that our approach significantly outperforms ordinary prompt tuning as well as other methods for constructing the verbalizer. © 2023 SPIE.
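The segment-filter-expand pipeline described in the abstract can be pictured with a short sketch. This is not the authors' code: it assumes jieba for Chinese word segmentation, a generic pretrained word-embedding table (`word_vectors`), and an illustrative cosine-similarity threshold of 0.6, none of which are specified in the abstract.

```python
# Minimal sketch of the verbalizer-expansion idea in the abstract
# (segment -> filter by semantic similarity -> add to verbalizer).
# Hypothetical choices, not from the paper: jieba for segmentation,
# a static word-embedding table `word_vectors`, threshold 0.6.
import jieba
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def expand_verbalizer(train_texts, label_words, word_vectors, threshold=0.6):
    """Grow each label's verbalizer entry with training-set words whose
    embedding is close to the embedding of the label word itself."""
    verbalizer = {label: {label} for label in label_words}
    label_vecs = {l: word_vectors[l] for l in label_words if l in word_vectors}
    for text in train_texts:
        for word in jieba.lcut(text):                 # 1) segment the Chinese text
            if word not in word_vectors:
                continue                              # skip out-of-vocabulary tokens
            for label, l_vec in label_vecs.items():
                if cosine(word_vectors[word], l_vec) >= threshold:  # 2) semantic filter
                    verbalizer[label].add(word)       # 3) expand the verbalizer
    return verbalizer
```

In an actual prompt-tuning setup, the expanded word sets would then be mapped to label scores through the PLM's masked-token logits; that step, and any tuning of the threshold or choice of embeddings, is omitted from this sketch.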
Source: Proceedings of SPIE
ISSN: 0277-786X
Year: 2023
Volume: 12645
Language: English
ESI Highly Cited Papers on the List: 0