Abstract:
Effective sentiment analysis of social media data can help to better understand the public's sentiments and opinion tendencies. Combining multimodal content for sentiment classification exploits the correlated information across modalities, avoiding the situation in which a single modality fails to capture the overall sentiment. This paper proposes a multimodal sentiment recognition model based on the attention mechanism. Through transfer learning, the latest pre-trained models are used to extract preliminary features from text and images, and an attention mechanism is deployed for further feature extraction of salient image regions and text keywords, better mining the internal information of each modality and learning the interactions between modalities. In view of the differing contributions of the modalities to sentiment classification, a decision-level fusion method is proposed that designs fusion rules to integrate the classification results of each modality into the final sentiment recognition result. The model integrates the various unimodal features well and effectively mines the emotional information expressed in Internet social media comments. The method is evaluated experimentally on a Twitter dataset, and the results show that the sentiment recognition accuracy is significantly improved compared with single-modality methods. © 2022 ACM.
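As a minimal illustrative sketch of the decision-level fusion idea described in the abstract, the snippet below combines the softmax outputs of two unimodal sentiment classifiers (text and image) using learned per-modality weights. The weighted-averaging fusion rule, the class names, and the `DecisionLevelFusion` module are assumptions for illustration; the paper's actual fusion rules are not given in this record.

```python
# Sketch of decision-level fusion for multimodal sentiment recognition.
# Assumes two unimodal heads already produce class logits; the learned
# weights model each modality's contribution to the final decision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecisionLevelFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable weight per modality (text, image), normalized
        # with a softmax so they sum to 1.
        self.modality_logits = nn.Parameter(torch.zeros(2))

    def forward(self, text_logits: torch.Tensor,
                image_logits: torch.Tensor) -> torch.Tensor:
        # Per-modality class probabilities (the unimodal decisions).
        p_text = F.softmax(text_logits, dim=-1)
        p_image = F.softmax(image_logits, dim=-1)
        # Normalized fusion weights reflecting modality contributions.
        w = F.softmax(self.modality_logits, dim=0)
        # Weighted combination of the unimodal decisions.
        return w[0] * p_text + w[1] * p_image

# Usage with hypothetical unimodal outputs: batch of 4 posts,
# 3 sentiment classes (e.g. negative / neutral / positive).
fusion = DecisionLevelFusion()
text_logits = torch.randn(4, 3)
image_logits = torch.randn(4, 3)
probs = fusion(text_logits, image_logits)
pred = probs.argmax(dim=-1)  # final fused sentiment labels
```

Fusing at the probability level rather than concatenating features keeps each modality's classifier independent, which matches the abstract's motivation that the modalities contribute unequally to the final sentiment decision.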
Year: 2022
Page: 213-220
Language: English
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0