Abstract:
Attention networks have achieved great success in sentiment analysis in recent years, encouraging a growing number of studies investigating what aspects of sentiment information they can learn. However, most recent studies focus on variations in word-vector representations and network outputs. In this paper, we focus on analyzing attention mechanisms in the context of sentiment analysis. We hypothesize that in the humor-level evaluation task, attention mechanisms pay more attention to funny words when assessing humor level. By exploring the attention distribution patterns, we find that attention tends to be spread over contextual information rather than concentrated on only a few words. We further show that during training, the attention patterns assign more and more weight to nouns, adjectives, and verbs at first but less in the later epochs, which means that attention mechanisms do pay more attention in the early epochs to words with obvious surface meaning but distribute more weight to contextual information in the end. © 2022 IEEE.
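The kind of analysis the abstract describes can be illustrated with a minimal sketch: given per-token attention weights extracted from a model and the tokens' part-of-speech tags, sum the attention mass per POS category and track how it shifts across epochs. The tokens, tags, and weights below are illustrative assumptions, not data from the paper.

```python
# Hypothetical sketch: aggregate attention weight by part of speech for one
# sentence, assuming per-token attention weights have already been extracted
# from a trained attention model (all values here are made up for illustration).
from collections import defaultdict

tokens = ["the", "joke", "was", "hilariously", "bad"]
pos_tags = ["DET", "NOUN", "VERB", "ADV", "ADJ"]    # assumed POS tagging
attention = [0.05, 0.30, 0.10, 0.25, 0.30]          # assumed attention weights (sum to 1)

def attention_mass_by_pos(attn, tags):
    """Sum attention weight per POS category for a single example."""
    mass = defaultdict(float)
    for weight, tag in zip(attn, tags):
        mass[tag] += weight
    return dict(mass)

print(attention_mass_by_pos(attention, pos_tags))
# e.g. {'DET': 0.05, 'NOUN': 0.3, 'VERB': 0.1, 'ADV': 0.25, 'ADJ': 0.3}
```

Recording such per-POS attention mass after each training epoch would reveal the trend reported in the abstract: weight concentrating on nouns, adjectives, and verbs early on, then spreading toward contextual tokens later.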
Year: 2022
Page: 178-185
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0