Abstract:
Due to the significant intra-class differences and inter-class similarities inherent in facial expressions, distinguishing highly similar expressions such as disgust, fear, and sadness remains one of the main challenges in facial expression recognition (FER). To address this challenge, we propose a novel Rejected Domain Attention Network (RDANet), specifically designed to distinguish these easily confused facial expressions. We aggregate the correlations between facial expressions and the regional features generated by convolutional neural networks, and embed them into a compact interval. Adaptive weight adjustment is performed on the features in this interval, with a strict penalty mechanism added. Using the correlation scores, we identify anti-correlated regions and classify them as the rejected domain, with the remaining regions forming the supported domain. RDANet selects important rejected-domain feature elements and assigns them higher weights than supported-domain features. An optimized contrastive loss achieves a dynamic balance between intra-class clustering and inter-class separation, and regularization constraints make the model converge faster. Experimental results verify that our method achieves superior performance in recognizing easily confused facial expressions.
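The core mechanism the abstract describes, splitting regions into a rejected domain (anti-correlated with the expression) and a supported domain by their correlation scores, then giving rejected-domain features higher attention weight, can be illustrated with a minimal NumPy sketch. This is an assumed toy formulation, not the paper's implementation: the function name `rejected_domain_attention`, the softmax-style weighting, and the `reject_boost` factor are all hypothetical choices for illustration.

```python
import numpy as np

def rejected_domain_attention(region_feats, corr_scores, reject_boost=2.0):
    """Hypothetical sketch of rejected-domain weighting.

    region_feats: (R, D) array of R regional feature vectors.
    corr_scores:  (R,) correlation of each region with the expression.
    Regions with negative correlation form the rejected domain; the
    rest form the supported domain. Rejected-domain features receive
    a higher attention weight (here, a simple multiplicative boost).
    """
    rejected = corr_scores < 0.0                 # anti-correlated regions
    attn = np.exp(np.abs(corr_scores))           # base attention from |score|
    attn = attn / attn.sum()
    attn = np.where(rejected, attn * reject_boost, attn)  # up-weight rejected
    attn = attn / attn.sum()                     # renormalise to a distribution
    pooled = (attn[:, None] * region_feats).sum(axis=0)   # weighted pooling
    return pooled, attn, rejected
```

Under this toy weighting, an anti-correlated region ends up with a larger attention weight than a comparably scored positively correlated one, mirroring the abstract's claim that rejected-domain features are weighted above supported-domain ones.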
Source:
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024
ISSN: 1520-6149
Year: 2024
Page: 3495-3499
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0