Abstract:
Emotion recognition based on facial expressions suffers from low accuracy and doubtful reliability because of the existence of fake expressions. In this paper, a method is proposed to recognize fake emotions based on multiple visual cues generated from a single information source, including facial expressions, eye states and physiological signals captured from video. An algorithm based on a graph neural network is used to extract spatial- and spectral-domain features from facial images for facial expression recognition. A model-based method is used to decompose RGB signals into heart rates. A deep model, trained on a labeled dataset that we created, is used to segment the eye region. After the signals are extracted from video, different fusion strategies are applied to evaluate emotion recognition performance based on the multiple signals. In the experiments, the CK+, TFEID, JAFFE, RAF, PURE and ESLD datasets are used to measure the accuracy of facial expression recognition, heart rate detection and eye region segmentation. The results show that multimodality is effective in improving accuracy and that the eye state can be considered a cue for trusted emotion recognition. Compared with methods based on the eye state alone and on noncontact physiological signals alone, multimodality improves accuracy by 26.19% and 9.52%, respectively.
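The abstract does not name the specific model-based decomposition used to recover heart rates from the RGB signals. The sketch below is only a minimal, generic illustration of how a heart rate can be estimated from frame-averaged RGB traces (green channel, band-pass filter, FFT peak); the function name, filter order and frequency band are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of remote heart-rate estimation from frame-averaged RGB
# traces. Generic illustration only (green channel + band-pass + FFT peak);
# all names and parameters are assumptions, not the paper's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def estimate_heart_rate(rgb_means: np.ndarray, fps: float) -> float:
    """rgb_means: (T, 3) array of per-frame mean R, G, B values over a facial
    region of interest; fps: video frame rate. Returns heart rate in beats/min."""
    # The green channel usually carries the strongest pulse component.
    pulse = detrend(rgb_means[:, 1])

    # Band-pass to the plausible heart-rate band (~42-180 bpm -> 0.7-3.0 Hz).
    b, a = butter(3, [0.7, 3.0], btype="band", fs=fps)
    pulse = filtfilt(b, a, pulse)

    # Locate the dominant frequency via FFT and convert it to beats per minute.
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz

# Example: 10 s of 30-fps frame-averaged RGB values (placeholder data).
# hr = estimate_heart_rate(np.random.rand(300, 3), fps=30.0)
```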
Source:
EXPERT SYSTEMS WITH APPLICATIONS
ISSN: 0957-4174
Year: 2023
Volume: 233
Impact Factor: 8.500 (JCR@2022)
ESI Discipline: ENGINEERING
ESI HC Threshold: 19
Cited Count:
WoS CC Cited Count: 11
SCOPUS Cited Count: 16
ESI Highly Cited Papers on the List: 0