Abstract:
Electroencephalogram (EEG)-based emotion recognition has become a focus of brain–computer interface research. However, differences in EEG signals across subjects can lead to poor generalization. Moreover, current approaches extract temporal and spatial information separately, resulting in inadequate feature fusion. This study develops a novel ChannelMix-based transformer and convolutional multi-view feature fusion network (CMTCF) to enhance cross-subject EEG emotion recognition. Specifically, a bi-directional fusion module based on a convolutional neural network (CNN)–Transformer structure is introduced to extract multi-view spatial and temporal features, enabling the representation of rich spatiotemporal information. Subsequently, the ChannelMix module is designed to establish an intermediate domain, facilitating the alignment of the target and source domains and reducing their discrepancies. Additionally, a soft pseudo-label module is implemented to enhance the discriminative power of target-domain data within the feature space. To further improve generalization, a ChannelMix-based data augmentation method is utilized. Comprehensive experiments are conducted on the SEED, SEED-IV, and SEED-VII benchmark datasets, achieving recognition accuracies of 93.80% (±4.96), 79.37% (±6.05), and 49.13% (±8.22), respectively, demonstrating that the CMTCF network achieves competitive results in cross-subject EEG emotion recognition tasks. © 2025 Elsevier Ltd
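The ChannelMix intermediate-domain idea described in the abstract can be sketched as mixing a subset of EEG channels from a target-domain sample into a source-domain sample. The function name, mixing ratio, and label-weighting scheme below are illustrative assumptions for one plausible channel-level mixing scheme, not the paper's exact formulation.

```python
import numpy as np

def channel_mix(x_src, x_tgt, mix_ratio=0.5, rng=None):
    """Hypothetical ChannelMix-style augmentation sketch.

    Replaces a random subset of EEG channels in a source-domain sample
    with the corresponding channels of a target-domain sample, yielding
    an intermediate-domain sample plus a soft-label weight.

    x_src, x_tgt: arrays of shape (n_channels, n_timesteps).
    """
    rng = np.random.default_rng(rng)
    n_channels = x_src.shape[0]
    n_mix = int(round(mix_ratio * n_channels))
    # Pick which channels to take from the target-domain sample.
    idx = rng.choice(n_channels, size=n_mix, replace=False)
    x_mixed = x_src.copy()
    x_mixed[idx] = x_tgt[idx]
    # Soft-label weight: fraction of channels still from the source sample.
    lam = 1.0 - n_mix / n_channels
    return x_mixed, lam
```

Mixing at the channel level (rather than interpolating whole signals, as in standard mixup) keeps each channel's waveform physiologically intact while still creating samples that lie between the two domains.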
Source:
Expert Systems with Applications
ISSN: 0957-4174
Year: 2025
Volume: 280
8.500 (JCR@2022)
ESI Highly Cited Papers on the List: 0