Abstract:
Research on brain-computer interfaces (BCIs) can identify which limb a subject imagines moving by decoding physiological brain signals. The features extracted by traditional electroencephalography (EEG)-based decoding methods are relatively homogeneous and limited, which constrains the decoding performance of classification methods. To solve this problem, we propose a complementary feature fusion network (CFFNet) based on EEG and functional near-infrared spectroscopy (fNIRS) signals, since the two modalities carry complementary information. CFFNet integrates a feature extraction block, a feature selection block, and a complementary feature fusion block for shared-specific feature learning, which enables it to effectively exploit both the shared and the modality-specific information of each modality. This approach and representative MI recognition methods were evaluated on an open multimodal dataset. Our method achieved an average accuracy of 76.45% in intra-subject experiments, which is significantly higher than single-modal classification methods and slightly higher than representative multi-modal BCI methods. Comprehensive experimental results verify the effectiveness of the proposed method, which can provide novel perspectives for multi-modal decoding. © 2024 ACM.
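The abstract names three blocks (feature extraction, feature selection, complementary fusion) built around shared and modality-specific representations, but gives no architectural details. The following is a minimal NumPy sketch of that shared/specific fusion idea only; all dimensions, the averaging of shared projections, and the linear+ReLU blocks are illustrative assumptions, not the paper's actual CFFNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extract(x, w):
    """Toy feature-extraction block: linear projection + ReLU (an assumption,
    standing in for whatever learned extractor CFFNet actually uses)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical feature dimensions (not from the paper).
d_eeg, d_fnirs, d_shared, d_spec, n_classes = 64, 32, 16, 8, 2

# Random weights stand in for trained parameters.
W_eeg_sh = rng.standard_normal((d_eeg, d_shared))
W_fn_sh = rng.standard_normal((d_fnirs, d_shared))
W_eeg_sp = rng.standard_normal((d_eeg, d_spec))
W_fn_sp = rng.standard_normal((d_fnirs, d_spec))
W_cls = rng.standard_normal((d_shared + 2 * d_spec, n_classes))

def fuse_and_classify(x_eeg, x_fnirs):
    # Shared subspace: pool the information common to EEG and fNIRS by
    # averaging the two modalities' shared projections.
    shared = 0.5 * (feature_extract(x_eeg, W_eeg_sh)
                    + feature_extract(x_fnirs, W_fn_sh))
    # Specific subspaces: keep each modality's private features separate.
    spec_eeg = feature_extract(x_eeg, W_eeg_sp)
    spec_fnirs = feature_extract(x_fnirs, W_fn_sp)
    # Complementary fusion: concatenate shared + specific, then classify.
    fused = np.concatenate([shared, spec_eeg, spec_fnirs])
    return int(np.argmax(fused @ W_cls))

pred = fuse_and_classify(rng.standard_normal(d_eeg),
                         rng.standard_normal(d_fnirs))
print(pred)
```

With trained rather than random weights, the classifier head would see both the pooled shared representation and each modality's specific features, which is one common way to realize "shared-specific" multimodal fusion.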
Year: 2024
Page: 278-282
Language: English