
Author:

Mou, L. | Zhao, Y. | Zhou, C. | Nakisa, B. | Rastgoo, M.N. | Ma, L. | Huang, T. | Yin, B. | Jain, R. | Gao, W.

Indexed by:

EI Scopus SCIE

Abstract:

Negative emotions may induce dangerous driving behaviors that lead to extremely serious traffic accidents. It is therefore necessary to build a system that automatically recognizes driver emotions so that actions can be taken to avoid accidents. Existing studies on driver emotion recognition have mainly used facial and physiological data; fewer have examined multimodal data that capture the contextual characteristics of driving. In addition, fully fusing multimodal data in the feature fusion layer to improve emotion recognition performance remains a challenge. To this end, we propose a novel multimodal fusion framework for driver emotion recognition based on a convolutional long short-term memory network (ConvLSTM) and a hybrid attention mechanism, which fuses non-invasive multimodal data on the eye, the vehicle, and the environment. To verify the effectiveness of the proposed method, extensive experiments were carried out on a dataset collected with an advanced driving simulator, and the results demonstrate the method's effectiveness. Finally, a preliminary exploration of the correlation between driver emotion and stress is performed.
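The attention-based fusion step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (which fuses ConvLSTM feature maps with a hybrid attention mechanism); here each modality is reduced to a plain feature vector, and the scoring parameters `w` and `b` are illustrative placeholders. The idea shown is only the core of attention fusion: score each modality, normalize the scores with a softmax, and build the fused representation as an attention-weighted sum.

```python
import math
from typing import Dict, List, Tuple

def softmax(scores: List[float]) -> List[float]:
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fusion(
    features: Dict[str, List[float]],  # modality name -> feature vector
    w: List[float],                    # illustrative scoring weights
    b: float,                          # illustrative scoring bias
) -> Tuple[List[float], Dict[str, float]]:
    """Fuse per-modality feature vectors via softmax attention weights.

    Each modality gets a scalar relevance score (a linear function of its
    features); the softmax turns scores into attention weights; the fused
    vector is the weighted sum of the modality features.
    """
    names = sorted(features)
    scores = [sum(f * wi for f, wi in zip(features[n], w)) + b for n in names]
    alpha = softmax(scores)
    d = len(w)
    fused = [
        sum(a * features[n][j] for a, n in zip(alpha, names))
        for j in range(d)
    ]
    return fused, dict(zip(names, alpha))

# Hypothetical 3-dimensional features for the three modalities in the paper.
feats = {
    "eye": [1.0, 0.0, 2.0],
    "vehicle": [0.5, 1.0, 0.0],
    "environment": [0.0, 0.5, 1.0],
}
fused, alpha = attention_fusion(feats, w=[0.1, 0.2, 0.3], b=0.0)
```

The attention weights in `alpha` always sum to one, so the fused vector stays in the same scale as the inputs while emphasizing whichever modality scores highest.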

Keyword:

Attention mechanism; convolutional long short-term memory; Feature extraction; driver stress; multimodal fusion; Anxiety disorders; Physiology; Data mining; Vehicles; Emotion recognition; Accidents; driver emotion recognition

Author Community:

  • [ 1 ] [Mou L.]Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Beijing Institute of Artificial Intelligence, Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 2 ] [Zhao Y.]Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Beijing Institute of Artificial Intelligence, Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 3 ] [Zhou C.]Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Beijing Institute of Artificial Intelligence, Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 4 ] [Nakisa B.]School of Information Technology, Faculty of Science Engineering and Built Environment, Deakin University, Waurn Ponds, VIC, Australia
  • [ 5 ] [Rastgoo M.N.]School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, QLD, Australia
  • [ 6 ] [Ma L.]Beijing Academy of Artificial Intelligence, Beijing, China
  • [ 7 ] [Huang T.]Beijing Academy of Artificial Intelligence, Beijing, China
  • [ 8 ] [Yin B.]Beijing Key Laboratory of Multimedia and Intelligent Software Technology, Beijing Institute of Artificial Intelligence, Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 9 ] [Jain R.]Institute for Future Health, Bren School of Information and Computer Sciences, University of California, Irvine, CA, USA
  • [ 10 ] [Gao W.]Institute of Digital Media, Peking University, Beijing, China

Reprint Author's Address:

Email:


Related Keywords:

Source :

IEEE Transactions on Affective Computing

ISSN: 1949-3045

Year: 2023

Issue: 4

Volume: 14

Page: 1-12

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 19

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 33

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 3

Affiliated Colleges:
