Author:

Wang, Chunxi | Jia, Maoshen | Zhang, Yanyan | Li, Lu

Indexed by:

EI

Abstract:

In recent years, speaker-independent monaural speech separation methods have made great progress with the development of deep neural networks (DNNs). In the automotive setting, however, computational resources are limited and the cockpit sound field is more complex, which poses significant challenges for speech separation. To address these challenges, this paper proposes a parallel-path transformer model. The model employs a parallel processing strategy that combines improved feed-forward networks with transformer modules. It applies an intra-chunk transformer and an inter-chunk transformer to the input sequences simultaneously, avoiding the need to implicitly model intermediate states. This approach lets the model perform local and global modeling of the speech signal in parallel, capturing both short- and long-term dependencies within the speech sequences and thereby improving modeling performance. Experimental results on the WSJ0-2Mix and WHAM! datasets demonstrate that the proposed model achieves excellent speech separation performance while keeping the parameter count and computational complexity small. © 2023 IEEE.
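
To make the parallel-path idea concrete, here is a minimal PyTorch sketch of one such block. It is an illustration under stated assumptions, not the authors' implementation: the class name ParallelPathBlock, the tensor layout (batch, chunks, chunk length, features), the use of stock nn.TransformerEncoderLayer modules, and merging the two paths by summation are all choices made for this sketch.

```python
import torch
import torch.nn as nn

class ParallelPathBlock(nn.Module):
    """One parallel-path block (illustrative): an intra-chunk and an
    inter-chunk transformer are applied to the same input, then merged."""

    def __init__(self, feat_dim: int, n_heads: int = 4, ffn_dim: int = 256):
        super().__init__()
        self.intra = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads,
            dim_feedforward=ffn_dim, batch_first=True)
        self.inter = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads,
            dim_feedforward=ffn_dim, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_chunks, chunk_len, feat_dim)
        b, s, k, d = x.shape
        # Intra-chunk path: attention within each chunk models local,
        # short-term dependencies.
        intra = self.intra(x.reshape(b * s, k, d)).reshape(b, s, k, d)
        # Inter-chunk path: attention across chunks at each in-chunk
        # position models global, long-term dependencies. It reads the
        # original input, not the intra-path output, so the two paths
        # run in parallel with no intermediate state between them.
        inter = self.inter(
            x.transpose(1, 2).reshape(b * k, s, d)
        ).reshape(b, k, s, d).transpose(1, 2)
        # Merge the paths (summation is one simple choice) with a
        # residual connection and layer normalization.
        return self.norm(x + intra + inter)

# Usage on a dummy segmented mixture representation.
x = torch.randn(2, 10, 50, 64)       # (batch, chunks, chunk length, features)
y = ParallelPathBlock(feat_dim=64)(x)
print(y.shape)                        # torch.Size([2, 10, 50, 64])
```

Because both attention paths read the same input rather than feeding one into the other, local and global modeling can run concurrently, which is what removes the sequential dependence on intermediate states that a conventional dual-path block introduces.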

Keyword:

Time domain analysis; Deep neural networks; Acoustic fields; Complex networks; Speech analysis; Source separation

Author Community:

  • [1] [Wang, Chunxi] Beijing University of Technology, Faculty of Information Technology, Beijing, China
  • [2] [Jia, Maoshen] Beijing University of Technology, Faculty of Information Technology, Beijing, China
  • [3] [Zhang, Yanyan] Beijing University of Technology, Faculty of Information Technology, Beijing, China
  • [4] [Li, Lu] Beijing University of Technology, Faculty of Information Technology, Beijing, China

Source:

Year: 2023

Page: 509-514

Language: English

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0
