Author:

Zhou, Ziyu | Lyu, Gengyu | Huang, Yiming | Wang, Zihao | Jia, Ziyu | Yang, Zhen

Indexed by:

CPCI-S

Abstract:

Transformer has gained widespread adoption in time series modeling due to the exceptional ability of its self-attention mechanism to capture long-range dependencies. However, when processing time series with numerous variates, the vanilla self-attention mechanism tends to distribute attention weights evenly and smoothly, causing row-homogenization in attention maps and in turn hampering time series forecasting. To tackle this issue, we propose an advanced Transformer architecture entitled SDformer, which introduces two novel modules, Spectral-Filter-Transform (SFT) and Dynamic-Directional-Attention (DDA), and integrates them into the Transformer encoder to achieve more concentrated attention allocation. Specifically, the SFT module uses the Fast Fourier Transform to select the most prominent frequencies, along with a Hamming window to smooth and denoise the filtered series data; the DDA module applies a specialized kernel function to the query and key vectors projected from the denoised data, concentrating attention more effectively on the most informative variates to obtain a sharper attention distribution. Together, the two modules make attention weights more salient across the numerous variates, which in turn strengthens the attention mechanism's ability to capture multivariate correlations and improves forecasting performance. Extensive experiments on public datasets demonstrate SDformer's superior performance over other state-of-the-art models. Code is available at https://github.com/zhouziyu02/SDformer.
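
For readers who want a concrete picture of the two modules, the following PyTorch sketch illustrates the abstract's description. It is a minimal reconstruction from the abstract alone, not the authors' implementation (which is at the GitHub link above): the top-k frequency cutoff, the interpretation of the Hamming window as a depthwise smoothing convolution, and the temperature-sharpened softmax standing in for the unspecified kernel function are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralFilterTransform(nn.Module):
    # Sketch of the SFT idea: keep each variate's top-k dominant frequency
    # components via the FFT, then smooth the filtered series with a
    # normalized Hamming window. Using the window as a depthwise smoothing
    # kernel is an assumed reading of the abstract, not the authors' code.
    def __init__(self, top_k: int = 8, window_len: int = 5):
        super().__init__()
        self.top_k = top_k
        win = torch.hamming_window(window_len)
        self.register_buffer("win", win / win.sum())  # unit-gain smoother

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_vars)
        spec = torch.fft.rfft(x, dim=1)               # to frequency domain
        amp = spec.abs()
        k = min(self.top_k, amp.size(1))
        idx = amp.topk(k, dim=1).indices              # dominant bins per variate
        mask = torch.zeros_like(amp).scatter_(1, idx, 1.0)
        filtered = torch.fft.irfft(spec * mask, n=x.size(1), dim=1)
        # Depthwise 1-D convolution applies the Hamming smoother per variate.
        b, t, v = filtered.shape
        kernel = self.win.expand(v, 1, -1).contiguous()
        out = F.conv1d(filtered.transpose(1, 2), kernel,
                       padding=self.win.numel() // 2, groups=v)
        return out.transpose(1, 2)[:, :t, :]


class DynamicDirectionalAttention(nn.Module):
    # Sketch of the DDA idea: attention across variate tokens with scores
    # sharpened so that weights concentrate on the most informative variates.
    # The abstract's "specialized kernel function" is not spelled out, so a
    # temperature-sharpened softmax stands in for it here -- a plain swap,
    # not the paper's actual kernel.
    def __init__(self, d_model: int, temperature: float = 0.5):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.temperature = temperature  # < 1.0 peaks the distribution

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_vars, d_model), one embedding per variate
        q, k, v = self.q_proj(tokens), self.k_proj(tokens), self.v_proj(tokens)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        attn = torch.softmax(scores / self.temperature, dim=-1)
        return attn @ v  # (batch, n_vars, d_model)


if __name__ == "__main__":
    x = torch.randn(4, 96, 7)                  # 4 series, 96 steps, 7 variates
    denoised = SpectralFilterTransform()(x)    # (4, 96, 7)
    tokens = nn.Linear(96, 64)(denoised.transpose(1, 2))  # variate tokens
    out = DynamicDirectionalAttention(64)(tokens)          # (4, 7, 64)
```

In this reading, SFT denoises each variate before tokenization, and DDA then attends across the variate tokens with a deliberately peaked distribution, which is the property the abstract credits with avoiding row-homogenized attention maps.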

Author Community:

  • [ 1 ] [Zhou, Ziyu]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 2 ] [Lyu, Gengyu]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 3 ] [Huang, Yiming]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 4 ] [Wang, Zihao]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 5 ] [Yang, Zhen]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 6 ] [Jia, Ziyu]Chinese Acad Sci, Inst Automat, Beijing, Peoples R China

Reprint Author's Address:

  • [Lyu, Gengyu]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China

Source:

PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024

Year: 2024

Page: 5689-5697
