Abstract:
As a powerful model for deep learning on graph-structured data, Graph Neural Networks (GNNs) face a scalability limitation that is receiving increasing attention. To address this limitation, two categories of scalable GNNs have been proposed: sampling-based and model-simplification methods. However, sampling-based methods suffer from high communication costs and degraded performance caused by the sampling process. Meanwhile, existing model-simplification methods rely solely on parameter-free feature propagation and disregard its spectral properties; consequently, they capture only low-frequency information while discarding valuable middle- and high-frequency information. This paper proposes Automatic Filtering Graph Neural Networks (AutoFGNN), a framework that can extract information from all frequency bands of large-scale graphs. AutoFGNN employs parameter-free low-, middle-, and high-pass filters, which extract the corresponding information for all nodes without introducing trainable parameters. To merge the extracted features, a trainable transformer-based information fusion module is used, enabling AutoFGNN to be trained in a mini-batch manner and ensuring scalability on large-scale graphs. Experimental results show that AutoFGNN outperforms existing methods on graphs of various scales. © 2024 IEEE.
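The abstract's key idea of parameter-free low-, middle-, and high-pass filtering can be illustrated with standard spectral graph operators. This is only a hedged sketch, not the paper's actual filters: it assumes the common choices of the normalized adjacency Â = D^(-1/2) A D^(-1/2) as a low-pass operator, the normalized Laplacian L = I - Â as a high-pass operator, and I - Â² (spectral response λ(2-λ), peaked at λ = 1) as a middle-pass operator; the function names are illustrative.

```python
import numpy as np

def normalized_operators(A):
    """Build the symmetrically normalized adjacency A_hat = D^{-1/2} A D^{-1/2}
    and the normalized Laplacian L = I - A_hat (eigenvalues of L lie in [0, 2])."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    L = np.eye(A.shape[0]) - A_hat
    return A_hat, L

def parameter_free_filters(A, X, k=2):
    """Propagate node features X through illustrative low-, middle-, and
    high-pass filters for k hops each; no trainable parameters are involved,
    so this step can be precomputed once before mini-batch training."""
    A_hat, L = normalized_operators(A)
    mid = np.eye(A.shape[0]) - A_hat @ A_hat  # response lambda*(2 - lambda), peaked at lambda = 1
    X_low, X_mid, X_high = X.copy(), X.copy(), X.copy()
    for _ in range(k):
        X_low = A_hat @ X_low    # low-pass: keeps smooth (low-frequency) signal
        X_mid = mid @ X_mid      # middle-pass: keeps mid-frequency signal
        X_high = L @ X_high      # high-pass: keeps differences between neighbors
    return X_low, X_mid, X_high
```

On a regular graph, a constant feature vector is purely low-frequency: the low-pass branch preserves it exactly, while the high-pass branch maps it to zero. The three filtered views would then be merged by a trainable fusion module (a transformer in the paper), which is the only component that requires gradient training.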
ISSN: 1520-6149
Year: 2024
Page: 4970-4974
Language: English