Abstract:
Attention-based models have attracted considerable enthusiasm in both natural language processing and graph processing. We propose a novel model called Graph Encoder Representations from Transformers (GERT). Inspired by the similar distributional properties of vertices in graphs and words in natural language, GERT treats vertex sequences obtained from truncated random walks as the equivalent of sentences to learn local information about vertices. GERT then combines the local information learned from random walks with the long-range dependencies captured by transformer encoders to represent latent features. Compared to other transformer models, GERT's advantages include extracting both local and global information, being applicable to homogeneous and heterogeneous networks, and being more powerful at extracting latent features. On top of GERT, we integrate convolution to extract information from local neighbors, yielding another novel model, Graph Convolution Encoder Representations from Transformers (GCERT). We demonstrate the effectiveness of the proposed models on six networks: DBLP, BlogCatalog, CiteSeerX, CoRE, Flickr, and PubMed. Evaluation results show that our models improve the F1 scores of current state-of-the-art methods by up to 10%. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
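
The sketch below illustrates, in Python, the kind of pipeline the abstract describes: truncated random walks produce vertex sequences that play the role of sentences, and a transformer encoder produces contextual vertex representations. This is only an illustrative reconstruction, not the paper's GERT implementation; the toy graph (networkx karate club), walk length, embedding size, and encoder hyperparameters are all assumed placeholders.

    # Minimal sketch, assuming PyTorch and networkx; all hyperparameters are illustrative.
    import random
    import networkx as nx
    import torch
    import torch.nn as nn

    def truncated_random_walk(graph, start, walk_length=8):
        """Sample a fixed-length (truncated) random walk starting from `start`."""
        walk = [start]
        while len(walk) < walk_length:
            neighbors = list(graph.neighbors(walk[-1]))
            if not neighbors:
                break
            walk.append(random.choice(neighbors))
        return walk

    # Toy graph and a corpus of walks: one vertex "sentence" per node.
    g = nx.karate_club_graph()
    walks = [truncated_random_walk(g, v) for v in g.nodes()]

    # Embed vertex ids and encode each walk with a transformer encoder, so local
    # co-occurrence (from the walk) and long-range dependencies (from
    # self-attention) are both captured in the vertex representations.
    num_vertices = g.number_of_nodes()
    d_model = 32
    embedding = nn.Embedding(num_vertices, d_model)
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
        num_layers=2,
    )

    batch = nn.utils.rnn.pad_sequence(
        [torch.tensor(w) for w in walks], batch_first=True, padding_value=0
    )
    vertex_repr = encoder(embedding(batch))  # (num_walks, walk_length, d_model)
    print(vertex_repr.shape)

In a full model, these per-walk representations would be pooled per vertex and fed to a downstream classifier; GCERT would additionally apply convolution over each vertex's local neighborhood, per the abstract.
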
ISSN: 0302-9743
Year: 2023
Volume: 14178 LNAI
Page: 199-213
Language: English
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0