Abstract:
A temporal knowledge graph is a structured semantic knowledge base containing numerous quadruple facts that change over time. Inferring missing facts is one of the main challenges with temporal knowledge graphs, a task known as temporal knowledge graph completion (TKGC). The Transformer has strong modeling ability across a variety of domains because its self-attention mechanism can capture the global dependencies of input sequences, yet few studies have explored Transformer encoders for TKGC tasks. To address this problem, we propose a novel end-to-end TKGC model named Transbe-TuckERTT that adopts an encoder-decoder architecture. Specifically, the proposed model employs a Transformer-based encoder to facilitate interaction between the entities, relations, and temporal information within a quadruple, generating highly expressive embeddings. The TuckERTT decoder then uses the encoded embeddings to predict missing facts in the knowledge graph. Experimental results demonstrate that our proposed model outperforms several state-of-the-art TKGC methods on three public benchmark datasets, verifying the effectiveness of the self-attention mechanism in the Transformer-based encoder for capturing dependencies in the temporal knowledge graph. © 2023 IEEE.
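For intuition only, below is a minimal, hypothetical PyTorch sketch of the architecture the abstract describes: a Transformer encoder attends over the subject, relation, and timestamp embeddings of a quadruple, and a Tucker-style decoder scores candidate object entities. The class name, dimensions, and the exact form of the TuckERTT scoring function are assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn

class TransformerTuckerTKGC(nn.Module):
    # Hypothetical sketch assumed from the abstract: encode (subject, relation, timestamp)
    # with self-attention, then decode with a Tucker-style 4-way contraction.
    def __init__(self, n_entities, n_relations, n_timestamps, dim=32, n_heads=4, n_layers=2):
        super().__init__()
        self.ent_emb = nn.Embedding(n_entities, dim)
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.time_emb = nn.Embedding(n_timestamps, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # 4th-order core tensor over (subject, relation, timestamp, object) embeddings.
        self.core = nn.Parameter(torch.randn(dim, dim, dim, dim) * 0.01)

    def forward(self, subj, rel, time):
        # Treat the known elements of a quadruple as a 3-token sequence so
        # self-attention can model their mutual dependencies before decoding.
        seq = torch.stack([self.ent_emb(subj), self.rel_emb(rel), self.time_emb(time)], dim=1)
        h = self.encoder(seq)                                   # (batch, 3, dim)
        s, r, t = h[:, 0], h[:, 1], h[:, 2]
        # Tucker-style contraction, then score every candidate object entity.
        w = torch.einsum('ijkl,bi,bj,bk->bl', self.core, s, r, t)
        return w @ self.ent_emb.weight.t()                      # (batch, n_entities)

# Usage: score all candidate objects for a small random batch of incomplete quadruples.
model = TransformerTuckerTKGC(n_entities=100, n_relations=20, n_timestamps=50)
scores = model(torch.tensor([3, 7]), torch.tensor([1, 4]), torch.tensor([10, 42]))
print(scores.shape)  # torch.Size([2, 100])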
Year: 2023
Page: 443-448
Language: English
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0