Abstract:
Studies on graph contrastive learning, an effective form of self-supervision, have achieved excellent experimental performance. Most existing methods generate two augmented views and then learn features from them by maximizing semantic consistency between the views. Nevertheless, it remains challenging to generate optimal views that facilitate constructing graphs which reveal the essential association relations among nodes. Since excessively high mutual information between views tends to harm model training, a good remedy is to constrain the graph data augmentation process. This paper proposes two constraint principles, low dissimilarity priority (LDP) and mutual exclusion (ME), to reduce the mutual information between views and compress its redundant parts. The LDP principle reduces the mutual information between views at the global scale, whereas the ME principle reduces it at the local scale; the two principles operate at opposite scales and suit different situations. Without loss of generality, the two principles are applied to two well-performing graph contrastive methods, GraphCL and GCA, and experimental results on 20 public benchmark datasets show that models equipped with the proposed constraint principles achieve higher recognition accuracy.
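The "semantic consistency maximization" between two augmented views that the abstract refers to is, in GraphCL/GCA-style methods, typically realized as an InfoNCE/NT-Xent contrastive objective. Below is a minimal, self-contained PyTorch sketch of that generic two-view objective, for orientation only: the simplified cross-view formulation, the function name nt_xent, the temperature tau, and the random stand-in embeddings are illustrative assumptions, not the paper's LDP/ME-constrained pipeline.

```python
# Minimal sketch: generic two-view contrastive objective (InfoNCE/NT-Xent
# style) of the kind GraphCL/GCA-like methods optimize. Hypothetical and
# simplified: negatives come only from the other view's batch.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: [N, d] embeddings of the same N nodes (or graphs) under two
    augmented views; row i of z1 and row i of z2 form a positive pair."""
    z1 = F.normalize(z1, dim=1)           # unit norm -> dot product = cosine
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # [N, N] cross-view similarity matrix
    labels = torch.arange(z1.size(0))     # positives sit on the diagonal
    # Symmetrized InfoNCE: each view must retrieve its counterpart.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Usage with random stand-in embeddings; a GNN encoder applied to two
# augmented graph views would normally produce z1 and z2.
torch.manual_seed(0)
z1 = torch.randn(8, 16, requires_grad=True)
z2 = torch.randn(8, 16, requires_grad=True)
loss = nt_xent(z1, z2)
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")
```

The paper's contribution sits upstream of such an objective: LDP and ME constrain the augmentations that generate the two views, reducing their mutual information before a loss of this kind is applied.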
Source:
NEURAL PROCESSING LETTERS
ISSN: 1370-4621
Year: 2023
Issue: 8
Volume: 55
Page: 10705-10726
Impact Factor: 3.100 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 19
Cited Count:
WoS CC Cited Count: 2
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0