Abstract:
Node classification is an important task for graph neural networks, but most existing studies assume that the samples of different classes are balanced. In practice, class imbalance is widespread and can seriously degrade model performance, so reducing the adverse effect of imbalanced datasets on training is crucial. Therefore, we construct a new loss function, FD-Loss, following the traditional algorithm-level approach to the imbalance problem. First, we propose a sample mismeasurement distance to filter out edge-hard samples and simple samples based on their distribution. Then, weight coefficients are defined from the mismeasurement distance and used as the weighting term of the loss function, so that the loss focuses only on valuable samples. Experiments on several benchmarks demonstrate that our loss function effectively alleviates the node imbalance problem and improves classification accuracy by 4% over existing methods on the node classification task. © 2022 Association for Computing Machinery.
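The abstract does not give the exact form of the mismeasurement distance or the weighting term, so the following is only a minimal sketch of the general idea: a per-sample distance is assumed to be the gap between the predicted probability of the true class and 1, samples whose distance marks them as too simple or as edge-hard are filtered out, and the remaining samples are weighted by that distance inside a cross-entropy loss. The thresholds `low` and `high` and the function name `fd_loss_sketch` are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of an FD-Loss-style weighted cross-entropy.
# The actual mismeasurement distance and thresholds are not specified
# in the abstract; here the distance is assumed to be 1 - p_true.

import torch
import torch.nn.functional as F

def fd_loss_sketch(logits, labels, low=0.05, high=0.95):
    """Weighted cross-entropy whose per-sample weights follow an assumed
    'mismeasurement distance' (1 minus the probability of the true class).

    Samples with a very small distance (simple) or a very large distance
    (edge-hard) are masked out, so only the remaining 'valuable' samples
    contribute to the loss, mirroring the filtering described above.
    """
    probs = F.softmax(logits, dim=-1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    dist = 1.0 - p_true                      # assumed mismeasurement distance
    mask = (dist > low) & (dist < high)      # filter simple and edge-hard samples
    weights = dist * mask.float()            # weighting term built from the distance
    ce = F.cross_entropy(logits, labels, reduction="none")
    denom = weights.sum().clamp(min=1e-8)
    return (weights * ce).sum() / denom
```

In a node-classification setting, `logits` would be the GNN output for the labeled nodes and `labels` their class indices; the normalization by the weight sum keeps the loss scale comparable when many samples are filtered out.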
Year: 2022
Page: 1957-1962
Language: English
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0