Abstract:
The graph few-shot class incremental learning (GFSCIL) task incrementally acquires new knowledge from non-stationary data streams with limited training samples. It faces two distinct challenges: overfitting, which occurs when the network aligns too closely with the characteristics of the few-shot samples and fails to capture the underlying class patterns; and catastrophic forgetting, where newly introduced information gradually interferes with previously learned knowledge. Prototype-based methods tackle these challenges by clustering in the metric space to acquire class prototypes, unlike traditional class incremental learning methods, which modify model parameters or impose regularization constraints. We propose the uncertainty-guided recurrent prototype distillation network (URPD) for GFSCIL to address these challenges. URPD comprises two key components: a recurrent prototype representation (RPR) module and a generated distillation (GD) module. The RPR module tackles the overfitting issue by generating recurrent class prototypes based on an uncertainty selection scheme applied to unlabeled nodes. The GD module mitigates the catastrophic forgetting issue through a generative distillation scheme, which distills old knowledge based not only on current nodes but also on generated replay nodes. Essentially, URPD improves traditional prototype-based methods by learning debiased class prototypes with richer knowledge induced from unlabeled nodes. Experimental results demonstrate that URPD outperforms current state-of-the-art methods by margins ranging from 0.95% to 6.46% on the Cora-Full, Cora-ML, Flickr, and Amazon datasets.
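This record carries no implementation details beyond the abstract, so the following is only a minimal sketch of the two ideas the abstract names: predictive entropy as an uncertainty score for selecting unlabeled nodes when refining class prototypes (the RPR idea), and a temperature-scaled KL distillation loss applied to both current and generated replay nodes (the GD idea). The function names (entropy, refine_prototypes, distill_loss), the top_k selection rule, and the exact KL form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    # Predictive entropy as an uncertainty score (lower = more confident).
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def refine_prototypes(protos, emb_u, logits_u, top_k=5):
    """One refinement pass: fold confident unlabeled nodes into each prototype.

    protos:   (C, D) current class prototypes
    emb_u:    (N, D) embeddings of unlabeled nodes
    logits_u: (N, C) class logits for the unlabeled nodes
    """
    unc = entropy(logits_u)            # (N,) uncertainty per unlabeled node
    pseudo = logits_u.argmax(dim=-1)   # pseudo-labels from the current model
    new_protos = protos.clone()
    for c in range(protos.size(0)):
        idx = (pseudo == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        # Keep only the top_k least-uncertain nodes predicted as class c.
        keep = idx[unc[idx].argsort()[:top_k]]
        # Average the selected nodes with the old prototype.
        new_protos[c] = torch.cat([protos[c:c+1], emb_u[keep]], dim=0).mean(dim=0)
    return new_protos

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Temperature-softened KL divergence between the frozen old model
    # (teacher) and the current model (student); in a GD-style scheme it
    # would be computed over current nodes plus generated replay nodes.
    log_p = F.log_softmax(student_logits / T, dim=-1)
    q = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

# Toy usage with random tensors (shapes only; not the paper's data).
torch.manual_seed(0)
protos = torch.randn(4, 16)            # 4 classes, 16-dim embeddings
emb_u = torch.randn(50, 16)            # 50 unlabeled nodes
logits_u = emb_u @ torch.randn(16, 4)  # stand-in classifier outputs
protos = refine_prototypes(protos, emb_u, logits_u)
```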
Source:
MULTIMEDIA SYSTEMS
ISSN: 0942-4962
Year: 2025
Issue: 3
Volume: 31
Impact Factor: 3.900 (JCR@2022)
ESI Highly Cited Papers on the List: 0