Abstract:
Incremental learning aims to train a model on a sequence of tasks while preserving previously learned knowledge, and catastrophic forgetting is the central, widely studied obstacle. To address this problem, we design a multi-level knowledge distillation framework (MLKD) that combines coarse-grained and fine-grained distillation to effectively retain past knowledge. The coarse-grained distillation constrains the model to preserve the neighborhood relationships among samples, while the fine-grained distillation preserves the activation logits within each sample. Through this multi-level knowledge distillation, we can learn more robust incremental learning models. To assess the efficacy of MLKD, we conduct experiments on two popular incremental learning benchmarks (CIFAR100 and Mini-ImageNet), and our approach performs well on both.
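The abstract only outlines the two distillation levels, so the following PyTorch sketch is an assumption about one plausible instantiation, not the paper's exact formulation: fine-grained distillation is written as standard temperature-scaled logit KD against the frozen old-task model, and coarse-grained distillation as matching pairwise cosine-similarity (neighborhood) structure within a batch; the loss weights alpha and beta are hypothetical.

```python
import torch
import torch.nn.functional as F

def fine_grained_kd(student_logits, teacher_logits, T=2.0):
    """Per-sample logit distillation: KL divergence between softened
    teacher (old model) and student class distributions."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def coarse_grained_kd(student_feats, teacher_feats):
    """Relational distillation: match the batch's pairwise cosine-similarity
    structure so neighborhood relationships among samples are preserved."""
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    sim_s = s @ s.t()  # student sample-to-sample similarities
    sim_t = t @ t.t()  # old-model similarities
    # Compare row-wise similarity distributions with KL divergence.
    return F.kl_div(F.log_softmax(sim_s, dim=1),
                    F.softmax(sim_t, dim=1),
                    reduction="batchmean")

def mlkd_loss(student_logits, teacher_logits, student_feats, teacher_feats,
              labels, alpha=1.0, beta=1.0):
    """Total loss for the current task: cross-entropy on new data plus
    coarse-grained and fine-grained distillation terms."""
    ce = F.cross_entropy(student_logits, labels)
    return (ce
            + alpha * coarse_grained_kd(student_feats, teacher_feats)
            + beta * fine_grained_kd(student_logits, teacher_logits))
```

In this sketch the old-task model is kept frozen and queried on each new-task batch to provide teacher_logits and teacher_feats, so both terms act purely as regularizers on the student being trained.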
Source:
COMPUTER ANIMATION AND SOCIAL AGENTS, CASA 2024, PT I
ISSN: 1865-0929
Year: 2025
Volume: 2374
Page: 290-305