Author:

Xu, Kai | Wang, Lichun | Zhang, Huiyong | Yin, Baocai

Indexed by:

CPCI-S EI Scopus

Abstract:

Self-knowledge distillation does not require a pre-trained teacher network, unlike traditional knowledge distillation. Existing methods, however, require either additional parameters or additional memory consumption. To alleviate this problem, this paper proposes a more efficient self-knowledge distillation method, named LRMS (learning from role-model samples). In every mini-batch, LRMS selects a role-model sample for each sampled category and takes its prediction as the proxy semantic for the corresponding category. Predictions of the other samples are then constrained to be consistent with the proxy semantics, which makes the distribution of predictions for samples within the same category more compact. Meanwhile, the regularization targets corresponding to the proxy semantics are set with a higher distillation temperature to better utilize the classificatory information about the categories. Experimental results show that diverse architectures achieve improvements on four image classification datasets when using LRMS. Code is available at: https://github.com/KAI1179/LRMS
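
The abstract describes the procedure only at a high level; the PyTorch sketch below illustrates one possible reading of it and is not the authors' released implementation (see the GitHub link above for that). The role-model selection rule (the most confident sample of its category), the temperature values, and the use of a KL term are assumptions made for illustration.

import torch
import torch.nn.functional as F

def lrms_style_loss(logits, labels, tau_peer=1.0, tau_proxy=4.0):
    # Illustrative sketch only: the selection rule (most confident sample
    # per category) and the temperatures are assumptions, not taken from
    # the paper or its code release.
    loss = logits.new_zeros(())
    probs = F.softmax(logits, dim=1)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue  # a category needs peers besides its role model
        conf = probs[idx, c]          # confidence on the true class
        rm = idx[conf.argmax()]       # assumed role-model sample
        peers = idx[idx != rm]
        # proxy semantic: role-model prediction softened by a higher temperature
        proxy = F.softmax(logits[rm].detach() / tau_proxy, dim=0)
        peer_logp = F.log_softmax(logits[peers] / tau_peer, dim=1)
        # pull peer predictions toward the proxy semantic of their category
        loss = loss + F.kl_div(peer_logp, proxy.expand_as(peer_logp),
                               reduction="batchmean")
    return loss

In practice such a term would presumably be added to the standard cross-entropy loss with a tunable weight, e.g. total = ce_loss + lam * lrms_style_loss(logits, labels), where lam is a hypothetical hyperparameter.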

Keyword:

Self-knowledge Distillation; Neural Networks; Image Classification; Model Compression

Author Community:

  • [ 1 ] [Xu, Kai]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 2 ] [Wang, Lichun]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 3 ] [Zhang, Huiyong]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 4 ] [Yin, Baocai]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China

Reprint Author's Address:

  • [Wang, Lichun]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China



Source:

2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024

ISSN: 1520-6149

Year: 2024

Page: 5185-5189

