Author:

Han, Jidong | Liu, Zhaoying | Li, Yujian | Zhang, Ting

Indexed by:

EI; Scopus; SCIE

Abstract:

Deep learning technology plays an important role in daily life. Because deep learning relies on neural network models, it is still plagued by the catastrophic forgetting problem: after learning new knowledge, a neural network model forgets what it learned before. A neural network model learns knowledge from labeled samples and stores that knowledge in its parameters, so many methods attempt to solve the problem by constraining parameters or by storing samples; few address it by constraining the feature outputs of the model. This paper proposes an incremental learning method with super constraints on model parameters. The method computes not only a parameter similarity loss between the old and new models but also a layer-output feature similarity loss between them, thereby suppressing changes in the model parameters from two directions. In addition, we propose a new strategy for selecting representative samples from the dataset and for tackling the imbalance between stored samples and new-task samples. Finally, we employ neural kernel mapping support vector machine theory to increase the interpretability of the model. To better reflect practical situations, five sample sets with different categories and sizes were used in the experiments. The experiments demonstrate the effectiveness of our method: for example, after learning the last task, our method is at least 1.930% and 0.562% higher than other methods on the training set and test set, respectively.
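
As a rough illustration of the two constraints described in the abstract, the sketch below computes a parameter similarity loss and a layer-output feature similarity loss between a frozen copy of the old model and the new model being trained. This is a hypothetical PyTorch sketch, not the authors' implementation: the squared-error form of both penalties, the weights lam_p and lam_f, and the helper names capture_layer_outputs and dual_constraint_loss are assumptions made for this example, and the paper's exact loss definitions may differ.

```python
# Hypothetical sketch of the dual constraint from the abstract (not the authors' code):
# penalize both weight drift and layer-output drift relative to a frozen old model.
import copy
import torch
import torch.nn as nn

def capture_layer_outputs(model: nn.Module, x: torch.Tensor):
    """Forward pass that also collects the output of every Linear layer via hooks."""
    feats = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: feats.append(out))
             for m in model.modules() if isinstance(m, nn.Linear)]
    logits = model(x)
    for h in hooks:
        h.remove()
    return logits, feats

def dual_constraint_loss(old_model, new_model, x, y, lam_p=1e-3, lam_f=1.0):
    """Task loss + parameter similarity loss + layer-output feature similarity loss.

    lam_p and lam_f are assumed trade-off weights, not values from the paper.
    """
    with torch.no_grad():                      # old model is frozen; no gradients
        _, old_feats = capture_layer_outputs(old_model, x)
    logits, new_feats = capture_layer_outputs(new_model, x)
    task_loss = nn.functional.cross_entropy(logits, y)
    # Parameter similarity: suppress drift of the new weights from the old weights.
    p_loss = sum(((p_new - p_old) ** 2).sum()
                 for p_old, p_new in zip(old_model.parameters(),
                                         new_model.parameters()))
    # Feature similarity: keep each layer's output close to the old model's output.
    f_loss = sum(((f_new - f_old) ** 2).mean()
                 for f_old, f_new in zip(old_feats, new_feats))
    return task_loss + lam_p * p_loss + lam_f * f_loss

if __name__ == "__main__":
    new_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    old_model = copy.deepcopy(new_model).eval()  # snapshot taken after the old task
    for p in old_model.parameters():
        p.requires_grad_(False)
    x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
    loss = dual_constraint_loss(old_model, new_model, x, y)
    loss.backward()
    print(float(loss))
```

Snapshotting the old model before training on a new task and penalizing both weight drift and layer-output drift mirrors the abstract's idea of suppressing parameter change from two directions.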

Keyword:

Incremental learning; Catastrophic forgetting; Neural kernel mapping support vector machine; Parameter similarity loss; Layer output feature similarity loss

Author Community:

  • [ 1 ] [Han, Jidong]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 2 ] [Liu, Zhaoying]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 3 ] [Li, Yujian]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 4 ] [Zhang, Ting]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 5 ] [Li, Yujian]Guilin Univ Elect Technol, Sch Artificial Intelligence, Guilin 541004, Peoples R China

Reprint Author's Address:

  • [Han, Jidong]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [Li, Yujian]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [Li, Yujian]Guilin Univ Elect Technol, Sch Artificial Intelligence, Guilin 541004, Peoples R China

Source:

INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS

ISSN: 1868-8071

Year: 2022

Volume: 14

Issue: 5

Page: 1751-1767

Impact Factor: 5.600 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 46

JCR Journal Grade: 2

CAS Journal Grade: 3

Cited Count:

WoS CC Cited Count: 1

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0
