Author:

Zhou, R. | Quan, P.

Indexed by:

EI; Scopus

Abstract:

Deep learning is a powerful tool that uses simple representations to express complex ideas and allows computers to mine hidden information and value from experience. It has achieved great success in a variety of fields. However, as deep learning models have developed, the computing resources consumed by training them have grown rapidly, making it difficult to deploy the models directly to portable devices or embedded systems. Therefore, compressing and accelerating deep neural network (DNN) models without sacrificing performance has become a crucial area of research in deep learning. Optimization theory and methods play an important role in the research and implementation of many existing compression techniques. In this paper, we focus on neural network compression from an optimization perspective and review the related optimization strategies. Specifically, we summarize the optimization techniques emerging from four general categories of commonly used network compression approaches: network pruning, low-bit quantization, low-rank factorization, and knowledge distillation. Finally, we provide a summary and discuss some possible research directions. © 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer review under responsibility of the scientific committee of the Tenth International Conference on Information Technology and Quantitative Management.
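
To give a concrete flavor of one of the four categories listed in the abstract, the short sketch below illustrates magnitude-based network pruning, i.e. zeroing out the smallest-magnitude weights of a layer. The NumPy code and the magnitude_prune helper are illustrative assumptions added here for context; they are not taken from the paper itself.

import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    # Zero out the smallest-magnitude weights until the target sparsity is reached.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Toy example: prune a random 4x4 weight matrix to roughly 50% sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = magnitude_prune(W, sparsity=0.5)
print(f"kept {mask.mean():.0%} of the weights")

In practice, as the paper discusses, such pruning decisions are typically framed as constrained or regularized optimization problems rather than a single hard-threshold pass.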

Keyword:

neural network compression; deep learning; optimization

Author Community:

  • [ 1 ] [Zhou R.]Institute of Operations Research and Information Engineering, Beijing University of Technology, Beijing, 100124, China
  • [ 2 ] [Quan P.]College of Economics and Management, Beijing University of Technology, Beijing, 100124, China

Reprint Author's Address:

Email:


Related Keywords:

Source:

ISSN: 1877-0509

Year: 2023

Volume: 221

Page: 1351-1357

Language: English

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 7

Affiliated Colleges:
