Abstract:
Deep neural networks have been widely applied across various domains, but their large parameter counts and high computational demands limit their practical deployment. To address this issue, this paper introduces a convolutional neural network compression method based on multi-factor channel pruning. By integrating the scaling and shifting factors of batch normalization layers, a multi-factor channel salience metric is proposed to measure channel importance. Removing redundant channels from the convolutional neural network yields a compressed model. On the CIFAR-10 dataset, we pruned 93.06% of the parameters and 91.92% of the computations from the VGG13BN network, with only a 2.81% decrease in accuracy. On the CIFAR-100 dataset, we pruned 72.84% of the parameters and 72.03% of the computations from the VGG13BN network, with an accuracy improvement of 4.11%. © 2023 IEEE.
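The pruning idea described in the abstract can be sketched as follows: score each channel by combining the magnitudes of its batch-normalization scaling factor (gamma) and shifting factor (beta), then drop the lowest-scoring fraction of channels. This is a minimal illustrative sketch; the paper's exact combination rule for the two factors is not given in the abstract, so the sum of absolute values used here is an assumption.

```python
import numpy as np

def channel_salience(gamma, beta):
    """Multi-factor channel salience from BN parameters.

    Combines the scaling factor (gamma) and shifting factor (beta)
    of a batch normalization layer. The combination rule used here
    (|gamma| + |beta|) is an illustrative assumption, not the
    paper's exact formula.
    """
    return np.abs(np.asarray(gamma)) + np.abs(np.asarray(beta))

def prune_mask(gamma, beta, prune_ratio):
    """Return a boolean mask of channels to KEEP.

    The prune_ratio fraction of channels with the lowest salience
    is marked for removal (mask value False).
    """
    s = channel_salience(gamma, beta)
    k = int(round(len(s) * prune_ratio))      # number of channels to remove
    if k >= len(s):
        return np.zeros(len(s), dtype=bool)   # everything pruned
    threshold = np.sort(s)[k]                 # k-th smallest salience
    return s >= threshold
```

In a full pipeline, the kept-channel mask would be used to slice the convolution weights and BN parameters of each layer, after which the smaller network is fine-tuned to recover accuracy.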
Year: 2023
Language: English