Abstract:
Existing recurrent neural network-based video deblurring methods are limited in cross-frame feature aggregation and computational efficiency, so an efficient spatio-temporal feature extraction recurrent neural network is proposed. First, a residual dense module is combined with a channel attention mechanism to efficiently extract discriminative features from each frame of a given sequence. Then, a spatio-temporal feature enhancement and fusion module is proposed to select useful features from the highly redundant and mutually interfering sequential features and integrate them into the features of the current frame. Finally, the enhanced features of the current frame are converted into the deblurred image by a reconstruction module. Quantitative and qualitative experimental results on three public datasets, containing both synthetic and real blurred videos, show that the proposed network achieves excellent video deblurring performance at a lower computational cost; on the GOPRO dataset, the PSNR reaches 31.43 dB and the SSIM reaches 0.9201. © 2023 Institute of Computing Technology. All rights reserved.
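To make the per-frame feature extraction stage described in the abstract concrete, a minimal sketch of a residual dense block combined with channel attention is given below, written in PyTorch. The layer counts, channel widths, growth rate, and class names are illustrative assumptions and are not taken from the paper's actual implementation.

# Illustrative sketch only: residual dense block with channel attention,
# assuming a PyTorch implementation. All hyperparameters are hypothetical.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight each channel of x by a learned global descriptor.
        return x * self.fc(self.pool(x))


class ResidualDenseCABlock(nn.Module):
    """Residual dense block followed by channel attention (hypothetical layout)."""
    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # dense connections
        out = self.fuse(torch.cat(feats, dim=1))         # 1x1 fusion of all features
        return x + self.ca(out)                          # attention, then residual skip


if __name__ == "__main__":
    block = ResidualDenseCABlock()
    frame_feat = torch.randn(1, 64, 128, 128)  # per-frame feature map
    print(block(frame_feat).shape)             # torch.Size([1, 64, 128, 128])

The dense connections reuse shallow features within the block, and the channel attention reweights the fused output before the residual skip, which is one common way to arrange such a block for discriminative per-frame feature extraction.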
Source:
Journal of Computer-Aided Design and Computer Graphics
ISSN: 1003-9775
Year: 2023
Issue: 11
Volume: 35
Page: 1720-1730
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0