Abstract:
Guided depth map super-resolution (GDSR) is one of the mainstream approaches to depth map super-resolution, since high-resolution color images are usually easy to obtain and can guide the reconstruction of depth maps. However, how to make full use of the guidance information extracted from the color image to improve depth map reconstruction remains a challenging problem. In this paper, we first design a multi-scale feedback (MF) module that extracts multi-scale features and alleviates information loss during network propagation. We then propose a novel multi-scale feedback network (MSF-Net) for guided depth map super-resolution, which better extracts and refines features by sequentially joining MF blocks. Specifically, each MF block uses parallel sampling layers and feedback links between multiple time steps to better learn information at different scales. Moreover, an inter-scale attention (IA) module is proposed to adaptively select and fuse important features across scales. Meanwhile, depth features and the corresponding color features are exchanged through a cross-domain attention conciliation (CAC) module after each MF block. We evaluate the proposed method on both synthetic and real captured datasets. Extensive experimental results validate that it achieves state-of-the-art performance in both objective and subjective quality.
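The record does not include the paper's module definitions, so the following is a purely illustrative sketch (not the authors' implementation) of the general idea behind inter-scale attention-style fusion: features from several scales are weighted adaptively by learned importance scores and summed. The function names and shapes are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_scale_fusion(features, scores):
    """Illustrative adaptive fusion of multi-scale features.

    features: array of shape (S, C, H, W) -- S same-size feature maps,
              one per scale (already resampled to a common resolution).
    scores:   array of shape (S,) -- per-scale importance logits
              (in a real network these would be learned, e.g. from
              global pooling followed by a small MLP).
    Returns a fused (C, H, W) feature map.
    """
    w = softmax(scores)                        # adaptive scale weights, sum to 1
    return np.tensordot(w, features, axes=1)   # weighted sum over the scale axis
```

For example, fusing three scales whose maps are constant 1, 2, and 3 with equal logits yields their mean; raising one scale's logit shifts the output toward that scale's features.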
Source:
IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
Year: 2023
Issue: 2
Volume: 34
Page: 1-1
Impact Factor (JCR@2022): 8.400
ESI Discipline: ENGINEERING
ESI HC Threshold: 19
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0
30 Days PV: 6