Abstract:
3D reconstruction from a single hand-drawn sketch can be considered a single-view reconstruction task, which faces the great challenge of lifting the dimensionality of the geometric representation of objects. Most reconstruction networks are based on deep learning with supervised training and therefore suffer from costly dataset labeling, while self-supervised sketch-based 3D reconstruction remains challenging. In this paper, we propose a self-supervised 3D reconstruction network for hand-drawn sketches (IASSReNet), which introduces image information as an auxiliary signal to address the ambiguity and sparsity of sketches. To obtain this image information, an image generator is first designed to provide augmented information for the reconstruction through a sketch feature enhancement module. To integrate the information from the sketch and the image, we use a spatially corresponding feature transfer module to fuse their features. Finally, silhouettes are obtained from the predicted 3D mesh, and similarity constraints against the sketch contour are applied to train the network in a self-supervised manner. Experimental results on multiple datasets show that our method outperforms other unsupervised methods and is competitive with some supervised methods. © 2023 SPIE.
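The abstract's self-supervision signal is a similarity constraint between the silhouette rendered from the predicted mesh and the contour of the input sketch. Below is a minimal sketch of one common form such a constraint can take, a soft IoU loss in PyTorch; the function name, tensor shapes, and the loss choice are assumptions for illustration, not details confirmed by the paper, and the rendered silhouette is assumed to come from some differentiable renderer applied to the predicted mesh.

import torch

def soft_iou_loss(pred_sil: torch.Tensor, sketch_mask: torch.Tensor) -> torch.Tensor:
    """Soft intersection-over-union loss between a rendered silhouette and a
    sketch contour mask, both valued in [0, 1] with shape (B, H, W)."""
    inter = (pred_sil * sketch_mask).sum(dim=(1, 2))
    union = (pred_sil + sketch_mask - pred_sil * sketch_mask).sum(dim=(1, 2))
    return (1.0 - inter / union.clamp(min=1e-6)).mean()

# Hypothetical usage: pred_sil would be produced by a differentiable renderer
# from the predicted 3D mesh; sketch_mask by filling the hand-drawn contour.
pred_sil = torch.rand(4, 64, 64, requires_grad=True)
sketch_mask = (torch.rand(4, 64, 64) > 0.5).float()
loss = soft_iou_loss(pred_sil, sketch_mask)
loss.backward()  # gradients flow back to the mesh predictor via the renderer

Because the loss is differentiable with respect to the rendered silhouette, no 3D ground truth is needed; the sketch contour alone supervises the mesh prediction, which matches the self-supervised training described in the abstract.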
ISSN: 0277-786X
Year: 2023
Volume: 12791
Language: English