Abstract:
Recovering the geometry of an object from a single depth image is an interesting yet challenging problem. While previous learning-based approaches have demonstrated promising performance, they do not fully exploit the spatial relationships of objects, which leads to unfaithful and incomplete 3D reconstructions. To address these issues, we propose a Spatial Relationship Preserving Adversarial Network (SRPAN), consisting of a 3D Capsule Attention Generative Adversarial Network (3DCAGAN) and a 2D Generative Adversarial Network (2DGAN), for coarse-to-fine 3D reconstruction from a single depth view of an object. First, 3DCAGAN predicts the coarse geometry using an encoder-decoder generator and a discriminator. The generator encodes the input as latent capsules represented as stacked activity vectors with local-to-global relationships (i.e., the contribution of components to the whole shape), and then decodes the capsules by modeling local-to-local relationships (i.e., the relationships among components) with an attention mechanism. Then, 2DGAN refines the local geometry slice by slice, using a generator that learns a global structure prior as guidance and stacked discriminators that enforce local geometric constraints. Experimental results show that SRPAN not only outperforms several state-of-the-art methods by a large margin on both synthetic and real-world datasets, but also reconstructs unseen object categories with higher accuracy.
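The two generator stages summarized in the abstract (a capsule encoder with an attention decoder producing a coarse volume, followed by slice-by-slice refinement guided by a global structure prior) can be illustrated with the minimal PyTorch sketch below. All class names (CapsuleEncoder, AttentionDecoder, SliceRefiner), layer choices, capsule counts, and voxel resolutions are illustrative assumptions, not the authors' implementation; the adversarial discriminators and training losses are omitted.

# Illustrative sketch only: module names, sizes, and layers are assumptions,
# not the SRPAN code released by the authors. Discriminators are omitted.
import torch
import torch.nn as nn


class CapsuleEncoder(nn.Module):
    """Encodes a single depth view into latent capsules (stacked activity vectors)."""
    def __init__(self, num_capsules=16, capsule_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_capsules = nn.Linear(64, num_capsules * capsule_dim)
        self.num_capsules, self.capsule_dim = num_capsules, capsule_dim

    def forward(self, depth):                      # depth: (B, 1, H, W)
        feat = self.conv(depth).flatten(1)         # (B, 64)
        caps = self.to_capsules(feat)
        return caps.view(-1, self.num_capsules, self.capsule_dim)  # (B, N, D)


class AttentionDecoder(nn.Module):
    """Decodes capsules into a coarse voxel grid; self-attention over the
    capsules stands in for modeling local-to-local relationships."""
    def __init__(self, num_capsules=16, capsule_dim=64, vox=32):
        super().__init__()
        self.attn = nn.MultiheadAttention(capsule_dim, num_heads=4, batch_first=True)
        self.to_voxels = nn.Linear(num_capsules * capsule_dim, vox ** 3)
        self.vox = vox

    def forward(self, caps):                       # caps: (B, N, D)
        attended, _ = self.attn(caps, caps, caps)  # relationships among components
        logits = self.to_voxels(attended.flatten(1))
        return torch.sigmoid(logits).view(-1, 1, self.vox, self.vox, self.vox)


class SliceRefiner(nn.Module):
    """2D generator that refines the coarse volume slice by slice,
    conditioned on a learned global structure code."""
    def __init__(self, vox=32):
        super().__init__()
        self.global_prior = nn.Linear(vox ** 3, vox * vox)   # global guidance
        self.refine = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.vox = vox

    def forward(self, coarse):                     # coarse: (B, 1, V, V, V)
        guide = self.global_prior(coarse.flatten(1)).view(-1, 1, self.vox, self.vox)
        slices = []
        for z in range(self.vox):                  # refine each depth slice
            s = coarse[:, :, z]                    # (B, 1, V, V)
            slices.append(self.refine(torch.cat([s, guide], dim=1)))
        return torch.stack(slices, dim=2)          # (B, 1, V, V, V)


if __name__ == "__main__":
    depth = torch.rand(2, 1, 64, 64)               # toy single depth views
    caps = CapsuleEncoder()(depth)
    coarse = AttentionDecoder()(caps)
    fine = SliceRefiner()(coarse)
    print(caps.shape, coarse.shape, fine.shape)

In this sketch, the self-attention over capsules plays the role of the local-to-local relationship modeling, and the global prior concatenated with each slice plays the role of the global structure guidance described in the abstract.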
Source:
ACM Transactions on Multimedia Computing, Communications, and Applications
ISSN: 1551-6857
Year: 2022
Issue: 4
Volume: 18
5.1 (JCR@2022)
5.100 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 46
JCR Journal Grade:1
CAS Journal Grade:3
Cited Count:
WoS CC Cited Count: 6
SCOPUS Cited Count: 10
ESI Highly Cited Papers on the List: 0