
Author:

Liu, Caixia | Kong, Dehui | Wang, Shaofan | Wang, Zhiyong | Li, Jinghua | Yin, Baocai

Indexed by:

EI Scopus SCIE

Abstract:

Three-dimensional (3D) shape reconstruction is an important research topic in computer vision, computer graphics, pattern recognition, and virtual reality. Existing 3D reconstruction methods usually suffer from two bottlenecks: (1) they involve multiple manually designed stages, which can lead to cumulative errors and can hardly learn semantic features of 3D shapes automatically; (2) they depend heavily on the content and quality of images, as well as on precisely calibrated cameras. As a result, it is difficult to improve the reconstruction accuracy of those methods. 3D reconstruction methods based on deep learning overcome both of these bottlenecks by automatically learning semantic features of 3D shapes from low-quality images using deep networks. However, although these methods have diverse architectures, no in-depth analysis or comparison of them has been available so far. We present a comprehensive survey of 3D reconstruction methods based on deep learning. First, based on the architecture of the underlying deep learning model, we divide these methods into four types: recurrent neural network based, deep autoencoder based, generative adversarial network based, and convolutional neural network based methods, and analyze the corresponding methodologies carefully. Second, we investigate in detail four representative databases commonly used by the above methods. Third, we give a comprehensive comparison of 3D reconstruction methods based on deep learning, covering the results of different methods on the same database, the results of each method across different databases, and the robustness of each method with respect to the number of views. Finally, we discuss future directions for 3D reconstruction methods based on deep learning.
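The autoencoder-based pipeline mentioned in the abstract can be illustrated with a toy sketch: an encoder maps a 2D image to a latent code, and a decoder maps that code to a voxel occupancy grid. This is only a shape-and-data-flow illustration under assumed sizes; real methods in the survey use learned convolutional layers, whereas here both maps are fixed random linear projections, and all names (`W_enc`, `encode`, `decode`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, LATENT, VOX = 64, 128, 32  # image side, latent code size, voxel grid side

# Random linear "encoder" and "decoder" weights (stand-ins for trained layers).
W_enc = rng.standard_normal((LATENT, IMG * IMG)) * 0.01
W_dec = rng.standard_normal((VOX ** 3, LATENT)) * 0.01

def encode(image):
    """Flatten the IMG x IMG image and project it to a latent code."""
    return np.tanh(W_enc @ image.reshape(-1))

def decode(code):
    """Project the latent code to per-voxel occupancy probabilities."""
    logits = W_dec @ code
    return (1.0 / (1.0 + np.exp(-logits))).reshape(VOX, VOX, VOX)

image = rng.random((IMG, IMG))
voxels = decode(encode(image))
print(voxels.shape)  # (32, 32, 32)
```

A learned version would replace the two projections with convolutional encoder/decoder stacks and fit them by minimizing a voxel-wise reconstruction loss against ground-truth shapes.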

Keyword:

Deep learning models; Deep autoencoder; Convolutional neural network; Recurrent neural network; Generative adversarial network; Three-dimensional reconstruction; TP391

Author Community:

  • [ 1 ] [Liu, Caixia]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 2 ] [Kong, Dehui]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 3 ] [Wang, Shaofan]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 4 ] [Li, Jinghua]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 5 ] [Yin, Baocai]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
  • [ 6 ] [Wang, Zhiyong]Univ Sydney, Sch Comp Sci, Multimedia Lab, Sydney, NSW 2006, Australia

Reprint Author's Address:

  • [Wang, Shaofan]Beijing Univ Technol, Fac Informat Technol, Beijing Inst Artificial Intelligence, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China


Source:

FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING

ISSN: 2095-9184

Year: 2021

Issue: 5

Volume: 22

Page: 652-672

Impact Factor: 3.000 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE;

ESI HC Threshold: 87

JCR Journal Grade: 2

Cited Count:

WoS CC Cited Count: 12

SCOPUS Cited Count: 13

ESI Highly Cited Papers on the List: 0


30 Days PV: 20
