Abstract:
This paper proposes a depth map estimation method based on Deformable Convolutional Neural Networks (DCNNs). Traditional single-image depth estimation methods often struggle to capture irregular shapes and fine details in complex scenes, resulting in suboptimal spatial resolution and object boundary reconstruction. To address these challenges, we introduce deformable convolution modules that allow convolutional kernels to adaptively adjust their sampling positions, enhancing the model's ability to handle complex geometric structures and dynamic deformations. Deformable convolution offers high adaptability, robust handling of complex scenes, and improved boundary clarity: because the position offsets of the kernels are learned, the model can flexibly accommodate features of varying shapes and scales and thus excels at capturing irregular shapes and complex details. In scenes rich in detail and deformation, deformable convolution extracts features more accurately, significantly improving depth estimation accuracy; by finely adjusting the sampling points, it also reduces blurring and distortion at depth map boundaries, which benefits the reconstruction of small objects and complex edges. Experimental results demonstrate that the proposed method significantly outperforms existing approaches in depth estimation accuracy and boundary clarity, particularly in complex scenes and small-object reconstruction. Our research highlights the broad application potential of deformable convolution in depth map estimation, providing robust support for future computer vision tasks. © 2025 SPIE.
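
The core mechanism summarized in the abstract, convolutional kernels whose sampling positions are shifted by learned per-pixel offsets, can be illustrated with a minimal sketch. The paper does not publish code; the block below is an assumed PyTorch/torchvision sketch of a single deformable convolution layer, where the class name DeformableBlock, the channel sizes, and the use of torchvision.ops.DeformConv2d are illustrative choices rather than the authors' actual architecture.

# Minimal sketch of a deformable convolution block (assumptions: PyTorch + torchvision;
# layer names and channel sizes are illustrative, not the authors' implementation).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableBlock(nn.Module):
    """A 3x3 deformable convolution whose sampling offsets are predicted
    from the input feature map by an ordinary convolution."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Two offsets (dx, dy) per kernel sampling point, predicted per output pixel.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        # Zero-initialize the offset predictor so training starts from the regular grid.
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)        # learned per-pixel sampling shifts
        return self.deform_conv(x, offsets)  # convolution on the shifted sampling grid


if __name__ == "__main__":
    feat = torch.randn(1, 64, 60, 80)        # e.g. an encoder feature map
    block = DeformableBlock(64, 64)
    print(block(feat).shape)                 # torch.Size([1, 64, 60, 80])

Zero-initializing the offset branch makes the layer behave like a standard convolution at the start of training and learn deformations gradually, which is a common way such modules are integrated into depth estimation backbones.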
Source: Proceedings of SPIE
ISSN: 0277-786X
Year: 2025
Volume: 13486
Language: English