Author:

Li, Leida | Huang, Yipo | Wu, Jinjian | Gu, Ke (Scholars: 顾锞) | Fang, Yuming

Indexed by:

EI; Scopus; SCIE

Abstract:

With the increasing prevalence of free-viewpoint video applications, virtual view synthesis has attracted extensive attention. In view synthesis, a new viewpoint is generated from the input color and depth images with a depth-image-based rendering (DIBR) algorithm. Current quality evaluation models for view synthesis typically operate on the synthesized images, i.e., after the DIBR process, which is computationally expensive. A natural question is therefore whether the quality of DIBR-synthesized images can be inferred directly from the input color and depth images, without performing the intricate DIBR operation. With this motivation, this paper presents a no-reference image quality prediction model for view synthesis via COlor-Depth Image Fusion, dubbed CODIF, in which the actual DIBR is not needed. First, object boundary regions are detected in the color image, and a wavelet-based image fusion method is proposed to imitate the interaction between the color and depth images during the DIBR process. Then, statistical features of the interactional regions and natural regions are extracted from the fused color-depth image to characterize the influence of distortions in the color/depth images on the quality of synthesized views. Finally, all statistical features are used to learn the quality prediction model for view synthesis. Extensive experiments on public view synthesis databases demonstrate the advantages of the proposed metric in predicting the quality of view synthesis; it even surpasses the state-of-the-art post-DIBR view synthesis quality metrics.
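
The abstract outlines a three-stage pipeline: boundary detection on the color image, wavelet-domain color-depth fusion that imitates the DIBR interaction, and statistical features from interactional/natural regions fed to a learned quality regressor. The Python sketch below illustrates that flow only under assumed stand-ins (Canny edges for boundary detection, a Haar DWT fusion rule, simple moment statistics, and an SVR regressor); these choices, function names, and parameters are not from the paper.

```python
# Minimal CODIF-style sketch; all design choices here are illustrative
# assumptions, not the authors' implementation.
import numpy as np
import pywt                      # PyWavelets, wavelet decomposition
import cv2                       # OpenCV, edge detection on the color image
from scipy.stats import skew, kurtosis
from sklearn.svm import SVR

def fuse_color_depth(color_gray, depth, wavelet="haar"):
    """Fuse color and depth in the wavelet domain (assumed rule: average the
    approximation bands, keep the max-magnitude detail coefficients)."""
    cA_c, det_c = pywt.dwt2(color_gray.astype(np.float64), wavelet)
    cA_d, det_d = pywt.dwt2(depth.astype(np.float64), wavelet)
    fused_A = 0.5 * (cA_c + cA_d)
    fused_det = tuple(np.where(np.abs(dc) >= np.abs(dd), dc, dd)
                      for dc, dd in zip(det_c, det_d))
    return pywt.idwt2((fused_A, fused_det), wavelet)

def boundary_mask(color_gray, dilate_px=5):
    """Mark object-boundary ('interactional') regions via dilated Canny edges.
    Expects an 8-bit grayscale image."""
    edges = cv2.Canny(color_gray, 50, 150)
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(edges, kernel) > 0

def region_stats(values):
    """Simple statistical descriptors of a pixel population."""
    return [values.mean(), values.std(), skew(values), kurtosis(values)]

def codif_features(color_gray, depth):
    """Statistical features from interactional and natural regions of the
    fused color-depth image."""
    fused = fuse_color_depth(color_gray, depth)
    mask = boundary_mask(color_gray)
    h, w = color_gray.shape
    fused = fused[:h, :w]                 # crop possible 1-px wavelet padding
    interactional = fused[mask]           # boundary (interactional) regions
    natural = fused[~mask]                # remaining natural regions
    return np.array(region_stats(interactional) + region_stats(natural))

# Training (hypothetical data): regress subjective scores on the features.
# X = np.stack([codif_features(c, d) for c, d in training_pairs])
# model = SVR(kernel="rbf").fit(X, subjective_scores)
```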

Keyword:

Color; quality prediction; Predictive models; DIBR; Distortion measurement; color-depth fusion; Image color analysis; Distortion; View synthesis; Image fusion; interactional region

Author Community:

  • [ 1 ] [Li, Leida]Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
  • [ 2 ] [Li, Leida]Pazhou Lab, Guangzhou 510330, Peoples R China
  • [ 3 ] [Huang, Yipo]China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Jiangsu, Peoples R China
  • [ 4 ] [Wu, Jinjian]Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
  • [ 5 ] [Gu, Ke]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 6 ] [Fang, Yuming]Jiangxi Univ Finance & Econ, Sch Informat Management, Nanchang 330032, Jiangxi, Peoples R China

Reprint Author's Address:

  • [Huang, Yipo]China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Jiangsu, Peoples R China

Source:

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

ISSN: 1051-8215

Year: 2021

Issue: 7

Volume: 31

Page: 2509-2521

Impact Factor: 8.400 (JCR@2022)

ESI Discipline: ENGINEERING;

ESI HC Threshold: 87

JCR Journal Grade: 1

Cited Count:

WoS CC Cited Count: 21

SCOPUS Cited Count: 23

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:
