Indexed by:
Abstract:
Recent years have witnessed a growing number of image- and video-centric applications on mobile, vehicular, and cloud platforms, involving a wide variety of digital screen content images. Unlike natural scene images captured with modern high-fidelity cameras, screen content images are typically composed of fewer colors, simpler shapes, and a higher frequency of thin lines. In this paper, we develop a novel blind/no-reference (NR) model for assessing the perceptual quality of screen content pictures with big data learning. The new model extracts four types of features descriptive of picture complexity, screen content statistics, global brightness quality, and sharpness of details. A regression module is trained on a considerable number of training samples labeled with objective visual quality predictions delivered by a high-performance full-reference method designed for screen content image quality assessment (IQA), which results in an opinion-unaware NR screen content IQA algorithm. Comparative experiments on screen content image databases verify the efficacy of the new model as compared with existing relevant blind picture quality assessment algorithms. Our proposed model delivers computational efficiency and promising performance. The source code of the new model will be available at: https://sites.google.com/site/guke198701/publications.
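The abstract outlines an opinion-unaware training pipeline: four feature groups are extracted from each image, and a regression module is fit to quality labels produced by a full-reference screen content IQA method rather than human opinion scores. The sketch below illustrates that flow in Python under stated assumptions; the feature computations, function names, and the choice of an SVR regressor are illustrative placeholders, not the paper's actual implementation.

# Hypothetical sketch of the opinion-unaware pipeline described in the abstract.
# Feature computations and the SVR regressor are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVR

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-ins for the four feature groups named in the abstract:
    picture complexity, screen content statistics, global brightness
    quality, and sharpness of details."""
    complexity = np.array([image.std()])                                  # placeholder statistic
    content_stats = np.array([np.mean(np.abs(np.diff(image, axis=1)))])   # placeholder statistic
    brightness = np.array([image.mean()])                                 # placeholder statistic
    sharpness = np.array([np.abs(np.gradient(image.astype(float))[0]).mean()])
    return np.concatenate([complexity, content_stats, brightness, sharpness])

def train_opinion_unaware_model(images, fr_scores):
    """Fit a regressor to full-reference quality predictions (no human labels)."""
    X = np.stack([extract_features(img) for img in images])
    model = SVR(kernel="rbf")  # regression module; kernel choice is an assumption
    model.fit(X, fr_scores)
    return model

def predict_quality(model, image):
    return float(model.predict(extract_features(image)[None, :])[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_imgs = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(20)]
    fr_labels = rng.uniform(0, 1, 20)  # stand-in FR quality predictions
    model = train_opinion_unaware_model(train_imgs, fr_labels)
    print(predict_quality(model, train_imgs[0]))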
Keyword:
Reprint Author's Address:
Email:
Source:
IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN: 1057-7149
Year: 2017
Issue: 8
Volume: 26
Page: 4005-4018
Impact Factor: 10.600 (JCR@2022)
ESI Discipline: ENGINEERING;
ESI HC Threshold: 165
CAS Journal Grade: 2
Cited Count:
WoS CC Cited Count: 183
SCOPUS Cited Count: 222
ESI Highly Cited Papers on the List: 22
WanFang Cited Count:
Chinese Cited Count:
30 Days PV: 8
Affiliated Colleges: