Infoscience

research article

Learning a Unified Blind Image Quality Metric via On-Line and Off-Line Big Training Instances

December 1, 2020
IEEE Transactions on Big Data

In this work, we address a major challenge: most current image quality metrics (IQMs) do not generalize across different image contents, and in particular cannot simultaneously cope with natural scene (NS) images and screen content (SC) images. In contrast to existing works, this paper exploits both on-line and off-line data to propose a unified no-reference (NR) IQM that applies not only to different distortion types and intensities but also to various image contents, including classical NS images and increasingly prevalent SC images. The proposed NR IQM is built on two data-driven learning processes that follow a feature-extraction step based on scene statistic models, the free-energy principle of the brain, and characteristics of the human visual system (HVS). In the first process, scene statistic models and an image retrieval technique are combined, using on-line and off-line training instances, to derive a loose classifier that retrieves clean images and helps infer the image content. In the second process, features that incorporate the inferred image content, free energy, and low-level perceptual characteristics of the HVS are learned from off-line training samples to analyze distortion types and intensities and thereby predict image quality. Both processes rely on a very large quantity of training data, far exceeding the number of images used for performance validation, which makes the model's performance more reliable. Extensive experiments validate that the proposed blind IQM can simultaneously infer the quality of NS and SC images, and that it outperforms popular and state-of-the-art IQMs on subjective NS and SC image quality databases. The source code of our model will be released with the publication of the paper at https://kegu.netlify.com.
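The two-stage pipeline the abstract describes (feature extraction, then a content classifier, then content-aware quality regression) can be sketched as follows. This is a minimal illustrative toy, not the authors' method: every function name, the three stand-in features, the threshold, and the linear weights are assumptions for illustration only; the authoritative implementation is the code released at the URL above.

```python
import numpy as np

# Hypothetical sketch of a two-stage blind IQM pipeline.
# Features, threshold, and weights are illustrative stand-ins, not the
# scene-statistic / free-energy features of the actual paper.

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy stand-ins for scene-statistic / free-energy / HVS features."""
    mean = image.mean()                            # global luminance
    contrast = image.std()                         # crude contrast proxy
    grad = np.abs(np.diff(image, axis=1)).mean()   # edge-density proxy
    return np.array([mean, contrast, grad])

def classify_content(features: np.ndarray) -> str:
    """Stage 1: loose NS-vs-SC classifier. Here just a fixed threshold on
    the edge-density proxy (SC images tend to have sharp, text-like edges)."""
    return "SC" if features[2] > 0.2 else "NS"

def predict_quality(features: np.ndarray, content: str) -> float:
    """Stage 2: content-aware quality regression (toy linear model whose
    weights would be learned from off-line training samples)."""
    weights = {"NS": np.array([0.2, 0.5, 0.3]),
               "SC": np.array([0.1, 0.3, 0.6])}
    return float(weights[content] @ features)

# Usage on a synthetic image (no reference image is needed: the metric is blind)
rng = np.random.default_rng(0)
img = rng.random((32, 32))
feats = extract_features(img)
content = classify_content(feats)
score = predict_quality(feats, content)
```

The point of the structure, mirroring the abstract, is that the quality model in stage 2 is conditioned on the content label inferred in stage 1, so NS and SC images are scored by different learned mappings over the same feature vector.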

Type
research article
DOI
10.1109/TBDATA.2019.2895605
Web of Science ID
WOS:000590147100002
Author(s)
Gu, Ke
Xu, Xin
Qiao, Junfei
Jiang, Qiuping
Lin, Weisi
Thalmann, Daniel  
Date Issued
2020-12-01
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
Published in
IEEE Transactions on Big Data
Volume
6
Issue
4
Start page
780
End page
791
Subjects
Computer Science, Information Systems • Computer Science, Theory & Methods • Computer Science • image quality • distortion • feature extraction • training • image coding • measurement • brain modeling • image quality metric (IQM) • natural scene (NS) image • screen content (SC) image • no-reference (NR) • data-driven process • big data learning • on-line • off-line • free-energy principle
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
VRLAB
Available on Infoscience
December 2, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/173774
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.