🔎 Python implementation of classical Image Quality Assessment methods

ignaciohrdz/classical-iqa-python


Classical IQA methods

This repository is a collection of classical IQA methods that have been largely forgotten in favour of deep learning-based IQA. I am doing this because many papers cite classical methods when evaluating new IQA models, yet several of them are not very well documented (GM-LOG, SSEQ, CORNIA, LFA, HOSA...). Having all of them implemented in a single package makes them much easier to try.

Implemented methods

Spatial-Spectral Entropy-based Quality (SSEQ) index

This is my implementation of the SSEQ index. I wasn't able to find a fully implemented Python version of this index, so I used Aca4peop's code as a starting point and added my own modifications.

The full details of SSEQ can be found in the paper: No-reference image quality assessment based on spatial and spectral entropies (Liu et al.). The original MATLAB implementation is here. The main highlight of this version is the vectorized implementation of:

  • Patch spatial entropy
  • DCT for spectral entropy (more info here)
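The two vectorized pieces above can be sketched in NumPy. This is an illustrative sketch, not the repository's actual code: the 8×8 block size and 256 histogram bins are assumptions, and the 2D DCT is built from an explicit orthonormal DCT-II matrix rather than a library call.

```python
import numpy as np

def block_view(img, b=8):
    """Split a grayscale image (H, W) into non-overlapping b x b blocks.
    Returns an array of shape (n_blocks, b, b). Block size b=8 is an assumption."""
    h, w = img.shape
    h, w = h - h % b, w - w % b  # drop the remainder rows/columns
    blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b, b)

def spatial_entropy(blocks, n_bins=256):
    """Vectorized Shannon entropy of pixel intensities, one value per block."""
    n = blocks.shape[0]
    flat = blocks.reshape(n, -1).astype(int)
    hist = np.zeros((n, n_bins))
    np.add.at(hist, (np.arange(n)[:, None], flat), 1)  # per-block histograms
    p = hist / hist.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore"):
        logp = np.where(p > 0, np.log2(p), 0.0)  # mask log2(0)
    return -(p * logp).sum(axis=1)

def dct2_blocks(blocks):
    """2D DCT-II of every block at once, via an orthonormal DCT matrix D:
    coefficients = D @ block @ D.T, batched over all blocks."""
    b = blocks.shape[-1]
    k, m = np.meshgrid(np.arange(b), np.arange(b), indexing="ij")
    D = np.sqrt(2.0 / b) * np.cos(np.pi * (2 * m + 1) * k / (2 * b))
    D[0] /= np.sqrt(2.0)  # DC row normalization
    return D @ blocks @ D.T
```

From here, SSEQ pools the per-block spatial entropies and the entropies of the (normalized) DCT coefficients into the final feature vector; see the paper for the exact pooling.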

High Order Statistics Aggregation (HOSA)

I implemented HOSA according to Blind Image Quality Assessment Based on High Order Statistics Aggregation (Xu et al.). However, there were a couple of points in the paper that were not very clear, so I had to make some decisions:

  • The construction of the visual codebook is memory-hungry, and probably not meant to be done on a laptop. Each local feature corresponds to a BxB patch, so every image yields (HxW)/(BxB) patches, and that can take a lot of RAM on a large image dataset. For example, if we resized the images from KonIQ-10k to 512x382, each image would produce 3942 local features, which would result in more than 39M features for the whole dataset! Imagine if we used the original image size...
  • Related to my first point, it's not clear whether the images undergo any resizing or whether HOSA was designed to work at any image size. To make it comparable to the other IQA measures I implement, I'm resizing the images to 512px prior to feature extraction.
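The memory argument above is easy to reproduce. The helper below is a back-of-the-envelope sketch, not repository code; it assumes non-overlapping B x B patches with B = 7 (which reproduces the 3942 figure for a 512x382 image), float32 storage, and KonIQ-10k's 10,073 images.

```python
def hosa_feature_budget(h, w, b=7, n_images=10073, dtype_bytes=4):
    """Rough memory budget for HOSA codebook construction.
    Assumptions: non-overlapping b x b patches, each stored as a
    float32 vector of b*b values. Returns (features per image,
    total features, approximate RAM in GiB)."""
    per_image = (h // b) * (w // b)
    total = per_image * n_images
    ram_gib = total * b * b * dtype_bytes / 1024 ** 3
    return per_image, total, ram_gib
```

For 512x382 images this gives 3942 features per image, roughly 39.7M features for KonIQ-10k, and on the order of 7 GiB just to hold the raw patch vectors before clustering even starts.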

Local Feature Aggregation (LFA)

One year before HOSA, the same authors presented LFA in Local Feature Aggregation for Blind Image Quality Assessment (Xu et al. 2015), which can be considered the precursor of HOSA. As it follows a similar approach (generating a codebook and computing some metrics with each cluster's assignments), I've also implemented it.
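The codebook-assignment idea shared by LFA and HOSA can be sketched as follows. This is a simplified, VLAD-style illustration of "assign each local feature to a codeword, then aggregate per-cluster statistics", not the exact LFA recipe (which computes its own set of statistics per cluster).

```python
import numpy as np

def aggregate_local_features(feats, codebook):
    """Hard-assign each local feature to its nearest codeword and
    aggregate a per-cluster statistic (here: the mean residual).
    feats: (n, dim) local features; codebook: (k, dim) codewords.
    Returns a flattened (k * dim,) descriptor."""
    # squared distances between every feature and every codeword
    d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(axis=1)  # nearest codeword per feature
    k, dim = codebook.shape
    out = np.zeros((k, dim))
    for j in range(k):
        members = feats[assign == j]
        if len(members):
            out[j] = (members - codebook[j]).mean(axis=0)
    return out.ravel()
```

HOSA extends this idea to higher-order per-cluster statistics (variance and skewness), which is where its name comes from.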

How to train an IQA model

For all these models I follow the same approach: I split every dataset into a training and a test set, then use K-fold cross-validation on the training set to find the best parameters for each SVR model.
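That training recipe can be sketched with scikit-learn. This is a minimal illustration, assuming 5-fold cross-validation and a small RBF-SVR parameter grid; the actual grid, fold count, and scoring used in the repository may differ.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

def fit_iqa_regressor(features, scores, seed=0):
    """Hold out a test set, then grid-search SVR hyper-parameters with
    K-fold CV on the training set. The grid below is an assumption."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, scores, test_size=0.2, random_state=seed)
    grid = GridSearchCV(
        SVR(kernel="rbf"),
        param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.01]},
        cv=5,
        scoring="neg_mean_absolute_error")
    grid.fit(X_tr, y_tr)
    # report the held-out test score alongside the tuned model
    return grid.best_estimator_, grid.best_params_, grid.score(X_te, y_te)
```

The tuned model's `predict` then maps an image's IQA feature vector to a quality score.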

I'm using the following datasets:
