Article

Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency

by Domonkos Varga
Ronin Institute, Montclair, NJ 07043, USA
Electronics 2022, 11(4), 559; https://doi.org/10.3390/electronics11040559
Submission received: 5 January 2022 / Revised: 31 January 2022 / Accepted: 9 February 2022 / Published: 12 February 2022

Abstract

The purpose of image quality assessment is to estimate the perceptual quality of digital images coherently with human judgement. Over the years, many structural features have been utilized or proposed to quantify the degradation of an image in the presence of various noise types. The image gradient is an obvious and very popular tool in the literature to quantify these changes in images. However, the gradient characterizes an image only locally. On the other hand, results from previous studies indicate that the global contents of a scene are analyzed by the human visual system before the local features. Relying on these properties of the human visual system, we propose a full-reference image quality assessment metric that characterizes the global changes of an image by the Grünwald–Letnikov derivative and the local changes by image gradients. Moreover, visual saliency is also utilized to weight the changes in the images and to emphasize those areas which are salient to the human visual system. To demonstrate the efficiency of the proposed method, extensive experiments were carried out on publicly available benchmark image quality assessment databases.

1. Introduction

Image quality assessment (IQA) is still a serious research challenge due to the difficulty of modelling the enormous complexity of the human visual system and perception. Presently, IQA algorithms are divided into two distinct classes, i.e., subjective and objective IQA. Specifically, subjective IQA focuses on collecting subjective quality scores from human participants in a laboratory environment [1] or in an online crowdsourcing experiment [2]. Subsequently, users' individual quality ratings are averaged into mean opinion scores (MOS), which are later considered a direct measure of image quality. In addition, subjective IQA studies in detail the effects of viewing distance, display devices, lighting conditions, and the participants' demographic and physical characteristics. Many benchmark IQA databases [3,4,5], which are the results of subjective quality experiments, can be found online. Specifically, these databases consist of a number of digital images with their corresponding MOS values.
In contrast to subjective IQA, the aim of objective IQA is to devise mathematical algorithms and methods which are capable of predicting perceptual image quality. In the literature, objective IQA is classified into three broad groups. The first group is full-reference image quality assessment (FR-IQA), where the algorithms estimate the quality of distorted images with full access to the distortion-free reference images. In contrast, no information about the reference images is available in no-reference image quality assessment (NR-IQA). Finally, reduced-reference image quality assessment (RR-IQA) represents a transition between NR-IQA and FR-IQA: although full information about the reference images is not available, some features derived from the reference images can be applied in RR-IQA.
Over the years, many structural features have been utilized or proposed to quantify image degradations. The image gradient, which characterizes images locally, is a very popular tool in the literature for this purpose [6,7,8,9]. Results of previous studies indicate that the global contents of a scene are analyzed by the human visual system before the local features [10]. The main contribution of this study is an FR-IQA metric that characterizes the global changes of an image by the Grünwald–Letnikov derivative and the local changes by image gradients; thus, a combined approach is proposed. Moreover, visual saliency is also utilized to weight the changes in the images and to emphasize those image regions which are salient to the human visual system.

1.1. Literature Review

Numerous FR-IQA algorithms and metrics have been proposed in recent decades [11]. These methods can be divided into five classes: (i) error visibility, (ii) structural similarity, (iii) information-theoretic, (iv) learning-based, and (v) fusion-based methods. The main idea of error visibility methods is to devise a distance measure between pixel values or between the transformed representations of the reference and distorted images to quantify perceptual quality. The most well-known example is the simple mean square error, which correlates weakly with perceptual quality but is still widely used owing to its simplicity [12]. Another well-known example is the peak signal-to-noise ratio (PSNR), which is commonly applied to quantify the quality of image reconstruction and lossy compression [13]. Ponomarenko et al. further developed PSNR by taking into account the discrete cosine transform (DCT) coefficients and the contrast sensitivity function [14].
Structural similarity methods try to measure the similarity between corresponding image regions of the reference and distorted images. The representative example of this approach, and probably the most well-known FR-IQA metric, is the structural similarity index measure (SSIM) [15], which compares the reference and distorted images with respect to luminance, contrast, and structure. Over the years, many extensions and modifications of SSIM have been proposed in the literature. For example, Wang et al. [16] calculated SSIM over multiple scales of an input image. In contrast, Li and Bovik [17] determined SSIM for three distinct image regions, i.e., textures, edges, and smooth regions, and took their weighted average as the perceptual quality metric. Later, this approach was further developed by dividing edges into preserved and changed categories [18]. To achieve higher accuracy, Liu et al. [19] computed SSIM in the wavelet domain. This approach was further developed in the complex wavelet domain by Sampat et al. [20]. Wang and Li [21] measured the information content of the input images and used it to weight SSIM. Sun et al. [22] proposed using superpixels [23] to segment the reference and distorted images first, since they provide a more meaningful representation of images than rectangular pixel grids. This method was further improved by Frackiewicz et al. [24] by using other color spaces and comparing similarity maps with the mean deviation similarity index.
Information-theoretic FR-IQA approaches measure some form of mutual information between the reference and the distorted image to quantify perceptual image quality. A representative example is the visual information fidelity (VIF) model [25]. Specifically, the authors applied Gaussian scale mixtures in the wavelet domain to model the reference and distorted images, and mutual information was measured between the two Gaussian scale mixtures to quantify perceptual quality.
Recently, deep learning has also gained popularity in the field of visual quality assessment [26,27,28]. Learning-based methods apply some kind of machine learning or deep learning algorithm to learn the relationship between image features and perceptual quality. For example, Tang et al. [29] extracted spatial and frequency domain features from reference-distorted image pairs and combined them; the obtained features were mapped onto perceptual quality scores with the help of a trained random forest regressor. In contrast, Bosse et al. [30] used a convolutional neural network (CNN) as a feature extractor. More specifically, deep features were extracted from a distorted and a reference image patch by a CNN and were fused together. Subsequently, the fused feature vectors were mapped onto patch-wise quality scores, and the perceptual quality of an input image was obtained as the arithmetic mean of the patch-wise scores. In contrast, Ahn et al. [31] predicted a distortion sensitivity map with a three-stream CNN using the distorted image, the reference image, and the spatial error map as inputs. To obtain the perceptual quality, the sensitivity map is multiplied by the spatial error map.
Fusion-based methods take existing FR-IQA metrics and compile a new image quality evaluator from them. The main idea behind fusion-based methods is similar to that of boosting in machine learning. For example, Okarma et al. [32] studied the properties of the MS-SSIM, VIF, and R-SVD FR-IQA metrics thoroughly and proposed the fusion of these three metrics by a particular arithmetic expression containing products and powers. Later, Okarma proposed different regression techniques for a more effective fusion of FR-IQA metrics [33,34]. Based on the results of Okarma, Oszust [35] and Yuan et al. [36] introduced other regression-based fusion techniques. Specifically, in [35], traditional FR-IQA metrics were used as predictor variables in a multiple linear regression model, while Yuan et al. [36] utilized kernel ridge regression for combining predefined local structures and local distortion measurements. In [37], a support vector regression-based fusion of ten FR-IQA metrics was carried out. In contrast, Lukin et al. [38] trained a neural network to fuse the results of six traditional FR-IQA metrics. Instead of machine learning techniques, Oszust [39] implemented a genetic algorithm for the decision fusion of multiple metrics. This approach was further developed in [40] by applying multi-gene genetic programming. Amirshahi et al. [41] compared the feature maps of the reference and the distorted image, extracted from an AlexNet [42] convolutional neural network model, using traditional FR-IQA metrics. To obtain the perceptual quality of the distorted image, the quality scores of the feature maps were aggregated using different types of averages, such as the arithmetic and geometric mean.
For comprehensive surveys about FR-IQA, we refer readers to [43,44,45,46].

1.2. Organization of the Paper

The remainder of this study is organized as follows. After this introduction and literature review, Section 2 briefly introduces the mathematical preliminaries, i.e., the Grünwald–Letnikov derivative, and presents our proposed method in detail. Next, Section 3 gives the definitions of the applied evaluation metrics and presents a comprehensive comparison to the state-of-the-art. Lastly, the paper is concluded in Section 4.

2. Proposed Method

2.1. Preliminaries

In this section, some mathematical concepts and definitions are introduced which are of vital importance to our proposed FR-IQA metric. The Grünwald–Letnikov derivative, introduced by the Austrian mathematician Anton Karl Grünwald and the Russian mathematician Aleksey Vasilievich Letnikov, is a basic extension of the definition of the derivative in fractional calculus. Specifically, it enables taking the derivative of a function a non-integer number of times [47]. In the literature, the definition of the Grünwald–Letnikov derivative is derived from integer-order calculus. The starting point is the definition of the first-order derivative of a one-dimensional signal $f(x)$, which is determined as:
$$\frac{df(x)}{dx} = \lim_{h \to 0} \frac{f(x) - f(x-h)}{h}. \qquad (1)$$
Based on this, the second-order derivative can be expressed as
$$\frac{d^2 f(x)}{dx^2} = \lim_{h \to 0} \frac{f(x) - 2f(x-h) + f(x-2h)}{h^2}. \qquad (2)$$
In general, for any positive integer n, we can derive the following formula
$$\frac{d^n f(x)}{dx^n} = \lim_{h \to 0} \frac{\sum_{j=0}^{n} (-1)^j \binom{n}{j} f(x-jh)}{h^n}, \qquad (3)$$
where
$$\binom{n}{j} = \frac{n(n-1)\cdots(n-j+1)}{j!}. \qquad (4)$$
Eliminating the restriction that $n$ must be a positive integer and replacing it with a non-integer number $\alpha$, it is reasonable to define
$${}^{GL}D^{\alpha}_{x_0,x} f(x) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{\left[\frac{x-x_0}{h}\right]} (-1)^j \binom{\alpha}{j} f(x-jh), \qquad (5)$$
where ${}^{GL}D^{\alpha}_{x_0,x} f(x)$ is the $\alpha$th order Grünwald–Letnikov derivative of $f(x)$, and $x$ and $x_0$ represent the upper and lower bounds, respectively. Moreover, $[\cdot]$ stands for the rounding operator. By replacing $(-1)^j \binom{\alpha}{j}$ with $(-1)^j \frac{\Gamma(\alpha+1)}{\Gamma(j+1)\Gamma(\alpha-j+1)}$, where $\Gamma(\cdot)$ is the Gamma function, we can define the Grünwald–Letnikov derivative as:
$${}^{GL}D^{\alpha}_{x_0,x} f(x) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{\left[\frac{x-x_0}{h}\right]} (-1)^j \frac{\Gamma(\alpha+1)}{\Gamma(j+1)\Gamma(\alpha-j+1)} f(x-jh). \qquad (6)$$
It is essential to highlight one important difference between the ordinary and the Grünwald–Letnikov derivative. As can be seen from Equation (6), the calculation of the Grünwald–Letnikov derivative of $f(x)$ at $x$ requires all function values from $x_0$ to $x$. As a consequence, the Grünwald–Letnikov derivative is considered to have memory. In the literature, this property is also expressed by saying that the Grünwald–Letnikov derivative requires non-local information [48]. As an illustration, Figure 1 depicts the fractional derivatives of the sine function with orders between 0.1 and 0.9.
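As a numerical illustration of Equation (6), a minimal Python/NumPy sketch (an assumption of this text, not the paper's MATLAB implementation) approximates the Grünwald–Letnikov derivative of a uniformly sampled one-dimensional signal by truncating the sum at the available samples:

```python
import numpy as np

def gl_derivative_1d(f, alpha, h=1.0):
    """Approximate the alpha-th order Grünwald-Letnikov derivative (Eq. (6)) of a
    uniformly sampled 1-D signal f with step h, truncating the sum at sample 0."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # weights c_j = (-1)^j * Gamma(alpha+1) / (Gamma(j+1) * Gamma(alpha-j+1)),
    # computed by the recursion c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.empty(n)
    for k in range(n):
        # "memory": the derivative at sample k uses all samples from 0 up to k
        d[k] = np.dot(c[:k + 1], f[k::-1]) / h ** alpha
    return d

# fractional derivative of the sine function, as illustrated in Figure 1
x = np.linspace(0, 2 * np.pi, 200)
d_sin = gl_derivative_1d(np.sin(x), alpha=0.5, h=x[1] - x[0])
```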
Next, we have to define the Grünwald–Letnikov derivative of a two-dimensional signal, which is given as
$$I(x,y) = \begin{bmatrix} I_{11} & I_{12} & \cdots & I_{1N} \\ I_{21} & I_{22} & \cdots & I_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ I_{M1} & I_{M2} & \cdots & I_{MN} \end{bmatrix}, \qquad (7)$$
where $M$ and $N$ stand for the number of rows and columns of $I(x,y)$. As with the ordinary derivative, the Grünwald–Letnikov derivative has to be defined in two dimensions, i.e., in the $x$- and $y$-directions [49,50]. In the $x$-direction, it can be defined as follows:
$$D^{\alpha}_{GL} I_x(x,y) = I(x,y) - \alpha I(x-1,y) + \frac{\alpha(\alpha-1)}{2} I(x-2,y). \qquad (8)$$
Similarly, in the $y$-direction,
$$D^{\alpha}_{GL} I_y(x,y) = I(x,y) - \alpha I(x,y-1) + \frac{\alpha(\alpha-1)}{2} I(x,y-2). \qquad (9)$$
Hence, the Grünwald–Letnikov fractional derivative can be given as
$$D^{\alpha}_{GL} I(x,y) = \sqrt{\left(D^{\alpha}_{GL} I_x(x,y)\right)^2 + \left(D^{\alpha}_{GL} I_y(x,y)\right)^2}. \qquad (10)$$
Figure 2 shows a grayscale test image and its Grünwald–Letnikov derivatives with different values of α .
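A possible NumPy sketch of the three-term image-domain formulas (8)–(10) is given below; the circular boundary handling and the axis convention (x along columns, y along rows) are assumptions of this sketch:

```python
import numpy as np

def gl_gradient_2d(img, alpha=0.6):
    """Grünwald-Letnikov derivative magnitude of a grayscale image (Eqs. (8)-(10)),
    using the three-term truncation; boundaries are handled by circular shifts."""
    I = img.astype(float)
    c = alpha * (alpha - 1.0) / 2.0
    Dx = I - alpha * np.roll(I, 1, axis=1) + c * np.roll(I, 2, axis=1)  # x-direction, Eq. (8)
    Dy = I - alpha * np.roll(I, 1, axis=0) + c * np.roll(I, 2, axis=0)  # y-direction, Eq. (9)
    return np.sqrt(Dx ** 2 + Dy ** 2)                                   # magnitude,   Eq. (10)
```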

2.2. Proposed Metric

Results of previous studies indicate that the global contents of a scene are analyzed by the human visual system before the local features [10]. In this study, we propose an FR-IQA metric that combines global and local information of an image by applying the Grünwald–Letnikov derivative and ordinary image derivatives (a high-level overview is depicted in Figure 3). In the following, $R(x,y)$ stands for the pristine reference image, while $D(x,y)$ denotes the distorted image generated from $R(x,y)$.
Global similarity (denoted by $S_G(x,y)$) between $R(x,y)$ and $D(x,y)$ is expressed as the similarity between the Grünwald–Letnikov derivatives of $R(x,y)$ and $D(x,y)$:
$$S_G(x,y) = \frac{2 \cdot D^{\alpha}_{GL} R(x,y) \cdot D^{\alpha}_{GL} D(x,y) + c_1}{\left(D^{\alpha}_{GL} R(x,y)\right)^2 + \left(D^{\alpha}_{GL} D(x,y)\right)^2 + c_1}, \qquad (11)$$
where $c_1$ is a constant that ensures numerical stability [15]. In our MATLAB implementation, a fractional derivative order of $\alpha = 0.6$ was used. To characterize the similarity between local changes, gradient operators are applied. The literature [51,52] recommends the Scharr operator, since it performs well in image quality estimation. Specifically, a $3 \times 3$ Scharr operator was applied in our method, whose horizontal ($S_x$) and vertical ($S_y$) templates are given as
$$S_x = \frac{1}{16} \begin{bmatrix} 3 & 0 & -3 \\ 10 & 0 & -10 \\ 3 & 0 & -3 \end{bmatrix}, \qquad (12)$$
$$S_y = \frac{1}{16} \begin{bmatrix} 3 & 10 & 3 \\ 0 & 0 & 0 \\ -3 & -10 & -3 \end{bmatrix}. \qquad (13)$$
These templates can be applied separately to obtain gradient components of an image I in each orientation:
$$G_x = S_x * I, \qquad (14)$$
$$G_y = S_y * I, \qquad (15)$$
where $*$ stands for the convolution operator. These can be combined to get the gradient magnitude:
$$G = \sqrt{G_x^2 + G_y^2}. \qquad (16)$$
Figure 4 depicts an illustration of the Scharr operator.
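A short sketch of Equations (12)–(16), assuming SciPy's ndimage convolution with 'nearest' boundary replication, could be:

```python
import numpy as np
from scipy.ndimage import convolve

# normalized 3x3 Scharr templates, Eqs. (12)-(13)
Sx = np.array([[3, 0, -3],
               [10, 0, -10],
               [3, 0, -3]]) / 16.0
Sy = Sx.T

def scharr_magnitude(img):
    """Gradient magnitude of a grayscale image with the Scharr operator (Eqs. (14)-(16))."""
    Gx = convolve(img.astype(float), Sx, mode='nearest')
    Gy = convolve(img.astype(float), Sy, mode='nearest')
    return np.sqrt(Gx ** 2 + Gy ** 2)
```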
To characterize the local similarity $S_L(x,y)$ between the reference and distorted images, the gradient magnitudes are utilized as follows:
$$S_L(x,y) = \frac{2 \cdot G_R(x,y) \cdot G_D(x,y) + c_2}{G_R^2(x,y) + G_D^2(x,y) + c_2}, \qquad (17)$$
where $G_R(x,y)$ and $G_D(x,y)$ stand for the gradient magnitude maps of the reference and distorted images, respectively. Moreover, $c_2$ is a constant that ensures numerical stability.
Using the preceding equations, the similarity map (denoted by $S(x,y)$) between a reference and a distorted image is defined as
$$S(x,y) = \left(S_G(x,y)\right)^{\lambda} \cdot \left(S_L(x,y)\right)^{1-\lambda}, \qquad (18)$$
where $\lambda$ is used to fine-tune the relative importance of global and local information. In our MATLAB implementation, $\lambda = 0.7$ was applied. To obtain the local global variation (LGV) quality score, we take the average of $S(x,y)$. Formally, it can be written as
$$LGV = \frac{1}{M \cdot N} \sum_{x=1}^{M} \sum_{y=1}^{N} S(x,y). \qquad (19)$$
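Putting Equations (11) and (17)–(19) together, a minimal sketch of the LGV score (reusing the gl_gradient_2d and scharr_magnitude sketches above; the constants c1 and c2 are placeholder values, as the paper does not report them) could be:

```python
import numpy as np

def lgv(ref, dist, alpha=0.6, lam=0.7, c1=1e-4, c2=1e-4):
    """Local global variation (LGV) score and its similarity map (Eqs. (11), (17)-(19))."""
    # global similarity from the Grünwald-Letnikov derivative maps (Eq. (11))
    Dr, Dd = gl_gradient_2d(ref, alpha), gl_gradient_2d(dist, alpha)
    S_G = (2.0 * Dr * Dd + c1) / (Dr ** 2 + Dd ** 2 + c1)
    # local similarity from the Scharr gradient magnitudes (Eq. (17))
    Gr, Gd = scharr_magnitude(ref), scharr_magnitude(dist)
    S_L = (2.0 * Gr * Gd + c2) / (Gr ** 2 + Gd ** 2 + c2)
    # combined similarity map (Eq. (18)) and its average (Eq. (19))
    S = S_G ** lam * S_L ** (1.0 - lam)
    return S.mean(), S
```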
In the saliency weighted local global variation (SWLGV) quality score, the visual attention mechanism is also taken into account; namely, the differences between the reference and the distorted images are emphasized in the salient regions. Let the saliency maps of the reference and distorted images be denoted by $SM_R(x,y)$ and $SM_D(x,y)$, respectively. In our metric, the algorithm of Imamoglu et al. [53] was used to generate the saliency maps. The saliency map of a reference-distorted image pair (denoted by $SM(x,y)$) is the element-wise maximum of $SM_R(x,y)$ and $SM_D(x,y)$:
$$SM(x,y) = \max\left(SM_R(x,y), SM_D(x,y)\right). \qquad (20)$$
Specifically, SWLGV is the weighted average of $S(x,y)$, where $SM(x,y)$ provides the weights. Formally, it can be written as:
$$SWLGV = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} SM(x,y) \cdot S(x,y)}{\sum_{x=1}^{M} \sum_{y=1}^{N} SM(x,y)}. \qquad (21)$$
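The saliency-weighted pooling of Equations (20) and (21) then reduces to a weighted average of the similarity map; in the following sketch the saliency maps are assumed to be precomputed (e.g., with the model of Imamoglu et al. [53]), and the lgv sketch above is reused:

```python
import numpy as np

def swlgv(ref, dist, sm_ref, sm_dist, **kwargs):
    """Saliency weighted local global variation (SWLGV) score (Eqs. (20)-(21))."""
    _, S = lgv(ref, dist, **kwargs)        # similarity map S(x, y), Eq. (18)
    SM = np.maximum(sm_ref, sm_dist)       # element-wise maximum of the saliency maps, Eq. (20)
    return (SM * S).sum() / SM.sum()       # saliency-weighted average, Eq. (21)
```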

3. Experimental Results and Analysis

This section presents our experimental results. First, Section 3.1 describes the applied evaluation metrics and protocol. Subsequently, the benchmark IQA databases used in this study are introduced in Section 3.2. Finally, a comparison of LGV and SWLGV to the state-of-the-art is presented in Section 3.3.

3.1. Evaluation Metrics and Protocol

In the literature, the performance of an FR-IQA metric is characterized by correlation coefficients measured between predicted and ground-truth quality scores [54]. To this end, three correlation coefficients, namely Pearson's linear correlation coefficient (PLCC), Spearman's rank order correlation coefficient (SROCC), and Kendall's rank order correlation coefficient (KROCC), are widely used in the literature. These evaluation metrics are also applied in this paper. The PLCC between two vectors (denoted by $\mathbf{x}$ and $\mathbf{y}$) is defined as:
$$PLCC(\mathbf{x}, \mathbf{y}) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2} \sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2}}, \qquad (22)$$
where $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ and $\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$. Following the recommendations of Sheikh et al. [55], a non-linear mapping is applied between the predicted and ground-truth scores before the calculation of PLCC using the following formula:
$$Q = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + e^{\beta_2 (Q_p - \beta_3)}} \right) + \beta_4 Q_p + \beta_5, \qquad (23)$$
where the $\beta_i$'s ($i = 1, \ldots, 5$) represent the fitting parameters. Moreover, $Q_p$ and $Q$ denote the predicted and mapped quality scores, respectively. Similarly, SROCC is defined as:
$$SROCC(\mathbf{x}, \mathbf{y}) = \frac{\sum_{i=1}^{N} (x_i - \tilde{x})(y_i - \tilde{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \tilde{x})^2} \sqrt{\sum_{i=1}^{N} (y_i - \tilde{y})^2}}, \qquad (24)$$
where $\tilde{x}$ and $\tilde{y}$ stand for the middle ranks of $\mathbf{x}$ and $\mathbf{y}$, respectively. Finally, KROCC is defined as
$$KROCC(\mathbf{x}, \mathbf{y}) = \frac{C - D}{\binom{N}{2}}, \qquad (25)$$
where $C$ stands for the number of concordant pairs between $\mathbf{x}$ and $\mathbf{y}$, while $D$ denotes the number of discordant pairs.
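A possible SciPy-based sketch of this evaluation protocol (the initial fitting parameters p0 are assumptions; PLCC is computed after the logistic mapping of Equation (23), while SROCC and KROCC are computed on the raw predictions):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic5(q, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping of Equation (23)."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def evaluate(pred, mos):
    """PLCC (after non-linear mapping), SROCC, and KROCC between predicted scores and MOS."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    p0 = [np.max(mos), 1.0, np.mean(pred), 0.0, np.mean(mos)]   # heuristic starting point
    beta, _ = curve_fit(logistic5, pred, mos, p0=p0, maxfev=10000)
    plcc = pearsonr(logistic5(pred, *beta), mos)[0]
    srocc = spearmanr(pred, mos)[0]
    krocc = kendalltau(pred, mos)[0]
    return plcc, srocc, krocc
```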
The proposed methods were implemented in MATLAB R2020a on a STRIX Z270H Gaming personal computer with an Intel(R) Core(TM) i7-7700K 4.20 GHz CPU (8 cores) and 15 GB of memory.

3.2. Databases

Benchmark IQA databases used for developing, testing, and ranking FR-IQA methods contain a small group of reference images whose perceptual quality is assumed to be flawless. Moreover, distorted images are artificially generated from the reference images by applying several levels and types of distortion, such as motion blur, JPEG compression, or salt and pepper noise. MOS values are associated with the distorted images. In this study, we utilized four popular IQA benchmark databases, i.e., KADID-10k [5], TID2013 [3], TID2008 [56], and CSIQ [57], to evaluate the proposed LGV and SWLGV metrics. The empirical MOS distributions of these databases are depicted in Figure 5, while their main properties are outlined in Table 1. Figure 6 depicts some sample distorted images from KADID-10k [5] as an illustration of IQA databases.

3.3. Comparison to the State-of-the-Art

As can be seen in the previous section, the proposed method possesses several adjustable parameters that determine the global and local similarity between the reference and the distorted images, namely $\alpha$ and $h$ (Equation (6)) for the Grünwald–Letnikov derivative and $\lambda$, which weights the importance of global and local information. To determine optimal values for these parameters, eight random reference images and their corresponding 544 distorted counterparts were taken from TID2008 [56], and numerical experiments were carried out on this subset. Specifically, $\alpha$ and $\lambda$ were varied from 0 to 1 in steps of 0.1, while $h$ was varied from 10 to 100 in steps of 10. During the numerical experiments, we monitored the SROCC values. Finally, we chose $\alpha = 0.6$, $\lambda = 0.7$, and $h = 80$, where the maximum SROCC was measured. A simplified sketch of this grid search is given below.
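A simplified sketch of the parameter search over α and λ (h is omitted here for brevity; each element of the hypothetical tuning subset is assumed to be a reference image, a distorted image, and their precomputed saliency maps, with the corresponding MOS values in a separate vector):

```python
import numpy as np
from scipy.stats import spearmanr

def grid_search(subset, subset_mos):
    """Pick (alpha, lambda) maximizing SROCC on a tuning subset; each element of
    subset is a (ref, dist, sm_ref, sm_dist) tuple, subset_mos the MOS values."""
    best_params, best_srocc = None, -1.0
    for alpha in np.arange(0.0, 1.01, 0.1):
        for lam in np.arange(0.0, 1.01, 0.1):
            scores = [swlgv(r, d, sr, sd, alpha=alpha, lam=lam)
                      for (r, d, sr, sd) in subset]
            srocc = abs(spearmanr(scores, subset_mos)[0])   # monitor SROCC against the MOS values
            if srocc > best_srocc:
                best_params, best_srocc = (alpha, lam), srocc
    return best_params, best_srocc
```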
To compare the previously presented LGV and SWLGV FR-IQA metrics to the state-of-the-art, nine other state-of-the-art FR-IQA metrics whose source codes are available online were collected, i.e., 2stepQA [58], CSV [59], DISTS [60], GSM [8], MAD [57], MS-SSIM [16], ReSIFT [61], RVSIM [62], and SSIM [15]. The results measured on KADID-10k [5] and TID2013 [3] are outlined in Table 2, while the results on TID2008 [56] and CSIQ [57] are presented in Table 3. From the presented results, it can be concluded that the SWLGV metric provides the best outcomes in terms of SROCC and KROCC on KADID-10k [5] and TID2008 [56]. Furthermore, it gives the best PLCC value and the second best SROCC and KROCC values on TID2013 [3]. Interestingly, the saliency weighting step does not improve the performance of the estimation on CSIQ [57], while it significantly improves the estimation accuracy on the other databases. Table 4 summarizes the direct and weighted averages of the PLCC, SROCC, and KROCC values measured on KADID-10k [5], TID2013 [3], TID2008 [56], and CSIQ [57]. It can be observed that the proposed SWLGV provides the best results in terms of SROCC and KROCC, while the proposed LGV gives the second best results in terms of SROCC and KROCC.
Table 5 and Table 6 summarize the SROCC values measured separately on the distortion levels of TID2013 [3] and TID2008 [56]. As shown in Table 1, TID2013 [3] and TID2008 [56] have five and four different distortion levels, respectively. It can be observed that LGV and SWLGV generally perform better at higher distortion levels. Moreover, SWLGV provides the second best SROCC values on 4 out of 5 distortion levels of TID2013 [3], while it is the best performing method on all distortion levels of TID2008 [56]. On the other hand, LGV provides the second best result on the lowest distortion level of TID2013 [3] and the second highest SROCC values on all distortion levels of TID2008 [56].
Table 7 and Table 8 present the results on TID2013 [3] and TID2008 [56] in detail for every distortion type found in these IQA benchmark databases. As mentioned in Section 3.2, TID2013 [3] contains 24 distinct distortion types, i.e., AGN (additive Gaussian noise), ANC (additive noise in color components), SCN (spatially correlated noise), MN (masked noise), HFN (high frequency noise), IN (impulse noise), QN (quantization noise), GB (Gaussian blur), DEN (image denoising), JPEG (JPEG compression), JP2K (JPEG2000 compression), JGTE (JPEG transmission errors), J2TE (JPEG2000 transmission errors), NEPN (non-eccentricity pattern noise), BLOCK (local block-wise distortions of different intensity), MS (mean shift), CC (contrast change), CCS (change of color saturation), MGN (multiplicative Gaussian noise), CN (comfort noise), LCNI (lossy compression of noisy images), ICQD (image color quantization with dither), CA (chromatic aberrations), and SSR (sparse sampling and reconstruction). On the other hand, TID2008 [56] contains a narrower set of distortions than TID2013 [3]; specifically, it includes the first 17 distortion types of TID2013 [3]. It can be seen that SWLGV and LGV are the best performing methods on 5 out of 24 distortion types of TID2013 [3]. On the other hand, SWLGV provides the highest SROCC values on 8 out of 17 distortion types of TID2008 [56] and the second best results on 6 distortion types.

4. Conclusions

In the present study, an innovative FR-IQA metric was proposed relying on the Grünwald–Letnikov derivative, image gradients, and visual saliency. The starting point was the observation of previous studies that the human visual system analyzes the global features of a scene before the local ones, whereas image gradients, which are very popular in the literature for quantifying image degradations, characterize the image only locally. Our main contribution was a metric that describes the global changes of an image relying on the Grünwald–Letnikov derivative, while the local changes are quantified by image gradients. Next, the combination of local and global changes was weighted by visual saliency to estimate perceptual image quality. The proposed metric was compared with several other state-of-the-art algorithms on major standard IQA databases. It was demonstrated that the proposed method is able to surpass or approach the state-of-the-art performance.

Funding

This research received no external funding.

Data Availability Statement

In this paper, the following publicly available benchmark databases were used: 1. KADID-10k: http://database.mmsp-kn.de/kadid-10k-database.html, accessed on 5 January 2022. 2. TID2013: http://www.ponomarenko.info/tid2013.htm, accessed on 5 January 2022. 3. TID2008: http://www.ponomarenko.info/tid2008.htm, accessed on 5 January 2022. 4. CSIQ: https://isp.uv.es/data_quality.html, accessed on 5 January 2022.

Acknowledgments

We thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AGN: additive Gaussian noise
ANC: additive noise in color components
CA: chromatic aberrations
CC: contrast change
CCS: change of color saturation
CN: comfort noise
CSIQ: categorical image quality
DCR: degradation category ratings
DISTS: deep image structure and texture similarity
FR-IQA: full-reference image quality assessment
GB: Gaussian blur
GSM: gradient similarity measure
HFN: high frequency noise
ICQD: image color quantization with dither
IN: impulse noise
IQA: image quality assessment
JGTE: JPEG transmission error
JPEG: Joint Photographic Experts Group
KADID: Konstanz artificially distorted image quality database
KROCC: Kendall's rank order correlation coefficient
LCNI: lossy compression of noisy image
LGV: local and global variations
MAD: most apparent distortion
MGN: multiplicative Gaussian noise
MN: masked noise
MOS: mean opinion score
MS: mean shift
MS-SSIM: multi-scale structural similarity index measure
NEPN: non-eccentricity pattern noise
NR-IQA: no-reference image quality assessment
PLCC: Pearson's linear correlation coefficient
PSNR: peak signal-to-noise ratio
QN: quantization noise
ReSIFT: reliability-weighted scale invariant feature transform
RR-IQA: reduced-reference image quality assessment
RVSIM: Riesz transform and visual contrast sensitivity-based feature similarity index
SCN: spatially correlated noise
SROCC: Spearman's rank order correlation coefficient
SSIM: structural similarity index measure
SWLGV: saliency weighted local and global variations
TID: Tampere image database

References

  1. Chubarau, A.; Akhavan, T.; Yoo, H.; Mantiuk, R.K.; Clark, J. Perceptual image quality assessment for various viewing conditions and display systems. Electron. Imaging 2020, 2020, 67-1. [Google Scholar] [CrossRef]
  2. Saupe, D.; Hahn, F.; Hosu, V.; Zingman, I.; Rana, M.; Li, S. Crowd workers proven useful: A comparative study of subjective video quality assessment. In Proceedings of the QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Lisbon, Portugal, 6–8 June 2016. [Google Scholar]
  3. Ponomarenko, N.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Jin, L.; Astola, J.; Vozel, B.; Chehdi, K.; Carli, M.; Battisti, F.; et al. Color image database TID2013: Peculiarities and preliminary results. In European Workshop on Visual Information Processing (EUVIP); IEEE: New York, NJ, USA, 2013; pp. 106–111. [Google Scholar]
  4. Lin, H.; Hosu, V.; Saupe, D. KonIQ-10K: Towards an ecologically valid and large-scale IQA database. arXiv 2018, arXiv:1803.08489. [Google Scholar]
  5. Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3. [Google Scholar]
  6. Chen, G.H.; Yang, C.L.; Xie, S.L. Gradient-based structural similarity for image quality assessment. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 2929–2932. [Google Scholar]
  7. Zhu, J.; Wang, N. Image quality assessment by visual gradient similarity. IEEE Trans. Image Process. 2011, 21, 919–933. [Google Scholar] [PubMed]
  8. Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512. [Google Scholar]
  9. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef] [Green Version]
  10. De Cesarei, A.; Loftus, G.R. Global and local vision in natural scene identification. Psychon. Bull. Rev. 2011, 18, 840–847. [Google Scholar] [CrossRef] [PubMed]
  11. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Comparison of full-reference image quality models for optimization of image processing systems. Int. J. Comput. Vis. 2021, 129, 1258–1281. [Google Scholar] [CrossRef]
  12. Bovik, A.C. Handbook of Image and Video Processing; Academic Press: Cambridge, MA, USA, 2010. [Google Scholar]
  13. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
  14. Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J.; Lukin, V. On between-coefficient contrast masking of DCT basis functions. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics, Scottsdale, AZ, USA, 25–26 January 2007; Volume 4. [Google Scholar]
  15. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  16. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  17. Li, C.; Bovik, A.C. Content-weighted video quality assessment using a three-component image model. J. Electron. Imaging 2010, 19, 011003. [Google Scholar]
  18. Li, C.; Bovik, A.C. Content-partitioned structural similarity index for image quality assessment. Signal Process. Image Commun. 2010, 25, 517–526. [Google Scholar] [CrossRef]
  19. Liu, L.; Wang, Y.; Wu, Y. A wavelet-domain structure similarity for image quality assessment. In Proceedings of the 2009 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; pp. 1–5. [Google Scholar]
  20. Sampat, M.P.; Wang, Z.; Gupta, S.; Bovik, A.C.; Markey, M.K. Complex wavelet structural similarity: A new image similarity index. IEEE Trans. Image Process. 2009, 18, 2385–2401. [Google Scholar] [CrossRef] [PubMed]
  21. Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198. [Google Scholar] [CrossRef] [PubMed]
  22. Sun, W.; Liao, Q.; Xue, J.H.; Zhou, F. SPSIM: A superpixel-based similarity index for full-reference image quality assessment. IEEE Trans. Image Process. 2018, 27, 4232–4244. [Google Scholar] [CrossRef] [Green Version]
  23. Neubert, D.I.P. Superpixels and Their Application for Visual Place Recognition in Changing Environments. 2015. Available online: https://nbn-resolving.org/urn:nbn:de:bsz:ch1-qucosa-190241 (accessed on 5 February 2022).
  24. Frackiewicz, M.; Szolc, G.; Palus, H. An improved SPSIM index for image quality assessment. Symmetry 2021, 13, 518. [Google Scholar] [CrossRef]
  25. Sheikh, H.R.; Bovik, A.C.; De Veciana, G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Process. 2005, 14, 2117–2128. [Google Scholar] [CrossRef] [Green Version]
  26. Wu, J.; Ma, J.; Liang, F.; Dong, W.; Shi, G.; Lin, W. End-to-end blind image quality prediction with cascaded deep neural network. IEEE Trans. Image Process. 2020, 29, 7414–7426. [Google Scholar] [CrossRef]
  27. Xu, J.; Zhou, W.; Chen, Z. Blind omnidirectional image quality assessment with viewport oriented graph convolutional networks. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 1724–1737. [Google Scholar] [CrossRef]
  28. Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. Generalizable No-Reference Image Quality Assessment via Deep Meta-learning. IEEE Trans. Circuits Syst. Video Technol. 2021. [Google Scholar] [CrossRef]
  29. Tang, Z.; Zheng, Y.; Gu, K.; Liao, K.; Wang, W.; Yu, M. Full-reference image quality assessment by combining features in spatial and frequency domains. IEEE Trans. Broadcast. 2018, 65, 138–151. [Google Scholar] [CrossRef]
  30. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219. [Google Scholar] [CrossRef] [Green Version]
  31. Ahn, S.; Choi, Y.; Yoon, K. Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 344–353. [Google Scholar]
  32. Okarma, K. Combined full-reference image quality metric linearly correlated with subjective assessment. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 13–17 June 2010; pp. 539–546. [Google Scholar]
  33. Okarma, K. Combined image similarity index. Opt. Rev. 2012, 19, 349–354. [Google Scholar] [CrossRef]
  34. Okarma, K. Extended hybrid image similarity–combined full-reference image quality metric linearly correlated with subjective scores. Elektron. Elektrotechnika 2013, 19, 129–132. [Google Scholar] [CrossRef] [Green Version]
  35. Oszust, M. Image quality assessment with lasso regression and pairwise score differences. Multimed. Tools Appl. 2017, 76, 13255–13270. [Google Scholar] [CrossRef] [Green Version]
  36. Yuan, Y.; Guo, Q.; Lu, X. Image quality assessment: A sparse learning way. Neurocomputing 2015, 159, 227–241. [Google Scholar] [CrossRef]
  37. Liu, T.J.; Lin, W.; Kuo, C.C.J. Image quality assessment using multi-method fusion. IEEE Trans. Image Process. 2012, 22, 1793–1807. [Google Scholar] [CrossRef]
  38. Lukin, V.V.; Ponomarenko, N.N.; Ieremeiev, O.I.; Egiazarian, K.O.; Astola, J. Combining full-reference image visual quality metrics by neural network. In Human Vision and Electronic Imaging XX; International Society for Optics and Photonics: Bellingham, DC, USA, 2015; Volume 9394, p. 93940K. [Google Scholar]
  39. Oszust, M. Decision fusion for image quality assessment using an optimization approach. IEEE Signal Process. Lett. 2015, 23, 65–69. [Google Scholar] [CrossRef]
  40. Merzougui, N.; Djerou, L. Multi-gene Genetic Programming based Predictive Models for Full-reference Image Quality Assessment. J. Imaging Sci. Technol. 2021, 65, 60409-1. [Google Scholar] [CrossRef]
  41. Amirshahi, S.A.; Pedersen, M.; Beghdadi, A. Reviving traditional image quality metrics using CNNs. In Color and Imaging Conference; Society for Imaging Science and Technology: Bellingham, DC, USA, 2018; Volume 2018, pp. 241–246. [Google Scholar]
  42. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  43. Pedersen, M.; Hardeberg, J.Y. Full-reference image quality metrics: Classification and evaluation. Found. Trends® Comput. Graph. Vis. 2012, 7, 1–80. [Google Scholar]
  44. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. A comprehensive evaluation of full reference image quality assessment algorithms. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1477–1480. [Google Scholar]
  45. Phadikar, B.S.; Maity, G.K.; Phadikar, A. Full reference image quality assessment: A survey. In Industry Interactive Innovations in Science, Engineering and Technology; Springer: Berlin, Germany, 2018; pp. 197–208. [Google Scholar]
  46. Wasson, V.; Kaur, B. Full Reference Image Quality Assessment from IQA Datasets: A Review. In Proceedings of the 2019 6th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 13–15 March 2019; pp. 735–738. [Google Scholar]
  47. Sabatier, J.; Agrawal, O.P.; Machado, J.T. Advances in Fractional Calculus; Springer: Berlin, Germany, 2007; Volume 4. [Google Scholar]
  48. Abadias, L.; De León-Contreras, M.; Torrea, J.L. Non-local fractional derivatives. Discrete and continuous. J. Math. Anal. Appl. 2017, 449, 734–755. [Google Scholar] [CrossRef] [Green Version]
  49. Jia, H.; Pu, Y. Fractional calculus method for enhancing digital image of bank slip. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; Volume 3, pp. 326–330. [Google Scholar]
  50. Pu, Y.F.; Zhou, J.L.; Yuan, X. Fractional differential mask: A fractional differential-based approach for multiscale texture enhancement. IEEE Trans. Image Process. 2009, 19, 491–511. [Google Scholar] [PubMed]
  51. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [Green Version]
  52. Zhang, X.; Feng, X.; Wang, W.; Xue, W. Edge strength similarity for image quality assessment. IEEE Signal Process. Lett. 2013, 20, 319–322. [Google Scholar] [CrossRef]
  53. Imamoglu, N.; Lin, W.; Fang, Y. A saliency detection model using low-level features based on wavelet transform. IEEE Trans. Multimed. 2012, 15, 96–105. [Google Scholar] [CrossRef]
  54. Xu, L.; Lin, W.; Kuo, C.C.J. Visual Quality Assessment by Machine Learning; Springer: Berlin, Germany, 2015. [Google Scholar]
  55. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef] [PubMed]
  56. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008-a database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  57. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  58. Yu, X.; Bampis, C.G.; Gupta, P.; Bovik, A.C. Predicting the quality of images compressed after distortion in two steps. IEEE Trans. Image Process. 2019, 28, 5757–5770. [Google Scholar] [CrossRef]
  59. Temel, D.; AlRegib, G. CSV: Image quality assessment based on color, structure, and visual system. Signal Process. Image Commun. 2016, 48, 92–103. [Google Scholar] [CrossRef] [Green Version]
  60. Ding, K.; Ma, K.; Wang, S.; Simoncelli, E.P. Image quality assessment: Unifying structure and texture similarity. arXiv 2020, arXiv:2004.07728. [Google Scholar] [CrossRef] [PubMed]
  61. Temel, D.; AlRegib, G. ReSIFT: Reliability-weighted sift-based image quality assessment. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2047–2051. [Google Scholar]
  62. Yang, G.; Li, D.; Lu, F.; Liao, Y.; Yang, W. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 6. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Fractional derivatives of the sine function with order between 0.1 and 0.9 .
Figure 2. Illustration of Grünwald–Letnikov derivative with different values of α : (a) Grayscale test image, (b) α = 0.2 , (c) α = 0.4 , (d) α = 0.6 , (e) α = 0.8 .
Figure 3. High-level overview of the proposed method.
Figure 4. Illustration of Scharr operator: (a) Grayscale test image, (b) Normalized x-gradient from Scharr operator, (c) Normalized y-gradient from Scharr operator, (d) Normalized gradient magnitude from Scharr operator.
Figure 5. Empirical MOS distributions in the used benchmark IQA databases: (a) KADID-10k [5], (b) TID2013 [3], (c) TID2008 [56], and (d) CSIQ [57].
Figure 6. Sample images from KADID-10k [5]: (a) Reference, pristine image, (b) Gaussian blur, MOS = 3.20, (c) Color saturation, MOS = 3.13, (d) Color block, MOS = 2.30.
Table 1. Summary of the main properties of the applied publicly available IQA benchmark databases.
Attribute | KADID-10k [5] | TID2013 [3] | TID2008 [56] | CSIQ [57]
Year | 2019 | 2013 | 2008 | 2010
Number of reference images | 81 | 25 | 25 | 30
Number of distorted images | 10,125 | 3000 | 1700 | 866
Number of distortion types | 25 | 24 | 17 | 6
Number of distortion levels | 5 | 5 | 4 | 4–5
Subjective testing method | DCR | Custom | Custom | Custom
Resolution | 512 × 384 | 512 × 384 | 512 × 384 | 512 × 512
MOS range | 1–5 | 0–9 | 0–9 | 0–1
Table 2. Comparison of LGV and SWLGV to the state-of-the-art on KADID-10k [5] and TID2013 [3]. The highest values are typed in bold, while the second highest ones are underlined.
FR-IQA Metric | PLCC (KADID-10k [5]) | SROCC (KADID-10k [5]) | KROCC (KADID-10k [5]) | PLCC (TID2013 [3]) | SROCC (TID2013 [3]) | KROCC (TID2013 [3])
2stepQA [58] | 0.768 | 0.771 | 0.571 | 0.736 | 0.733 | 0.550
CSV [59] | 0.671 | 0.669 | 0.531 | 0.852 | 0.848 | 0.657
DISTS [60] | 0.809 | 0.814 | 0.626 | 0.759 | 0.711 | 0.524
GSM [8] | 0.780 | 0.780 | 0.588 | 0.789 | 0.787 | 0.593
MAD [57] | 0.716 | 0.724 | 0.535 | 0.827 | 0.778 | 0.600
MS-SSIM [16] | 0.819 | 0.821 | 0.630 | 0.794 | 0.785 | 0.604
ReSIFT [61] | 0.648 | 0.628 | 0.468 | 0.630 | 0.623 | 0.471
RVSIM [62] | 0.728 | 0.719 | 0.540 | 0.763 | 0.683 | 0.520
SSIM [15] | 0.670 | 0.671 | 0.489 | 0.618 | 0.616 | 0.437
LGV | 0.640 | 0.820 | 0.630 | 0.832 | 0.801 | 0.631
SWLGV | 0.685 | 0.840 | 0.655 | 0.855 | 0.804 | 0.637
Table 3. Comparison of LGV and SWLGV to the state-of-the-art on TID2008 [56] and CSIQ [57]. The highest values are typed in bold, while the second highest ones are underlined.
FR-IQA Metric | PLCC (TID2008 [56]) | SROCC (TID2008 [56]) | KROCC (TID2008 [56]) | PLCC (CSIQ [57]) | SROCC (CSIQ [57]) | KROCC (CSIQ [57])
2stepQA [58] | 0.757 | 0.769 | 0.574 | 0.841 | 0.849 | 0.655
CSV [59] | 0.852 | 0.851 | 0.659 | 0.933 | 0.933 | 0.766
DISTS [60] | 0.705 | 0.668 | 0.488 | 0.930 | 0.930 | 0.764
GSM [8] | 0.782 | 0.781 | 0.578 | 0.906 | 0.910 | 0.729
MAD [57] | 0.831 | 0.829 | 0.639 | 0.950 | 0.947 | 0.796
MS-SSIM [16] | 0.838 | 0.846 | 0.648 | 0.913 | 0.917 | 0.743
ReSIFT [61] | 0.627 | 0.632 | 0.484 | 0.884 | 0.868 | 0.695
RVSIM [62] | 0.789 | 0.743 | 0.566 | 0.923 | 0.903 | 0.728
SSIM [15] | 0.669 | 0.675 | 0.485 | 0.812 | 0.812 | 0.606
LGV | 0.778 | 0.874 | 0.687 | 0.779 | 0.926 | 0.760
SWLGV | 0.811 | 0.884 | 0.705 | 0.776 | 0.922 | 0.755
Table 4. Comparison of LGV and SWLGV to the state-of-the-art. Direct and weighted average PLCC, SROCC, and KROCC values are reported. Measured on KADID-10k [5], TID2013 [3], TID2008 [56], and CSIQ [57]. The highest values are typed in bold, while the second highest ones are underlined.
FR-IQA Metric | PLCC (direct avg.) | SROCC (direct avg.) | KROCC (direct avg.) | PLCC (weighted avg.) | SROCC (weighted avg.) | KROCC (weighted avg.)
2stepQA [58] | 0.776 | 0.781 | 0.587 | 0.765 | 0.768 | 0.572
CSV [59] | 0.827 | 0.825 | 0.653 | 0.740 | 0.738 | 0.582
DISTS [60] | 0.801 | 0.781 | 0.601 | 0.795 | 0.785 | 0.599
GSM [8] | 0.814 | 0.815 | 0.622 | 0.789 | 0.789 | 0.596
MAD [57] | 0.831 | 0.820 | 0.643 | 0.763 | 0.758 | 0.573
MS-SSIM [16] | 0.841 | 0.842 | 0.656 | 0.821 | 0.822 | 0.633
ReSIFT [61] | 0.697 | 0.688 | 0.530 | 0.655 | 0.641 | 0.483
RVSIM [62] | 0.801 | 0.762 | 0.589 | 0.752 | 0.725 | 0.549
SSIM [15] | 0.692 | 0.694 | 0.504 | 0.668 | 0.669 | 0.485
LGV | 0.757 | 0.855 | 0.677 | 0.699 | 0.828 | 0.644
SWLGV | 0.782 | 0.863 | 0.688 | 0.736 | 0.842 | 0.662
Table 5. Comparison of the SROCC of each FR-IQA metric on TID2013's [3] distortion levels (Level 1 represents the lowest level of degradation, while Level 5 represents the highest one). The highest values are typed in bold, while the second highest ones are underlined.
Distortion level | 2stepQA [58] | CSV [59] | DISTS [60] | GSM [8] | MAD [57] | MS-SSIM [16] | ReSIFT [61] | RVSIM [62] | SSIM [15] | LGV | SWLGV
Level 1 | 0.246 | 0.424 | 0.235 | 0.372 | 0.388 | 0.166 | 0.181 | 0.248 | 0.204 | 0.401 | 0.398
Level 2 | 0.394 | 0.626 | 0.440 | 0.512 | 0.368 | 0.049 | 0.401 | 0.430 | 0.276 | 0.605 | 0.610
Level 3 | 0.539 | 0.635 | 0.367 | 0.523 | 0.442 | 0.240 | 0.415 | 0.416 | 0.084 | 0.630 | 0.632
Level 4 | 0.571 | 0.749 | 0.606 | 0.669 | 0.284 | 0.172 | 0.699 | 0.702 | 0.208 | 0.728 | 0.735
Level 5 | 0.663 | 0.787 | 0.664 | 0.745 | 0.308 | 0.397 | 0.788 | 0.803 | 0.202 | 0.746 | 0.747
All | 0.733 | 0.848 | 0.711 | 0.787 | 0.778 | 0.785 | 0.623 | 0.683 | 0.616 | 0.801 | 0.804
Table 6. Comparison of the SROCC of each FR-IQA metric on TID2008's [56] distortion levels (Level 1 represents the lowest level of degradation, while Level 4 represents the highest one). The highest values are typed in bold, while the second highest ones are underlined.
Distortion level | 2stepQA [58] | CSV [59] | DISTS [60] | GSM [8] | MAD [57] | MS-SSIM [16] | ReSIFT [61] | RVSIM [62] | SSIM [15] | LGV | SWLGV
Level 1 | 0.470 | 0.638 | 0.566 | 0.639 | 0.432 | 0.067 | 0.457 | 0.634 | 0.368 | 0.644 | 0.649
Level 2 | 0.619 | 0.683 | 0.381 | 0.636 | 0.520 | 0.221 | 0.437 | 0.513 | 0.105 | 0.701 | 0.703
Level 3 | 0.573 | 0.774 | 0.581 | 0.677 | 0.239 | 0.059 | 0.707 | 0.761 | 0.190 | 0.779 | 0.782
Level 4 | 0.610 | 0.829 | 0.628 | 0.718 | 0.232 | 0.275 | 0.788 | 0.825 | 0.241 | 0.841 | 0.845
All | 0.769 | 0.851 | 0.668 | 0.781 | 0.829 | 0.846 | 0.632 | 0.743 | 0.675 | 0.874 | 0.884
Table 7. Comparison on TID2013’s [3] distortion types. SROCC values are given. The highest values are typed in bold, while the second highest ones are underlined.
Distortion type | 2stepQA [58] | CSV [59] | DISTS [60] | GSM [8] | MAD [57] | MS-SSIM [16] | ReSIFT [61] | RVSIM [62] | SSIM [15] | LGV | SWLGV
AGN | 0.817 | 0.938 | 0.845 | 0.899 | 0.912 | 0.624 | 0.831 | 0.886 | 0.848 | 0.921 | 0.936
ANC | 0.590 | 0.862 | 0.786 | 0.823 | 0.800 | 0.387 | 0.749 | 0.836 | 0.779 | 0.911 | 0.904
SCN | 0.860 | 0.939 | 0.859 | 0.927 | 0.929 | 0.683 | 0.839 | 0.868 | 0.851 | 0.887 | 0.930
MN | 0.395 | 0.748 | 0.814 | 0.704 | 0.658 | 0.372 | 0.702 | 0.734 | 0.775 | 0.821 | 0.842
HFN | 0.828 | 0.927 | 0.868 | 0.884 | 0.902 | 0.704 | 0.869 | 0.895 | 0.889 | 0.916 | 0.874
IN | 0.715 | 0.848 | 0.674 | 0.813 | 0.743 | 0.766 | 0.824 | 0.865 | 0.810 | 0.759 | 0.795
QN | 0.886 | 0.892 | 0.810 | 0.911 | 0.895 | 0.720 | 0.745 | 0.869 | 0.817 | 0.866 | 0.956
GB | 0.853 | 0.933 | 0.926 | 0.954 | 0.915 | 0.762 | 0.937 | 0.970 | 0.910 | 0.952 | 0.961
DEN | 0.900 | 0.952 | 0.899 | 0.955 | 0.922 | 0.819 | 0.907 | 0.926 | 0.876 | 0.984 | 0.976
JPEG | 0.867 | 0.944 | 0.897 | 0.933 | 0.924 | 0.784 | 0.905 | 0.930 | 0.893 | 0.970 | 0.952
JP2K | 0.891 | 0.966 | 0.931 | 0.934 | 0.929 | 0.790 | 0.928 | 0.946 | 0.806 | 0.945 | 0.968
JGTE | 0.806 | 0.800 | 0.906 | 0.866 | 0.768 | 0.582 | 0.712 | 0.831 | 0.701 | 0.895 | 0.900
J2TE | 0.854 | 0.887 | 0.865 | 0.893 | 0.854 | 0.742 | 0.835 | 0.882 | 0.813 | 0.918 | 0.874
NEPN | 0.775 | 0.811 | 0.833 | 0.804 | 0.803 | 0.792 | 0.693 | 0.771 | 0.634 | 0.719 | 0.795
BLOCK | 0.044 | 0.183 | 0.302 | 0.588 | −0.322 | 0.382 | 0.440 | 0.545 | 0.564 | 0.603 | 0.601
MS | 0.660 | 0.654 | 0.752 | 0.728 | 0.708 | 0.732 | 0.418 | 0.559 | 0.738 | 0.677 | 0.756
CC | 0.430 | 0.227 | 0.464 | 0.466 | 0.420 | 0.027 | −0.055 | 0.132 | 0.355 | 0.659 | 0.667
CCS | −0.258 | 0.809 | 0.789 | 0.676 | −0.059 | −0.055 | −0.209 | 0.366 | 0.742 | 0.750 | 0.758
MGN | 0.747 | 0.884 | 0.790 | 0.831 | 0.888 | 0.653 | 0.765 | 0.853 | 0.804 | 0.819 | 0.841
CN | 0.858 | 0.924 | 0.907 | 0.902 | 0.904 | 0.596 | 0.882 | 0.914 | 0.797 | 0.838 | 0.859
LCNI | 0.902 | 0.965 | 0.932 | 0.945 | 0.950 | 0.713 | 0.897 | 0.933 | 0.877 | 0.873 | 0.917
ICQD | 0.808 | 0.919 | 0.832 | 0.901 | 0.867 | 0.739 | 0.770 | 0.871 | 0.820 | 0.845 | 0.864
CA | 0.702 | 0.845 | 0.879 | 0.835 | 0.760 | 0.568 | 0.838 | 0.871 | 0.740 | 0.793 | 0.788
SSR | 0.926 | 0.976 | 0.944 | 0.961 | 0.949 | 0.801 | 0.944 | 0.956 | 0.822 | 0.800 | 0.810
All | 0.733 | 0.848 | 0.711 | 0.787 | 0.778 | 0.785 | 0.623 | 0.683 | 0.616 | 0.801 | 0.804
Table 8. Comparison on TID2008’s [56] distortion types. SROCC values are given. The highest values are typed in bold, while the second highest ones are underlined.
Distortion type | 2stepQA [58] | CSV [59] | DISTS [60] | GSM [8] | MAD [57] | MS-SSIM [16] | ReSIFT [61] | RVSIM [62] | SSIM [15] | LGV | SWLGV
AGN | 0.766 | 0.922 | 0.812 | 0.855 | 0.872 | 0.610 | 0.771 | 0.840 | 0.805 | 0.913 | 0.922
ANC | 0.627 | 0.893 | 0.811 | 0.821 | 0.803 | 0.354 | 0.762 | 0.829 | 0.780 | 0.898 | 0.897
SCN | 0.814 | 0.932 | 0.838 | 0.904 | 0.901 | 0.727 | 0.810 | 0.837 | 0.800 | 0.917 | 0.914
MN | 0.450 | 0.781 | 0.830 | 0.736 | 0.673 | 0.304 | 0.728 | 0.760 | 0.797 | 0.809 | 0.843
HFN | 0.818 | 0.936 | 0.870 | 0.889 | 0.894 | 0.749 | 0.881 | 0.886 | 0.871 | 0.918 | 0.908
IN | 0.659 | 0.819 | 0.626 | 0.764 | 0.650 | 0.767 | 0.777 | 0.836 | 0.776 | 0.786 | 0.786
QN | 0.850 | 0.894 | 0.770 | 0.903 | 0.851 | 0.708 | 0.730 | 0.836 | 0.784 | 0.887 | 0.941
GB | 0.877 | 0.923 | 0.909 | 0.948 | 0.896 | 0.759 | 0.904 | 0.963 | 0.866 | 0.959 | 0.960
DEN | 0.919 | 0.970 | 0.931 | 0.971 | 0.928 | 0.786 | 0.923 | 0.939 | 0.873 | 0.967 | 0.971
JPEG | 0.895 | 0.948 | 0.894 | 0.937 | 0.931 | 0.774 | 0.914 | 0.926 | 0.880 | 0.953 | 0.948
JP2K | 0.910 | 0.984 | 0.953 | 0.949 | 0.941 | 0.837 | 0.935 | 0.970 | 0.745 | 0.975 | 0.984
JGTE | 0.851 | 0.790 | 0.907 | 0.871 | 0.781 | 0.606 | 0.735 | 0.860 | 0.666 | 0.879 | 0.901
J2TE | 0.845 | 0.852 | 0.833 | 0.880 | 0.802 | 0.742 | 0.778 | 0.854 | 0.769 | 0.902 | 0.901
NEPN | 0.803 | 0.752 | 0.882 | 0.784 | 0.801 | 0.749 | 0.761 | 0.732 | 0.588 | 0.727 | 0.793
Block | 0.441 | 0.770 | 0.618 | 0.843 | −0.362 | 0.765 | 0.743 | 0.782 | 0.804 | 0.896 | 0.897
MS | 0.655 | 0.594 | 0.681 | 0.638 | 0.563 | 0.711 | 0.322 | 0.525 | 0.629 | 0.699 | 0.774
CC | 0.597 | 0.330 | 0.649 | 0.634 | 0.548 | 0.042 | −0.018 | 0.194 | 0.502 | 0.669 | 0.693
All | 0.769 | 0.851 | 0.668 | 0.781 | 0.829 | 0.846 | 0.632 | 0.743 | 0.675 | 0.874 | 0.884