Article

Using HVS Dual-Pathway and Contrast Sensitivity to Blindly Assess Image Quality

1 Department of Artificial Intelligence, Shenzhen University, Shenzhen 518060, China
2 Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong, China
3 Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4974; https://doi.org/10.3390/s23104974
Submission received: 1 May 2023 / Revised: 19 May 2023 / Accepted: 21 May 2023 / Published: 22 May 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Blind image quality assessment (BIQA) aims to evaluate image quality in a way that closely matches human perception. To achieve this goal, the strengths of deep learning and the characteristics of the human visual system (HVS) can be combined. In this paper, inspired by the ventral pathway and the dorsal pathway of the HVS, a dual-pathway convolutional neural network is proposed for BIQA tasks. The proposed method consists of two pathways: the “what” pathway, which mimics the ventral pathway of the HVS to extract the content features of distorted images, and the “where” pathway, which mimics the dorsal pathway of the HVS to extract the global shape features of distorted images. Then, the features from the two pathways are fused and mapped to an image quality score. Additionally, gradient images weighted by contrast sensitivity are used as the input to the “where” pathway, allowing it to extract global shape features that are more sensitive to human perception. Moreover, a dual-pathway multi-scale feature fusion module is designed to fuse the multi-scale features of the two pathways, enabling the model to capture both global features and local details, thus improving the overall performance of the model. Experiments conducted on six databases show that the proposed method achieves state-of-the-art performance.

1. Introduction

With the rapid development of digital multimedia technology and the popularity of various photography devices, images have become an important source of human visual information. However, as a digital image travels from acquisition to the human visual system, degradation in image quality is inevitable. Therefore, it is meaningful to research image quality assessment (IQA) methods that are highly consistent with human visual perception [1].
According to the degree of participation of the original image information, objective IQA methods can be classified into the following categories: full-reference IQA, reduced-reference IQA, and no-reference IQA [2]. No-reference IQA is also called blind IQA (BIQA). Because BIQA methods do not require the use of reference image information and are more closely related to actual application scenarios, they have become a focus of research in recent years [3].
Traditional BIQA methods (e.g., NIQE [4], BRISQUE [5], DIIVINE [6], and BIQI [7]) typically extract low-level features from images and then use regression models to map them to image quality scores. The extracted features are manually designed and often inadequate to fully characterize image quality. With the development of deep learning, many deep-learning-based BIQA methods (e.g., IQA-CNN [8], DIQaM-NR [9], DIQA [10], HyperIQA [11], DB-CNN [12], and TS-CNN [13]) have been proposed. With their powerful learning abilities, these methods can extract high-level features of distorted images, and their performance is greatly improved compared to traditional methods. However, most existing deep-learning-based IQA methods improve performance by designing new network structures with stronger feature-extraction ability, while overlooking the important influence of HVS characteristics and the guiding role these characteristics may play.
The goal of BIQA is to judge the degree of image distortion with high consistency to human visual perception. It is natural to combine the characteristics of the human visual system (HVS) with powerful deep learning methods. Moreover, based on HVS characteristics, research on BIQA can provide new research perspectives for the study of IQA. This can help to develop evaluation metrics that are more in line with HVS characteristics and provide useful references for understanding how the HVS perceives image degradation mechanisms, making it a valuable scientific problem.
The HVS has many characteristics, such as the dual-pathway feature [14,15], in which visual information is transmitted through the ventral pathway and dorsal pathway in the visual cortex. The former is involved in image-content recognition and long-term memory and is also known as the “what” pathway. The latter is involved in processing spatial-location information of objects and is also known as the “where” pathway. Inspired by the ventral and dorsal pathways of the HVS, Simonyan and Zisserman [16] proposed a dual-stream convolutional neural network (CNN) structure and successfully applied it to the field of video action recognition. They used a spatial stream to take video frames as input to learn scene information and a temporal stream to take optical flow images as input to learn object motion information. Optical flow images explicitly describe the motion between video frames, eliminating the need for CNNs to implicitly predict object motion information, simplifying the learning process, and significantly improving the model accuracy. The contrast sensitivity characteristic of the HVS reflects the different sensitivity of the human eye to different spatial frequencies [17]. This characteristic is similar to the widely used spatial attention mechanism [18] and image saliency [19]. Campbell et al. [20] proposed a contrast sensitivity function to explicitly calculate the sensitivity of the HVS to different spatial frequencies. Some traditional IQA methods [21,22] use the contrast sensitivity function to weight the extracted features to achieve better results. In addition, when perceiving images, the HVS simultaneously pays attention to both global and local features [23]. This characteristic is particularly important for IQA because the degree of distortion of authentically distorted images is often not uniformly distributed [24]. Some IQA methods [25,26] are designed to extract multi-scale features based on this characteristic, and the results show that using multi-scale features can effectively improve the algorithm’s performance. The aforementioned HVS characteristics have been directly or indirectly applied to computer-vision-related tasks and have been experimentally proven to be effective.
The main contribution of this article is to propose a new model based on dual-pathway and contrast sensitivity (DPCS) for BIQA. The HVS’s dual-pathway characteristic is used to guide the construction of a dual-pathway BIQA deep learning model, which can simultaneously learn the content and spatial location information of distorted images. The multi-scale and contrast sensitivity characteristics of the HVS are also introduced to enable the model to extract distortion features that are highly consistent with human perception. Specifically, our contributions are as follows:
  • First, inspired by the ventral and dorsal pathways of the HVS, a dual-stream convolutional neural network is proposed, with the two streams named the “what” pathway and the “where” pathway, respectively. The “what” pathway extracts the content features of distorted images, while the “where” pathway extracts the global shape features. The features of the two streams are fused and mapped into an image quality score.
  • Second, by weighting the gradient image of the contrast sensitivity as the input of the “where” pathway, the global shape features that are sensitive to the human eye can be extracted.
  • Third, a dual-stream multi-scale feature fusion module is designed to fuse the multi-scale features of the two pathways, enabling the model to focus on both global and local features of distorted images.
The rest of this paper is organized as follows. Section 2 introduces related works for BIQA and analyzes their limitations. Section 3 provides a detailed description of the proposed HVS-based dual-stream model, image-preprocessing method, and dual-stream multi-scale feature fusion module. Section 4 reports the experiment results. Section 5 discusses some related issues and concludes this paper.

2. Related Works

According to the method for feature extraction, BIQA methods can be generally divided into two categories: handcrafted feature-extraction methods and learning-based methods. Handcrafted feature-extraction methods typically extract the natural scene statistics (NSS) features of distorted images. Researchers have found that the NSS features vary with the degree of distortion. Therefore, NSS features can be mapped to image quality scores through regression models.
Early NSS methods extracted features in the transform domain of the image. For example, the BIQI method proposed by Moorthy and Bovik [7] performs a wavelet transform on the distorted image and fits the wavelet decomposition coefficients using the generalized Gaussian distribution (GGD). They first determine the type of distortion and then predict the quality score of the image based on the specific distortion type. Later, they extend the features of BIQI to obtain the DIIVINE [6], which more comprehensively describes scene statistics by considering the correlation of sub-bands, scales, and directions. The BLIINDS method proposed by Saad et al. [27] performs a discrete cosine transform (DCT) on distorted images to extract contrast and structural features based on DCT, which are then mapped to quality scores through a probabilistic prediction model. It is computationally expensive for all of these methods to extract features in the transform domain of the image. To avoid transforming the image, many researchers have proposed methods to directly extract NSS features in the spatial domain. The BRISQUE method proposed by Mittal et al. [5] extracts the local normalized luminance coefficients of distorted images in the spatial domain and quantifies the loss of “naturalness” of distorted images. This method has very low computational complexity. Based on the BRISQUE, Mittal et al. proposed NIQE [4], which uses multivariate Gaussian models (MVGs) to fit the NSS features of distorted and natural images and defines the distance between the two models as the quality of the distorted image. The handcrafted feature-extraction methods achieve a good performance on small databases (such as LIVE [28]), but the designed features can only extract low-level features of images, and their expressive power is limited. Therefore, their performance on large-scale synthetically distorted databases (such as TID2013 [29] and KADID-10k [30]) and authentically distorted databases (such as LIVE Challenge [31]) is relatively poor.
With the successful applications of deep learning methods to other visual tasks [32,33], more and more researchers have applied deep learning to BIQA. Kang et al. [8] first used CNNs for no-reference image quality assessment. To address the problem of insufficient data, they segmented the distorted images into non-overlapping 32 × 32 patches and assigned each patch the quality score of its source image. Bosse et al. [9] proposed DIQaM-NR and WaDIQaM-NR based on VGG [32]. This method uses a deeper CNN to simultaneously predict the quality scores and weights of image patches, and a weighted summation is used to obtain the quality score of the whole image. Kim et al. [33] proposed BIECON. It uses an FR-IQA method to predict the quality scores of distorted image patches, utilizes these scores as intermediate targets to train the model, and subsequently fine-tunes the model using the ground-truth scores of images. Kim et al. [10] subsequently proposed DIQA. The framework is similar to BIECON but uses error maps as intermediate training targets to avoid overfitting. Su et al. [11] proposed HyperIQA for authentically distorted images. This method predicts the image quality score based on the perceived image content and also adds multi-scale features so that the model can capture local distortions. Some researchers have introduced multitask learning into BIQA, which integrates multiple tasks into one model for training so that the tasks promote each other based on their correlation. Kang et al. [34] proposed IQA-CNN++, which integrates image quality assessment and image distortion type classification tasks and improves the model’s distortion type classification performance through multitask training. Ma et al. [35] proposed MEON, which simultaneously performs distortion-type classification and quality score prediction. Unlike other multitask models, the authors first pre-train the distortion-type classification sub-network and then perform joint training of the quality score prediction network. The experimental results show that this pre-training mechanism is effective. Sun et al. [36] proposed a Distortion Graph Representation (DGR) learning framework called GraphIQA. GraphIQA distinguishes distortion types by learning the contrast relationship between different DGRs and infers the ranking distribution of samples from various levels within a DGR. Experimental results show that GraphIQA achieves state-of-the-art performance on both synthetic and authentic distortions. Zhu et al. [37] proposed a meta-learning-based NR-IQA method named MetaIQA. The method collects a diverse set of NR-IQA tasks for different distortions and employs meta-learning to capture prior knowledge. The quality prior-knowledge model is then fine-tuned for a target NR-IQA task, achieving superior performance compared to state-of-the-art methods. Wang and Ma [38] proposed an active learning method to improve NR-IQA methods by leveraging group maximum differentiation (gMAD) examples. The method involves pre-training a DNN-based BIQA model, identifying weaknesses through gMAD comparisons, and fine-tuning the model using human-rated images. Li et al. [39] proposed a normalization-based loss function, called “Norm-in-Norm”, for NR-IQA. The loss function normalizes the predicted and subjective quality scores and is defined based on the norm of the differences between these normalized values.
Theoretical analysis and experimental results show that the embedded normalization enhances the stability and predictability of gradients, leading to faster convergence. Zhang et al. [40] conducted the first study on the perceptual robustness of NR-IQA models. The study identifies that conventional, knowledge-driven NR-IQA models and modern DNN-based methods lack inherent robustness against imperceptible perturbations. Furthermore, the counter-examples generated by one NR-IQA model do not efficiently transfer to falsify other models, highlighting valuable insights into the design flaws of individual models.
In recent years, continual learning has achieved significant success in the field of image classification, and some researchers have also applied it to IQA. Zhang et al. [41] formulated continual learning for NR-IQA to handle novel distortions. The method allows the model to learn from a stream of IQA datasets, preventing catastrophic forgetting and adapting to new data. Experimental results show the effectiveness of the proposed method compared to standard training techniques for BIQA. Liu et al. [42] proposed a lifelong IQA (LIQA) method to address the challenge of adapting to unseen distortion types by mitigating catastrophic forgetting and learning new knowledge without accessing previous training data. It utilizes a Split-and-Merge distillation strategy to train a single-head network for task-agnostic predictions. To enhance the model’s feature-extraction ability, some researchers have proposed dual-stream CNN structures. Zhang et al. [12] proposed DB-CNN, which uses VGG-16, pre-trained on ImageNet [43], to extract authentic distortion features and a CNN, pre-trained on the Waterloo Exploration Database [44] and PASCAL VOC 2012 [45], to extract synthetic distortion features. Yan et al. [13] also proposed a dual-stream method. Its two streams take the distorted image and its gradient image as input, respectively, so that the gradient stream focuses more on the details of the distorted image.
Although the aforementioned deep-learning-based BIQA methods have achieved good results, there is still room for further improvement. For example, the relevant characteristics of the HVS can be combined with deep learning to make the model consistent with the perceptual approach of the HVS. Inspired by the dual-pathway characteristics of the HVS, our work also adopts a dual-pathway structure. However, our two pathways extract the content features and location features of the distorted image, which are functionally consistent with the ventral and dorsal pathways of the HVS. In addition, our dual-pathway model adds contrast-sensitivity-weighted gradient images as an input. This provides different perspectives of the distorted image for the model and explicitly learns the contrast sensitivity characteristics of the HVS. The dual-pathway multi-scale feature fusion module designed in our work enables the model to focus on the global and local features of the image simultaneously. It is also highly consistent with the process of HVS perception.
In comparison to DB-CNN and TS-CNN, particularly TS-CNN, our method shares the idea of using gradient images as the input for one stream of the network. However, there are key differences between our proposed method and these two works. First, our method explicitly models both the ventral (“what”) and dorsal (“where”) pathways of the human visual system, providing a more comprehensive representation of the human perception mechanism. Second, we introduce a contrast-sensitivity weighting scheme for the gradient images in the “where” pathway, which enhances the sensitivity of the network to important contrast information in the input images. Third, our dual-pathway multi-scale feature fusion module allows for the effective integration of features at different levels, enabling the network to capture both local and global image characteristics. These differences distinguish our proposed method from DB-CNN and TS-CNN and enhance the ability of the proposed deep network to capture and evaluate image quality from the perspective of human visual perception.

3. Proposed Method

Inspired by the ventral and dorsal pathways of the HVS, this paper proposes a dual-stream CNN structure for BIQA. The model architecture is shown in Figure 1, where the two pathways are referred to as the “what” pathway and the “where” pathway. Han and Sereno [46,47] proved that when modeling the ventral and dorsal pathways using CNNs, both pathways can use the same network structure. Therefore, here, both pathways have the same structure and use the ResNet-50 [48] as the backbone network, which is pre-trained on ImageNet. However, these two pathways receive as input distorted images and contrast-sensitivity-weighted gradient images, respectively, to achieve the function of the ventral and dorsal pathways. In addition, the model introduces a multi-scale feature fusion module that concatenates the multi-scale feature maps of the two streams and fuses them through the module. This allows the model to focus on the global features and local details of the image simultaneously.
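To make the overall structure concrete, the following is a minimal PyTorch sketch of the dual-pathway skeleton. The class and variable names are illustrative rather than taken from the paper, the multi-scale fusion module of Section 3.2 is replaced here by a simple concatenation of globally pooled features for brevity, and a recent torchvision with the weights API is assumed.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class DualPathwayIQA(nn.Module):
    """Sketch of the dual-pathway BIQA model: a 'what' pathway fed with the
    distorted image and a 'where' pathway fed with the contrast-sensitivity-
    weighted gradient image, both built on ImageNet-pre-trained ResNet-50."""
    def __init__(self):
        super().__init__()
        def backbone():
            net = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
            # Drop the final average-pooling and fully connected layers (Table 1).
            return nn.Sequential(*list(net.children())[:-2])
        self.what_pathway = backbone()    # content features
        self.where_pathway = backbone()   # global shape features
        self.gap = nn.AdaptiveAvgPool2d(1)
        # Simple regression head; the paper fuses multi-scale features instead.
        self.fc = nn.Linear(2048 * 2, 1)

    def forward(self, img, cs_grad):
        f_what = self.gap(self.what_pathway(img)).flatten(1)      # (B, 2048)
        f_where = self.gap(self.where_pathway(cs_grad)).flatten(1)  # (B, 2048)
        return self.fc(torch.cat([f_what, f_where], dim=1)).squeeze(1)

# Example: two random 224 x 224 RGB crops and their weighted gradient images.
model = DualPathwayIQA()
scores = model(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))
```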

3.1. “What” Pathway and “Where” Pathway

The “what” pathway takes a distorted image as the input and extracts content features through a pre-trained ResNet-50. The pre-trained ResNet-50 has demonstrated excellent performance in image classification tasks, proving its strong ability to understand image content. Because the content and structure of an image are closely related to its perceived quality, using ResNet-50 can better capture details and structural information in images, thus improving the accuracy of the model. To apply it to our method, the last average pooling layer and fully connected layer of the original ResNet-50 are removed, as shown in Table 1.
In line with the “what” pathway, the “where” pathway also uses the pre-trained ResNet-50 as a feature extractor. However, the “where” pathway takes a gradient image weighted by contrast sensitivity as the input. Gradient images provide rich structural and contour information, and the HVS is highly sensitive to such information [49]. Using gradient images allows the “where” pathway to extract object shape information from the distorted image, which is more consistent with the global shape perception of the dorsal pathway [50].
The Scharr operator is a widely used edge-detection filter in computer vision and image processing. It is designed to capture edges and gradients with high accuracy and low computational complexity. Compared to other popular gradient filters such as the Sobel and Roberts operators, the Scharr operator offers superior performance in terms of edge detection and gradient estimation. Specifically, the Scharr operator uses a 3 × 3 kernel that approximates the first derivative of the image and is optimized for rotational symmetry, which means it can detect edges with nearly equal sensitivity in all directions. This characteristic is particularly valuable for image quality assessment, as it allows edge information in distorted images to be captured accurately. Therefore, the Scharr operator [51] is chosen as the gradient operator, and its mask structure is shown in Figure 2.
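For illustration, the gradient image can be obtained by convolving a grayscale version of the distorted image with the horizontal and vertical Scharr kernels and taking the gradient magnitude. The sketch below is a minimal PyTorch implementation under that assumption; the exact color handling and normalization used in the paper are not specified here.

```python
import torch
import torch.nn.functional as F

# 3 x 3 Scharr kernels for the two derivative directions (cf. Figure 2).
SCHARR_X = torch.tensor([[ -3., 0.,  3.],
                         [-10., 0., 10.],
                         [ -3., 0.,  3.]]).view(1, 1, 3, 3)
SCHARR_Y = SCHARR_X.transpose(2, 3)

def gradient_image(gray):
    """gray: (B, 1, H, W) tensor in [0, 1]; returns the Scharr gradient magnitude."""
    gx = F.conv2d(gray, SCHARR_X, padding=1)
    gy = F.conv2d(gray, SCHARR_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)
```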
The HVS has the characteristic of contrast sensitivity, meaning that the sensitivity of the human eye varies for different spatial frequencies. This characteristic is similar to the widely used spatial attention mechanism [18] and image saliency [19]. Campbell et al. [20] proposed a contrast sensitivity function to explicitly calculate the sensitivity of the HVS for different spatial frequencies:
$$A(f) = 2.6\,\bigl(0.0192 + 0.114 f\bigr)\, e^{-(0.114 f)^{1.1}},$$
where $f$ denotes the spatial frequency at a point. For a pixel $I(i,j)$, its spatial frequency can be calculated as:
$$f = \sqrt{f_x^2 + f_y^2},$$
$$f_x = I(i,j) - I(i-1,j),$$
$$f_y = I(i,j) - I(i,j-1).$$
The proposed method performs contrast-sensitivity weighting on gradient images to enhance the frequency information that is sensitive to the HVS, thereby making the model highly consistent with the HVS perception. Specifically, a contrast-sensitivity function is used to calculate the contrast sensitivity of each pixel in the distorted image. This yields a contrast-sensitivity image, which is then combined with the gradient image to obtain the contrast-sensitivity-weighted gradient image:
$$I_{CWG} = \alpha I_C + \beta I_G + \gamma,$$
where $I_C$ denotes the contrast sensitivity image, $I_G$ denotes the gradient image, and $\alpha$, $\beta$, and $\gamma$ are constants. We set $\alpha = \beta = 0.5$ and $\gamma = 0$.
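A minimal sketch of this pre-processing step is shown below, continuing the PyTorch style of the previous snippet and reusing the gradient_image helper. The backward differences at the image border are handled here by a circular shift, and the 0.0192 constant follows the commonly used form of the contrast sensitivity function; both are assumptions where the paper does not state the details.

```python
import torch

def contrast_sensitivity_image(gray):
    """gray: (B, 1, H, W). Computes a per-pixel 'spatial frequency' from backward
    differences and maps it through the contrast sensitivity function A(f)."""
    fx = gray - torch.roll(gray, shifts=1, dims=2)   # I(i, j) - I(i - 1, j)
    fy = gray - torch.roll(gray, shifts=1, dims=3)   # I(i, j) - I(i, j - 1)
    f = torch.sqrt(fx ** 2 + fy ** 2 + 1e-12)
    return 2.6 * (0.0192 + 0.114 * f) * torch.exp(-(0.114 * f) ** 1.1)

def cs_weighted_gradient(gray, alpha=0.5, beta=0.5, gamma=0.0):
    """I_CWG = alpha * I_C + beta * I_G + gamma, the input of the 'where' pathway."""
    i_c = contrast_sensitivity_image(gray)
    i_g = gradient_image(gray)   # Scharr gradient magnitude from the sketch above
    return alpha * i_c + beta * i_g + gamma
```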
Representative gradient images and the corresponding contrast-sensitivity-weighted gradient images are shown in Figure 3. Compared to the gradient images, the contrast-sensitivity-weighted gradient images better highlight the regions of interest to human eyes, such as the patterns around the eyes and the edges of the bird’s beak and body. This is because the contrast-sensitivity weighting assigns different weights to different regions of the image, enabling the weighted gradient images to capture the image details that human eyes pay attention to. Additionally, contrast-sensitivity-weighted gradient images are capable of capturing the structural information of distortions, such as block artifacts caused by JPEG compression and image noise caused by Gaussian distortion, which can significantly affect image quality. Figure 4 shows the gradient images and the corresponding contrast-sensitivity-weighted gradient images for different JPEG distortion levels. As the distortion level increases from top to bottom, the block artifacts caused by JPEG compression become increasingly apparent, leading to declining image quality. For different distortion levels, the contrast-sensitivity-weighted images accurately capture the changes in distortion structures in the image, especially in regions to which the HVS is highly sensitive.
The feature maps extracted by the “what” pathway and the “where” pathway for different distortion types are shown in Figure 5. It can be seen intuitively that the feature maps extracted by the “what” pathway focus more on the content of the image, such as the lighthouse and buildings. The feature maps extracted by the “where” pathway not only focus on the shape of the main content but also accurately perceive the global shape of the image. This enables the “where” pathway to accurately extract and enhance the distorted structural features in the image. For example, the block effect is strengthened in the feature map extracted for JPEG distortion, the global noise is strengthened in the feature map extracted for WN distortion, and the blurred areas receive more attention in the feature map extracted for GB distortion. Overall, the feature maps extracted by the “what” pathway focus more on the main content of the image, while the “where” pathway focuses more on global shape perception rather than just the main features of the image. This is consistent with the functions of the ventral and dorsal pathways and improves the performance of the model.

3.2. Proposed Multi-Scale Module

When image quality is evaluated, the HVS not only focuses on the global content features of the image, which are high-level features, but also pays attention to the local distortion features of the image, which are low-level features [23]. This characteristic is particularly important for IQA tasks because the degree of distortion in authentically distorted images is often not uniformly distributed. Using only global features may not enable the model to perceive the local distortion features of the image. Therefore, we propose a multi-scale module to extract distortion features at different scales in distorted images and effectively fuse the multi-scale features from the two pathways. This enables the model to focus on both global and local features simultaneously, which is more in line with HVS perception.
The multi-scale module, as shown in Figure 6, concatenates the features output by Conv2_10, Conv3_12, and Conv4_18 in the “what” pathway and the “where” pathway. A channel attention mechanism [52] is then used to reassign different channel importance to the concatenated feature map. Specifically, the concatenated feature map is, first, global-average-pooled to a one-dimensional vector. Then, a fully connected layer is used to generate a weight vector W c for each channel, so that each channel has a corresponding weight to better distinguish the importance of each channel. Finally, the weight vector is multiplied with the concatenated feature map to further fuse the feature maps from the “what” pathway and the “where” pathway, thereby enhancing the representational power and robustness of the features. A 1 × 1 convolution is used to reduce the number of channels in the fused feature map by half to reduce the computational cost, and global average pooling is applied to obtain a multi-scale feature vector. This process can be described as:
$$F_i = F_c \oplus F_p,$$
$$W_c = \sigma\bigl(W_2\,\delta(W_1\,\mathrm{GAP}(F_i))\bigr),$$
$$\tilde{F}_i = W_c \otimes F_i,$$
$$F_m = \mathrm{GAP}\bigl(\mathrm{Conv}_{1\times1}(\tilde{F}_i)\bigr),$$
where $F_c$ and $F_p$ denote the feature maps from the “what” pathway and the “where” pathway, respectively; $\oplus$ denotes channel-wise concatenation and $\otimes$ denotes channel-wise multiplication; $W_1$ and $W_2$ are the parameters of the two fully connected layers; $\sigma(\cdot)$ and $\delta(\cdot)$ denote the sigmoid and ReLU functions; $\mathrm{GAP}(\cdot)$ denotes global average pooling; and $\mathrm{Conv}_{1\times1}(\cdot)$ denotes the $1 \times 1$ convolution operation.
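The fusion step can be sketched as follows, assuming the multi-scale feature maps from the two pathways have already been brought to a common spatial resolution and channel count; the class name and the reduction ratio of the channel attention are illustrative choices, not values given in the paper.

```python
import torch
import torch.nn as nn

class MSFusion(nn.Module):
    """Concatenates 'what' and 'where' feature maps, reweights channels with an
    SE-style attention, halves the channels with a 1x1 convolution, and pools."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(2 * channels, (2 * channels) // reduction)
        self.fc2 = nn.Linear((2 * channels) // reduction, 2 * channels)
        self.conv1x1 = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_what, f_where):
        f = torch.cat([f_what, f_where], dim=1)                # F_i = F_c (+) F_p
        w = f.mean(dim=(2, 3))                                 # GAP to a vector
        w = torch.sigmoid(self.fc2(torch.relu(self.fc1(w))))   # channel weights W_c
        f = f * w.unsqueeze(-1).unsqueeze(-1)                  # reweighted features
        return self.conv1x1(f).mean(dim=(2, 3))                # F_m after GAP

# Example with two 512-channel feature maps of size 28 x 28.
fused = MSFusion(channels=512)(torch.rand(2, 512, 28, 28), torch.rand(2, 512, 28, 28))
```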

3.3. Network Training

For data augmentation, we follow the training strategy in [11,53] by performing random horizontal flipping of the images in the training set and randomly sampling five 224 × 224-pixel image patches from each image to increase the number of training samples. The quality score of each image patch is the same as the quality score of the distorted image. Considering that the $L_1$ loss function is more robust to outliers, which is crucial in the task of image quality assessment, whereas the $L_2$ loss function is more sensitive to outliers and can lead to poor fitting of exceptional samples during model training, the $L_1$ loss function is used to train the model:
$$L_1 = \frac{1}{N}\sum_{i=1}^{N}\bigl\lVert q_i - \hat{q}_i \bigr\rVert_{1},$$
where $q_i$ represents the ground-truth score of the $i$-th image patch, $\hat{q}_i$ represents the quality score of the patch predicted by the model, $N$ denotes the number of image patches, and $\lVert\cdot\rVert_1$ denotes the $\ell_1$-norm.
We use the Adam optimizer [54] for model parameter optimization with a weight decay of $5 \times 10^{-4}$. The model is trained for 50 epochs with a batch size of 48, and the initial learning rate is set to $5 \times 10^{-5}$ and halved every 10 epochs. During testing, we also randomly sample five 224 × 224 image patches from each test image and take the average of their predicted quality scores as the quality score of the test image. The proposed method is implemented in PyTorch, and the experiments are conducted on an NVIDIA 3080Ti GPU.
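The training and testing procedure described above can be summarized by the following sketch. The data loader and test tensors are hypothetical placeholders; only the optimizer, learning-rate schedule, loss function, and patch-averaging logic follow the settings stated in this section.

```python
import torch
import torch.nn as nn

model = DualPathwayIQA().cuda()          # dual-pathway sketch from Section 3
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
criterion = nn.L1Loss()                  # L1 loss used in Section 3.3

for epoch in range(50):
    model.train()
    # train_loader is assumed to yield randomly flipped 224 x 224 patches
    # together with their CS-weighted gradients and the source image's score.
    for imgs, cs_grads, scores in train_loader:
        optimizer.zero_grad()
        preds = model(imgs.cuda(), cs_grads.cuda())
        loss = criterion(preds, scores.cuda())
        loss.backward()
        optimizer.step()
    scheduler.step()                     # halve the learning rate every 10 epochs

# Testing: average the predictions of five random patches of one test image.
model.eval()
with torch.no_grad():
    patch_preds = model(test_patches.cuda(), test_cs_grads.cuda())   # shape (5,)
    image_score = patch_preds.mean().item()
```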

4. Experiments

4.1. Image Quality Databases

To evaluate the performance of the proposed method, experiments are conducted on both synthetically distorted databases and authentically distorted databases, and the proposed approach is compared with the state-of-the-art methods. The synthetically distorted databases are LIVE [28], CSIQ [24], TID2013 [29], KADID-10k [30], and the Waterloo Exploration Database [44], with detailed information summarized in Table 2. The authentically distorted databases are LIVE Challenge (LIVEC) [31] and KonIQ-10k [55]. The LIVEC database contains 1162 images captured by different photographers using different equipment in natural environments, which include complex authentic distortion types. The KonIQ-10k dataset contains 10,073 images selected from the YFCC100M database [56], ensuring diversity in image content and quality, and an even distribution in brightness, color, contrast, and sharpness.

4.2. Experimental Protocols and Evaluation Metrics

To avoid content overlap between the training and testing images, the synthetically distorted databases are split by reference image: the distorted versions of 80% of the reference images are used for training and those of the remaining 20% for testing. For the authentically distorted databases, we directly use 80% of all images for training and 20% for testing. Each database is randomly split 10 times according to this rule, and the average of the 10 experimental results is taken as the final result.
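For the synthetic databases, this split can be implemented by partitioning the reference images and assigning all of their distorted versions to the same side, as in the schematic sketch below (the sample structure and field names are hypothetical).

```python
import random

def split_by_reference(samples, train_ratio=0.8, seed=0):
    """samples: list of dicts with keys 'path', 'ref_id', 'score'.
    Splits by reference image so no content is shared between train and test."""
    ref_ids = sorted({s["ref_id"] for s in samples})
    random.Random(seed).shuffle(ref_ids)
    n_train = int(train_ratio * len(ref_ids))
    train_refs = set(ref_ids[:n_train])
    train = [s for s in samples if s["ref_id"] in train_refs]
    test = [s for s in samples if s["ref_id"] not in train_refs]
    return train, test
```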
We use the Spearman rank-order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (PLCC) to evaluate the performance of the IQA methods. These coefficients measure, respectively, the monotonicity and the linear correlation between the predicted scores and the ground-truth scores. Their range is [−1, 1], and a larger absolute value indicates better model performance. In addition, on the Waterloo Exploration Database, the D-Test metric evaluates the model’s ability to distinguish reference images from distorted images; the L-Test metric evaluates, for images with the same content and distortion type but different distortion levels, the consistency between the predicted ranking and the true ranking; and the P-Test metric evaluates the consistency between the predicted quality ordering of image pairs and their true ordering.
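Both criteria can be computed directly with SciPy, as in the short sketch below; no nonlinear mapping is applied to the predicted scores before computing the PLCC here.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate(pred_scores, gt_scores):
    """Returns (SROCC, PLCC) between predicted and ground-truth quality scores."""
    pred = np.asarray(pred_scores, dtype=float)
    gt = np.asarray(gt_scores, dtype=float)
    srocc = spearmanr(pred, gt)[0]
    plcc = pearsonr(pred, gt)[0]
    return srocc, plcc
```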

4.3. Performance on Individual Database

The experimental results on the individual databases are summarized in Table 3 and Table 4. The proposed method is compared with three traditional methods (PSNR, SSIM [57], and BRISQUE [5]) and nine deep-learning-based methods (IQA-CNN [8], BIECON [33], MEON [35], DIQaM-NR [9], HyperIQA [11], MMMNet [58], AIGQA [59], DB-CNN [12], and TS-CNN [13]) in terms of SROCC and PLCC results on six databases. Here, DB-CNN and TS-CNN are the most similar to our proposed method, as both adopt a dual-stream structure.
From Table 3 and Table 4, it can be observed that all methods exhibit good performance on the LIVE and CSIQ databases, which contain fewer distortion types. However, varying degrees of performance degradation are evident on the more complex distortion types of the TID2013 and KADID-10k databases, as well as on the authentically distorted LIVEC and KonIQ-10k databases.
On the synthetically distorted databases of LIVE, TID2013, and KADID-10k, the proposed method achieves the top two SROCC and PLCC scores. On the authentically distorted databases of LIVEC and KonIQ-10k, the performance of the proposed method is among the top two methods, partly because the proposed method adopts a pre-trained ResNet-50 as the backbone to enable the model to learn the authentic distortions in the images more easily. Additionally, since the degree of distortion distribution in authentic distortion images is uneven, the proposed method introduces a multi-scale feature fusion module. This allows the model to focus on local details and better align with human visual perception.
Overall, based on the SROCC and PLCC results, the proposed method demonstrates excellent performance on six commonly used databases. Compared with other dual-pathway structures such as DB-CNN and TS-CNN, the proposed method maintains a leading position on most databases. In particular, compared with TS-CNN, the proposed method shows a significant performance difference on authentically distorted databases. This is mainly due to the incorporation of the dual-path characteristics of the HVS in the proposed approach, which can extract the content and location features of distorted images simultaneously. The contrast-sensitivity-weighted gradient image can explicitly extract the frequency information that is of interest to human vision. Additionally, the proposed multi-scale feature fusion module allows the model to focus on both global content and local details.

4.4. Performance on Individual Distortion Types

To compare the performance of the proposed method with the state-of-the-art methods on individual distortion types, experiments are conducted on three synthetically distorted databases, LIVE, CSIQ, and TID2013. All the distortion types are used for training on each database, and testing is performed on specific distortion types. The experimental results are summarized in Table 5, Table 6, and Table 7 for each database, respectively.
From Table 5, it can be observed that the proposed method achieves the best performance on four distortion types, JPEG, WN, GB, and FF, in the LIVE database. In particular, the proposed method outperforms other methods by a large margin on the FF distortion type. From Table 6, it can be seen that the proposed method achieves the best performance on four distortion types, JPEG, WN, PN, and CC, in the CSIQ database, and obtains the second- and third-best performance on the JP2K and GB distortion types, respectively, with only a small gap between it and the top methods. For the more complex distortion types of PN and CC, the proposed method still maintains a high SROCC.
It can be observed from Table 7 that the proposed method achieves top-two performance on 17 out of 24 distortion types, second only to HyperIQA’s 19 out of 24. Moreover, for complex distortion types such as NPN, BW, MS, and CC, most methods fail to achieve satisfactory results, while the proposed method still achieves relatively good performance. Our method thus maintains stable and excellent performance across the distortion types in TID2013. Overall, the experimental results on the individual distortion types of the three databases demonstrate that our method also performs well for specific distortion types.

4.5. Performance across Different Databases

Cross-database testing is a common method to test model generalizability. We conduct cross-database tests on four databases: LIVE, CSIQ, TID2013, and LIVEC. Specifically, we train the model on one database and test it on the others, such as training the model on the LIVE database and testing on the CSIQ, TID2013, and LIVEC databases, and so on. The SROCC results of the tests are summarized in Table 8.
From Table 8, it can be seen that the proposed method achieves the best performance in a total of eight cases, surpassing DB-CNN’s three cases. When cross-database testing is conducted among the three synthetically distorted databases (LIVE, CSIQ, and TID2013), most methods achieve relatively good results. However, because synthetically distorted databases cannot fully simulate authentic distortion, many methods cannot achieve good performance on the authentically distorted database. Nevertheless, the proposed method still maintains good performance in such scenarios: when trained on LIVE, CSIQ, or TID2013 and tested on LIVEC, it achieves the best performance. Similarly, when trained on LIVEC and tested on LIVE, CSIQ, and TID2013, our method also maintains good performance and achieves better results than the other methods on TID2013. Although its performance on LIVE and CSIQ is slightly lower than that of DB-CNN, the proposed method still outperforms the other methods by a clear margin.
To further evaluate the generalization performance of the proposed method on large-scale databases, we train the model on the entire LIVE database and test it on the Waterloo Exploration Database, calculating the D-Test, P-Test, and L-Test metrics. The experimental results are presented in Table 9. It can be observed that the proposed method achieves top-two performance in both D-Test and L-Test metrics. It also demonstrates competitive performance in the P-Test metric, which further validates its superior generalization capability.

4.6. Ablation Experiments

To validate the effectiveness of the modules in the proposed method, ablation experiments are conducted on the LIVE, CSIQ, TID2013, and LIVEC databases. The “what” pathway, which only takes distorted images as the input, is used as the baseline model. Then, the “where” pathway is added, which takes gradient images as the input, followed by the contrast-sensitivity-weighted gradient image as the input in comparison, and finally the multi-scale module. The experimental results are summarized in Table 10. To further validate the significance of module contributions to model performance improvement, paired t-tests were conducted on various models in the ablation experiments. The experimental results are shown in Table 11, where 1, 0, and −1 represent the models in the corresponding row that are significantly better than, indistinguishable from, or worse than the models in the respective column. The confidence interval is set at 95%.
From Table 10 and Table 11, it can be observed that when there is only one pathway in the model, the performance is poor, especially when the model only contains the “where” pathway. This is because the model can then only extract high-frequency information from the gradient image and lacks detail information. When the model contains both the “what” pathway and the “where” pathway, it can extract rich structural information from the gradient domain of the distorted image, which significantly improves performance, with gains of 0.011, 0.019, 0.017, and 0.009 on the four databases, respectively. When the contrast-sensitivity-weighted gradient image is used as the input of the “where” pathway, the improvement is even more significant, with gains of 0.015, 0.028, 0.028, and 0.019 on the four databases, respectively. This demonstrates that using the contrast-sensitivity-weighted gradient map as input explicitly makes the model focus more on the parts to which the HVS is sensitive, making the model highly consistent with HVS perception.
Then, when the multi-scale module without the channel attention mechanism is added to the dual-pathway model, a slight improvement in performance can be observed. However, this improvement is not significant, because only a simple concatenation of the feature maps from the two pathways is performed in this case, which may combine redundant or irrelevant information and limit the model’s ability to effectively leverage the complementary strengths of the two pathways. Finally, when the multi-scale module with the channel attention mechanism is added, the performance of the dual-pathway models whose “where” pathway takes the gradient image and the contrast-sensitivity-weighted gradient map as input is further improved, with the largest gains on the authentically distorted LIVEC database, at 0.011 and 0.014, respectively. This is because, by incorporating a channel attention mechanism, the model gains the ability to selectively attend to informative channels from both pathways, effectively enhancing the fusion process. This allows the model to capture more fine-grained relationships between different channels, leading to improved performance.

5. Conclusions

In this paper, we propose a dual-pathway CNN model for BIQA based on the dual-pathway characteristic and contrast sensitivity of the HVS. Both pathways use pre-trained ResNet-50 as a feature extractor to enhance their feature-extraction capability. The model can be used to evaluate the image quality of both synthetic and authentic distortions. Considering the contrast sensitivity and edge sensitivity of the HVS, the method uses contrast-sensitivity-weighted gradient images as the input to the “where” pathway, enabling the model to explicitly focus on the highly salient parts of distorted images. Finally, a multi-scale module is proposed to focus on the global and local features of images simultaneously. Experimental results on individual databases and individual distortion types demonstrate that the proposed method’s performance is comparable to the state-of-the-art methods. Cross-database experiments and experiments on the Waterloo Exploration Database also demonstrate that the proposed method has a strong generalization performance.
Although the proposed method has achieved good performance on most commonly used databases, there is still room for further improvement in some aspects. For example, better feature fusion methods, such as bilinear pooling, could be considered when fusing the global features from the two pathways and when fusing global and local features. In addition, how to combine more HVS characteristics that are highly related to image quality assessment (e.g., the bandpass characteristic [63,64] and the masking effect [65]) with deep learning methods is also a future research direction.

Author Contributions

Conceptualization, F.C. and Y.C.; methodology, F.C.; software, F.C.; validation, F.C.; formal analysis, F.C.; investigation, F.C.; resources, F.C.; data curation, F.C.; writing—original draft preparation, F.C.; writing—review and editing, H.F., H.Y. and Y.C.; visualization, F.C.; supervision, H.F., H.Y. and Y.C.; project administration, Y.C.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Stabilization Support Plan for Shenzhen Higher Education Institutions, grant number 20200812165210001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rehman, A.; Zeng, K.; Wang, Z. Display device-adapted video quality-of-experience assessment. Hum. Vis. Electron. Imaging 2015, 9394, 27–37. [Google Scholar]
  2. Wang, Z.; Bovik, A.C. Modern image quality assessment. In Synthesis Lectures on Image, Video, and Multimedia Processing; Morgan & Claypool Publishers: San Rafael, CA, USA, 2006; Volume 2, pp. 1–156. [Google Scholar]
  3. Wang, Z.; Bovik, A.C. Reduced-and no-reference image quality assessment. IEEE Signal Process. Mag. 2011, 28, 29–40. [Google Scholar] [CrossRef]
  4. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  5. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  6. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364. [Google Scholar] [CrossRef] [PubMed]
  7. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516. [Google Scholar] [CrossRef]
  8. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740. [Google Scholar]
  9. Bosse, S.; Maniry, D.; Müller, K.R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2017, 27, 206–219. [Google Scholar] [CrossRef]
  10. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 11–24. [Google Scholar] [CrossRef]
  11. Su, S.; Yan, Q.; Zhu, Y.; Zhang, C.; Ge, X.; Sun, J.; Zhang, Y. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3667–3676. [Google Scholar]
  12. Zhang, W.; Ma, K.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2018, 30, 36–47. [Google Scholar] [CrossRef]
  13. Yan, Q.; Gong, D.; Zhang, Y. Two-stream convolutional networks for blind image quality assessment. IEEE Trans. Image Process. 2018, 28, 2200–2211. [Google Scholar] [CrossRef]
  14. Mishkin, M.; Ungerleider, L.G. Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behav. Brain Res. 1982, 6, 57–77. [Google Scholar] [CrossRef] [PubMed]
  15. Goodale, M.A.; Milner, A.D. Separate visual pathways for perception and action. Trends Neurosci. 1992, 15, 20–25. [Google Scholar] [CrossRef]
  16. Simonyan, K.; Zisserman, A. Two-stream convolutional networks for action recognition in videos. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; Volume 27. [Google Scholar]
  17. Mannos, J.; Sakrison, D. The effects of a visual fidelity criterion of the encoding of images. IEEE Trans. Inf. Theory 1974, 20, 525–536. [Google Scholar] [CrossRef]
  18. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  19. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604. [Google Scholar]
  20. Campbell, F.W.; Robson, J.G. Application of Fourier analysis to the visibility of gratings. J. Physiol. 1968, 197, 551. [Google Scholar] [CrossRef] [PubMed]
  21. Gao, X.; Lu, W.; Tao, D.; Li, X. Image quality assessment based on multiscale geometric analysis. IEEE Trans. Image Process. 2009, 18, 1409–1423. [Google Scholar]
  22. Saha, A.; Wu, Q.M.J. Utilizing image scales towards totally training free blind image quality assessment. IEEE Trans. Image Process. 2015, 24, 1879–1892. [Google Scholar] [CrossRef]
  23. Shnayderman, A.; Gusev, A.; Eskicioglu, A.M. An SVD-based grayscale image quality measure for local and global assessment. IEEE Trans. Image Process. 2006, 15, 422–429. [Google Scholar] [CrossRef]
  24. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  25. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  26. Pan, Z.; Zhang, H.; Lei, J.; Fang, Y.; Shao, X.; Ling, N.; Kwong, S. DACNN: Blind image quality assessment via a distortion-aware convolutional neural network. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7518–7531. [Google Scholar] [CrossRef]
  27. Saad, M.A.; Bovik, A.C.; Charrier, C. A DCT statistics-based blind image quality index. IEEE Signal Process. Lett. 2010, 17, 583–586. [Google Scholar] [CrossRef]
  28. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451. [Google Scholar] [CrossRef]
  29. Ponomarenko, N.; Jin, L.; Ieremeiev, O.; Lukin, V.; Egiazarian, K.; Astola, J.; Kuo, C.C.J. Image database TID2013: Peculiarities, results and perspectives. Signal Process. Image Commun. 2015, 30, 57–77. [Google Scholar] [CrossRef]
  30. Lin, H.; Hosu, V.; Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In Proceedings of the 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–3. [Google Scholar]
  31. Ghadiyaram, D.; Bovik, A.C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 2015, 25, 372–387. [Google Scholar] [CrossRef]
  32. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  33. Kim, J.; Lee, S. Fully deep blind image quality predictor. IEEE J. Sel. Top. Signal Process. 2016, 11, 206–220. [Google Scholar] [CrossRef]
  34. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Simultaneous estimation of image quality and distortion via multi-task convolutional neural networks. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 2791–2795. [Google Scholar]
  35. Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2017, 27, 1202–1213. [Google Scholar] [CrossRef]
  36. Sun, S.; Yu, T.; Xu, J.; Zhou, W.; Chen, Z. GraphIQA: Learning distortion graph representations for blind image quality assessment. IEEE Trans. Multimed. 2022. [Google Scholar] [CrossRef]
  37. Zhu, H.; Li, L.; Wu, J.; Dong, W.; Shi, G. MetaIQA: Deep meta-learning for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14143–14152. [Google Scholar]
  38. Wang, Z.; Ma, K. Active fine-tuning from gMAD examples improves blind image quality assessment. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4577–4590. [Google Scholar] [CrossRef] [PubMed]
  39. Li, D.; Jiang, T.; Jiang, M. Norm-in-norm loss with faster convergence and better performance for image quality assessment. In Proceedings of the 28th ACM International Conference on Multimedia, New York, NY, USA, 12–16 October 2020; pp. 789–797. [Google Scholar]
  40. Zhang, W.; Li, D.; Min, X.; Zhai, G.; Guo, G.; Yang, X.; Ma, K. Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; pp. 2916–2929. [Google Scholar]
  41. Zhang, W.; Li, D.; Ma, C.; Zhai, G.; Yang, X.; Ma, K. Continual learning for blind image quality assessment. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2864–2878. [Google Scholar] [CrossRef]
  42. Liu, J.; Zhou, W.; Li, X.; Xu, J.; Chen, Z. LIQA: Lifelong blind image quality assessment. IEEE Trans. Multimed. 2022. [Google Scholar] [CrossRef]
  43. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
44. Ma, K.; Duanmu, Z.; Wu, Q.; Wang, Z.; Yong, H.; Li, H.; Zhang, L. Waterloo exploration database: New challenges for image quality assessment models. IEEE Trans. Image Process. 2016, 26, 1004–1016.
45. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis. 2009, 88, 303–308.
46. Han, Z.; Sereno, A. Identifying and localizing multiple objects using artificial ventral and dorsal cortical visual pathways. Neural Comput. 2023, 35, 249–275.
47. Han, Z.; Sereno, A. Modeling the ventral and dorsal cortical visual pathways using artificial neural networks. Neural Comput. 2022, 34, 138–171.
48. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
49. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695.
50. Ayzenberg, V.; Behrmann, M. The dorsal visual pathway represents object-centered spatial relations for object recognition. J. Neurosci. 2022, 42, 4693–4710.
51. Jähne, B.; Haussecker, H.; Geissler, P. Handbook of Computer Vision and Applications with CD-ROM; Academic Press: New York, NY, USA, 1999; pp. 423–450.
52. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
53. Kim, J.; Zeng, H.; Ghadiyaram, D.; Lee, S.; Zhang, L.; Bovik, A.C. Deep convolutional neural models for picture-quality prediction: Challenges and solutions to data-driven image quality assessment. IEEE Signal Process. Mag. 2017, 34, 130–141.
54. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
55. Hosu, V.; Lin, H.; Sziranyi, T.; Saupe, D. KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Trans. Image Process. 2020, 29, 4041–4056.
56. Thomee, B.; Shamma, D.A.; Friedland, G.; Elizalde, B.; Ni, K.; Poland, D.; Li, L.J. YFCC100M: The new data in multimedia research. Commun. ACM 2016, 59, 64–73.
57. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
58. Li, F.; Zhang, Y.; Cosman, P.C. MMMNet: An end-to-end multi-task deep convolution neural network with multi-scale and multi-hierarchy fusion for blind image quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4798–4811.
59. Ma, J.; Wu, J.; Li, L.; Dong, W.; Xie, X.; Shi, G.; Lin, W. Blind image quality assessment with active inference. IEEE Trans. Image Process. 2021, 30, 3650–3663.
60. Ye, P.; Kumar, J.; Kang, L.; Doermann, D. Unsupervised feature learning framework for no-reference image quality assessment. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 1098–1105.
61. Xu, J.; Ye, P.; Li, Q.; Du, H.; Liu, Y.; Doermann, D. Blind image quality assessment based on high order statistics aggregation. IEEE Trans. Image Process. 2016, 25, 4444–4457.
62. Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964.
63. Daugman, J.G. Two-dimensional spectral analysis of cortical receptive field profiles. Vis. Res. 1980, 20, 847–856.
64. Lee, T.S. Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 959–971.
65. Legge, G.E.; Foley, J.M. Contrast masking in human vision. J. Opt. Soc. Am. 1980, 70, 1458–1471.
Figure 1. The architecture of the proposed method. MS-Module represents the multi-scale module.
Figure 2. The structure of the Scharr operator.
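For readers who want to reproduce the gradient images fed to the "where" pathway, the sketch below applies the Scharr kernels depicted in Figure 2 to a grayscale image and returns the gradient magnitude. It is a minimal NumPy/SciPy illustration rather than the authors' implementation; the function name `scharr_gradient` and the boundary-handling mode are our own choices.

```python
import numpy as np
from scipy.ndimage import convolve

# Scharr kernels for the horizontal (x) and vertical (y) image derivatives.
SCHARR_X = np.array([[ 3, 0,  -3],
                     [10, 0, -10],
                     [ 3, 0,  -3]], dtype=np.float32)
SCHARR_Y = SCHARR_X.T  # transposing gives the vertical-derivative kernel

def scharr_gradient(gray):
    """Gradient magnitude of a 2-D grayscale image via the Scharr operator."""
    gray = gray.astype(np.float32)
    gx = convolve(gray, SCHARR_X, mode="nearest")
    gy = convolve(gray, SCHARR_Y, mode="nearest")
    return np.sqrt(gx ** 2 + gy ** 2)

# Example: grad = scharr_gradient(np.random.rand(64, 64))
```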
Figure 3. Examples of gradient images and the corresponding contrast-sensitivity-weighted gradient images for different distortion types. (a) Distorted images with JPEG compression distortion, white Gaussian noise (WN) distortion, and Gaussian blur (GB) distortion; (b) the corresponding gradient images; (c) the contrast-sensitivity-weighted gradient images.
Figure 4. Examples of gradient images and the corresponding contrast-sensitivity-weighted gradient images at different JPEG distortion levels. (a) JPEG-distorted images at three levels, with DMOS scores of 12.56, 33.97, and 70.02 from top to bottom (a higher DMOS indicates worse quality); (b) the corresponding gradient images; (c) the contrast-sensitivity-weighted gradient images.
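Figures 3 and 4 show gradient images after contrast-sensitivity weighting. The exact contrast sensitivity function (CSF) used by the method is not restated here, so the sketch below weights a gradient image in the frequency domain using the classic Mannos–Sakrison CSF as a stand-in assumption; the `max_freq_cpd` parameter, which maps the image's Nyquist frequency to an assumed maximum spatial frequency in cycles per degree, is likewise an assumed viewing-condition setting.

```python
import numpy as np

def mannos_sakrison_csf(f):
    # Classic Mannos-Sakrison CSF; f is spatial frequency in cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_gradient(grad, max_freq_cpd=32.0):
    """Weight a gradient image by contrast sensitivity in the frequency domain."""
    h, w = grad.shape
    fy = np.fft.fftfreq(h)[:, None]   # normalized vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]   # normalized horizontal frequencies
    freq = np.sqrt(fx ** 2 + fy ** 2) * 2 * max_freq_cpd  # map Nyquist (0.5) to max_freq_cpd
    weights = mannos_sakrison_csf(freq)
    spectrum = np.fft.fft2(grad)
    return np.real(np.fft.ifft2(spectrum * weights))

# Example: weighted = csf_weighted_gradient(scharr_gradient(np.random.rand(64, 64)))
```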
Figure 5. Examples of feature maps extracted by the "what" pathway and the "where" pathway for different distortion types. From top to bottom: distorted images, feature maps extracted by the "what" pathway, and feature maps extracted by the "where" pathway. (a) JP2K compression distortion; (b) JPEG compression distortion; (c) WN distortion; (d) GB distortion; (e) FF distortion.
Figure 6. The structure of the proposed multi-scale module, where W, H, and C denote the width, height, and number of channels of the feature map, respectively.
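The ablation study (Table 10) includes a "Multi-scale Module w/o CA" variant, which indicates that the multi-scale module contains a channel-attention (CA) step. The snippet below is a generic squeeze-and-excitation style channel-attention block in the spirit of [52]; it is only a sketch of the idea, not the module's exact design, and the class name, reduction ratio, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (cf. [52]); a generic sketch only."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling -> (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation -> (N, C, 1, 1)
        return x * w                                 # channel-wise reweighting

# Example: y = ChannelAttention(channels=256)(torch.randn(2, 256, 28, 28))
```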
Table 1. The architecture of ResNet-50.
Stage | Layers
Conv1 | 7 × 7, 64, stride 2
Conv2_x | 3 × 3 max pool, stride 2; [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3
Conv3_x | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4
Conv4_x | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 6
Conv5_x | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3
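As a rough illustration of how a ResNet-50 trunk such as the one in Table 1 can serve as a pathway's feature extractor, the following PyTorch sketch strips the classification head from a pretrained torchvision ResNet-50 and keeps the stages Conv1 through Conv5_x. It assumes torchvision ≥ 0.13 for the `weights` argument and is not the authors' training code.

```python
import torch
from torchvision import models

# Keep the convolutional trunk (Conv1 ... Conv5_x) of an ImageNet-pretrained ResNet-50
# and drop the global average pooling / fully connected head.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])

dummy = torch.randn(1, 3, 224, 224)   # a dummy RGB input patch
with torch.no_grad():
    feats = backbone(dummy)           # -> torch.Size([1, 2048, 7, 7])
print(feats.shape)
```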
Table 2. Details of the synthetically distorted databases.
Database | Ref. Imgs | Dist. Imgs | Dist. Types | Score Type
LIVE [28] | 29 | 779 | 5 | DMOS
CSIQ [24] | 30 | 866 | 6 | DMOS
TID2013 [29] | 25 | 3000 | 24 | MOS
KADID-10k [30] | 81 | 10,125 | 25 | DMOS
Waterloo [44] | 4744 | 94,880 | 4 | /
Table 3. The SROCC results on six databases. The top two results are shown in bold font.
SROCC | LIVE | CSIQ | TID2013 | KADID-10k | LIVEC | KonIQ-10k
PSNR | 0.866 | 0.806 | 0.636 | 0.674 | - | -
SSIM [57] | 0.913 | 0.876 | 0.637 | 0.783 | - | -
BRISQUE [5] | 0.940 | 0.746 | 0.604 | 0.519 | 0.607 | 0.673
IQA-CNN [8] | 0.956 | 0.876 | 0.701 | 0.651 | 0.516 | 0.655
BIECON [33] | 0.961 | 0.825 | 0.717 | 0.685 | 0.595 | 0.618
MEON [35] | 0.943 | 0.839 | 0.828 | 0.813 | 0.693 | 0.754
DIQaM-NR [9] | 0.960 | 0.901 | 0.835 | 0.840 | 0.606 | 0.722
HyperIQA [11] | 0.962 | 0.923 | 0.840 | 0.852 | 0.859 | 0.906
MMMNet [58] | 0.970 | 0.924 | 0.832 | 0.841 | 0.852 | 0.867
AIGQA [59] | 0.960 | 0.927 | 0.871 | 0.864 | 0.751 | 0.766
DB-CNN [12] | 0.968 | 0.946 | 0.816 | 0.801 | 0.851 | 0.875
TS-CNN [13] | 0.969 | 0.892 | 0.779 | 0.745 | 0.655 | 0.722
DPCS | 0.971 | 0.929 | 0.866 | 0.882 | 0.856 | 0.909
Table 4. The PLCC results on six databases. The top two results are shown in bold font.
PLCC | LIVE | CSIQ | TID2013 | KADID-10k | LIVEC | KonIQ-10k
PSNR | 0.856 | 0.800 | 0.706 | 0.681 | - | -
SSIM [57] | 0.931 | 0.861 | 0.691 | 0.780 | - | -
BRISQUE [5] | 0.942 | 0.829 | 0.694 | 0.554 | 0.585 | 0.692
IQA-CNN [8] | 0.953 | 0.905 | 0.752 | 0.607 | 0.536 | 0.671
BIECON [33] | 0.962 | 0.838 | 0.762 | 0.691 | 0.613 | 0.651
MEON [35] | 0.954 | 0.850 | 0.811 | 0.822 | 0.688 | 0.760
DIQaM-NR [9] | 0.972 | 0.908 | 0.855 | 0.843 | 0.601 | 0.736
HyperIQA [11] | 0.966 | 0.942 | 0.858 | 0.845 | 0.882 | 0.917
MMMNet [58] | 0.970 | 0.937 | 0.853 | 0.840 | 0.846 | 0.871
AIGQA [59] | 0.957 | 0.952 | 0.893 | 0.863 | 0.761 | 0.773
DB-CNN [12] | 0.971 | 0.959 | 0.865 | 0.806 | 0.869 | 0.884
TS-CNN [13] | 0.978 | 0.905 | 0.784 | 0.744 | 0.667 | 0.729
DPCS | 0.973 | 0.935 | 0.880 | 0.884 | 0.873 | 0.914
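Tables 3 and 4 report SROCC and PLCC, which measure prediction monotonicity and linear prediction accuracy, respectively. The snippet below shows how the two correlations can be computed with SciPy on a handful of hypothetical predicted/subjective score pairs; in practice, PLCC is usually computed after fitting a nonlinear logistic mapping between predictions and subjective scores, which is omitted here for brevity.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Hypothetical predicted quality scores and subjective DMOS values, for illustration only.
predicted  = np.array([35.2, 48.9, 22.1, 61.7, 55.3, 40.8])
subjective = np.array([33.0, 50.5, 25.4, 64.2, 52.8, 43.1])

srocc, _ = spearmanr(predicted, subjective)   # rank correlation: monotonicity of predictions
plcc,  _ = pearsonr(predicted, subjective)    # linear correlation: prediction accuracy
print(f"SROCC = {srocc:.3f}, PLCC = {plcc:.3f}")
```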
Table 5. The SROCC results of the individual distortion types on the LIVE database. The top two results are shown in bold font.
SROCC | JP2K | JPEG | WN | GB | FF
PSNR | 0.870 | 0.885 | 0.942 | 0.763 | 0.874
SSIM [57] | 0.939 | 0.946 | 0.964 | 0.907 | 0.941
BRISQUE [5] | 0.910 | 0.919 | 0.955 | 0.941 | 0.874
IQA-CNN [8] | 0.936 | 0.965 | 0.974 | 0.952 | 0.906
BIECON [33] | 0.952 | 0.974 | 0.980 | 0.956 | 0.923
MEON [35] | 0.953 | 0.964 | 0.981 | 0.958 | 0.904
DIQaM-NR [9] | 0.914 | 0.951 | 0.972 | 0.944 | 0.926
HyperIQA [11] | 0.949 | 0.961 | 0.982 | 0.926 | 0.934
DB-CNN [12] | 0.955 | 0.972 | 0.980 | 0.935 | 0.930
TS-CNN [13] | 0.966 | 0.950 | 0.979 | 0.963 | 0.911
MMMNet [58] | 0.968 | 0.974 | 0.985 | 0.935 | 0.936
DPCS | 0.963 | 0.978 | 0.987 | 0.966 | 0.957
Table 6. The SROCC results of the individual distortion types on the CSIQ database. The top two results are shown in bold font.
SROCC | JP2K | JPEG | WN | GB | PN | CC
PSNR | 0.926 | 0.888 | 0.936 | 0.829 | 0.874 | 0.852
SSIM [57] | 0.921 | 0.922 | 0.925 | 0.914 | 0.941 | 0.740
BRISQUE [5] | 0.840 | 0.806 | 0.723 | 0.820 | 0.378 | 0.804
IQA-CNN [8] | 0.930 | 0.915 | 0.919 | 0.918 | 0.900 | 0.786
BIECON [33] | 0.954 | 0.942 | 0.902 | 0.946 | 0.884 | 0.523
MEON [35] | 0.934 | 0.922 | 0.944 | 0.901 | 0.867 | 0.847
DIQaM-NR [9] | 0.896 | 0.946 | 0.947 | 0.908 | 0.895 | 0.807
HyperIQA [11] | 0.960 | 0.934 | 0.927 | 0.915 | 0.931 | 0.874
DB-CNN [12] | 0.953 | 0.940 | 0.948 | 0.947 | 0.940 | 0.870
TS-CNN [13] | 0.914 | 0.907 | 0.938 | 0.895 | 0.882 | 0.866
MMMNet [58] | 0.932 | 0.912 | 0.879 | 0.894 | 0.941 | 0.942
DPCS | 0.936 | 0.947 | 0.954 | 0.930 | 0.944 | 0.912
Table 7. The SROCC results of the individual distortion types on the TID2013 database. The top two results are shown in bold font.
SROCC | BRISQUE [5] | IQA-CNN [8] | MEON [35] | DIQA [10] | HyperIQA [11] | DB-CNN [12] | TS-CNN [13] | DPCS
AGN | 0.711 | 0.784 | 0.813 | 0.916 | 0.942 | 0.790 | 0.816 | 0.890
ANC | 0.432 | 0.758 | 0.722 | 0.755 | 0.916 | 0.700 | 0.704 | 0.794
SCN | 0.746 | 0.762 | 0.926 | 0.878 | 0.947 | 0.826 | 0.809 | 0.960
MN | 0.252 | 0.776 | 0.728 | 0.734 | 0.801 | 0.646 | 0.475 | 0.848
HFN | 0.842 | 0.816 | 0.911 | 0.939 | 0.955 | 0.879 | 0.833 | 0.906
IN | 0.765 | 0.807 | 0.901 | 0.844 | 0.855 | 0.708 | 0.819 | 0.899
QN | 0.662 | 0.616 | 0.888 | 0.858 | 0.726 | 0.825 | 0.801 | 0.873
GB | 0.871 | 0.921 | 0.887 | 0.920 | 0.969 | 0.859 | 0.786 | 0.858
DEN | 0.612 | 0.872 | 0.797 | 0.788 | 0.941 | 0.865 | 0.733 | 0.871
JPEG | 0.764 | 0.874 | 0.850 | 0.892 | 0.898 | 0.894 | 0.847 | 0.896
JP2K | 0.745 | 0.910 | 0.891 | 0.812 | 0.947 | 0.916 | 0.851 | 0.909
JGTE | 0.301 | 0.686 | 0.746 | 0.862 | 0.934 | 0.772 | 0.699 | 0.843
J2TE | 0.748 | 0.678 | 0.716 | 0.813 | 0.892 | 0.773 | 0.766 | 0.894
NPN | 0.269 | 0.286 | 0.116 | 0.160 | 0.808 | 0.270 | 0.211 | 0.600
BW | 0.207 | 0.219 | 0.500 | 0.408 | 0.361 | 0.444 | 0.313 | 0.639
MS | 0.219 | 0.565 | 0.177 | 0.300 | 0.374 | −0.009 | 0.107 | 0.545
CC | −0.001 | 0.182 | 0.252 | 0.447 | 0.753 | 0.548 | 0.315 | 0.819
CCS | 0.003 | 0.081 | 0.684 | 0.151 | 0.857 | 0.631 | 0.324 | 0.725
MGN | 0.717 | 0.644 | 0.849 | 0.904 | 0.899 | 0.711 | 0.744 | 0.910
CN | 0.196 | 0.534 | 0.406 | 0.656 | 0.960 | 0.752 | 0.638 | 0.849
LCNI | 0.609 | 0.810 | 0.772 | 0.830 | 0.897 | 0.860 | 0.742 | 0.918
ICQD | 0.831 | 0.272 | 0.857 | 0.937 | 0.901 | 0.833 | 0.759 | 0.872
CHA | 0.615 | 0.892 | 0.779 | 0.757 | 0.870 | 0.732 | 0.714 | 0.823
SSR | 0.807 | 0.910 | 0.855 | 0.909 | 0.910 | 0.902 | 0.826 | 0.933
Mean | 0.538 | 0.652 | 0.709 | 0.728 | 0.846 | 0.714 | 0.651 | 0.836
Count | 0 | 3 | 3 | 5 | 19 | 1 | 0 | 17
Table 8. The SROCC results of cross-database tests. The top result is shown in bold font.
Training | LIVE | LIVE | LIVE | TID2013 | TID2013 | TID2013
Testing | CSIQ | TID2013 | LIVEC | LIVE | CSIQ | LIVEC
DIIVINE [6] | 0.582 | 0.373 | 0.300 | 0.714 | 0.585 | 0.230
BRISQUE [5] | 0.562 | 0.358 | 0.326 | 0.758 | 0.570 | 0.209
CORNIA [60] | 0.620 | 0.382 | 0.431 | 0.829 | 0.662 | 0.267
HOSA [61] | 0.598 | 0.470 | 0.455 | 0.844 | 0.609 | 0.253
IQA-CNN [8] | 0.616 | 0.407 | 0.103 | 0.530 | 0.600 | 0.102
DIQaM-NR [9] | 0.623 | 0.425 | 0.206 | 0.812 | 0.698 | 0.112
DB-CNN [12] | 0.758 | 0.524 | 0.567 | 0.891 | 0.807 | 0.457
TS-CNN [13] | 0.621 | 0.431 | 0.273 | 0.576 | 0.609 | 0.114
MMMNet [58] | 0.793 | 0.546 | 0.502 | 0.853 | 0.702 | 0.348
DPCS | 0.743 | 0.614 | 0.587 | 0.897 | 0.739 | 0.462
Training | CSIQ | CSIQ | CSIQ | LIVEC | LIVEC | LIVEC
Testing | LIVE | TID2013 | LIVEC | LIVE | CSIQ | TID2013
DIIVINE [6] | 0.815 | 0.419 | 0.366 | 0.362 | 0.417 | 0.337
BRISQUE [5] | 0.790 | 0.590 | 0.106 | 0.346 | 0.245 | 0.258
CORNIA [60] | 0.843 | 0.331 | 0.393 | 0.578 | 0.456 | 0.403
HOSA [61] | 0.770 | 0.341 | 0.309 | 0.537 | 0.336 | 0.399
IQA-CNN [8] | 0.713 | 0.315 | 0.103 | 0.213 | 0.195 | 0.132
DIQaM-NR [9] | 0.817 | 0.516 | 0.114 | 0.319 | 0.313 | 0.215
DB-CNN [12] | 0.877 | 0.540 | 0.452 | 0.746 | 0.697 | 0.424
TS-CNN [13] | 0.836 | 0.477 | 0.158 | 0.283 | 0.249 | 0.225
MMMNet [58] | 0.890 | 0.522 | 0.406 | 0.528 | 0.518 | 0.398
DPCS | 0.893 | 0.584 | 0.491 | 0.638 | 0.686 | 0.426
Table 9. Results of D-Tests, L-Tests, and P-Tests. The top two results are shown in bold font.
Method | D-Test | L-Test | P-Test
BRISQUE [5] | 0.920 | 0.977 | 0.993
IQA-CNN [8] | 0.929 | 0.930 | 0.997
dipIQ [62] | 0.935 | 0.985 | 0.999
DIQaM-NR [9] | 0.907 | 0.947 | 0.963
MEON [35] | 0.938 | 0.967 | 0.998
HyperIQA [11] | 0.901 | 0.975 | 0.997
DB-CNN [12] | 0.962 | 0.961 | 0.999
TS-CNN [13] | 0.930 | 0.979 | 0.995
DPCS | 0.941 | 0.976 | 0.999
Table 10. The SROCC results of the ablation experiments. The top result is shown in bold font.
Ablation components: Baseline; Gradient Image; Contrast Sensitivity; Multi-scale Module; Multi-scale Module w/o CA (each model variant M1–M9 enables a different combination of these components).
SROCC | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9
LIVE | 0.951 | 0.938 | 0.944 | 0.962 | 0.966 | 0.963 | 0.968 | 0.967 | 0.971
CSIQ | 0.894 | 0.864 | 0.875 | 0.913 | 0.922 | 0.916 | 0.921 | 0.924 | 0.929
TID2013 | 0.832 | 0.806 | 0.814 | 0.849 | 0.860 | 0.851 | 0.854 | 0.862 | 0.866
LIVEC | 0.823 | 0.768 | 0.773 | 0.832 | 0.842 | 0.837 | 0.843 | 0.848 | 0.856
Table 11. The t-test results of the different models in the ablation experiments. M1 to M9 correspond to the models in each column of Table 10, respectively.
Model | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9
M1 | 0 | 1 | 1 | −1 | −1 | −1 | −1 | −1 | −1
M2 | −1 | 0 | −1 | −1 | −1 | −1 | −1 | −1 | −1
M3 | −1 | 1 | 0 | −1 | −1 | −1 | −1 | −1 | −1
M4 | 1 | 1 | 1 | 0 | −1 | −1 | −1 | −1 | −1
M5 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | −1 | −1
M6 | 1 | 1 | 1 | 1 | −1 | 0 | −1 | −1 | −1
M7 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | −1
M8 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | −1
M9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0
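Table 11 summarizes pairwise significance tests between the ablation variants; the matrix presumably encodes whether the row model is statistically better (1), statistically indistinguishable (0), or worse (−1) than the column model. A sketch of how one such entry could be obtained with a paired t-test at the 5% level is given below; the SROCC values over repeated train/test splits are hypothetical and for illustration only.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical SROCC values of two ablation variants over ten random train/test splits.
srocc_m9 = np.array([0.969, 0.972, 0.970, 0.973, 0.968, 0.971, 0.970, 0.974, 0.972, 0.971])
srocc_m5 = np.array([0.960, 0.965, 0.962, 0.968, 0.963, 0.966, 0.961, 0.967, 0.964, 0.965])

t_stat, p_value = ttest_rel(srocc_m9, srocc_m5)
if p_value >= 0.05:
    entry = 0                         # statistically indistinguishable
else:
    entry = 1 if t_stat > 0 else -1   # 1: row model better, -1: row model worse
print(t_stat, p_value, entry)
```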