Article

Region Adaptive Single Image Dehazing

Korean Intellectual Property Office, Daejeon 35208, Korea
Entropy 2021, 23(11), 1438; https://doi.org/10.3390/e23111438
Submission received: 1 October 2021 / Revised: 26 October 2021 / Accepted: 29 October 2021 / Published: 30 October 2021
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)

Abstract

Image haze removal is an essential preprocessing step for computer vision applications because outdoor images taken in adverse weather conditions such as fog or snow have poor visibility. This problem has been studied extensively in the literature, and the most popular technique is the dark channel prior (DCP). However, the dark channel prior tends to underestimate the transmission of bright areas or objects, which may cause color distortions during dehazing. This paper proposes a new single-image dehazing method that combines the dark channel prior with the bright channel prior (BCP) in order to overcome the limitations of the DCP. A patch-based robust atmospheric light estimation is introduced in order to divide the image into the regions to which the DCP assumption and the BCP assumption are applied. Moreover, region adaptive haze control parameters are introduced in order to suppress distortions in flat and bright regions and to increase visibility in textured regions. The flat and textured regions are expressed as probabilities by using local image entropy. The performance of the proposed method is evaluated on synthetic and real data sets. Experimental results show that the proposed method outperforms state-of-the-art image dehazing methods both visually and numerically.

1. Introduction

Outdoor images and videos captured in hazy or cloudy weather conditions often suffer from loss of detail, decreased contrast, and shifted chromaticity due to light scattering by atmospheric particles. This phenomenon degrades the performance of subsequent high-level computer vision tasks, such as object detection and recognition. Therefore, improving image quality and enhancing system robustness in challenging weather conditions is a crucial pre-processing step with broad application value [1].
This problem has been studied extensively in the literature with two main approaches: methods that use multiple images and methods that use only a single image.
Multi-image dehazing uses additional information about the scene, such as multiple images taken under diverse conditions [2,3,4], two or more images taken with different degrees of polarization [5,6], geometric features of the scene [7], or infrared and visible images [8], in order to determine the transmission and obtain haze-free images.
Compared to multi-image dehazing, a single image can only provide intensities of the three channels. Thus, additional priors or constraints are required for single image dehazing. Solutions for single image dehazing methods based on additional priors or constraints have been intensively developed in recent years.
Tan [9] proposed a dehazing method that maximizes local contrast, based on the prior that the contrast of a fog-free image is higher than that of a foggy image. Fattal [10] estimated the medium transmission by assuming that there is no correlation between object surface shading and the transmission map. Moreover, in [11], he introduced a color-line prior based on the observation that pixels in small image patches typically exhibit a one-dimensional distribution in RGB color space. Nishino et al. [12] estimated the scene albedo and depth by introducing a Bayesian probabilistic method. Tarel et al. [13] introduced a contrast-based enhancing approach that removes haze effects based on the assumption that the atmospheric veil function changes gently in local regions. Later, in [14], they introduced an additional constraint that much of the image can be assumed to be a flat road in order to better process road images. Meng et al. [15] added a boundary constraint on the transmission function for single-image dehazing. In order to perform contrast enhancement, Ancuti et al. [16] implemented image dehazing based on multi-scale fusion. In [17], images with different exposure levels are extracted from a series of gamma corrections, and multi-exposure image fusion (MEF)-based adaptive restructuring is applied to each image patch in order to fuse them into a haze-removed image. This study was further extended by Zhu et al. [18], who analyzed both global and local exposures to construct a per-pixel weight map that guides the fusion process. Zhu et al. [19] proposed a color attenuation prior and built a linear model for estimating scene depth. Negru et al. [20] calculated atmospheric veils based on a mathematical model that accounts for changes in fog density with distance. Wang et al. [21] improved the color attenuation prior by estimating atmospheric light with a haze-line prior and replacing the constant scattering coefficient with a dynamic one. Duminil et al. [22] proposed a new prior for removing atmospheric veils based on a physical model in which fog appears thicker near the horizon than close to the camera; the Naka–Rushton function was used to modulate the atmospheric veil according to empirical observations on synthetic fog images. Berman et al. [23] proposed a non-local approach based on the assumption that the colors of a haze-free image are well approximated by a few hundred distinct colors. Bui [24] proposed a color ellipsoid prior for dehazing in which the dark channel prior corresponds to a special prior vector.
The Retinex theory has been widely applied to single image dehazing. Xie et al. [25] used a multi-scale Retinex algorithm on the luminance component in order to acquire a transmission map and combined the haze image model with the dark channel prior to recover the dehazed image. Gao et al. [26] proposed an enhanced MSRCR algorithm for color fog images based on an adaptive scale, obtaining multi-scale images by weighting and performing local corrections for reflection component estimation. To take advantage of Retinex’s enhancements and address the lack of information about image scenarios, Wang et al. [27] proposed a single-image dehazing method based on atmospheric light scattering physical models and image brightness components by using multiscale Retinex with color restoration (MSRCR). Park et al. [28] estimated improved illuminance and reflectance by using the bright channel prior, iteratively minimizing a variational Retinex-based energy function in order to control the amount of brightness enhancement. Tang et al. [29] proposed nighttime image dehazing that decomposes the atmospheric light image from the input image using Retinex theory and accurately estimates the point-by-point transmission map by using a Taylor series expansion, following an approach based on the dark channel prior.
With the availability of large-scale paired data and the success of the Convolutional Neural Network (CNN) in various image processing and computer vision tasks, learning-based dehazing methods have become popular in recent years. Tang et al. [30] used random forests to learn a mapping function between haze-relevant features and their true transmission in image patches. Ren et al. [31] proposed a multi-scale convolutional neural network to learn a mapping function between hazy images and corresponding transmission maps in a coarse-to-fine manner. Later on, in [32], they designed a network to learn confidence maps and proposed a fusion-based approach for haze removal. Cai et al. [33] adopted a four-layer deep convolutional neural network specially designed for image dehazing. Li et al. [34] proposed an All-in-One Dehazing Network (AOD-Net) by reformulating the physical scattering model. Zhang and Patel [35] proposed a densely connected pyramid dehazing network that can estimate scene depth and atmospheric light simultaneously. A grid dehazing network (GridDehazeNet) based on an attention mechanism for single image dehazing was proposed in [36]. Qu et al. [37] regarded the dehazing task as an image-to-image translation problem and designed an enhanced pix2pix dehazing network (EPDN) in order to generate clear results.
Among the various single-image dehazing methods, the most popular is the dark channel prior (DCP) [38]. This prior is based on the observation that the minimum color components of local patches in haze-free images are usually very close to zero. DCP is a simple but effective approach in most cases, although it requires high computational complexity due to the soft matting algorithm and produces artifacts in bright areas. In order to improve computational efficiency, other approaches such as bilateral filtering [39], median filtering [40,41], edge-preserving filtering [42], and guided filtering [43] are used to optimize the transmission instead of soft matting. To reduce the distortions in large sky areas or bright white objects where the dark channel prior is invalid, sky detection-based methods [44,45,46,47,48,49] and a white object detection-based method [50] have been proposed. Jackson et al. [51] estimated the initial transmission map by modeling the scattering coefficients using Rayleigh scattering theory and the dark channel prior. In addition, linear transformation, tolerance, and offset have been proposed in order to account for the fact that DCP values are not zero in bright regions [52,53,54,55]. Inspired by DCP, the bright channel prior (BCP) was proposed in [56]. It assumes that most local patches in haze-free outdoor images that are not covered by dark objects contain some pixels with intensities close to the upper limit in at least one color (RGB) channel. The dark channel prior and the bright channel prior are combined to calculate a transmission map in [57,58]. Zhang et al. [57] separated the hazy image into two regions based on the BCP values and estimated a transmission map by jointly considering both priors in each region. Jiang et al. [58] proposed an adaptive bi-channel prior by combining the dark and bright channel priors and corrected the inaccurate estimation of the transmission map and atmospheric light values for white and black pixels that do not meet the assumptions of the bi-channel priors.
Several approaches using superpixels for haze removal have been proposed [58,59,60,61]. In the superpixel domain, Tan and Wang [59] obtained a transmission map and then refined it by using a Markov random field. Wang et al. [60] used superpixels to estimate the transmission of sky and non-sky areas in order to reduce halo artifacts around sharp edges and color distortion in the sky area. In [58,61], the superpixel method was adopted instead of rectangular local patches to calculate the initial transmission map.
Image entropy has been used for single-image dehazing either to compute transmission maps or to evaluate the resulting images. Park et al. [62] estimated the transmission map by an objective function that comprises information fidelity and image entropy at non-overlapping sub-block regions. Ngo et al. [63] determined that haze density is highly correlated with contrast energy, entropy, and sharpness and estimated the transmission map by optimizing an objective function that considers these three quantities. Salazar-Colores et al. [64] used local Shannon entropy to detect and segment the sky region in order to reduce artifacts and improve image recovery of the sky region. Image entropy was also used as a qualitative metric to evaluate the quality of dehazed images in [49,65].
In this article, a region adaptive single image dehazing method is proposed to overcome the limitations of the DCP. The main contributions are as follows:
  • Dark and bright channel priors are combined, and the combined priors are further improved by accurate estimation of atmospheric light and by introducing region adaptive control parameters.
  • Patch-wise bright pixel selection and atmospheric light candidate scores calculated from dark channel and saturation values are introduced for an accurate estimation of atmospheric lights.
  • A region adaptive control parameter for deciding whether to decrease or increase haze removal rate is proposed based on flat and non-flat region segmentations using local Shannon entropy.
As a result, the proposed method effectively restores haze-removed images while reducing undesirable artifacts in bright areas.
The rest of the paper is organized as follows. In Section 2, related works are briefly reviewed. In Section 3, the details of the proposed algorithm are described. In Section 4, the experimental results and analyses are presented. Finally, conclusions are provided in Section 5.

2. Related Work

The atmospheric scattering model, which is shown in Figure 1, can be mathematically modeled as follows [1]:
$$I(x) = t(x) J(x) + (1 - t(x)) A,$$
where $I(x)$ is the hazy image, $x$ is the spatial image index, $t(x)$ is the medium transmission map, $J(x)$ is the scene radiance, and $A$ represents the global atmospheric light RGB vector. $t(x)$ is the transmission of light through a homogeneous medium, which can be described as follows:
$$t(x) = e^{-\beta d(x)},$$
where $\beta$ is the scattering coefficient of the atmosphere, and $d(x)$ is the scene depth between the objects and the camera. Assuming $\beta$ is constant, $t(x) \to 0$ as $d(x) \to \infty$, and $t(x) = 1$ when $d(x) = 0$.
Since the goal of image dehazing is to recover $J(x)$ from $I(x)$, $J(x)$ can be obtained by simply transforming Equation (1).
$$J(x) = \frac{I(x) - A}{t(x)} + A$$
However, Equation (3) is an ill-posed problem because $t(x)$ and $A$ are unknown. Therefore, the performance of algorithms based on the atmospheric scattering model depends on accurate estimation of $t(x)$ and $A$.
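To make the model concrete, the following is a minimal numpy sketch of Equations (1) and (3); the array shapes and the [0, 1] value range are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Equation (1): I = t*J + (1 - t)*A.
    J: (H, W, 3) scene radiance in [0, 1], t: (H, W) transmission,
    A: (3,) global atmospheric light vector."""
    t = t[..., None]                      # broadcast t over the color channels
    return t * J + (1.0 - t) * np.asarray(A)

def recover_scene(I, t, A, t_min=0.05):
    """Equation (3): J = (I - A) / t + A, with t clamped away from zero."""
    t = np.maximum(t, t_min)[..., None]   # avoid division by a near-zero t
    return (I - np.asarray(A)) / t + np.asarray(A)
```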

2.1. Dark Channel Prior and Bright Channel Prior

The dark channel prior is based on an empirical investigation of the characteristics of clean outdoor images. It is observed that dark pixels have intensity values that are very close to zero in at least one color channel within an image patch. The dark channel is defined as follows [38]:
$$J^d(x) = \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} J^c(y) \approx 0,$$
where $J^c$ is a color channel of $J$, and $\Omega(x)$ is a local patch centered at $x$. From Equations (1) and (4), the transmission can be obtained by the following.
$$t_{DCP}(x) = 1 - \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}$$
In order to be consistent with reality, a constant coefficient $\omega$ ($0 < \omega < 1$) can be introduced into Equation (5) to retain some haze for distant objects.
$$t_{DCP}^{\omega}(x) = 1 - \omega \cdot \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}$$
In general, $\omega$ takes a value between 0.9 and 0.95.
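As an illustrative sketch of Equations (4)–(6), the dark channel and the DCP transmission can be computed with a local minimum filter; the 15-pixel square patch is an assumed default, and note that the proposed method later replaces rectangular patches with superpixels.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Equation (4): per-pixel minimum over RGB, then a local minimum
    over a patch-sized square neighborhood."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission_dcp(I, A, patch=15, omega=0.95):
    """Equation (6): t = 1 - omega * dark_channel(I / A).
    I: (H, W, 3) hazy image in [0, 1], A: (3,) atmospheric light."""
    return 1.0 - omega * dark_channel(I / np.asarray(A), patch)
```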
On the other hand, the bright channel prior assumes that most local patches in natural haze-free images contain some pixels whose intensity is very high in at least one color channel [56].
$$J^b(x) = \max_{y \in \Omega(x)} \max_{c \in \{R,G,B\}} J^c(y) \approx 1.$$
The transmission of the bright channel prior is calculated by combining Equations (1) and (7).
$$t_{BCP}(x) = \frac{\max_{y \in \Omega(x)} \max_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c} - 1}{\max_{c \in \{R,G,B\}} \frac{1}{A^c} - 1}.$$
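A matching sketch of Equations (7) and (8) for the bright channel prior, under the same assumptions; it additionally assumes that all channels of A are strictly below 1 so that the denominator is nonzero.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel(img, patch=15):
    """Equation (7): per-pixel maximum over RGB, then a local maximum
    over a patch-sized square neighborhood."""
    return maximum_filter(img.max(axis=2), size=patch)

def transmission_bcp(I, A, patch=15):
    """Equation (8): scaled so that a haze-free pure-white patch gives t = 1."""
    A = np.asarray(A)
    numerator = bright_channel(I / A, patch) - 1.0
    denominator = (1.0 / A).max() - 1.0   # requires every A channel < 1
    return numerator / denominator
```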

2.2. Air Light Estimation

2.2.1. Dark Channel Prior

The atmospheric light $A$ is calculated by choosing the highest-intensity pixel from among the top 0.1% brightest pixels in the dark channel of the hazy image.
$$A^c = I^c\left(\arg\max_{y \in P_{0.1\%}} I^d(y)\right)$$
In Equation (9), among the 0.1% brightest dark-channel pixels $P_{0.1\%}$, the pixel with the maximum intensity in the color channels of the hazy input $I$ is selected as the atmospheric light vector.
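A sketch of Equation (9); the tie-breaking rule among the candidates (summed RGB intensity) is an assumption.

```python
import numpy as np

def atmospheric_light_dcp(I, dark, top_fraction=0.001):
    """Equation (9): among the top 0.1% brightest dark-channel pixels,
    return the hazy-image pixel with the highest overall intensity."""
    n = max(1, int(top_fraction * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]        # top-n dark-channel pixels
    candidates = I.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]  # brightest candidate
```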

2.2.2. Coarse-to-Fine Search Strategy

Iwamoto et al. [55] proposed a coarse-to-fine search strategy in which they initially step down the resolution of the dark channel image and obtain the position of the brightest dark channel value at the lowest resolution. Then, they recalculate the position of the largest dark channel value at the second-lowest resolution and continue recalculating the position of the brightest dark channel value until the original image size is reached. The schematic flow of this search strategy is shown in Figure 2a.

2.2.3. Quad Decomposition Method

In the quad decomposition method proposed by Park et al. [62], the image is decomposed into quarters, and the quarter with the largest average luminance value is selected. The decomposition process is repeated on the selected quarter until its size becomes smaller than a predetermined quarter size. The pixel with the smallest Euclidean distance to the white point in the RGB color space within the selected quarter is chosen as the atmospheric light. The selected quarter is depicted in Figure 2b.
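A compact sketch of this search; the luminance proxy (plain RGB mean), the stopping size, and the [0, 1] value range are assumptions.

```python
import numpy as np

def atmospheric_light_quad(I, min_size=32):
    """Recursively keep the quarter with the largest average luminance,
    then pick the pixel closest to white in RGB (Section 2.2.3)."""
    region = I
    while min(region.shape[:2]) >= 2 * min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
        region = max(quads, key=lambda q: q.mean())   # brightest quarter
    pixels = region.reshape(-1, 3)
    dist = np.linalg.norm(pixels - 1.0, axis=1)       # distance to white (1, 1, 1)
    return pixels[dist.argmin()]
```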

2.3. Shannon’s Entropy

Shannon entropy (Shannon, 1948) was originally proposed to quantify the information content in strings of text. It provides a measure of the amount of information required to describe a random variable. Similarly to the entropy concept widely used in information theory, when entropy is applied to a hazy image, high image entropy means that the image contains much detail, while low image entropy means that the image has less detail [63]. The local Shannon image entropy $E(x)$ on a local patch $\Omega(x)$ is defined as follows:
$$E(x) = -\sum_{j=0}^{L-1} p_j \log p_j,$$
where $L$ is the number of possible values for a pixel in $\Omega(x)$ (in a grey-scale image, $L$ equals 256), $p_j = \frac{n_j}{N}$ is the probability that the grey-scale value $j$ appears in $\Omega(x)$, $n_j$ is the number of pixels with the value $j$ in $\Omega(x)$, and $N$ is the total number of pixels in $\Omega(x)$. Figure 3 depicts the local image entropy of detailed and less detailed images.
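A direct sketch of Equation (10) for a single patch; sliding it over the image, or evaluating it per superpixel, yields local entropy maps such as those in Figure 3. The base-2 logarithm is an assumption, since Equation (10) does not fix the base.

```python
import numpy as np

def patch_entropy(patch, levels=256):
    """Equation (10): Shannon entropy of the grey-level histogram of a patch.
    `patch` is assumed to hold integer grey values in [0, levels - 1]."""
    counts = np.bincount(patch.ravel().astype(int), minlength=levels)
    p = counts / counts.sum()
    p = p[p > 0]                          # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```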

3. Proposed Method

This section describes the proposed image dehazing method in detail. In the proposed method, the superpixel method [66] is adopted instead of the rectangular local patch used in the existing DCP, so that the depth information of the scene is more accurately captured by each image patch. Firstly, a combined dark and bright channel prior is derived and improved by analyzing the DCP and BCP in Section 3.1, and the atmospheric light is estimated from selected superpixels by using the newly proposed candidate scores in Section 3.2. In Section 3.3, a region adaptive haze control parameter, designed to prevent artifacts in hazy and bright regions with little detail and to improve the haze removal rate in hazy regions with a lot of detail, is introduced based on the flat map calculated from Shannon entropy. Finally, the transmission $t(x)$ is calculated by using the region adaptive haze control parameter and refined in order to obtain a haze-free image.

3.1. Modified Dark and Bright Channel Prior

One of the reasons for the side effects of the conventional DCP and BCP is that the characteristics of the transmission function are overlooked. From Equation (3), the transmission can be directly calculated as follows.
$$t(x) = \frac{I(x) - A}{J(x) - A}$$
Since $0 \leq t(x) \leq 1$, given the global atmospheric light $A$, the hazy input image can be separated into two parts.
$$\begin{cases} I(x) > A \;\Rightarrow\; J(x) > A \text{ and } J(x) > I(x) \\ I(x) < A \;\Rightarrow\; J(x) < A \text{ and } J(x) < I(x) \end{cases}$$
From Equation (12), it can be clearly observed that the DCP assumption is valid only in the region where $I(x) < A$, and the BCP assumption is valid only in the region where $I(x) > A$.
This was observed by Zhang et al. [57], who separated the input image into two regions based on the BCP values. However, for $I^b(x) > A$ and $I^d(x) < A$, where $I^b(x) = \max_{y \in \Omega(x)} \max_{c \in \{R,G,B\}} I^c(y)$ and $I^d(x) = \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} I^c(y)$, both BCP and DCP are valid. Thus, in order to account for this case, the input hazy image should be separated into three regions instead of two. Then, Equation (11) can be rewritten as follows:
$$t(x) = \begin{cases} t_{BCP}(x) = \frac{A_M}{1 - A_M} \cdot (I_A^b(x) - 1), & I_A^d(x) > 1 \\ t_{DCP}(x) = 1 - I_A^d(x), & I_A^b(x) < 1 \\ 0.5 \cdot \max\left(t_{BCP}(x), t_0\right) + 0.5 \cdot \max\left(t_{DCP}(x), t_0\right), & \text{otherwise}, \end{cases}$$
where $I_A^d(x) = \min_{y \in \Omega(x)} \min_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}$, $I_A^b(x) = \max_{y \in \Omega(x)} \max_{c \in \{R,G,B\}} \frac{I^c(y)}{A^c}$, and $A_M = \max_{c \in \{R,G,B\}} A^c$.
Figure 4 shows that many regions assigned to the BCP (white region) in Equation (13) satisfy both the BCP and the DCP (gray region). As observed from Equation (13), the estimation of $A$ becomes even more important because the regions are divided based on $A$, and an appropriate prior must be applied to each divided region. Therefore, a novel atmospheric light estimation method is proposed in Section 3.2.
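A per-pixel sketch of the three-region rule in Equation (13), using rectangular min/max filters in place of the superpixels actually used by the proposed method; the lower bound t0 = 0.1 is an assumed value, not one taken from the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def transmission_three_region(I, A, patch=15, t0=0.1):
    """Equation (13): BCP where the whole patch is brighter than A,
    DCP where it is darker, and a blend where both priors hold."""
    ratio = I / np.asarray(A)
    IAd = minimum_filter(ratio.min(axis=2), size=patch)   # I_A^d
    IAb = maximum_filter(ratio.max(axis=2), size=patch)   # I_A^b
    AM = np.asarray(A).max()
    t_bcp = AM / (1.0 - AM) * (IAb - 1.0)
    t_dcp = 1.0 - IAd
    blend = 0.5 * np.maximum(t_bcp, t0) + 0.5 * np.maximum(t_dcp, t0)
    return np.where(IAd > 1.0, t_bcp, np.where(IAb < 1.0, t_dcp, blend))
```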
An oversimplified assumption is another source of the side effects of DCP and BCP. The actual transmission $t_{actual}(x)$ is expressed as follows:
$$t_{actual}(x) = \frac{1 - I_A^d(x)}{1 - J_A^d(x)} \geq 1 - I_A^d(x), \qquad t_{actual}(x) = \frac{I_A^b(x) - 1}{J_A^b(x) - 1} \geq A_0 \cdot (I_A^b(x) - 1),$$
where $A_0 = \frac{A_M}{1 - A_M}$, and $J_A^d(x)$ and $J_A^b(x)$ are defined analogously to $I_A^d(x)$ and $I_A^b(x)$ with $J$ in place of $I$. For areas that do not satisfy the dark channel prior, the dark channel value cannot be approximated as 0; similarly, for areas that do not satisfy the bright channel prior, the bright channel value cannot be approximated as 1. Thus, $t_{actual}(x)$ is always greater than the transmission $t(x)$ calculated from the dark and bright channel priors.
Inspired by the constant factor $\omega$ that controls the haze removal rate in DCP, a region adaptive haze control parameter is introduced in order to compensate for the oversimplified assumptions of DCP and BCP. Based on Equation (14), the modified transmission $t_{modified}(x)$ can be simply expressed as follows:
$$t_{modified}(x) = \begin{cases} t_{DCP}^{modified}(x) = 1 - \omega_p(x) \cdot I_A^d(x) \;\geq\; t_{DCP}(x) \\ t_{BCP}^{modified}(x) = A_0 \cdot \omega_p(x) \cdot (I_A^b(x) - 1) + 1 - \omega_p(x) \;\geq\; t_{BCP}(x) \end{cases}, \quad 0 \leq \omega_p(x) \leq 1,$$
where $\omega_p(x)$ is the proposed region adaptive haze control parameter. The equation for BCP is derived based on the assumption that $t_{modified}(x) = 1 - \omega_p(x)$ when $I_A^d(x) = I_A^b(x) = 1$. By combining Equations (13) and (15), the proposed transmission $t_{proposed}(x)$ is expressed as follows.
$$t_{proposed}(x) = \begin{cases} t_{BCP}^{modified}(x) = A_0 \cdot \omega_p(x) \cdot (I_A^b(x) - 1) + 1 - \omega_p(x), & I_A^d(x) > 1 \\ t_{DCP}^{modified}(x) = 1 - \omega_p(x) \cdot I_A^d(x), & I_A^b(x) < 1 \\ \frac{\max\left(t_{BCP}^{modified}(x),\, 1 - \omega_p(x)\right)}{2} + \frac{\max\left(t_{DCP}^{modified}(x),\, 1 - \omega_p(x)\right)}{2}, & \text{otherwise} \end{cases}$$
The details of the region adaptive haze control parameter are explained in Section 3.3.
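The modified rule differs from the previous sketch only in how each branch uses the control parameter; the following is a sketch of Equation (16), assuming omega_p has already been computed as a per-pixel map (Section 3.3).

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def transmission_proposed(I, A, omega_p, patch=15):
    """Equation (16): region adaptive transmission with a per-pixel
    haze control parameter omega_p, each entry in [0, 1]."""
    ratio = I / np.asarray(A)
    IAd = minimum_filter(ratio.min(axis=2), size=patch)
    IAb = maximum_filter(ratio.max(axis=2), size=patch)
    AM = np.asarray(A).max()
    A0 = AM / (1.0 - AM)
    t_bcp = A0 * omega_p * (IAb - 1.0) + (1.0 - omega_p)   # modified BCP branch
    t_dcp = 1.0 - omega_p * IAd                            # modified DCP branch
    floor = 1.0 - omega_p
    blend = 0.5 * np.maximum(t_bcp, floor) + 0.5 * np.maximum(t_dcp, floor)
    return np.where(IAd > 1.0, t_bcp, np.where(IAb < 1.0, t_dcp, blend))
```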

3.2. Atmospheric Light Estimation

Because most existing methods estimate atmospheric light by considering only brightness, white or bright landscape objects are often incorrectly chosen as the atmospheric light. In the ideal case, if $d(x)$ is large enough, $t(x)$ tends to be very small according to Equation (2), and $I(x)$ will be approximately $A$. Therefore, the atmospheric light $A$ can be estimated from deeper-depth regions. Since the depth of the scene is assumed to be positively correlated with haze concentration, atmospheric light candidate areas can be determined by using the relationship between haze and contrast, saturation, and dark channel values [67]. Using this relationship, the atmospheric light candidate score is calculated as follows:
$$Score_A(x) = (1 - P_s(x)) \cdot C(x) \cdot (1 - S(x)) \cdot I^d(x),$$
where $S(x)$ and $I^d(x)$ denote the saturation and dark channel value of the superpixel, respectively. Moreover, the portion of overexposed pixels in a superpixel, $P_s(x)$, is introduced in order to avoid selecting overexposed pixels as candidates for the atmospheric light. Candidate superpixels for atmospheric light estimation are selected based on this score. The contrast-related value $C(x)$ is defined as follows:
$$C(x) = \begin{cases} 1 - \frac{\sigma(x)}{\sigma}, & \sigma(x) < \sigma \\ 0, & \sigma(x) \geq \sigma, \end{cases}$$
where $\sigma(x)$ and $\sigma$ denote the standard deviations of the local patch $\Omega(x)$ and of the entire image, respectively.
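Equations (17) and (18) act on per-superpixel statistics; a vectorized sketch, assuming those statistics have already been aggregated per superpixel.

```python
import numpy as np

def candidate_scores(saturation, dark, local_std, overexposed_frac, global_std):
    """Equation (17) with the contrast term of Equation (18).
    Each argument is a 1-D array with one entry per superpixel;
    global_std is the standard deviation of the whole image."""
    C = np.where(local_std < global_std, 1.0 - local_std / global_std, 0.0)
    return (1.0 - overexposed_frac) * C * (1.0 - saturation) * dark
```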
Inspired by the patch selection method for color constancy [68], the hazy image is evenly divided into multiple rectangular patches (e.g., a 2 × 3 grid is used here, but the method is not limited to this). For each patch $F_{i,j}$, the mean dark channel value is calculated as follows:
$$\bar{D}(F_{i,j}) = \frac{\sum_{x \in F_{i,j}} I^d(x)}{N_{i,j}},$$
where $N_{i,j}$ is the number of superpixels in patch $F_{i,j}$.
The number of superpixels selected from patch $F_{i,j}$ is set to be proportional to the mean dark channel value $\bar{D}(F_{i,j})$:
$$N_s(F_{i,j}) = \begin{cases} \frac{\bar{D}(F_{i,j})}{\mu_d} \cdot N \cdot \varepsilon_s, & \bar{D}(F_{i,j}) \geq \mu_d \\ 0, & \bar{D}(F_{i,j}) < \mu_d, \end{cases}$$
where $\mu_d$ is the mean dark channel value of the image, $N$ is the total number of superpixels in the image, and $\varepsilon_s$ is a constant that determines the selection rate, which generally takes a value of 0.001. Examples of the patch means of the dark channel and the candidate scores for each superpixel can be seen in Figure 5. Finally, the $N_s(F_{i,j})$ highest-scoring superpixels in patch $F_{i,j}$ are selected, and the atmospheric light $A$ is calculated by averaging the selected superpixels. The superpixels selected for calculating the atmospheric light are shown in Figure 6.
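A sketch of the selection rule in Equation (20); rounding to an integer count is an implementation assumption.

```python
def num_selected(patch_dark_mean, mu_d, n_superpixels, eps_s=0.001):
    """Equation (20): number of superpixels to select from one patch,
    proportional to its mean dark channel value (zero below the image mean)."""
    if patch_dark_mean < mu_d:
        return 0
    return int(round(patch_dark_mean / mu_d * n_superpixels * eps_s))
```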

3.3. Region Adaptive Haze Control Parameter

The effects of the haze control parameter $\omega$ are shown in Figure 7. As $\omega$ increases, more haze is removed and visibility improves, but unwanted artifacts appear in the sky area. As discussed in Section 3.1, these artifacts are caused by the oversimplified assumptions of DCP and BCP; thus, region adaptive haze control parameters are introduced to address this problem. The region adaptive control parameter should improve visibility in high-detail hazy areas while avoiding artifacts in low-detail bright and hazy regions such as the sky. To this end, regions with and without detail must be determined first. In this paper, an area with detail is regarded as texture, and an area without detail is regarded as flat.

3.3.1. Texture and Flat Area Detection

As mentioned in Section 2.3, image entropy can be used to characterize the texture of an image and to determine the amount of image information [69]. However, images affected by haze tend to have low image entropy values due to the biased distribution of brightness, which makes it difficult to distinguish between texture and flat areas. Therefore, instead of the image itself, the gradient of the image is used to calculate the entropy. As observed in Figure 8, the entropy calculated from the gradient of the image shows a larger difference between the flat region and the texture region than the entropy calculated from the image.
The local image entropy $E_G(x)$ is computed over the gradient image, and then the texture probability $P_T(x)$ is calculated as follows:
$$P_T(x) = \begin{cases} 1, & E_G(x) > T_T \\ \frac{E_G(x) - T_F}{T_T - T_F}, & T_F < E_G(x) \leq T_T \\ 0, & E_G(x) \leq T_F, \end{cases}$$
where $T_F$ is the threshold obtained by applying Otsu's method [69] to $E_G(x)$, and $T_T = T_F + 64$. A high $P_T(x)$ value indicates a texture region, and a low $P_T(x)$ value indicates a flat region. An example of the texture probability $P_T(x)$ can be seen in Figure 9.
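A sketch of Equation (21); it assumes the gradient-entropy map has been scaled to the 8-bit range [0, 255] (the offset T_T = T_F + 64 suggests this scale) and uses scikit-image's Otsu implementation in place of whatever the original code used.

```python
import numpy as np
from skimage.filters import threshold_otsu

def texture_probability(E_G):
    """Equation (21): piecewise-linear texture probability from the local
    entropy of the gradient image. E_G is assumed scaled to [0, 255]."""
    T_F = threshold_otsu(E_G)     # flat/texture threshold (Otsu [69])
    T_T = T_F + 64.0
    # clip implements the 0 and 1 branches of the piecewise definition
    return np.clip((E_G - T_F) / (T_T - T_F), 0.0, 1.0)
```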

3.3.2. Region Adaptive Haze Control Parameter Calculation

In order to avoid artifacts in low-detail bright and hazy regions, a prevention weight is introduced. The texture probability $P_T(x)$ is low in low-detail regions; a hazy image has low saturation values due to the scattering and diffusion of reflected light in the atmosphere [67]; and the dark channel value $I^d(x)$ is high in bright regions. Based on these relationships, the prevention weight for avoiding artifacts in low-detail bright and hazy regions is defined as follows.
$$W_p(x) = (1 - P_T(x)) \cdot (1 - S(x)) \cdot I^d(x)$$
The value of $W_p(x)$ is high in bright and hazy regions with low detail and low elsewhere. Figure 10 illustrates the prevention weight and shows the artifact suppression performance in low-detail bright and hazy regions.
In order to improve visibility in high-detail hazy areas, an enhancement weight is introduced. The texture probability and local variance are high in regions with much detail, saturation values are low in hazy regions, and the enhancement weight should be inversely proportional to the prevention weight. Using these relationships, the enhancement weight for improving visibility in high-detail hazy areas is defined as follows:
$$W_E(x) = (1 - W_p(x)) \cdot P_T(x) \cdot D(x) \cdot (1 - S(x)),$$
where $D(x) = \frac{\sum_{y \in \Omega(x)} \left(I(y) - \mu(x)\right)^2}{N(x)}$ is the variance of the local patch $\Omega(x)$, with $\mu(x)$ and $N(x)$ denoting the mean intensity and the number of pixels of the patch. The value of $W_E(x)$ is low in areas with a high prevention weight $W_p(x)$ or low detail and high in high-detail hazy regions. Figure 11 depicts the enhancement weight and shows the haze removal performance in high-detail hazy regions.
As inferred from Figure 7, a small value of $\omega_p(x)$ leaves haze, while a large value of $\omega_p(x)$ removes the haze but may produce artifacts. Thus, $\omega_p(x)$ should be low when $W_p(x)$ is high and should increase when $W_E(x)$ is high. Using this relation, $\omega_p(x)$ can be expressed as follows:
$$\omega_p(x) = \omega_0 \left(1 - \omega_p W_p(x) + \omega_E W_E(x)\right),$$
where $\omega_0$ is a parameter for controlling the overall haze removal rate, and $\omega_p$ and $\omega_E$ are parameters for controlling the amounts of prevention and enhancement, respectively.
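Equations (22)–(24) combine into a short per-pixel computation; the parameter values omega0, omegaP, and omegaE below are placeholders, not values taken from the paper.

```python
import numpy as np

def region_adaptive_omega(P_T, S, I_d, D, omega0=0.95, omegaP=0.5, omegaE=0.5):
    """Equations (22)-(24): prevention weight, enhancement weight, and the
    region adaptive haze control parameter, all as per-pixel maps."""
    W_p = (1.0 - P_T) * (1.0 - S) * I_d                    # Equation (22)
    W_E = (1.0 - W_p) * P_T * D * (1.0 - S)                # Equation (23)
    omega = omega0 * (1.0 - omegaP * W_p + omegaE * W_E)   # Equation (24)
    return np.clip(omega, 0.0, 1.0)                        # keep 0 <= omega_p <= 1
```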

3.4. Haze-Free Image Recovery

The patch-level transmission $t_{proposed}(x)$ is refined to obtain the pixel-level transmission $t(p, q)$ using a guided filter [43]. With the atmospheric light $A$ and the refined transmission map $t(p, q)$, the haze-free image $J$ can be recovered as follows.
$$J(p, q) = \frac{I(p, q) - A}{\max\left(t(p, q), 0.05\right)} + A$$
The transmission is limited by a lower bound (0.05), the same empirical value as in [38], in order to avoid excessive enhancement.
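A sketch of this final step; the guided filter call requires the opencv-contrib package, and the radius and epsilon values are assumptions.

```python
import cv2
import numpy as np

def recover_haze_free(I, t_coarse, A, radius=40, eps=1e-3):
    """Refine the patch-level transmission with a guided filter [43], then
    invert the scattering model with the 0.05 lower bound (Equation (25))."""
    guide = cv2.cvtColor((I * 255).astype(np.uint8), cv2.COLOR_RGB2GRAY)
    t = cv2.ximgproc.guidedFilter(guide, t_coarse.astype(np.float32), radius, eps)
    t = np.maximum(t, 0.05)[..., None]
    return np.clip((I - np.asarray(A)) / t + np.asarray(A), 0.0, 1.0)
```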

4. Experimental Results

In this section, the effectiveness of the proposed method is evaluated and verified qualitatively and quantitatively against conventional methods. The hazy images used in the experiments are divided into those with and without ground truth. The hazy images with ground truth are collected from the I-HAZE database [70], the O-HAZE database [71], and the Synthetic Objective Testing Set (SOTS) and Hybrid Subjective Testing Set (HSTS) of the Realistic Single Image Dehazing (RESIDE) dataset [72]. Real-world hazy images, on the other hand, have no ground truth, but they must be considered in order to check whether the proposed method is applicable to real life. In order to acquire a wide range of test images, 200 real-world hazy images were collected from the RESIDE dataset. The proposed method was tested on hazy images with ground truth and on real-world hazy images and compared with He et al. [38], Meng et al. [15], Berman et al. [23], Zhu et al. [19], Ren et al. [31], and Cai et al. [33].

4.1. Quantitative Analysis

4.1.1. Quantitative Comparison of Hazy Image with Ground Truth

In this subsection, the performance of dehazing algorithms was compared by using various hazy image sets containing ground-truth (haze-free) images. The performance of the algorithms can be evaluated by analyzing the similarity between the dehazing results and the ground-truth images by using the full reference metrics PSNR and SSIM.
In Table 1, PSNR and SSIM values are calculated for images restored from the I-HAZE, O-HAZE, SOTS indoor and outdoor, and HSTS datasets. In general, the deep learning-based methods (i.e., Ren et al. and Cai et al.) show good numerical results on hazy images with ground truth. The proposed method showed the best or second-highest performance in both SSIM and PSNR. These quantitative metrics show that the proposed method effectively removes haze.

4.1.2. Quantitative Comparison of Hazy Image without Ground Truth

Since there is no reference image in the natural image experiments, IQA evaluation indices and image entropy (IE) were used to evaluate the quality of the dehazing results. As IQA evaluation indices, the natural image quality evaluator (NIQE) [73], the blind/referenceless image spatial quality evaluator (BRISQUE), which outputs values in the range [0, 100] [74], and the perception-based image quality evaluator (PIQE) [75] are used in this paper. Moreover, image entropy (IE) describes the randomness of the image distribution, and its value denotes the amount of image information [62]. The better the picture quality, the smaller the NIQE, BRISQUE, and PIQE values and the higher the IE.
Table 2 shows the haze removal performance of natural images in NIQE, BRISQUE, PIQE, and IE. It can be observed that the proposed method outperforms other conventional methods except NIQE. This shows the effectiveness of the proposed method in a real case.

4.2. Qualitative Comparison

4.2.1. Qualitative Comparison of Hazy Image with Ground Truth

Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 show detailed comparisons of the different methods on the I-HAZE, O-HAZE, SOTS indoor, SOTS outdoor, and HSTS datasets, respectively. He et al. and Meng et al. darken the image and cause distortions in bright areas such as the sky. Berman et al. removes too much haze, causing color distortion and over-saturation. Zhu et al. is unstable in terms of performance, sometimes leaving more or less haze residue and in some cases dimming the image. Ren et al. and Cai et al. do not completely eliminate the haze, and Ren et al. causes color distortions in several images. On the other hand, the proposed method recovers the image closest to the ground truth, and its visual performance is much better.

4.2.2. Qualitative Comparison of Hazy Image without Ground Truth

Figure 17 compares the dehazing results on real images. Figure 18, Figure 19, Figure 20 and Figure 21 show enlarged parts of Figure 17 in order to show the differences more clearly. As observed in Figure 18, Figure 19, Figure 20 and Figure 21, the proposed method outperforms the state-of-the-art haze removal methods in terms of the amount of haze removed, without producing undesirable artifacts in flat, bright, and hazy regions. He et al., Meng et al., and Berman et al. produce artifacts in the sky area, and haze partially remains. Compared to the above methods, Ren et al. and Cai et al. show stable results, but haze still remains in the far areas, and there is some color distortion in the sky area. Zhu et al. dims the image and leaves haze in the results. These results prove the effectiveness of the proposed method in real cases.

4.3. Limitations of Proposed Method

The effectiveness of the proposed method for haze removal was shown in Section 4.1 and Section 4.2. In this section, the limitations of the proposed method are presented. Figure 22 shows a case where noise exists in the texture areas. In this case, the texture region is treated as a visibility enhancement region; thus, a large $\omega_p(x)$ value is applied, which removes haze but also amplifies the noise. Figure 23 shows a case where texture is misclassified as a flat area. Due to dense haze and weak textures, the entropy of the distant mountains is relatively low, which causes this region to be misclassified as a flat region. As a result, a small $\omega_p(x)$ value is applied, resulting in relatively little haze removal.

4.4. Computational Complexity

Table 3 shows the average running times, demonstrating the computational efficiency of the proposed method. Experiments were performed with all methods at different resolutions (640 × 480, 1024 × 768, 1280 × 720, and 1920 × 1080) on a PC equipped with a 2.8 GHz Intel Core i7 and 16 GB RAM. The proposed method is implemented in C++, and the other methods are implemented in Matlab. The difference in implementation languages prevents a perfectly fair comparison, but the efficiency of the proposed method is clearly visible.

5. Conclusions

In this paper, a method to recover a haze-free image from a single hazy image was presented. To this end, dehazing was performed by combining the complementary DCP and BCP. A patch-based robust atmospheric light estimation was proposed for dividing the image into the regions to which the DCP assumption and the BCP assumption are applied. In addition to brightness, saturation and contrast were taken into account when estimating the atmospheric light in order to avoid erroneously selecting white or bright landscape objects as the atmospheric light. A region adaptive haze control parameter was introduced to prevent artifacts in low-detail bright and hazy areas and to improve haze removal in high-detail hazy areas. Shannon's entropy was used to compute texture/flat probabilities, and then the prevention and enhancement weights for each superpixel were calculated. The performance of the proposed method was evaluated by qualitative and quantitative analysis on synthetic and real-world images. Experiments confirmed that the proposed method effectively prevents artifacts in flat and bright areas while effectively removing haze. The results of the proposed algorithm showed better performance than the conventional methods in both quantitative and qualitative criteria.

Funding

This research received no external funding.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Xu, Y.; Wen, J.; Fei, L.; Zhang, Z. Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access 2016, 4, 165–188. [Google Scholar] [CrossRef]
  2. Nayar, S.K.; Narasimhan, S.G. Vision in bad weather. In Proceedings of the International Conference on Computer Vision, Kerkyra, Greece, 20–25 September 1999; Volume 2, pp. 820–827. [Google Scholar]
  3. Narasimhan, S.G.; Nayar, S.K. Chromatic framework for vision in bad weather. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head, SC, USA, 15 June 2000; pp. 598–605. [Google Scholar]
  4. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef] [Green Version]
  5. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 325–332. [Google Scholar]
  6. Shwartz, S.; Namer, E.; Schechner, Y.Y. Blind haze separation. In Proceedings of the IEEE Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 1984–1991. [Google Scholar]
  7. Kopf, J.; Neubert, B.; Chen, B.; Cohen, M.; Cohen-Or, D.; Deussen, O.; Uyttendaele, M.; Lischinski, D. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef] [Green Version]
  8. Liang, J.; Zhang, W.; Ren, L.; Ju, H.; Qu, E. Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion. Appl. Opt. 2016, 55, 8221–8226. [Google Scholar] [CrossRef] [PubMed]
  9. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the IEEE Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  10. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
  11. Fattal, R. Dehazing using color-lines. ACM Trans. Graph. 2014, 34, 1–14. [Google Scholar] [CrossRef]
  12. Nishino, K.; Kratz, L.; Lombardi, S. Bayesian defogging. Int. J. Comput. Vis. 2012, 98, 263–278. [Google Scholar] [CrossRef]
  13. Tarel, J.P.; Hautiere, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4 October 2009; pp. 2201–2208. [Google Scholar]
  14. Tarel, J.P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision Enhancement in Homogeneous and Heterogeneous Fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20. [Google Scholar] [CrossRef] [Green Version]
  15. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624. [Google Scholar]
  16. Ancuti, C.O.; Ancuti, C. Single image dehazing by multi-scale fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef]
  17. Zheng, M.; Qi, G.; Zhu, Z.; Li, Y.; Wei, H.; Liu, Y. Image Dehazing by an Artificial Image Fusion Method Based on Adaptive Structure Decomposition. IEEE Sens. J. 2020, 20, 8062–8072. [Google Scholar] [CrossRef]
  18. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A Novel Fast Single Image Dehazing Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Trans. Instrum. Meas. 2021, 70, 1–23. [Google Scholar]
  19. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [PubMed] [Green Version]
  20. Negru, M.; Nedevschi, S.; Peter, R.I. Exponential Contrast Restoration in Fog Conditions for Driving Assistance. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2257–2268. [Google Scholar] [CrossRef]
  21. Wang, Q.; Zhao, L.; Tang, G.; Zhao, H.; Zhang, X. Single-Image Dehazing Using Color Attenuation Prior Based on Haze-Lines. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5080–5087. [Google Scholar]
  22. Duminil, A.; Tarel, J.-P.; Bremond, R. Single Image Atmospheric Veil Removal Using New Priors for Better Genericity. Atmosphere 2021, 12, 772. [Google Scholar] [CrossRef]
  23. Berman, D.; Treibitz, T.; Avidan, S. Non-local image dehazing. In Proceedings of the IEEE Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
  24. Bui, T.M.; Kim, W. Single Image Dehazing Using Color Ellipsoid Prior. IEEE Trans. Image Process. 2018, 27, 999–1009. [Google Scholar] [CrossRef]
  25. Xie, B.; Guo, F.; Cai, Z. Improved single image dehazing using dark channel prior and multi-scale retinex’. In Proceedings of the International Conference on Intelligent System Design and Engineering Application, Changsha, China, 13–14 October 2010; pp. 848–851. [Google Scholar]
  26. Gao, Y.; Yun, L.; Shi, J.; Chen, F.; Lei, L. Enhancement MSRCR algorithm of color fog image based on the adaptive scale. In Proceedings of the Sixth International Conference on Digital Image Processing, Athens, Greece, 5–6 April 2014; SPIE: Bellingham, WA, USA, 2014; pp. 91591B-1–91591B-7. [Google Scholar]
  27. Wang, J.; Lu, K.; Xue, J.; He, N.; Shao, L. Single Image Dehazing Based on the Physical Model and MSRCR Algorithm. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2190–2199. [Google Scholar] [CrossRef]
  28. Park, S.; Moon, B.; Ko, S.; Yu, S.; Paik, J. Low-light image restoration using bright channel prior-based variational Retinex model. EURASIP J. Image Video Process. 2017, 2017, 44. [Google Scholar] [CrossRef] [Green Version]
  29. Tang, Q.; Yang, J.; He, X.; Jia, W.; Zhang, Q.; Liu, H. Nighttime image dehazing based on Retinex and dark channel prior using Taylor series expansion. Comput. Vis. Image Underst. 2021, 202, 103086. [Google Scholar] [CrossRef]
  30. Tang, K.; Yang, J.; Wang, J. Investigating haze-relevant features in a learning framework for image dehazing. In Proceedings of the IEEE International Conference Computer Vision Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 2995–3002. [Google Scholar]
  31. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.-H. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 154–169. [Google Scholar]
  32. Ren, W.; Ma, L.; Zhang, J.; Pan, J.; Cao, X.; Liu, W.; Yang, M.-H. Gated fusion network for single image dehazing. In Proceedings of the IEEE International Conference Computer Vision Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3253–3261. [Google Scholar]
  33. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [Green Version]
  34. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. Aod-net: All-in-one dehazing network. In Proceedings of the IEEE International Conference Computer Vision Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4780–4788. [Google Scholar]
  35. Zhang, H.; Patel, V.M. Densely connected pyramid dehazing network. In Proceedings of the IEEE/CVF International Conference Computer Vision Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18 June–2 July 2018; pp. 3194–3203. [Google Scholar]
  36. Liu, X.; Ma, Y.; Shi, Z.; Chen, J. GridDehazeNet: Attention-based multi-scale network for image dehazing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 7313–7322. [Google Scholar]
  37. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the IEEE International Conference Computer Vision Pattern Recognition. (CVPR), Long Beach, CA, USA, 15–21 June 2019; pp. 8160–8168. [Google Scholar]
  38. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
  39. Yu, J.; Li, D.; Liao, Q. Physics-based fast single image fog removal. Acta Autom. Sin. 2011, 37, 143–149. [Google Scholar] [CrossRef]
  40. Gibson, K.B.; Vo, D.T.; Nguyen, T.Q. An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 2012, 21, 662–673. [Google Scholar] [CrossRef] [PubMed]
  41. Ding, M.; Tong, R. Efficient dark channel based image dehazing using quadtrees. Sci. China Inf. Sci. 2013, 56, 1–9. [Google Scholar] [CrossRef] [Green Version]
  42. Shiau, Y.H.; Yang, H.Y.; Chen, P.Y.; Chuang, Y.Z. Hardware implementation of a fast and efficient haze removal method. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1369–1374. [Google Scholar] [CrossRef]
  43. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  44. Zhu, Y.-B.; Liu, J.-M.; Hao, Y.-G. A single image dehazing algorithm using sky detection and segmentation. In Proceedings of the 7th International Congress on Image and Signal Processing, Dalian, China, 14–16 October 2014; pp. 248–252. [Google Scholar]
  45. Yuan, H.; Liu, C.; Guo, Z.; Sun, Z. A region-wised medium transmission based image dehazing method. IEEE Access 2017, 5, 1735–1742. [Google Scholar] [CrossRef]
  46. Liu, Y.; Li, H.; Wang, M. Single image dehazing via large sky region segmentation and multiscale opening dark channel model. IEEE Access 2017, 5, 8890–8903. [Google Scholar] [CrossRef]
  47. Wang, W.; Yuan, X.; Wu, X.; Liu, Y. Dehazing for images with large sky region. Neurocomputing 2017, 238, 365–376. [Google Scholar] [CrossRef]
  48. Fu, H.; Wu, B.; Shao, Y.; Zhang, H. Scene-awareness based single image dehazing technique via automatic estimation of sky area. IEEE Access 2019, 7, 1829–1839. [Google Scholar] [CrossRef]
  49. Anan, S.; Khan, M.I.; Kowsar, M.M.S.; Deb, K.; Dhar, K.P.; Koshiba, T. Image Defogging Framework Using Segmentation and the Dark Channel Prior. Entropy 2021, 23, 285. [Google Scholar] [CrossRef]
  50. Zhang, L.; Wang, X.; She, C.; Wang, S.; Zhang, Z. Saliency-driven single image haze removal method based on reliable airlight and transmission. J. Electron. Imaging 2018, 27, 1–11. [Google Scholar] [CrossRef]
  51. Jackson, J.; Kun, S.; Agyekum, K.O.; Oluwasanmi, A.; Suwansrikham, P. A Fast Single-Image Dehazing Algorithm Based on Dark Channel Prior and Rayleigh Scattering. IEEE Access 2020, 8, 73330–73339. [Google Scholar] [CrossRef]
  52. Wang, W.; Yuan, X.; Wu, X.; Liu, Y. Fast image dehazing method based on linear transformation. IEEE Trans. Multimed. 2017, 19, 1142–1155. [Google Scholar] [CrossRef]
  53. Yang, F.; Tang, S. Adaptive Tolerance Dehazing Algorithm Based on Dark Channel Prior. Algorithms 2020, 13, 45. [Google Scholar] [CrossRef] [Green Version]
  54. Su, C.; Wang, W.; Zhang, X.; Jin, L. Dehazing with Offset Correction and a Weighted Residual Map. Electronics 2020, 9, 1419. [Google Scholar] [CrossRef]
  55. Iwamoto, Y.; Hashimoto, N.; Chen, Y.W. Real-Time Haze Removal Using Normalised Pixel-Wise Dark-Channel Prior and Robust Atmospheric-Light Estimation. Appl. Sci. 2020, 10, 1165. [Google Scholar] [CrossRef] [Green Version]
  56. Panagopoulos, A.; Wang, C.; Samaras, D.; Paragios, N. Estimating shadows with the bright channel cue. In Proceedings of the European Conference on Computer Vision, Crete, Greece, 5–11 September 2010; pp. 1–12. [Google Scholar]
  57. Zhang, Y.; Gao, K.; Wang, J.; Zhang, X.; Wang, H.; Hua, Z.; Wu, Q. Single-image Dehazing Using Extreme Reflectance Channel Prior. IEEE Access 2021, 87827–87838. [Google Scholar] [CrossRef]
  58. Jiang, Y.; Jiang, Z.; Yang, Z.; Guo, L.; Zhu, L. An image defogging method using adaptive bi-channel priors. In Seventh Symposium on Novel Photoelectronic Detection Technology and Applications; SPIE: Bellingham, WA, USA, 2021; Volume 11763, p. 117635Y. [Google Scholar]
  59. Tan, Y.; Wang, G. Image Haze Removal Based on Superpixels and Markov Random Field. IEEE Access 2020, 8, 60728–60736. [Google Scholar] [CrossRef]
  60. Wang, R.; Li, R.; Sun, H. Haze removal based on multiple scattering model with superpixel algorithm. Signal. Process. 2016, 127, 24–36. [Google Scholar] [CrossRef]
  61. Yang, M.; Li, Z.; Liu, J. Super-pixel based single image haze removal. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC) IEEE, Yinchuan, China, 28–30 May 2016; pp. 1965–1969. [Google Scholar]
  62. Park, D.; Park, H.; Han, D.K.; Ko, H. Single image dehazing with image entropy and information fidelity. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4037–4041. [Google Scholar]
  63. Ngo, D.; Lee, S.; Kang, B. Robust single-image haze removal using optimal transmission map and adaptive atmospheric light. Remote Sens. 2020, 12, 2333. [Google Scholar] [CrossRef]
  64. Salazar-Colores, S.; Moya-Sanchez, E.U.; Ramos-Arreguin, J.-M.; Cabal-Yepez, E.; Flores, G.; Cortes, U. Fast Single Image Defogging with Robust Sky Detection. IEEE Access 2020, 8, 149176–149189. [Google Scholar] [CrossRef]
  65. Liang, W.; Long, J.; Li, K.C.; Xu, J.; Ma, N.; Lei, X. A fast defogging image recognition algorithm based on bilateral hybrid filtering. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–16. [Google Scholar] [CrossRef]
  66. Achanta, R.; Susstrunk, S. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4651–4660. [Google Scholar]
  67. Ngo, D.; Lee, G.D.; Kang, B. Haziness Degree Evaluator: A Knowledge-Driven Approach for Haze Density Estimation. Sensors 2021, 21, 3896. [Google Scholar] [CrossRef] [PubMed]
  68. Shi, Y.; Wang, J.; Xue, X. Fast Color Constancy with Patch-wise Bright Pixels. arXiv 2019, arXiv:1911.07177. [Google Scholar]
  69. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  70. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Poitiers, France, 24–27 September 2018; pp. 620–631. [Google Scholar]
  71. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18 June–2 July 2018; pp. 754–762. [Google Scholar]
  72. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [Green Version]
  73. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal. Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  74. Mittal, A.; Moorthy, A.K.; Bovik, A. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  75. Venkatanath, N.; Praneeth, D.; Chandrasekhar, B.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Atmospheric scattering model.
Figure 2. Atmospheric light estimation: (a) coarse-to-fine method and (b) quad decomposition method.
Figure 3. Examples of local image entropy: (a,c) input image, (b,d) local image entropy of (a,c).
Figure 4. Region separation based on the atmospheric light A: (a) input image, (b) the result of Equation (11), and (c) proposed method.
Figure 5. Atmospheric light estimation: (a) input image, (b) patch means of dark channel, and (c) candidate scores for each superpixel.
Figure 6. Selected superpixels for estimating atmospheric light: (a,c) input image, (b,d) selected superpixels.
Figure 7. The effects of haze control parameter ω: (a) input hazy image, (b) recovered image (ω = 0.25), (c) recovered image (ω = 0.5), (d) recovered image (ω = 0.75), and (e) recovered image (ω = 1.0).
Figure 8. Comparison of local image entropy: (a,d) input image, (b,e) entropy calculated from input image, and (c,f) entropy calculated from gradient of image.
Figure 9. Examples of texture probability: (a,c) input image, (b,d) texture probability.
Figure 10. The effect of prevention weight: (a) input image, (b) prevention weight, (c) result without prevention weight, and (d) result with prevention weight.
Figure 11. The effect of enhancement weight: (a) input image, (b) enhancement weight, (c) result without enhancement weight, and (d) result with enhancement weight.
Figure 12. Comparison of dehazing results from I-HAZE dataset: (a) hazy image, (b) ground truth, (c) He et al., (d) Meng et al., (e) Berman et al., (f) Zhu et al., (g) Ren et al., (h) Cai et al., and (i) proposed.
Figure 13. Comparison of dehazing results from O-HAZE dataset: (a) hazy image, (b) ground truth, (c) He et al., (d) Meng et al., (e) Berman et al., (f) Zhu et al., (g) Ren et al., (h) Cai et al., and (i) proposed.
Figure 14. Comparison of dehazing results from SOTS indoor dataset: (a) hazy image, (b) ground truth, (c) He et al., (d) Meng et al., (e) Berman et al., (f) Zhu et al., (g) Ren et al., (h) Cai et al., and (i) proposed.
Figure 15. Comparison of dehazing results from SOTS outdoor dataset: (a) hazy image, (b) ground truth, (c) He et al., (d) Meng et al., (e) Berman et al., (f) Zhu et al., (g) Ren et al., (h) Cai et al., and (i) proposed.
Figure 16. Comparison of dehazing results from HSTS dataset: (a) hazy image, (b) ground truth, (c) He et al., (d) Meng et al., (e) Berman et al., (f) Zhu et al., (g) Ren et al., (h) Cai et al., and (i) proposed.
Figure 17. Comparison of dehazing results from real world image: (a) hazy image, (b) He et al., (c) Meng et al., (d) Berman et al., (e) Zhu et al., (f) Ren et al., (g) Cai et al., and (h) proposed.
Figure 18. Comparison of dehazing results from real world image: (a) hazy image, (b) He et al., (c) Meng et al., (d) Berman et al., (e) Zhu et al., (f) Ren et al., (g) Cai et al., and (h) proposed.
Figure 19. Comparison of dehazing results from real world image: (a) hazy image, (b) He et al., (c) Meng et al., (d) Berman et al., (e) Zhu et al., (f) Ren et al., (g) Cai et al., and (h) proposed.
Figure 20. Comparison of dehazing results from real world image: (a) hazy image, (b) He et al., (c) Meng et al., (d) Berman et al., (e) Zhu et al., (f) Ren et al., (g) Cai et al., and (h) proposed.
Figure 21. Comparison of dehazing results from real world image: (a) hazy image, (b) He et al., (c) Meng et al., (d) Berman et al., (e) Zhu et al., (f) Ren et al., (g) Cai et al., and (h) proposed.
Figure 22. Example of noise boosting problem: (a) input image, (b) entropy of input image, (c) enhancement map, and (d) result image.
Figure 23. Example of haze remaining problem: (a) input image, (b) entropy of input image, (c) prevention map, and (d) result image.
Table 1. Quantitative comparison of different methods with ground truth.

| Dataset | Metric | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|---|
| I-Haze (35 samples) | PSNR | 11.56 | 13.11 | 13.87 | 14 | 16.47 | 14.55 | 16.26 |
| I-Haze (35 samples) | SSIM | 0.42 | 0.51 | 0.51 | 0.5 | 0.59 | 0.53 | 0.61 |
| O-Haze (45 samples) | PSNR | 14.72 | 15.49 | 15.15 | 15.49 | 16.76 | 15.21 | 15.78 |
| O-Haze (45 samples) | SSIM | 0.38 | 0.42 | 0.46 | 0.37 | 0.4 | 0.42 | 0.49 |
| SOTS-Indoor (500 samples) | PSNR | 16.81 | 17.05 | 17.29 | 18.98 | 17.13 | 11.97 | 19.42 |
| SOTS-Indoor (500 samples) | SSIM | 0.82 | 0.79 | 0.75 | 0.85 | 0.81 | 0.68 | 0.86 |
| SOTS-Outdoor (500 samples) | PSNR | 14.81 | 15.57 | 18.08 | 18.25 | 19.61 | 22.92 | 20.96 |
| SOTS-Outdoor (500 samples) | SSIM | 0.7549 | 0.783 | 0.8026 | 0.7867 | 0.8633 | 0.8886 | 0.8966 |
| HSTS (10 samples) | PSNR | 15.09 | 15.16 | 17.63 | 19.84 | 18.67 | 24.48 | 22.36 |
| HSTS (10 samples) | SSIM | 0.7656 | 0.7414 | 0.7933 | 0.8157 | 0.8174 | 0.9216 | 0.9 |
Table 2. Quantitative comparison of different methods with real-world images.

| Metric | Input | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|---|
| NIQE | 3.19 | 3.1 | 3.22 | 3.49 | 3.11 | 3.15 | 3.1 | 2.95 |
| BRISQUE | 32.24 | 28.67 | 25.91 | 29.72 | 30.65 | 30.11 | 30.52 | 24.53 |
| PIQE | 41.15 | 40.74 | 40.35 | 44.75 | 41.16 | 41.2 | 41.3 | 36.02 |
| IE | 6.97 | 6.98 | 7.03 | 7.35 | 7.11 | 7.24 | 7.14 | 7.4 |
Table 3. Computational complexity comparisons (average running time).

| Image Size | He et al. | Meng et al. | Berman et al. | Zhu et al. | Ren et al. | Cai et al. | Proposed |
|---|---|---|---|---|---|---|---|
| 640 × 480 | 1.359776 | 2.986463 | 2.637870 | 0.690927 | 1.854696 | 1.619289 | 0.344 |
| 1024 × 768 | 3.535564 | 4.032741 | 4.140068 | 1.358614 | 2.587211 | 3.576306 | 0.535 |
| 1280 × 720 | 4.097585 | 3.678053 | 4.488762 | 1.749579 | 2.906746 | 4.373431 | 0.547 |
| 1920 × 1080 | 9.534287 | 6.824747 | 8.149385 | 3.182058 | 6.432193 | 9.872256 | 0.636 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
