Article

An Underwater Image Enhancement Method for Different Illumination Conditions Based on Color Tone Correction and Fusion-Based Descattering

1 School of Ocean and Earth Science, Tongji University, Shanghai 200092, China
2 Institute of Deep-Sea Science and Engineering, Chinese Academy of Sciences, Hainan 572000, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(24), 5567; https://doi.org/10.3390/s19245567
Submission received: 19 November 2019 / Revised: 8 December 2019 / Accepted: 10 December 2019 / Published: 16 December 2019
(This article belongs to the Special Issue Imaging Sensor Systems for Analyzing Subsea Environment and Life)

Abstract

In the shallow-water environment, underwater images often present problems such as color deviation and low contrast due to light absorption and scattering in the water body; deep-sea images, in addition, can suffer from uneven brightness and regional color shift caused by chromatic and inhomogeneous artificial lighting devices. Since the latter situation is rarely studied in the field of underwater image enhancement, we propose a new model that includes it in the analysis of underwater image degradation. Based on a theoretical study of the new model, a comprehensive method for enhancing underwater images under different illumination conditions is proposed in this paper. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. In the first module, the regional or full-extent color deviation caused by different types of incident light is corrected via frequency-based color-tone estimation. In the second module, the residual low-contrast and pixel-wise color-shift problems are handled by combining the descattering results obtained under different assumed states of the image. The proposed method is evaluated on laboratory and open-water images of different depths and illumination states. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms many other methods in enhancing the quality of different types of underwater images, and is especially effective in improving the color accuracy and information content in badly-illuminated regions of underwater images with non-uniform illumination, such as deep-sea images.

1. Introduction

As one of the most direct approaches to perceiving the world below the surface, optical images can provide plenty of useful information for various underwater applications, such as marine geology surveys, underwater mining, fishery and marine archaeology [1,2,3,4,5]. However, compared with images from the terrestrial environment, underwater images often suffer from severe degradation, which can impact their reliability and utility in underwater applications. Underwater image degradation has several causes. The first is degradation caused by the water body. As shown in Figure 1, light of different wavelengths is attenuated at different rates in the water body. This uneven attenuation leads to ubiquitous color bias in underwater images. Suspended particles in water can also degrade underwater images. As depicted in Figure 1, particles near the transmission path from the photographed scene to the camera cause small-angle scattering (forward scattering) of the incident light, and particles in the surrounding environment redirect ambient light into the camera lens by large-angle scattering (backscattering). These redirections of light lead to blurriness and a hazy appearance in underwater images. As underwater exploration extends to greater depths, artificial lighting devices are added to provide the necessary illumination for the dark deep-sea environment, as depicted in Figure 1. Due to the limited range and inhomogeneity of artificial illumination, problems such as dark backgrounds and bright spots are often seen in deep-sea images. Moreover, if the light source is chromatic, the color balance of underwater images is also affected. To facilitate the usage of underwater images, these degradation problems should be addressed.
To date, there have been many attempts to improve the quality of underwater images. According to recent literature reviews [6,7], they are usually classified into two main categories: underwater image enhancement and underwater image restoration. Underwater image enhancement methods require no prior knowledge about the environment and mainly aim at improving the visual quality of underwater images. For example, in [8], Iqbal et al. used histogram stretching in the RGB color space to restore the color balance, and stretching of saturation and intensity in the HSI color space to improve the colorfulness and contrast of underwater images. In [9], Ancuti et al. proposed a multi-scale fusion method that combined the results of color correction and contrast enhancement using four weight maps based on image luminance, contrast, chromaticity and saliency. In [10], Ghani and Isa reduced the color deviation in underwater images by using the characteristics of the Rayleigh distribution, and improved image saturation and contrast by stretching the corresponding components in the HSV color space. In [11], Li et al. proposed a weakly supervised color transfer method to correct color distortion in underwater images.
Underwater image restoration methods, on the other hand, attempt to recover the true scene radiance from degraded underwater images. These methods use models to analyze the mechanism of underwater image degradation, and restore the images by reversing the degradation process with model parameters deduced from prior knowledge. Among them, the simplified image formation model (IFM) is frequently used for its effectiveness and simplicity [6], and owing to its similarity to the model of outdoor hazy images, the Dark Channel Prior (DCP) from outdoor image dehazing [12] is also widely adopted in methods based on this model. In [13], Galdran et al. proposed the Red Channel Prior, based on the DCP, to recover the lost contrast in underwater images; this new prior reverses the red channel to deal with the strong attenuation of red light in the water body. In [14], Drews Jr et al. derived the Underwater DCP (UDCP) from the traditional DCP by excluding the red channel when producing the prior. Apart from the DCP-related priors, other priors have also been proposed for underwater image restoration. In [15], Carlevaris-Bianco et al. proposed a prior that compares the maximum intensity of the red channel with the maximum intensity of the green and blue channels over a small image patch. In [16] and [17], Peng et al. defined a new prior based on image blurriness and used it to improve the quality of images from various underwater environments. There are also methods that combine the features of the former two categories. For example, in the work of Hou et al. [18], the UDCP is used together with quad-tree subdivision and Gamma correction to improve the contrast and saturation of underwater images, and in [19], Qing et al. proposed a comprehensive method with adaptive dehazing and adaptive histogram equalization to remove scattering and restore the color balance of underwater images. For these methods, no strict classification is usually made.
From the study of the relevant literature, we have also noticed that most published works are designed to solve water-caused problems, i.e., color deviation and low contrast caused by the attenuation and scattering of light in the water body, while only a few have considered the degradation caused by artificial lighting. Moreover, in the latter works, the study of lighting-caused degradation also seems to be limited to local problems such as bright spots [13] or vignetting [20], while more general problems, such as the influence on the distribution of color and brightness over the whole image, are rarely studied.
In this paper, we propose an underwater image enhancement method for different illumination conditions based on a new model of underwater image degradation. In the new model, illumination is included in the modeling of single-pixel intensity, so its influence on local regions as well as on the whole image is covered. The proposed method is composed of two components: color-tone correction and fusion-based descattering. The first component is based on a frequency-based color-tone estimation strategy. By changing its application range and using necessary modification filters, it can correct the global color cast in uniformly-illuminated images and the regional color cast in non-uniformly-illuminated images. The second component solves the residual degradation problems that are related to the scene-camera distance; it adopts a fusion strategy to enhance images under different assumed states. Experiments on laboratory and open-water images of different depths and lighting conditions prove the effectiveness of the proposed method. According to qualitative and quantitative evaluation results, the proposed method can improve the color balance and contrast of underwater images, and restore the color accuracy and visibility of badly-illuminated regions in non-uniformly illuminated images.
The rest of this paper is organized as follows: In Section 2, the enhancement of underwater images with different illumination conditions is studied theoretically based on a model of underwater imaging under arbitrary illumination. In Section 3, the overall framework and the individual components of the proposed method are introduced. In Section 4, the proposed method is evaluated on shallow-water, laboratory and deep-sea images, with comparison to three state-of-the-art underwater image enhancement and restoration methods. In the last section, the conclusions of this work are presented.

2. Problem Formulation and Model Improvement

To study the solution of underwater image degradation problems caused by the water body and the light source, the Jaffe–McGlamery model [21,22], a general model of underwater image formation, is reviewed first. In the Jaffe–McGlamery model, the irradiance of a monochromatic underwater image is formulated as the linear combination of the following three components: the direct component $E_d$, the forward scattering component $E_{fs}$ and the backscattering component $E_{bs}$, i.e.,
$$E(x \mid \lambda) = E_d(x \mid \lambda) + E_{fs}(x \mid \lambda) + E_{bs}(x \mid \lambda), \qquad (1)$$
where $x$ represents an image point and $\lambda$ is the wavelength of the incident light. By weighting with the spectral response [23] of the detector, the monochromatic irradiances over the whole spectrum are integrated and transformed into the pixel values of an image in the RGB color space, i.e., $I_k(x) = \int_\lambda Q_k(\lambda)\,\phi_k(\lambda)\,E(x \mid \lambda)\,d\lambda$, where $\phi_k(\lambda)$ is the spectral response of channel $k$, $k \in \{R, G, B\}$, and $Q_k(\lambda)$ is a factor accounting for the imaging system and the unit conversion from light irradiance to pixel intensity.
In the model of the monochromatic image irradiance (i.e., Equation (1)), the direct component $E_d(x \mid \lambda)$ corresponds to the irradiance that has been exponentially attenuated after being reflected by the photographed scene, i.e.,
$$E_d(x \mid \lambda) = E_I(x \mid \lambda)\, M(x \mid \lambda)\, e^{-c(\lambda) d(x)}\, \kappa(x), \qquad (2)$$
where $E_I(x \mid \lambda)$ is the irradiance of the incident light on the scene point, $M(x \mid \lambda)$ is the reflectance of the scene point, $c(\lambda)$ is the volume attenuation coefficient, $d(x)$ is the distance between the scene point and the camera, and $\kappa(x)$ represents other parameters of the imaging system. According to [22], $E_I(x \mid \lambda)$ is a function of the light-source irradiance and the attenuation along the transmission path from the light source to the scene. Using $BP$ to represent the beam pattern of the light source, $\gamma$ the angle from the light source to the scene point, and $D$ the distance between the scene point and the light source, the incident irradiance is calculated as $E_I(x \mid \lambda) = BP(x \mid \lambda)\,\cos\gamma(x)\, e^{-c(\lambda) D(x)} / D^2(x)$. The formula of $\kappa(x)$ is $\kappa(x) = \cos^4\vartheta(x)\,\frac{T_l}{4 f_n^2}\left[\frac{d(x) - F_l}{d(x)}\right]^2$, where $\vartheta$ is the angle from the camera to the scene point, $T_l$ is the transmittance of the lens, and $f_n$ is the F-number of the camera with focal length $F_l$.
The forward scattering component $E_{fs}(x \mid \lambda)$ corresponds to the reflected light that undergoes small-angle scattering. It is calculated by convolving the direct component with a point spread function $g(x \mid \lambda)$:
$$E_{fs}(x \mid \lambda) = E_d(x \mid \lambda) * g(x \mid \lambda). \qquad (3)$$
Here, “$*$” denotes convolution and $g(x \mid \lambda) = \left[ e^{-G(\lambda) d(x)} - e^{-c(\lambda) d(x)} \right] \mathcal{F}^{-1}\!\left\{ e^{-B(\lambda) d(x) f} \right\}$, where $G$ and $B$ are empirical constants, $\mathcal{F}^{-1}\{\cdot\}$ represents the inverse Fourier transform, and $f$ is the radial frequency. According to [22], a more accurate representation of $E_I$ in Equation (2) should also include the forward scattering process, i.e., $E_I'(x \mid \lambda) = E_I(x \mid \lambda) + E_I(x \mid \lambda) * g(x \mid \lambda)$, where $E_I'$ denotes the more accurate representation of $E_I$.
The final component, the backscattering $E_{bs}(x \mid \lambda)$, corresponds to the light that enters the camera without being reflected by the scene. In its original form, it is calculated as the superposition of the illuminated volume elements weighted by the value of the volume-scattering function [22]:
$$E_{bs}(x \mid \lambda) = \sum_{i=1}^{N} e^{-c(\lambda) Z_i}\, \beta(\phi)\, E_s(x \mid \lambda)\, \kappa'(x), \qquad (4)$$
where $\beta(\phi)$ is the volume-scattering function, $E_s(x \mid \lambda)$ is the irradiance received by the small volume element in layer $i$, $Z_i$ is the distance from layer $i$ to the camera, and $\kappa'(x)$ represents parameters of the camera system. According to [22], $E_s(x \mid \lambda)$, like $E_I(x \mid \lambda)$ in Equation (2), is a function of the source light and the attenuation along the source-to-scene path; mathematically, $E_s(x \mid \lambda) = BP(x \mid \lambda)\, e^{-c(\lambda) D_i(x)} / D_i^2(x)$, where $D_i$ is the distance from the element in layer $i$ to the light source. Forward scattering should also be included for a more accurate calculation of $E_s$. $\kappa'(x)$ is similar to $\kappa(x)$ in Equation (2) and is calculated by $\kappa'(x) = \cos^3\vartheta_i(x)\,\pi\,\Delta Z\,\frac{T_l}{4 f_n^2}\left[\frac{Z_i - F_l}{Z_i}\right]^2$, where $\vartheta_i$ is the angle from the camera to the element in layer $i$, and $\Delta Z$ is the thickness of the layer.
In [24], a simplified formula for calculating the backscattering under uniform illumination is provided:
$$E_{bs}(x \mid \lambda) = E_{bs,\infty}(\lambda)\left[ 1 - e^{-c(\lambda) d(x)} \right], \qquad (5)$$
where $E_{bs,\infty}$ represents the backscattering at infinite distance. According to [24], it is a function of the incident irradiance, the reciprocal of the volume attenuation coefficient and the full-angle integral of the volume-scattering function, i.e., $E_{bs,\infty}(\lambda) \propto \frac{E_s(\lambda)}{c(\lambda)} \int_\Theta \beta(\phi)\, d\phi$.
Clearly, the Jaffe–McGlamery model covers both the water-caused degradation and the influence of the light source well, which makes it a good simulation tool for general underwater images. For the task of underwater image restoration, however, its complexity hinders its usage. Instead, a simplified version, the IFM, is more commonly used to restore underwater images, as mentioned in the former section. The IFM is given as follows:
$$I_k(x) = J_k(x)\, t_k(x) + B_k\left[ 1 - t_k(x) \right], \quad k \in \{R, G, B\}, \qquad (6)$$
where $I_k(x)$ is the underwater image, $J_k(x)$ is the target image that represents the scene radiance, $t_k(x)$ is the transmission map (TM), which equals $e^{-c_k d(x)}$, and $B_k$ is the background light (BL). For IFM-based methods, the restoration of an underwater image is achieved by estimating and eliminating the TM and BL via proper priors about the input underwater image.
By comparing the Jaffe–McGlamery model and the IFM, it is clear that the IFM is an approximation of the Jaffe–McGlamery model in the case of uniform illumination. In the IFM, the product of $E_I(x \mid \lambda)$ and $M(x \mid \lambda)$ is replaced by $J_k(x)$, and the original form of the backscattering component is replaced by the simplified formula in Equation (5). The forward scattering component and the other parameters are omitted because of their relatively small influence on the pixel intensity, and the integral of the irradiances over the whole spectrum is simplified by using corresponding parameters for each color channel (i.e., $c_k$ and $B_k$) to keep the model solvable. The IFM can therefore be invalid under inhomogeneous illumination, because the condition for using Equation (5) is not satisfied. Moreover, the estimated scene radiance can also be inaccurate due to the deviation of $E_I$ from normal daylight. To improve the model accuracy while retaining its simplicity, we make a small change to the IFM by adding a parameter that represents the incident light. The modified model is given as follows:
$$I_k(x) = L_k(x)\, J_k(x)\, t_k(x) + b_k(x), \quad k \in \{R, G, B\}. \qquad (7)$$
Here, $L_k(x)$ stands for the incident light on the scene, corresponding to $E_I(x \mid \lambda)$ in $E_d(x \mid \lambda)$, and $b_k(x)$ corresponds to $E_{bs}(x \mid \lambda)$ in Equation (4). In the case of uniform illumination, $b_k(x)$ simplifies to $B_k[1 - t_k(x)]$, as in the IFM.
For the case of inhomogeneous illumination, the simplification of the backscattering cannot be applied directly, but under the assumption that the light field changes gradually in space, the inhomogeneous illumination can be approximated as homogeneous within small regions. By applying the simplification of the backscattering to these small regions, the above model is transformed into:
$$I_k(x) = L_k(\Omega_x)\, J_k(x)\, t_k(x) + B_k(\Omega_x)\left[ 1 - t_k(x) \right], \quad k \in \{R, G, B\}, \qquad (8)$$
where $\Omega_x$ represents a small neighborhood of $x$ that receives homogeneous illumination. The case of homogeneous illumination assumed in the IFM is thus a special case of this new model.
Due to the uncertainty and spatial variance of $L_k(\Omega_x)$ and $B_k(\Omega_x)$, priors for solving $J_k(x)$ from the new model are hard to derive. However, since $L_k(\Omega_x)$ and $B_k(\Omega_x)$ mainly cause color deviation at the regional or full-image scale, their influence can be suppressed with the idea of white balance under the Gray World assumption. For an image with uniform illumination and invariant scene-camera distance, i.e., $I_k(x) = L_k J_k(x) t_k + B_k (1 - t_k)$, white balance can be easily applied with a linear transformation of the general form $I_T^k(x) = [I_k(x) - H_1^k] / H_2^k$, because all the terms that cause color deviation in $I_k(x)$ (i.e., $L_k$, $t_k$ and $B_k$) are global constants in this model. In this general linear transformation, $H_1^k$ and $H_2^k$ are abstract parameters that respectively represent the general effects of the offsetting and scaling processes. These abstract parameters need to be concretized according to the color-deviation characteristics of $I_k(x)$ to ensure an unbiased transformation result. For example, in the color-tone correction method presented in the next section, the offsetting ($H_1^k$) and scaling ($H_2^k$) processes are performed with a subtraction of the biased color tone and a classic histogram stretching, and the subtracted color-tone image is estimated on the basis of the frequency characteristics of $I_k(x)$ to approximate its true color-deviation condition.
For an image with uniform illumination and variant scene-camera distance, i.e., $I_k(x) = L_k J_k(x) t_k(x) + B_k [1 - t_k(x)]$, a similar linear transformation can remove the color deviation resulting from $L_k$ and $B_k$, and partly from $t_k(x)$, because the former two terms are spatial constants while the last one is not. For the case of non-uniform illumination, i.e., $I_k(x) = L_k(\Omega_x) J_k(x) t_k(x) + B_k(\Omega_x) [1 - t_k(x)]$, the color cast caused by $L_k(\Omega_x)$ and $B_k(\Omega_x)$ can still be suppressed by a linear transformation within the region $\Omega_x$, based on the assumption that the illumination is uniform in $\Omega_x$. In practice, however, the criterion of uniform illumination cannot be applied too strictly, because the Gray World assumption can be invalid in extremely small regions containing colorful objects. So instead of suppressing the regional constants $L_k(\Omega_x)$ and $B_k(\Omega_x)$, it is more practical to apply the linear transformation pixel by pixel, with the corresponding parameters estimated from loosely-defined uniform regions and refined by smoothing filters to compensate for inter-regional differences.
After the linear transformation, the global or regional color-tone deviation related to the incident light is suppressed. To solve the residual problems related to $t_k(x)$, the result image is further written in the following form:
$$I_T^k(x) = \frac{L_k(\Omega_x) J_k(x) - H_1^k(x)}{H_2^k(x)}\, t_k(x) + \frac{B_k(\Omega_x) - H_1^k(x)}{H_2^k(x)}\left[ 1 - t_k(x) \right] = J_T^k(x)\, t_k(x) + B_T^k(x)\left[ 1 - t_k(x) \right], \quad k \in \{R, G, B\}. \qquad (9)$$
This image has a form similar to the IFM. Assuming a good linear transformation, both $J_T^k(x)$ and $B_T^k(x)$ are supposed to be free from the color deviations caused by the incident light, i.e., $J_T^k(x)$ is close to $J_k(x)$ and $B_T^k(x)$ is a constant. The restoration of $J_k(x)$ can thus be performed by estimating and eliminating $t_k(x)$ (TM) and $B_T^k$ (BL) from the image $I_T^k(x)$. To ensure a good estimation of $J_k(x)$, however, the priors should be chosen carefully to fit the characteristics of $I_T^k(x)$. Moreover, a bad linear transformation is always possible, no matter how robust the transformation method is, so the situation of $J_T^k(x)$ deviating from $J_k(x)$ should also be considered in the restoration process.

3. Proposed Method

Based on the foregoing analysis, the enhancement of an underwater image with an uncertain illumination condition should include two parts: one for suppressing the color deviations originating from the light source, and the other for handling the residual problems in scene radiance estimation. In this section, an enhancement method for underwater images with different illumination conditions is proposed accordingly, with two modules named color-tone correction and fusion-based descattering. The general workflow with the key steps in each module is presented in Figure 2. In the following subsections, the details of these modules are introduced.

3.1. Color-Tone Correction

3.1.1. General Routine for Uniformly-Illuminated Underwater Images

As discussed in the former section, the basis of correcting a deviated color tone is to find proper parameters to perform a linear transformation of the underwater image intensities. Based on our observation, the most frequent color in an equidistant underwater image makes a good estimate of the deviated color tone of this image, and by subtracting it from the original image, an image with an achromatic color tone that fits the Gray World assumption can be obtained. To find the most frequent color, we calculate the average Fourier spectrum of the input underwater image over all channels and apply the inverse Fourier transform to its maximum component. Details of this process are provided in Algorithm 1. In Figure 3a–c, an example of obtaining the deviated color tone from a close-up underwater image captured in our experimental pool is presented.
Algorithm 1. Full-extent color-tone estimation for a uniformly illuminated image
Input: Uniformly illuminated image $I_k$, $k \in \{R, G, B\}$.
Output: Color-tone image $T_k$.
  • Calculating the Fourier transform of the input image: $F_{I_k} \leftarrow \mathcal{F}\{I_k\}$, where $\mathcal{F}\{\cdot\}$ represents the Fourier transform.
  • Calculating the average spectrum of all channels: $avgF_I \leftarrow (F_{I_R} + F_{I_G} + F_{I_B}) / 3$.
  • Defining the max-frequency filter: $Filt \leftarrow [\,avgF_I == \max(avgF_I)\,]$.
  • Applying the max-frequency filter to the input image: $F_{T_k} \leftarrow F_{I_k} \cdot Filt$.
  • Calculating the intensity of the color tone: $T_k \leftarrow \mathcal{F}^{-1}\{F_{T_k}\}$, where $\mathcal{F}^{-1}\{\cdot\}$ represents the inverse Fourier transform.
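For readers who prefer code, a minimal NumPy sketch of Algorithm 1 is given below. The function name is ours, and taking the magnitude of the averaged spectrum before locating its maximum is our interpretation of the max-frequency step; the paper specifies the procedure only at the level of the steps listed above.

```python
import numpy as np

def estimate_color_tone(img):
    """Minimal sketch of Algorithm 1 (frequency-based color-tone estimation).

    img: float array of shape (H, W, 3), e.g. RGB intensities in [0, 255].
    Returns a color-tone image of the same shape.
    """
    F = np.fft.fft2(img, axes=(0, 1))                  # per-channel 2-D Fourier transform
    avg_F = F.mean(axis=2)                             # average spectrum of the R, G, B channels
    mag = np.abs(avg_F)
    filt = (mag == mag.max())                          # max-frequency filter (binary mask)
    F_tone = F * filt[:, :, None]                      # keep only the dominant frequency component
    return np.real(np.fft.ifft2(F_tone, axes=(0, 1)))  # transform back to the spatial domain
```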
After calculating the deviated color tone over the full image range, the color-balanced image is obtained by subtracting the deviated color tone from the original image. The result of this step for the original image in Figure 3a is given in Figure 3d. Apparently, this image is much darker than normal images, so a linear histogram stretching is used to restore a proper intensity range. The basic function of linear histogram stretching [25] is given as follows:
$$I_{out} = (I_{in} - a)\left( \frac{c - d}{b - a} \right) + d, \qquad (10)$$
where $I_{in}$ and $I_{out}$ represent the input and output pixel intensities, respectively, $a$ and $b$ are the minimum and maximum intensities of the input image, and $d$ and $c$ are the minimum and maximum intensities of the targeted output image. In this paper, $c$ and $d$ are set to 255 and 0, and $a$ and $b$ are selected from the 0–10% and 90–100% portions of the input histogram, depending on the brightness level of the original image. The brightness-adjusted result of the image in Figure 3d is given in Figure 3e. Obviously, the result image has a more balanced color tone than the original underwater image.
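As a concrete illustration of the subtraction and stretching steps, a short sketch is given below. It reuses estimate_color_tone from the previous sketch, and the 2%/98% percentile defaults for $a$ and $b$ are placeholders within the 0–10% and 90–100% ranges mentioned above, not values prescribed by the method.

```python
import numpy as np

def stretch_channel(ch, low_pct=2.0, high_pct=98.0, c=255.0, d=0.0):
    """Linear histogram stretching of Equation (10) for a single channel."""
    a = np.percentile(ch, low_pct)                     # lower intensity bound of the input
    b = np.percentile(ch, high_pct)                    # upper intensity bound of the input
    out = (ch - a) * (c - d) / max(b - a, 1e-6) + d    # map [a, b] onto [d, c]
    return np.clip(out, 0.0, 255.0)

def correct_color_tone(img):
    """Subtract the estimated color tone, then restore a normal intensity range."""
    balanced = img - estimate_color_tone(img)
    return np.stack([stretch_channel(balanced[..., k]) for k in range(3)], axis=-1)
```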
For underwater images with uniform illumination and variant scene-camera distance, the above method removes the color deviation induced by the incident light and partly removes the deviation caused by attenuation along the scene-camera path, as mentioned in the theoretical analysis in the former section. In Figure 4, an example of applying the proposed color-tone correction method to a uniformly-illuminated and non-equidistant underwater image from Bubble Vision [26] is presented. Compared with the close-up case, the proposed color-tone correction method removes the global color shift in the input image but leaves a considerable amount of regional color deviation and scattering in the processed result. These problems are caused by the variant scene-camera distance, i.e., the variant transmission rate of the input image, and will be handled in the following descattering module.

3.1.2. Regional Color-Tone Estimation for Underwater Images with Non-Uniform Illumination

For underwater images with non-uniform illumination, the key to addressing the illumination variation is to transform it into the color-tone correction of uniformly-illuminated regions, but to define these regions, the spatial distribution of illumination is supposed to be known. To break this cycle, we use a spatial Gaussian filter to produce a blurred version of the input image that approximates the real color-tone deviation, and the regions with nearly-uniform illumination are then obtained by applying K-means clustering to this raw color-tone image. To ensure spatial continuity and avoid unwanted details, the spatial Gaussian filter should be large enough. In this paper, the standard deviation of the Gaussian filter is set to about 1/8 of the shorter-side length of the input image, and the filter size is set to about 4 times the standard deviation. The number of clusters is determined intuitively according to the light pattern in the original image, but in most cases, at least three clusters are needed to define a bright region, a dim region, and an intermediate region. After the definition of the uniformly illuminated regions, the color-tone deviations of these regions are estimated with the same frequency-based method as in the former case. To avoid discontinuities in the final color-tone-corrected result, the obtained regional color-tone image is modified with another spatial Gaussian filter, using the same parameters as in the first step, to remove noticeable boundaries, and with a guided filter [27] to refine the shape of the light patterns, in case these patterns were changed in the previous step. A detailed description of this procedure is given in Algorithm 2. In Figure 5a–e, an example of estimating the deviated color tone in an underwater image with inhomogeneous artificial illumination is presented with images of the key steps of this procedure. In this experiment, the input image was captured in the Mariana Trench at a depth of 8200 m during the TS-03 Cruise of the Chinese Academy of Sciences, with illumination provided entirely by inhomogeneous and chromatic artificial light sources.
Algorithm 2. Regional color-tone estimation for a non-uniformly illuminated image
Input: Non-uniformly illuminated image $I_k$, $k \in \{R, G, B\}$.
Output: Color-tone image $T_k$.
  • Applying a spatial Gaussian filter to $I_k$ to get a raw color-tone image $G_k$.
  • Segmenting $I_k$ into $M$ regions by the K-means method based on the Euclidean distance of colors in $G_k$.
  • Initializing $T_k$ with an all-zero matrix of the same size as $I_k$.
  • For $m = 1$ to $M$ do
  • Reshaping the pixels of region $m$ of $I_k$ into a vector $V_m^k$.
  • Estimating the color tone $T_m^k$ from the vector $V_m^k$ by using the frequency-based method in Algorithm 1.
  • Replacing the intensity values of the pixels in region $m$ of $T_k$ with $T_m^k$.
  • End for
  • Applying a spatial Gaussian filter and a guided filter to remove boundaries and refine $T_k$. The guidance image for the guided filtering is the input image $I_k$.
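A possible implementation of Algorithm 2 is sketched below with NumPy and SciPy. The function name, the default of three clusters and the omission of the guided-filter refinement are our simplifications; the per-region tone is estimated with a 1-D analogue of Algorithm 1 applied to the reshaped pixel vector, as described above.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.ndimage import gaussian_filter

def estimate_regional_color_tone(img, n_regions=3):
    """Minimal sketch of Algorithm 2 (regional color-tone estimation).

    img: float array of shape (H, W, 3). The guided-filter refinement of the
    final step is omitted; only the Gaussian smoothing is kept.
    """
    h, w, _ = img.shape
    sigma = min(h, w) / 8.0                                   # std. dev. ~1/8 of the shorter side
    raw_tone = gaussian_filter(img, sigma=(sigma, sigma, 0))  # blurred raw color-tone image G_k

    # Segment the image into nearly-uniformly illuminated regions by K-means on the blurred colors.
    pixels = raw_tone.reshape(-1, 3).astype(np.float64)
    _, labels = kmeans2(pixels, n_regions, minit='points')
    labels = labels.reshape(h, w)

    tone = np.zeros_like(img)
    for m in range(n_regions):
        mask = labels == m
        region = img[mask]                                    # pixels of region m as an N x 3 vector
        if region.size == 0:
            continue
        F = np.fft.fft(region, axis=0)                        # 1-D analogue of Algorithm 1
        mag = np.abs(F.mean(axis=1))
        filt = (mag == mag.max())
        tone[mask] = np.real(np.fft.ifft(F * filt[:, None], axis=0))

    # Smooth the per-region tones to remove visible boundaries (guided filtering would follow here).
    return gaussian_filter(tone, sigma=(sigma, sigma, 0))
```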
After calculating the deviated color tone, the color-tone-corrected result is obtained in the same way as in the former case, i.e., by subtracting the estimated color tone from the original image and applying linear histogram stretching to restore a normal brightness level. In Figure 5f, the color-tone-corrected result of the original image in Figure 5a is presented. For comparison, the color-tone estimation and correction results obtained with the former method for uniformly-illuminated underwater images are also provided in Figure 5g,h. By comparing these two color-tone-corrected results, it is clear that the regional color-tone estimation method proposed in this subsection better restores the color and brightness balance of the non-uniformly illuminated image, and improves the visibility in regions with limited illumination.

3.2. Fusion-Based Descattering

As shown in Figure 4 and Figure 5, a non-equidistant underwater image can still present degradation problems such as scattering and regional color deviation after color-tone correction, due to unresolved issues such as the spatial variation of the transmission rate. According to the theoretical analysis in Section 2, these degradation problems can be solved by image descattering, i.e., estimating the TM and BL from the input image and recovering the scene radiance from it. Mathematically, the descattering of a color-tone-corrected image can be represented by:
$$J_T(x) = \frac{I_T(x) - B_T}{t(x)} + B_T, \qquad (11)$$
where $t(x)$ is the TM, and $I_T(x)$, $J_T(x)$ and $B_T$ are respectively the linear-transformation results of the original image $I(x)$, the deviated scene radiance $L(\Omega_x) J(x)$ and the deviated BL $B(\Omega_x)$, as demonstrated in Equation (9).
Essentially, the ultimate goal of this descattering process is to achieve a good estimate of $J(x)$ by recovering $J_T(x)$ from $I_T(x)$. Considering the uncertain state of $J_T(x)$ with respect to the true $J(x)$, the following three cases are included in the descattering process:
(1) Case of Dim & Scattering images (Case DS): $J_T(x)$ is close to $J(x)$, and $B_T$ is low. In this case, distant regions of $I_T(x)$ are low-contrast and dim, due to the low level of incident and backscattered light. Considering the correlation between low contrast and long scene-camera distance, the TM and BL can be estimated based on the level of intensity variation of the input image [17].
(2) Case of Bright & Scattering images (Case BS): $J_T(x)$ is close to $J(x)$, and $B_T$ is bright. In this case, the high level of ambient light causes high intensity and low contrast in distant regions. Since this case is very close to outdoor hazy images, the estimation of the TM and BL can be done based on the Dark Channel Prior [12].
(3) Case of Color-Cast images (Case CC): $J_T(x)$ is largely biased from $J(x)$. This is a complementary case for images with an under- or over-corrected color tone. Since the low-contrast trait is still valid in this case, the BL is estimated in the same way as in Case DS. For the calculation of the TM, the estimation of the scene-camera distance is made in the red channel, because red light is more sensitive to distance changes in the underwater environment.
Since no hard boundaries exist between these three cases, their descattering results are combined to form the final enhancement result. The overall workflow of this module is briefly shown in Figure 6. In the following subsections, the details of this workflow are presented.

3.2.1. Descattering of Case DS

As mentioned above, the distant regions in this case are dim and low-contrast. Since the BL usually corresponds to the farthest region, it can be estimated by finding the region with the lowest intensity variation. The regional intensity variation (RIV), which is modified from the “blurriness” feature defined in [17], is defined as follows:
$$P_{RIV}(x) = 1 - N\!\left\{ G\!\left\{ H\!\left\{ \max_{y \in \Omega_x} \left( \frac{1}{n} \sum_{i=1}^{n} \left| P_{NG}(y) - G^{r_i, r_i}(y) \right| \right) \right\} \right\} \right\}. \qquad (12)$$
Here, $P_{NG}$ represents the normalized grayscale version of the input image, and $G^{r_i, r_i}$ is obtained by filtering $P_{NG}$ with an $r_i \times r_i$ spatial Gaussian filter of variance $r_i^2$ ($r_i = 2^i n + 1$, with $n$ set to 4); this image represents the average intensity level of the region surrounding a given pixel. $\Omega_x$ represents a square neighborhood of pixel $x$ whose size is about 1.5% of the input image. The functions $H\{\cdot\}$, $G\{\cdot\}$ and $N\{\cdot\}$ are respectively the hole-filling operator, the guided filtering function and the min-max normalization function. Through this calculation, the level of regional variation of a pixel is quantified by a number between 0 and 1, with higher values representing lower variations.
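A simplified sketch of the RIV map is given below; it keeps the multi-scale difference, the neighborhood maximum and the min-max normalization, but omits the hole-filling and guided-filter refinements named above. The function name and the default patch size are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def regional_intensity_variation(gray, n=4, patch=15):
    """Simplified sketch of the RIV map in Equation (12).

    gray: normalized grayscale image P_NG with values in [0, 1].
    patch: side length of the square neighborhood Omega_x (about 1.5% of the image in the text).
    """
    # Average absolute difference between the image and its multi-scale Gaussian-blurred versions.
    diffs = [np.abs(gray - gaussian_filter(gray, sigma=2 ** i * n + 1)) for i in range(1, n + 1)]
    rough = np.mean(diffs, axis=0)
    rough = maximum_filter(rough, size=patch)                            # max over Omega_x
    rough = (rough - rough.min()) / (rough.max() - rough.min() + 1e-6)   # min-max normalization
    return 1.0 - rough                                                   # high values = low variation
```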
The candidates for the BL can be found in the region with the highest RIVs. To robustly locate this region, a quad-tree subdivision [17] is applied to $P_{RIV}(x)$. Based on our experiments, an efficient estimation of the BL can be made by setting the ultimate region size to 0.1% of the input image, so the number of iterations is usually set to 5. The value of the BL is obtained by averaging the pixel intensities in the candidate region. Mathematically,
$$B_{T,DS} = \mathrm{mean}\left( I_T(\Omega_x) \right), \quad \text{where}\ \Omega_x = \arg\max_x^{(QT)} P_{RIV}(x), \qquad (13)$$
and $\arg\max_x^{(QT)}$ denotes the candidate region selected by the quad-tree subdivision.
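The quad-tree search of Equation (13) can be sketched as follows; the function name is ours, and ties or degenerate windows are not handled.

```python
import numpy as np

def estimate_background_light(I_T, p_riv, iters=5):
    """Sketch of Equation (13): quad-tree search for the BL candidate region.

    I_T: color-tone-corrected image (H, W, 3); p_riv: RIV map from Equation (12).
    Each iteration keeps the quadrant with the highest mean RIV; after `iters`
    subdivisions the remaining window covers roughly 0.1% of the image.
    """
    r0, r1, c0, c1 = 0, p_riv.shape[0], 0, p_riv.shape[1]
    for _ in range(iters):
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        quadrants = [(r0, rm, c0, cm), (r0, rm, cm, c1), (rm, r1, c0, cm), (rm, r1, cm, c1)]
        r0, r1, c0, c1 = max(quadrants, key=lambda q: p_riv[q[0]:q[1], q[2]:q[3]].mean())
    return I_T[r0:r1, c0:c1].reshape(-1, 3).mean(axis=0)   # average color of the candidate region
```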
Since $P_{RIV}(x)$ is a normalized quantification of the level of intensity variation, it can be seen as an approximation of the relative distance between the scene and the camera, and it is therefore used for the calculation of the TM. The formula of the TM estimation is given by:
$$t_{DS}(x) = e^{-\alpha P_{RIV}(x) - \beta}, \qquad (14)$$
where α and β are used to control the variation range of TM. In this paper, their values are set as 0.95 and 0, respectively.
Considering the uneven attenuation of light underwater, the TMs of different channels should be set differently. Based on the theoretical analysis in [24], the relationship between the TMs of different channels is
$$\frac{\ln t_{DS}^{k_1}(x)}{\ln t_{DS}^{k_2}(x)} = \frac{B_{T,DS}^{k_2}\left( p \lambda_{k_1} + q \right)}{B_{T,DS}^{k_1}\left( p \lambda_{k_2} + q \right)}, \qquad (15)$$
where $k_1, k_2 \in \{R, G, B\}$ and $k_1 \neq k_2$, $p = -0.00113$, $q = 1.62517$, $\lambda_R = 620\ \mathrm{nm}$, $\lambda_G = 540\ \mathrm{nm}$ and $\lambda_B = 450\ \mathrm{nm}$. Since the red channel is the most sensitive to underwater attenuation, the TM calculated in Equation (14) is assigned to the red channel, and the other two TMs are derived from it based on Equation (15).
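A small sketch of this channel-wise derivation is given below, assuming the reconstructed constants above (including the negative sign of $p$) and red-channel TM values in (0, 1]; the function name and argument conventions are ours.

```python
import numpy as np

def derive_channel_tms(t_red, B_rgb):
    """Sketch of Equation (15): derive the green and blue TMs from the red-channel TM.

    t_red: red-channel transmission map with values in (0, 1].
    B_rgb: background light as a sequence [B_R, B_G, B_B].
    """
    p, q = -0.00113, 1.62517                    # wavelength dependence of attenuation, after [24]
    lam_r, lam_g, lam_b = 620.0, 540.0, 450.0   # nominal channel wavelengths in nm
    B_r, B_g, B_b = B_rgb
    t_green = t_red ** ((B_r * (p * lam_g + q)) / (B_g * (p * lam_r + q)))
    t_blue = t_red ** ((B_r * (p * lam_b + q)) / (B_b * (p * lam_r + q)))
    return t_red, t_green, t_blue
```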
The final step of the descattering is to recover the scene radiance from the given image. The basic formula of this process is shown in Equation (11), but to improve the robustness of the scene-radiance recovery, a lower bound of the TM is used to avoid division by extremely small values [12]. The formula of this step is given as follows:
$$J_{T,DS}(x) = \frac{I_T(x) - B_{T,DS}}{\max\left( t_0, t_{DS}(x) \right)} + B_{T,DS}, \qquad (16)$$
where $t_0$ is the lower bound of the TM and is set to 0.1 in this paper.
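The recovery step of Equation (16) then reduces to a clamped per-channel division, as in the sketch below (the function name and array conventions are ours).

```python
import numpy as np

def recover_radiance(I_T, B_rgb, t_rgb, t0=0.1):
    """Sketch of Equation (16): recover the scene radiance channel by channel.

    I_T: color-tone-corrected image (H, W, 3); B_rgb: background light [B_R, B_G, B_B];
    t_rgb: per-channel transmission maps (t_R, t_G, t_B); t0: lower bound of the TM.
    """
    J = np.empty_like(I_T)
    for k in range(3):
        J[..., k] = (I_T[..., k] - B_rgb[k]) / np.maximum(t0, t_rgb[k]) + B_rgb[k]
    return np.clip(J, 0.0, 255.0)
```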
In the first row of Figure 6, an example of applying the descattering to Case DS is shown. Following the direction of the arrows, the first image is the map of $P_{RIV}(x)$, the second image shows the color of $B_{T,DS}$, the third image is $t_{DS}(x)$, and the last image is the recovered $J_{T,DS}(x)$. The red box in the first image indicates the candidate region for the BL estimation. Compared with the input image at the far left of Figure 6, the recovered image is much brighter and has better contrast, especially in the background areas.

3.2.2. Descattering of Case BS

The descattering of Case BS is similar to a dehazing process due to the similarity between Case BS and outdoor hazy images. Following the dehazing method in [12], both the BL and the TM are estimated from the dark channel of the input image to avoid interference from bright foreground objects, but based on the former calculation, the estimation of the BL can be simplified. As mentioned in the previous subsection, the RIV reflects the variation level of pixel intensities within a given region. Since foreground regions have better contrast and richer details, their RIVs tend to be lower than those of background regions, so by adding this feature to the image intensity, foreground regions can be excluded and pixels with high brightness and long distance can be located. Mathematically, the newly derived feature is given by:
$$P_N(x) = \left[ P_{NG}(x) + P_{RIV}(x) \right] / 2. \qquad (17)$$
The candidates for the BL are then obtained by finding the top 0.1% of pixels in $P_N(x)$, and the value of the BL is calculated as the average of the candidate intensities, i.e.,
$$B_{T,BS} = \mathrm{mean}\left( I_T(\Omega_x) \right), \quad \text{where}\ \Omega_x = \arg\max_x^{(0.1\%)} P_N(x). \qquad (18)$$
The estimation of the TM is similar to that in the DCP-based dehazing method [12], which consists of dividing the input image by the BL and calculating the dark channel of the result. The formula of the TM calculation is given as follows:
$$t_{BS}(x) = 1 - \sigma \min_{k \in \{R, G, B\}} \left[ \min_{y \in \Omega_x} \left( I_T^k(y) / B_{T,BS}^k \right) \right], \qquad (19)$$
where $\sigma$ is used to preserve a small portion of scattering so that human eyes can perceive depth, and its value is set to 0.95 in this paper. The size of $\Omega_x$ is the same as in the calculation of $P_{RIV}(x)$.
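A dark-channel-style sketch of Equation (19) is shown below; the minimum filter plays the role of the patch-wise minimum over $\Omega_x$, and the function name and defaults are ours.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def transmission_bs(I_T, B_rgb, patch=15, sigma=0.95):
    """Sketch of Equation (19): dark-channel-based TM estimation for Case BS.

    I_T: color-tone-corrected image (H, W, 3); B_rgb: background light [B_R, B_G, B_B];
    patch: side length of the neighborhood Omega_x; sigma: residual-scattering factor.
    """
    normalized = I_T / np.asarray(B_rgb, dtype=float)[None, None, :]   # divide each channel by its BL
    dark = minimum_filter(normalized.min(axis=2), size=patch)          # dark channel over Omega_x
    return 1.0 - sigma * dark
```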
Due to the uneven attenuation of light in water and the high sensitivity of red light, the $t_{BS}(x)$ calculated from Equation (19) is only assigned to the red channel and processed with Equation (15) to derive the TMs for the other two channels, as was done for Case DS. The recovery of the scene radiance is also similar: by replacing the BL and TM in Equation (16) with those from this subsection, the scene radiance of this case is obtained, and it is denoted as $J_{T,BS}$. The sample images of this case are shown in the second row of Figure 6 in the same order as in the previous case. As shown by these images, the main function of this case is to reduce the intensities in distant regions and produce a higher-contrast result.

3.2.3. Descattering of Case CC

As mentioned earlier, the estimation of the BL in this case is the same as in Case DS, due to the low contrast of distant regions caused by backscattering. For the estimation of the TM, the red channel is used because it is more sensitive to distance changes in the underwater environment than the other two channels.
Like the RIV in Case DS, the intensity of the red channel is used to produce a rough estimate of the relative distance between the scene and the camera. Since the red channel attenuates severely, we use its regional maximum to indicate the distance of the corresponding pixel. Likewise, it is also processed with a guided filter to remove block artifacts and a min-max normalization function to map its values to the range of 0–1. Mathematically, the relative distance calculated from the red channel is defined as:
$$d_R(x) = 1 - N\!\left\{ G\!\left\{ \max_{y \in \Omega_x} \left( I_T^R(y) \right) \right\} \right\}. \qquad (20)$$
The size of $\Omega_x$ here is still the same as that used for calculating $P_{RIV}(x)$.
With the relative distance calculated, the TM of this case is obtained by replacing $P_{RIV}(x)$ in Equation (14) with $d_R(x)$. The following process is identical to that of the former two cases: the calculated TM is assigned to the red channel and used to derive the values for the other two channels, and the scene radiance is recovered by replacing the corresponding parameters in Equation (16). The recovered scene radiance is denoted as $J_{T,CC}$.
Images of this case are shown in the last row of Figure 6. Since the BL of this case is the same as that in Case DS, the resulting images of these two cases have similar color tones. However, due to the difference in TM estimation, the descattering of this case places more emphasis on color correction than on contrast improvement, which leads to a plainer result than Case DS.

3.2.4. Image Fusion

Since there are no hard boundaries between the previous three cases, all of the descattered images are fused to form the final enhancement result. Inspired by [17], these images are fused linearly with weights calculated from two sigmoid functions that respectively describe the brightness level and the color balance of the input image. The formula of the image fusion is given as follows:
$$J_{fusion} = \theta_1 J_{T,BS} + (1 - \theta_1)\left[ \theta_2 J_{T,CC} + (1 - \theta_2) J_{T,DS} \right], \qquad (21)$$
where $\theta_1$ is a sigmoid function related to the general brightness level of the input image:
$$\theta_1 = \left\{ 1 + \exp\left[ -\omega\left( \mathrm{avg}(I_T) - \tau I_M \right) \right] \right\}^{-1}, \qquad (22)$$
where $\omega$ controls the flatness of the sigmoid function and $\tau$ controls the base level of image brightness. In this paper, their values are set to 20 and 0.6, respectively. $I_M$ represents the maximum gray-scale level, which equals 255 for images with 8-bit pixels. The second weight, $\theta_2$, reflects the level of color bias in the input image. It is defined as follows:
$$\theta_2 = \left\{ 1 + \exp\left[ -\omega\left( \max_{k \in \{R,G,B\}} \mathrm{avg}(I_T^k) - \min_{k \in \{R,G,B\}} \mathrm{avg}(I_T^k) - \gamma\, \mathrm{avg}(I_T) \right) \right] \right\}^{-1}, \qquad (23)$$
where $\gamma$ controls the basic metric of color bias, and is set to 0.1 in this paper.
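The fusion of Equations (21)–(23) can be sketched as follows. Whether the averages entering the sigmoids are normalized by $I_M$ is not fully explicit in the text; the sketch normalizes them so that $\omega = 20$ yields a smooth weighting, which is an assumption on our part, as are the function name and argument conventions.

```python
import numpy as np

def fuse_descattered(I_T, J_ds, J_bs, J_cc, omega=20.0, tau=0.6, gamma=0.1, I_M=255.0):
    """Sketch of Equations (21)-(23): weighted fusion of the three descattered results."""
    avg = I_T.mean()
    ch_avg = I_T.reshape(-1, 3).mean(axis=0)
    # theta1: general brightness level of the input image (Equation (22)).
    theta1 = 1.0 / (1.0 + np.exp(-omega * (avg / I_M - tau)))
    # theta2: level of color bias between the channel averages (Equation (23)).
    theta2 = 1.0 / (1.0 + np.exp(-omega * (ch_avg.max() - ch_avg.min() - gamma * avg) / I_M))
    return theta1 * J_bs + (1.0 - theta1) * (theta2 * J_cc + (1.0 - theta2) * J_ds)
```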
The sample image of this step is shown at the far right of Figure 6. As indicated by the arrows, it is the combination of the descattered results of Case DS, Case BS and Case CC. In this fusion, the calculated weights for the three cases are respectively 0.4529, 0.2141 and 0.333, so the result image is more similar to the outputs of Case DS and Case CC, where the low-contrast, low-intensity and color-deviation problems are tackled.

4. Evaluation and Results

In this section, the proposed method is evaluated on three different datasets: a shallow-water dataset from Bubble Vision (BV) [26], a laboratory dataset from a deep-sea experiment pool (EP), and a real deep-sea (DS) dataset from the TS-03 Cruise of the Chinese Academy of Sciences. The BV dataset was captured in Bali under uniform natural light, the EP dataset was captured in a 22 m × 10 m × 20 m water-filled pool at night, and the DS dataset was captured in the open water of the Mariana Trench at depths over 8200 m. For the latter two datasets, illumination was provided by artificial lighting devices located on the photographed objects or near the camera, and light bulbs of green or blue colors were used to extend the vision range while saving energy. These illumination settings introduced additional color deviation and lighting imbalance problems into the images of these datasets.
The performance of the proposed method is evaluated qualitatively and quantitatively in terms of visual quality and practical value, which includes the assessment of the color richness, color balance and contrast of the enhanced images, as well as their credibility and information content, especially in badly-illuminated regions. The performance of the proposed method is compared with the fusion-based method of Ancuti et al. [9], the Red-Channel method of Galdran et al. [13] and the IFM-based method of Peng and Cosman [17]. These methods were selected for their reported good performance in underwater image enhancement. Specifically, Galdran's method is able to enhance normal underwater images as well as those with artificial light sources, and Peng's method is able to enhance underwater images with various lighting conditions. In the following subsections, the evaluation results of these methods on underwater images with different illumination conditions are reported.

4.1. Evaluation on the Dataset of Uniform Natural Illumination

The evaluation on uniformly-illuminated underwater images is conducted on the BV dataset. In Figure 7, the enhancement results of Ancuti's method, Galdran's method, Peng's method and the proposed method are presented, together with the original images from the BV dataset. As shown in this figure, Ancuti's method removes the bluish hue and broadens the range of colors, but the low-contrast problem of distant regions is not solved. Galdran's method, by contrast, improves the image contrast but cannot fully remove the bluish hue. Peng's method achieves improvements in both aspects, but it can be unstable and cause over-enhanced results, such as that of BV-2. In comparison, our method is robust and improves both the color balance and the image contrast. Moreover, our method is also better at preserving details during the enhancement, such as the fish school in BV-2 and the bubble group in BV-3.
Since visual evaluation can be influenced by the image size and the color temperature of the display, we also use a quantitative metric to objectively evaluate the image quality. The selected metric is the UIQM from [28], which is a no-reference measure of the visual quality of an image based on evaluations of color balance and richness, detail sharpness and image contrast. The UIQM is given by:
$$UIQM = c_1 \times UICM + c_2 \times UISM + c_3 \times UIConM, \qquad (24)$$
where UICM, UISM and UIConM are respectively the measures of image colorfulness, image sharpness and image contrast. The weights of these measures are assigned as $c_1 = 0.0282$, $c_2 = 0.2953$ and $c_3 = 3.5753$, according to [28]. The UIQM is calculated for every image in this experiment, but to avoid an overlong table, the scores of individual images are listed in Appendix A (Table A1), and only the average scores of each method are presented in the main text (Table 1). The quantitative evaluation results are in accordance with the previous qualitative evaluation. As shown in Table 1, the highest scores of UICM, UIConM and UIQM are achieved by our results, which means that our method outperforms the others in terms of color richness and balance, contrast, and overall visual quality. In the evaluation of sharpness, our method has the highest values for BV-3 and BV-4 but is behind Peng's method for BV-1 and BV-2, which may be caused by the strong descattering of Peng's results in these two trials.

4.2. Evaluation on the Datasets of Non-Uniform Artificial Illumination

The evaluation on underwater images with inhomogeneous illumination is conducted on the EP dataset and the DS dataset. As mentioned above, due to the use of inhomogeneous and chromatic lighting devices, the images of these two datasets are more likely to suffer from regional color deviation and lighting imbalance than the images in the BV dataset. So in this evaluation, we not only focus on the quality improvement of the whole image, but also assess the enhancement in severely degraded regions.

4.2.1. Experiment on the EP Dataset

The original EP images and the corresponding enhancement results of Ancuti's method, Galdran's method, Peng's method and the proposed method are shown in Figure 8. Clearly, Galdran's method and Peng's method fail to restore the right color in this test. Ancuti's method also fails in the first two trials. By contrast, our method improves the color balance in all of the tested images. As can be seen in Figure 8, the blue-green color cast in the original images is largely reduced in our results. For EP-3 and EP-4, a very realistic color tone is restored in our results, and for EP-1 and EP-2, the strong color deviations are largely suppressed, but due to the rapid color change in these images, the result images are less realistic than those of EP-3 and EP-4. This is a deficiency of the regional color-tone estimation method in our work, which we hope to address in future studies. Apart from the color correction, the illumination balance is also improved in our results. For EP-1, EP-2 and EP-3, the low-illumination regions are brighter than before, and for EP-4, the hazy region has higher contrast. This improvement is a by-product of the regional color-tone correction in our work. For practical use, this trait can be very helpful, since it improves the visibility of badly-illuminated regions without consuming more energy for illumination.
For the objective evaluation, the UIQM is used to assess the visual quality of the enhanced images. The average scores of each method are shown in Table 2, and the scores of the individual images are listed in Table A2 of Appendix A. Unlike the former experiment, the evaluation scores do not support our visual assessment. Based on the scores, Galdran's results have the best visual quality and the highest contrast, and Peng's method has the best performance in terms of colorfulness and sharpness. From the calculation formulas of UICM, UISM and UIConM in [28], we speculate that this disagreement may be caused by the omission of spatial information in the calculation of these measures. Due to this omission, the regional color deviation and uneven brightness in Galdran's and Peng's results are mistaken for high colorfulness and contrast, which leads to high scores in the evaluation results. Unfortunately, no other metric has been found to replace the UIQM for this case, since studies of underwater images with regional deviations are very rare.
Although an evaluation of the overall image quality is unachievable at present, an evaluation of the regional color balance can still be made, owing to the color board captured in each EP image. As shown in Figure 9a, the color board is located in a region with uniform and severe color deviation in each image, so by calculating the color difference between the color board and a ground-truth image, the degree of color balance of this region can be measured. Before the calculation, the procedure in Figure 9b is applied to correct the affine deformation of the color board in each image. The metric used to evaluate the color difference is the mean square error (MSE), which is given by:
$$MSE = \frac{1}{N} \sum_{x=1}^{N} \left( I(x) - I_0(x) \right)^2, \qquad (25)$$
where $I_0(x)$ and $I(x)$ are respectively the pixel values of the ground-truth image and of the image to be evaluated. The average and individual MSEs for each method are listed in Table 2 and Table A2 (Appendix A), respectively. As shown by these values, the color boards of our results have the lowest MSEs. Since the color tones of the color-board regions are nearly uniform, this evaluation shows that our method restores the regional color balance better than the other methods, which is consistent with the former visual observation.

4.2.2. Experiment on the DS Dataset

The original DS images and the corresponding enhancement results of the four tested methods are shown in Figure 10. Due to the lack of ambient light, the DS images present less scattering but stronger brightness imbalance than the EP images. Similar to the former case, Galdran's method and Peng's method fail to enhance the original images in terms of the color balance and the brightness balance. Ancuti's method removes the color cast in the foreground but only slightly improves the illumination in the background. Our method restores the color balance as well as the illumination balance. As can easily be noticed, in our results, the greenish tone in the foreground is completely removed, and the colors and shapes in the background are much more salient than before.
To keep the evaluation procedure consistent, the UIQM measures are still calculated for the images in this experiment. The average and individual scores of this evaluation are shown in Table 3 and Table A3 (Appendix A), respectively. Due to the large span of color and the strong brightness imbalance, the highest scores of this evaluation are achieved by Peng's results.
To evaluate the enhancement of the dark regions in the original images, we calculate the entropy of all images in this experiment. In information theory, entropy measures the “randomness” of a system. For image enhancement tasks, an increase in image entropy indicates larger information content and higher distinguishability of details in the enhanced images [29,30]. The entropy of an 8-bit gray-scale image is defined as follows:
$$\mathrm{entropy} = -\sum_{i=0}^{255} p_i \log_2 p_i. \qquad (26)$$
Here, $p_i$ is the probability that a pixel of the image takes the gray level $i$, which is calculated by:
$$p_i = \frac{\sum_{x=1}^{N} \left( I(x) == i \right)}{\sum_{j=0}^{255} \sum_{x=1}^{N} \left( I(x) == j \right)}, \qquad (27)$$
where $I(x)$ represents the pixel value of the image and $N$ is the total number of pixels in the image. To measure the entropy of an RGB image, we first calculate the entropies of the three color channels separately, and then take their average as the entropy of the whole image.
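For reference, the entropy of Equations (26) and (27) averaged over the three channels can be computed as in the short sketch below (the function name is ours).

```python
import numpy as np

def rgb_entropy(img):
    """Entropy of an 8-bit RGB image as in Equations (26)-(27): mean of the per-channel entropies."""
    entropies = []
    for k in range(3):
        hist, _ = np.histogram(img[..., k].astype(np.uint8), bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]                                   # empty gray levels contribute 0 * log2(0) = 0
        entropies.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(entropies))
```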
The average and individual entropies of the images in this experiment are listed in Table 3 and Table A3 (Appendix A), respectively. As shown in these tables, our method increases the image entropy more than the other methods. This result is consistent with the former visual evaluation, i.e., our method can effectively improve the information content in low-brightness regions. Ancuti's results come second in this evaluation, which is due to the less salient backgrounds in these images.

5. Conclusions

In this paper, we proposed an enhancement method for underwater images with uniform or non-uniform illumination conditions. In practical operations, these conditions usually correspond to the shallow-water environment and the deep-sea environment, respectively. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. The first module reduces the regional or full-extent color-tone deviation caused by different types of incident light, and the second module solves the problems of low contrast and pixel-wise color deviation that remain after applying the first module. The proposed method was evaluated on laboratory and open-water images of different depths and illumination states. Qualitative and quantitative evaluations show that the proposed method outperforms many other methods in improving the color balance and contrast of underwater images with different illumination conditions, and is especially effective in improving the color accuracy and information content in badly-illuminated regions of underwater images with non-uniform illumination, which are commonly seen in deep-sea research and operations.

Author Contributions

Conceptualization, Y.L.; Data curation, Y.L. and C.L.; Formal analysis, Y.L.; Funding acquisition, H.X., D.S. and C.L.; Investigation, Y.L. and X.Q.; Methodology, Y.L.; Project administration, H.X. and C.L.; Resources, H.X., D.S. and X.Q.; Software, Y.L.; Supervision, H.X.; Validation, Y.L., D.S. and C.L.; Visualization, Y.L. and D.S.; Writing—original draft, Y.L.; Writing—review & editing, Y.L., H.X., D.S. and X.Q.

Funding

This research was funded by the Key Project of Hainan Province Science and Technology Plan (Grant No. ZDKJ2017005), the National Natural Science Foundation of China (NSFC) (Grant No. 41530960), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA19060403) and the National Key Research and Development Project (Grant No. 2018YFC0307905).

Acknowledgments

The authors are grateful to the colleagues in the TS-03 Cruise of the Chinese Academy of Sciences for providing field images for method evaluation.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Quantitative Evaluation Scores of Individual Images in Section 4

Table A1. UICM, UISM, UIConM and UIQM scores of original BV images and corresponding enhanced images of tested methods.

Metric    Image    Original    Ancuti     Galdran    Peng       Ours
UICM      BV-1     4.2379      4.8382     4.7165     5.6624     7.2188
          BV-2     3.2439      3.2660     3.6066     3.9531     5.5196
          BV-3     3.2474      6.9919     4.4179     5.4154     7.9651
          BV-4     2.2081      7.1829     4.1371     5.6324     7.5185
UISM      BV-1     0.0051      0.1469     0.0127     1.3273     0.4114
          BV-2     0.0051      0.4581     0.0132     0.8543     0.4969
          BV-3     0.0039      0.0053     0.0066     0.0048     0.0404
          BV-4     0.0020      0.0038     0.0071     0.0038     0.2010
UIConM    BV-1     0.1518      0.1518     0.1518     1.7274     2.2891
          BV-2     0.1518      0.1854     0.3099     1.1367     3.3554
          BV-3     0.1521      0.1521     0.1521     0.4462     2.8458
          BV-4     0.1515      0.1515     0.1515     0.1515     1.7980
UIQM      BV-1     0.6639      0.7227     0.6796     6.7275     8.5091
          BV-2     0.6358      0.8904     1.2136     4.4277     12.2988
          BV-3     0.6364      0.7424     0.6703     1.7493     10.4113
          BV-4     0.6046      0.7454     0.6605     0.7017     6.6997
Table A2. UICM, UISM, UIConM, UIQM and color board MSE scores of original EP images and corresponding enhanced images of tested methods.

Metric   Image   Original   Ancuti    Galdran   Peng      Ours
UICM     EP-1    8.4246     10.7532   8.0766    11.9112   6.5124
UICM     EP-2    8.0984     8.6927    8.0727    12.6849   8.2544
UICM     EP-3    2.4301     3.1563    2.8046    2.4072    4.7343
UICM     EP-4    1.1803     2.9416    2.2895    2.0720    2.5635
UISM     EP-1    0.0007     0.0003    0.0007    0.9312    0.0005
UISM     EP-2    0.2219     0.0004    0.2220    0.2045    0.3437
UISM     EP-3    0.0033     0.0020    0.0033    0.0028    0.0024
UISM     EP-4    0.2450     0.0014    0.2002    0.1854    0.0017
UIConM   EP-1    0.8004     0.7899    0.8574    5.2734    0.7899
UIConM   EP-2    1.8839     0.7899    4.9738    3.6223    2.2100
UIConM   EP-3    4.6511     0.7899    9.2699    4.1454    4.3392
UIConM   EP-4    9.6370     0.9164    12.9814   9.2959    0.9164
UIQM     EP-1    3.0993     3.1275    3.2934    19.4650   3.0079
UIQM     EP-2    7.0295     3.0694    18.0759   13.3689   8.2357
UIQM     EP-3    16.6985    2.9137    33.2227   14.8899   15.6480
UIQM     EP-4    34.5609    3.3597    46.5362   33.3487   3.3491
MSE      EP-1    4957       6228      4046      3890      2454
MSE      EP-2    6413       3901      6642      4117      2039
MSE      EP-3    10158      5740      11138     9222      4172
MSE      EP-4    8201       4902      9310      9186      4690
Table A3. UICM, UISM, UIConM, and UIQM scores and entropies of original DS images and corresponding enhanced images of tested methods.

Metric    Image   Original   Ancuti    Galdran   Peng      Ours
UICM      DS-1    4.4438     4.1235    4.3040    6.5791    3.4382
UICM      DS-2    3.8316     4.0959    3.5961    5.5181    3.9264
UICM      DS-3    4.4177     5.9976    4.6412    6.7406    2.9886
UISM      DS-1    0.5532     0.0023    0.2578    0.3726    0.1347
UISM      DS-2    0.0034     0.0017    0.3608    0.4498    0.0967
UISM      DS-3    0.0042     0.0020    0.4522    0.4494    0.0080
UIConM    DS-1    1.5033     0.2127    0.9931    2.2727    1.2113
UIConM    DS-2    0.1502     0.1502    0.3078    0.9686    0.5435
UIConM    DS-3    0.1502     0.1502    0.2461    0.4290    0.1924
UIQM      DS-1    5.6635     0.8773    3.7481    8.4210    4.4674
UIQM      DS-2    0.6459     0.6529    1.3084    3.7516    2.0824
UIQM      DS-3    0.6627     0.7066    1.1443    1.8566    0.7746
Entropy   DS-1    6.9979     7.2007    7.0056    6.5553    7.3432
Entropy   DS-2    6.9299     7.2192    6.9439    7.5355    7.4390
Entropy   DS-3    7.0536     7.4894    7.0830    6.8783    7.5001

References

  1. Lee, D.H.; Kim, G.Y.; Kim, D.H.; Myung, H.; Choi, H.T. Vision-Based Object Detection and Tracking for Autonomous Navigation of Underwater Robots. Ocean Eng. 2012, 48, 59–68. [Google Scholar] [CrossRef]
  2. Chuang, M.C.; Hwang, J.N.; Kresimir, W.A. Feature Learning and Object Recognition Framework for Underwater Fish Images. IEEE Trans. Image Process. 2016, 25, 1862–1872. [Google Scholar] [CrossRef]
  3. Panagiotis, A.; Georgios, D.I.; Andreas, G.; Dimitrios, S. The effect of underwater imagery radiometry on 3D reconstruction and orthoimagery. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, XLII-2/W3, 25–31. [Google Scholar]
  4. Chen, X.D.; Yang, Y.H. A Closed-Form Solution to Single Underwater Camera Calibration Using Triple Wavelength Dispersion and Its Application to Single Camera 3D Reconstruction. IEEE Trans. Image Process. 2017, 26, 4553–4561. [Google Scholar] [CrossRef]
  5. Lu, H.M.; Uemura, T.; Wang, D.; Zhu, J.H.; Huang, Z.; Kim, H. Deep-Sea Organisms Tracking Using Dehazing and Deep Learning. Mob. Netw. Appl. 2018, 24, 1572–8153. [Google Scholar]
  6. Wang, Y.; Song, W.; Fortino, G.; Qi, L.Z.; Zhang, W.Q.; Liotta, A. An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging. IEEE Access. 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  7. Yang, M.; Hu, J.T.; Li, C.I.; Rohde, G.; Du, Y.X.; Hu, K. An In-Depth Survey of Underwater Image Enhancement and Restoration. IEEE Access. 2019, 7, 123638–123657. [Google Scholar] [CrossRef]
  8. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG Int. J. Comput. Sci. 2007, 34, 239–244. [Google Scholar]
  9. Ancuti, C.O.; Ancuti, C.; Haber, T.; Bekaert, P. Fusion-based restoration of the underwater images. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 1557–1560. [Google Scholar]
  10. Ghani, A.S.A.; Isa, N.A.M. Underwater Image Quality Enhancement through Rayleigh-Stretching and Averaging Image Planes. Int. J. Nav. Archit. Ocean Eng. 2014, 6, 840–866. [Google Scholar] [CrossRef] [Green Version]
  11. Li, C.Y.; Guo, J.C.; Guo, C. Emerging From Water: Underwater Image Color Correction Based on Weakly Supervised Color Transfer. IEEE Signal Process Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef] [Green Version]
  12. He, K.M.; Sun, J.; Tang, X.O. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
  13. Galdran, A.; Pardo, D.; Picón, A.; Gila, A.A. Automatic Red-Channel Underwater Image Restoration. J. Visual Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef] [Green Version]
  14. Drews, P., Jr.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission Estimation in Underwater Single Images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Sydney, Australia, 1–8 December 2013. [Google Scholar]
  15. Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial Results in Underwater Single Image Dehazing. In Proceedings of the OCEANS 2010 MTS/IEEE SEATTLE, Seattle, WA, USA, 20–23 September 2010. [Google Scholar]
  16. Peng, Y.T.; Zhao, X.Y.; Cosman, P.C. Single Underwater Image Enhancement Using Depth Estimation Based on Blurriness. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015. [Google Scholar]
  17. Peng, Y.T.; Zhao, X.Y.; Cosman, P.C. Underwater Image Restoration Based on Image Blurriness and Light Absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
  18. Hou, G.J.; Pan, Z.K.; Wang, G.D.; Yang, H.; Duan, J.M. An Efficient Nonlocal Variational Method with Application to Underwater Image Restoration. Neurocomputing 2019, 369, 106–121. [Google Scholar] [CrossRef]
  19. Qing, C.M.; Huang, W.Y.; Zhu, S.Q.; Xu, X.M. Underwater Image Enhancement With An Adaptive Dehazing Framework. In Proceedings of the 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 10 September 2015. [Google Scholar]
  20. Lu, H.M.; Li, Y.J.; Xu, X.; Li, J.R.; Liu, Z.F.; Li, X.; Yang, J.M.; Serikawa, S. Underwater Image Enhancement Method Using Weighted Guided Trigonometric Filtering and Artificial Light Correction. J. Visual Commun. Image Represent. 2016, 38, 504–516. [Google Scholar] [CrossRef]
  21. McGlamery, B.L. A Computer Model for Underwater Camera Systems. Ocean Opt. 1980, 6, 221–231. [Google Scholar]
  22. Jaffe, J.S. Computer Modeling and the Design of Optimal Underwater Imaging Systems. IEEE J. Oceanic Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  23. Boffety, M.; Galland, F.; Allais, A.G. Color Image Simulation for Underwater Optics. Appl. Opt. 2012, 51, 5633–5642. [Google Scholar] [CrossRef]
  24. Zhao, X.W.; Jin, T.; Qu, S. Deriving Inherent Optical Properties from Background Color and Underwater Image Enhancement. Ocean Eng. 2015, 94, 163–172. [Google Scholar] [CrossRef]
  25. Huang, D.M.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-Water Image Enhancement Using Relative Global Histogram Stretching Based on Adaptive Parameter Acquisition. Int. Conf. Multimedia Model. 2018, 10704, 453–465. [Google Scholar]
  26. Bali Diving HD. Available online: https://www.youtube.com/user/bubblevision (accessed on 7 October 2019).
  27. He, K.M.; Sun, J.; Tang, X.O. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
  28. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Oceanic Eng. 2016, 41, 541–551. [Google Scholar] [CrossRef]
  29. Banerjee, J.; Vadali, S.R.K.; Shome, S.N.; Nandy, S. Real-Time Underwater Image Enhancement: An Improved Approach for Imaging with AUV-150. Sadhana 2016, 41, 225–238. [Google Scholar] [CrossRef]
  30. Abdul Ghani, A.S. Image Contrast Enhancement Using an Integration of Recursive-Overlapped Contrast Limited Adaptive Histogram Specification and Dual-Image Wavelet Fusion for the High Visibility of Deep Underwater Image. Ocean Eng. 2018, 162, 224–238. [Google Scholar] [CrossRef]
Figure 1. Underwater optical imaging in shallow water and deep sea.
Figure 2. The framework of proposed method.
Figure 3. (a) A close-up underwater image captured in a water-filled pool. (b) Estimated color tone of (a). (c) The average Fourier frequency of (a); the black box marks the maximum frequency. (d) Color-tone-subtracted result of (a). (e) Brightness-adjusted result of (d).
Figure 4. (a) The original underwater images from [26]. (b) Estimated color tone. (c) Color-tone subtracted result. (d) Final result of color-tone correction.
Figure 5. (a) An underwater image captured in the deep sea with only inhomogeneous artificial illumination. (b) A raw color-tone estimate of (a) obtained by applying a spatial Gaussian filter. (c) Segmentation of (a) into regions with nearly uniform illumination by K-means. (d) A combination of the estimated color tones of all regions. (e) Final color-tone image obtained by applying a spatial Gaussian filter and a guided filter to (d). (f) Color-tone-corrected result of (a) using the color tone in (e). (g) Color-tone image estimated from (a) by applying the color-tone estimation method for uniformly-illuminated images. (h) Color-tone-corrected result of (a) using the color tone in (g).
Figure 6. A brief workflow of the fusion-based descattering module.
Figure 7. Visual comparison of different methods on enhancing uniformly-illuminated underwater images from the BV dataset.
Figure 8. Visual comparison of different methods on enhancing non-uniformly-illuminated underwater images from the EP dataset.
Figure 9. (a) The locations of color boards in EP images. (b) The procedure of preparing the color board region for calculating the color difference against the ground truth image.
Figure 10. Visual comparison of different methods on enhancing non-uniformly-illuminated underwater images from the DS dataset.
Table 1. Quantitative evaluation on the BV dataset in terms of UICM, UISM, UIConM and UIQM.

Metric   Original   Ancuti    Galdran   Peng      Ours
UICM     3.2343     5.5697    4.2195    5.1658    7.0555
UISM     0.0040     0.1535    0.0099    0.5475    0.2874
UIConM   0.1518     0.1602    0.1913    0.8654    2.5721
UIQM     0.6352     0.7752    0.8060    3.4016    9.4797
Table 2. Quantitative evaluation on the EP dataset in terms of general visual quality (UICM, UISM, UIConM and UIQM) and regional color accuracy (MSE).

Metric   Original   Ancuti    Galdran   Peng      Ours
UICM     5.0334     6.3860    5.3108    7.2688    5.5161
UISM     0.1178     0.0010    0.1066    0.3310    0.0871
UIConM   4.2431     0.8215    7.0206    5.5843    2.0639
UIQM     15.3471    3.1176    25.2820   20.2681   7.5602
MSE      7432       5193      7784      6604      3339
Table 3. Quantitative evaluation on the DS dataset in terms of visual quality (UICM, UISM, UIConM and UIQM) and image entropy.

Metric    Original   Ancuti    Galdran   Peng      Ours
UICM      4.2310     4.7390    4.1804    6.2793    3.4511
UISM      0.1869     0.0020    0.3570    0.4239    0.0798
UIConM    0.6012     0.1710    0.5157    1.2234    0.6491
UIQM      2.3240     0.7456    2.0669    4.6467    2.4415
Entropy   6.9938     7.3031    7.0108    6.9897    7.4274
