Article

Underwater Image Enhancement Based on Multi-Scale Fusion and Global Stretching of Dual-Model

Huajun Song and Rui Wang *
College of Oceanography and Space Informatics, China University of Petroleum, Qingdao 266580, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(6), 595; https://doi.org/10.3390/math9060595
Submission received: 29 January 2021 / Revised: 20 February 2021 / Accepted: 23 February 2021 / Published: 10 March 2021
(This article belongs to the Special Issue Computer Graphics, Image Processing and Artificial Intelligence)

Abstract

To address the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and global stretching of dual-model (MFGS), which does not rely on an underwater optical imaging model. The proposed method consists of three stages. First, because white-balancing eliminates the undesirable color deviation caused by medium attenuation more effectively than other color correction algorithms, it is selected to correct the color deviation. Second, to address the poor performance of the saliency weight map in traditional fusion processing, this paper proposes an updated saliency weight coefficient that combines contrast and spatial cues to achieve high-quality fusion. Finally, analysis of the results of the above steps shows that brightness and clarity still need improvement: global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the Commission Internationale de l'Eclairage-Lab (CIE-Lab) model is implemented to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant improvements in color and visibility.

1. Introduction

Underwater vision has played an important role in different fields of science such as marine biology research [1], inspection of underwater man-made objects [2], and control of underwater vehicles [3]. Underwater images suffer from color cast and poor visibility resulting from absorption and scattering effects in the optical propagation process [4,5], which makes the enhancement and restoration of underwater images a challenging task.
Underwater image processing methods fall into two broad classes depending on whether the physical principle of underwater optical propagation is applied: image restoration techniques and image enhancement methods. Image restoration, grounded in the basic physics of light propagation, establishes an underwater imaging model from prior knowledge and then recovers the degraded image. Equation (1) gives the simplified underwater imaging model [6]:
$$I_\lambda(x) = J_\lambda(x)\, t_\lambda(x) + \left(1 - t_\lambda(x)\right) B_\lambda, \tag{1}$$
where $x$ denotes a particular pixel of the image, the wavelength $\lambda \in \{red, green, blue\}$, $I_\lambda(x)$ and $J_\lambda(x)$ respectively represent the image to be restored and the restored image, $B_\lambda$ is the background light, and $t_\lambda(x)$ is the transmission map. In 2017, Peng et al. [7] proposed a method based on image blurriness and light absorption (IBLA) to estimate more accurate background light and underwater scene depth. In 2018, they proposed another method based on the generalized dark channel prior (GDCP) [8], which computes the difference between the observed intensity and the background light. Because real underwater scenes combine natural light with irregular artificial light, the common imaging model can hardly capture the color attenuation and absorption. In this paper, we therefore apply a method based on pixel intensity redistribution to process the distorted image.
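To make the model concrete, the following minimal Python sketch (all values are invented for illustration, not taken from the paper) applies Equation (1) per channel to synthesize a degraded image from a scene radiance, a transmission map, and background light:

```python
import numpy as np

def degrade(J: np.ndarray, t: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Apply the simplified model I = J * t + (1 - t) * B per channel.

    J: scene radiance, H x W x 3, values in [0, 1]
    t: transmission map, H x W x 3 (one map per wavelength)
    B: background light, length-3 vector
    """
    return J * t + (1.0 - t) * B.reshape(1, 1, 3)

# Illustrative (made-up) values: red light attenuates fastest underwater.
J = np.random.rand(4, 4, 3)                                   # toy scene radiance
t = np.stack([np.full((4, 4), v) for v in (0.3, 0.7, 0.8)], axis=-1)
B = np.array([0.1, 0.5, 0.6])                                 # bluish-green veil
I = degrade(J, t, B)                                          # observed image
```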
Image enhancement based on pixel intensity redistribution changes the pixel values in either the spatial domain or a transformed domain [9]; it uses qualitative subjective criteria to produce a more visually pleasing image and does not rely on any physical model of image formation [10]. In 2017, Ghani and Isa [11] proposed a method based on recursive adaptive histogram modification (RAHIM), which modifies the saturation and brightness of the hue, saturation, value (HSV) color model according to the Rayleigh distribution and the human visual system to improve the naturalness of image color. Ancuti et al. [12] introduced a method based on white-balancing and multi-scale fusion; their approach improves global contrast and is especially good at processing color-deviated images, but the contrast of the resulting image needs to be further improved. In 2018, Huang et al. [13] put forward relative global histogram stretching (RGHS) in the red, green, blue (RGB) and Commission Internationale de l'Eclairage-Lab (CIE-Lab) color models, which effectively improves the visual effect of blurred images, though not of color-deviated images. In 2020, Bai et al. [14] proposed an enhancement method based on global and local equalization of the histogram and dual-image multi-scale fusion (GLHDF), which performs well but has some limitations in processing turbid water images.
In recent years, deep learning methods, especially convolutional neural networks (CNN) and generative adversarial networks (GAN), have been applied to image-based tasks such as image enhancement and underwater object detection [15]. Li et al. proposed WaterGAN [16] and Wang et al. proposed UWGAN [17], both built on GANs, to generate realistic underwater images. To address the lack of datasets for underwater image processing, Hou et al. [18] designed an underwater image synthesis algorithm (UISA) based on the physical model of underwater optical propagation and built the synthetic underwater image dataset (SUID) from outdoor ground-truth images. Li et al. [19] built a large-scale, real-world underwater image enhancement benchmark dataset (UIEBD). In 2020, Anwar and Li [20] provided a comprehensive survey of deep learning-based underwater image enhancement and pointed out that, in most cases, deep learning-based methods fall behind state-of-the-art conventional methods. Developing deep learning methods is difficult because real underwater image datasets are scarce, and the realism of the generated underwater images has hardly been examined.
To enhance the underwater image, it is necessary to give priority to the correction of color deviation. In the color correction study, the unsupervised color correction method (UCM) [21], RGHS [13], and integrated color model-Rayleigh distribution (ICM-RD) [22] achieve color deviation correction through global stretching. However, the first two methods cannot effectively remove the blue-green deviation and ICM-RD has an obvious overcompensation in the red channel. Ancuti et al. [12] proposed underwater white-balancing, which can effectively reduce the color deviation caused by various lighting or medium attenuations. In this paper, white-balancing is selected to correct the color deviation.
Because the white-balanced image lacks contrast and clarity, further enhancement is necessary. Fusion is widely used in current image enhancement methods: it merges two enhanced images at multiple scales to improve image quality, and the selection of the weight coefficients is particularly critical. In the classical fusion algorithm, the saliency weight map has low contrast and cannot effectively distinguish the salient targets of an underwater image. In this paper, the saliency detection method combining spatial and contrast cues proposed by Fu et al. [23] is incorporated into the fusion algorithm; it distinguishes salient targets effectively and makes the fused image more consistent with human vision.
Analysis of the image produced by the first two stages shows that the color deviation has been greatly reduced, but the relatively concentrated histogram still leaves the image with poor color contrast and dark brightness, so the brightness must be enhanced. The global stretching algorithm performs poorly at color deviation correction, but it effectively improves image contrast and achieves de-hazing. To better stretch the brightness component of the CIE-Lab model, the RGB channels are first globally stretched to enhance the overall color contrast of the image; the image is then transferred to the CIE-Lab model, and its L channel is selectively stretched. Figure 1 shows the output images of each stage. The proposed method can effectively deal with a variety of underwater degradations.
The remainder of this paper presents the three stages of the underwater image enhancement algorithm and then reports qualitative and quantitative comparisons of the proposed method against classical and state-of-the-art methods.

2. Methodology

As illustrated in Figure 2, the proposed multi-scale fusion and global stretching of dual-model (MFGS) method consists of three stages: (1) color correction based on white-balancing, (2) updated multi-scale fusion, and (3) global stretching of dual-model. The first stage corrects the color deviation of the underwater image. Because the corrected image still has poor contrast and clarity, the second stage enhances it with sharpening and gamma correction and fuses the two versions with a multi-scale pyramid using the updated saliency weight. Finally, to solve the poor brightness caused by an overly concentrated histogram, global stretching of the corresponding channels is carried out in the RGB and CIE-Lab models, respectively. Each stage is described in detail below.

2.1. Color Correction Based on White-Balancing

Due to light scattering and absorption by the water medium, the color of underwater images (especially deep-water images) tends toward a blue-green distortion. Before enhancing image quality, priority should be given to correcting the color deviation. Among color correction methods, white-balancing effectively eliminates the undesirable color deviation caused by various lighting or medium attenuation characteristics. Traditional white balance algorithms such as principal component analysis (PCA) [24], gray world [26], and white patch retinex [25] are compared in Figure 3. The gray world algorithm achieves good visual performance for moderately distorted underwater scenes. However, because red light is strongly absorbed, the average value of the red channel is very small, which may lead to overcompensation of that channel wherever red appears (gray world divides each channel by its average value).
Taking advantage of the relatively good preservation of the green channel underwater, some of the green channel's values are used to compensate for the red light attenuation. Based on the upper limit of the dynamic range of each channel, all channel values are normalized to the range [0, 1]. The compensation ratio coefficient of the red channel is defined as follows:
$$r_{rc}(x) = I_g(x) \times \left(1 - I_r(x)\right), \tag{2}$$
where $I_r(x)$ and $I_g(x)$ are the pixel values of the image in the red and green channels, respectively, with $I_r(x), I_g(x) \in [0, 1]$ and $r_{rc}(x) \in [0, 1]$. The final pixel value of the red channel is defined as:
$$I_{rc}(x) = I_r(x) + r_{rc}(x) \times \left(\bar{I}_g - \bar{I}_r\right), \tag{3}$$
where $\bar{I}_r$ and $\bar{I}_g$ are the averages of $I_r$ and $I_g$. In turbid water or waters with a high concentration of plankton, blue light attenuates noticeably due to absorption by organic matter (lines 5–7 in Figure 9). The compensation coefficient of the blue channel is calculated, and the blue channel compensated, with the following equations:
$$r_{bc}(x) = I_g(x) \times \left(1 - I_b(x)\right), \tag{4}$$
$$I_{bc}(x) = I_b(x) + r_{bc}(x) \times \left(\bar{I}_g - \bar{I}_b\right), \tag{5}$$
where $I_b(x)$ is the pixel value of the blue channel. After the attenuation has been compensated, gray world is applied to compensate for the illuminant color cast. This first stage effectively corrects the color deviation; next, we address the insufficient contrast and clarity caused by the loss of edge and detail information.
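The first stage can be sketched in a few lines of NumPy. This is a simplified reading of Equations (2)–(5) plus a textbook gray-world step, not the authors' exact implementation; the unconditional blue compensation and the epsilon guard are our simplifications:

```python
import numpy as np

def compensate_channels(img: np.ndarray) -> np.ndarray:
    """Red/blue attenuation compensation following Equations (2)-(5).

    img: H x W x 3 RGB image with values normalized to [0, 1].
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Eq. (2): the ratio grows where red is weak and green is well preserved.
    r_rc = g * (1.0 - r)
    # Eq. (3): pull the red channel toward the green channel's mean.
    r_new = r + r_rc * (g.mean() - r.mean())
    # Eqs. (4)-(5): analogous blue compensation (the paper applies it for
    # turbid scenes; here it is applied unconditionally for simplicity).
    r_bc = g * (1.0 - b)
    b_new = b + r_bc * (g.mean() - b.mean())
    return np.clip(np.stack([r_new, g, b_new], axis=-1), 0.0, 1.0)

def gray_world(img: np.ndarray) -> np.ndarray:
    """Textbook gray-world balancing: scale each channel to the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / (means + 1e-6)), 0.0, 1.0)

# Usage (img: float RGB array in [0, 1]):
# white_balanced = gray_world(compensate_channels(img))
```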

2.2. Multi-Scale Fusion with Updated Saliency Weight

After color deviation correction, the image details remain blurred, so the image contrast must be further enhanced. Image fusion is widely used in image decomposition [27], image de-hazing [28], multispectral video enhancement [29], and underwater image enhancement. Ancuti et al. improved their earlier fusion-based underwater de-hazing approach [30] with an alternative definition of inputs and weights to handle severely degraded scenes [12]. Their first input is obtained by gamma correction to improve global contrast, and the second input is a sharpened version of the white-balanced image to reduce the degradation caused by scattering. By employing the fusion process independently at every scale level, potential artifacts due to sharp transitions in the weight maps are minimized. However, their saliency weight cannot effectively highlight the salient target in the image. In this paper, the fusion process of Ancuti et al. [12] is retained, and the saliency weight coefficient is adjusted to increase the attention paid to salient objects during fusion. This step is illustrated in Figure 4.
The sharpened version of the white-balanced image is selected as the first input of the fusion process, defined as follows:
$$S = \left(I + N\{I - G * I\}\right) / 2, \tag{6}$$
where $S$ is the sharpened output image and $I$ is the white-balanced image; $G * I$ denotes the Gaussian-filtered version of $I$, and $N$ is the linear normalization operator. The other input is the gamma-corrected image. During blending, pixels with high weight values contribute more to the final image. Following Ancuti et al. [12], three weight coefficients are used.
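A minimal sketch of the two fusion inputs, assuming SciPy for the Gaussian filter; the gamma value and filter width are illustrative choices, not values given in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fusion_inputs(I: np.ndarray, gamma: float = 1.3, sigma: float = 2.0):
    """Build the two fusion inputs from the white-balanced image I ([0, 1]).

    Input 1, Eq. (6): normalized unsharp masking S = (I + N{I - G*I}) / 2.
    Input 2: gamma-corrected version of I.
    """
    blurred = gaussian_filter(I, sigma=(sigma, sigma, 0))   # G * I per channel
    residual = I - blurred                                  # high-pass detail
    # N{.}: linear normalization (full-range stretch) of the residual.
    residual = (residual - residual.min()) / (residual.max() - residual.min() + 1e-6)
    sharpened = np.clip((I + residual) / 2.0, 0.0, 1.0)
    gamma_corrected = np.power(I, gamma)
    return sharpened, gamma_corrected
```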
Laplacian contrast weight ($W_L$) estimates the global contrast by computing the absolute value of a Laplacian filter applied to each input's luminance channel.
Saturation weight ($W_{Sat}$) enables the fusion algorithm to adapt to chromatic information by favoring highly saturated regions. This weight map is computed as the deviation between the $R_k$, $G_k$, and $B_k$ color channels and the luminance $L_k$ of the $k$-th input:
$$W_{Sat} = \sqrt{\frac{1}{3}\left[(R_k - L_k)^2 + (G_k - L_k)^2 + (B_k - L_k)^2\right]}. \tag{7}$$
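These two weights can be sketched as follows (assuming SciPy; the channel mean is used here as a simple stand-in for the luminance channel):

```python
import numpy as np
from scipy.ndimage import laplace

def contrast_and_saturation_weights(inp: np.ndarray):
    """Compute W_L and W_Sat for one fusion input (float RGB in [0, 1])."""
    lum = inp.mean(axis=-1)                 # simple luminance estimate
    W_L = np.abs(laplace(lum))              # Laplacian contrast weight
    # Eq. (7): root-mean-square deviation of R, G, B from the luminance.
    W_Sat = np.sqrt(((inp - lum[..., None]) ** 2).mean(axis=-1))
    return W_L, W_Sat
```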
Saliency weight ($W_S$) emphasizes salient objects that lose their prominence in the underwater scene. Ancuti et al. [12] employed the saliency estimator of Achanta et al. [31]; however, the estimated saliency map is not ideal and cannot effectively highlight the main target in the image. In underwater scenes, the center of the image is usually brighter than the surrounding area because of artificial lighting. This is known as the 'central bias rule': as the distance between an object and the image center increases, the attention it receives decreases. In this paper, we employ the cluster-based saliency algorithm of Fu et al. [23], which combines contrast and spatial cues. The spatial cue $w_s(k)$ of the cluster $C^k$ is defined as:
$$w_s(k) = \frac{1}{n_k} \sum_{j=1}^{M} \sum_{i=1}^{N_j} K\left(\left\| z_i^j - c^j \right\|^2 \,\middle|\, 0, \sigma^2\right) \delta\left(b(p_i^j) - C^k\right), \tag{8}$$
where $\delta$ is the Kronecker delta function, $c^j$ denotes the center of image $j$, the Gaussian kernel $K$ measures the Euclidean distance between pixel $z_i^j$ and the image center $c^j$, the variance $\sigma^2$ is the normalized radius of the image, and the normalization coefficient $n_k$ is the number of pixels in cluster $C^k$. The resulting saliency map is better than the original map of Ancuti et al. [12] (Figure 5).
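The following sketch computes the spatial cue for a single image, a simplified $M = 1$ reading of Equation (8); the cluster labels are assumed to come from, e.g., k-means on color, and each cluster is assumed non-empty:

```python
import numpy as np

def spatial_cue(labels: np.ndarray) -> np.ndarray:
    """Spatial cue of Eq. (8): clusters near the image center score higher.

    labels: H x W integer map assigning each pixel to a cluster,
            standing in for b(p) in Eq. (8).
    Returns one weight per cluster, normalized to [0, 1].
    """
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h / 2.0, w / 2.0
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2           # squared distance to center
    sigma2 = (min(h, w) / 2.0) ** 2                # variance ~ normalized radius
    kernel = np.exp(-d2 / (2.0 * sigma2))          # Gaussian kernel K(.|0, sigma^2)
    n_clusters = int(labels.max()) + 1
    cues = np.array([kernel[labels == k].mean() for k in range(n_clusters)])
    return cues / (cues.max() + 1e-6)
```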
Three corresponding normalized weight maps are merged into one weight map:
$$\bar{W}_k = \frac{W_k + \delta}{\sum_{k=1}^{K} W_k + K\delta}, \tag{9}$$
where $K = 2$ (the number of input images) and $k$ indexes the inputs; $\bar{W}_k$ denotes the normalized weight map, and $\delta$ denotes a small regularization term that ensures each input contributes to the output.
The multi-scale decomposition is based on the Laplacian pyramid originally described by Burt and Adelson [32]. Each source input $I_k$ is decomposed into a Laplacian pyramid, while the normalized weight maps $\bar{W}_k$ are decomposed using a Gaussian pyramid. The Laplacian inputs are mixed with the Gaussian-smoothed normalized weights independently at each level $l$:
$$R_l(x) = \sum_{k=1}^{K} G_l\{\bar{W}_k\}\, L_l\{I_k(x)\}, \tag{10}$$
where $L_l$ and $G_l$ denote level $l$ of the Laplacian and Gaussian pyramids, respectively, $k$ indexes the input images, and $R_l$ denotes the reconstructed level. The enhanced output is obtained by summing the fused contributions of all levels. This stage enhances edge and detail information, but the image tone remains dark; brightness and clarity must be further improved.
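A compact sketch of the multi-scale blend of Equation (10), using OpenCV pyramids and assuming the per-pixel weight maps have already been merged and normalized according to Equation (9):

```python
import cv2
import numpy as np

def pyramid_fusion(inputs, weights, levels: int = 5) -> np.ndarray:
    """Blend Laplacian pyramids of the inputs with Gaussian pyramids
    of their normalized weight maps (Eq. (10)).

    inputs : list of H x W x 3 float images in [0, 1]
    weights: list of H x W float maps that sum to 1 at each pixel
    """
    fused = None
    for img, w in zip(inputs, weights):
        gp = [w.astype(np.float32)]                 # Gaussian pyramid of weights
        for _ in range(levels - 1):
            gp.append(cv2.pyrDown(gp[-1]))
        g = [img.astype(np.float32)]                # Gaussian pyramid of image
        for _ in range(levels - 1):
            g.append(cv2.pyrDown(g[-1]))
        lp = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
              for i in range(levels - 1)] + [g[-1]]  # Laplacian pyramid
        blended = [l * gw[..., None] for l, gw in zip(lp, gp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]                                  # collapse coarse to fine
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0.0, 1.0)
```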

2.3. Global Stretching of Dual-Model

After multi-scale fusion, the color balance and contrast of the image are greatly improved, but because one fusion input is gamma-corrected, the fused image remains relatively dark. The histogram of an underwater image is relatively concentrated, resulting in low contrast and visibility. In contrast enhancement studies, UCM [21], ICM-RD [22], and RGHS [13] achieve global enhancement by modifying the whole histogram. UCM only stretches the histograms of the RGB channels and cannot deal effectively with color deviation. ICM-RD and RGHS assume that the histogram of each RGB channel follows a Rayleigh distribution, and RGHS additionally uses an imaging model to calculate relative stretching parameters. Because image distortion is complex, the imaging model cannot fully describe underwater illumination, and the RGB histograms of most real underwater images do not conform to the Rayleigh assumption.
In this paper, following the related works above, the histogram of each RGB channel is first globally stretched to enhance color contrast. After this pre-processing, the color of the image is relatively satisfactory. The L component, which represents brightness in the CIE-Lab model, is then stretched globally to improve overall brightness. The commonly used linear histogram stretching formula is given in Equation (11):
$$p_o = \left(p_i - I_{min}\right) \frac{O_{max} - O_{min}}{I_{max} - I_{min}} + O_{min}, \tag{11}$$
where $p_i$ and $p_o$ are the input and output pixel values, respectively; $I_{min}$ and $I_{max}$ are the minimum and maximum values of the image before stretching, and $O_{min}$ and $O_{max}$ define the desired range after stretching.
Global stretching in the RGB color model. After separating the three channels of the RGB color model, we set the desired range $[O_{min}, O_{max}]$ to $[0, 255]$. Specifically,
$$I_k^o(x) = \frac{I_k^i(x) - I_k^{min}}{I_k^{max} - I_k^{min}} \times (255 - 0) + 0, \tag{12}$$
where the channel $k \in \{red, green, blue\}$. The stretching process is shown in Figure 6. After stretching, the histogram of the image covers the whole range of values, and the visual contrast and brightness of the image are improved. To improve brightness further, we choose the CIE-Lab model, which defines the widest range of colors, to adjust the image feature parameters; the image is therefore transformed from the RGB model to the CIE-Lab model.
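Equation (12) amounts to a per-channel min-max stretch; a minimal sketch (the epsilon guard against constant channels is our addition):

```python
import numpy as np

def stretch_rgb(img: np.ndarray) -> np.ndarray:
    """Global per-channel stretch of Eq. (12) to the full range [0, 255]."""
    out = np.empty_like(img, dtype=np.float64)
    for k in range(3):                              # k in {red, green, blue}
        ch = img[..., k].astype(np.float64)
        lo, hi = ch.min(), ch.max()
        out[..., k] = (ch - lo) / (hi - lo + 1e-6) * 255.0
    return out.astype(np.uint8)
```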
Global stretching in the CIE-Lab color model. In the CIE-Lab model, the 'L' component adjusts the brightness of the image, with values from 0 (darkest) to 100 (brightest), while the color gradations of the 'a' and 'b' components can be modified for color correction. Because the first two stages have already corrected the color deviation of the underwater image, no extra stretching or compensation is applied to the 'a' and 'b' components. The 'L' component undergoes the linear slide stretching of Equation (11): the values between the 0.1% and 99.9% percentiles are stretched to the range [0, 100], and the lowest 0.1% and highest 0.1% of values are set to 0 and 100, respectively.
$$L^o(x) = \frac{L^i(x) - L_{min}}{L_{max} - L_{min}} \times (100 - 0) + 0, \tag{13}$$
where $L_{min} = L_s(0.1 \times N / 100)$, $L_{max} = L_s(99.9 \times N / 100)$, and $L_s(x) = \mathrm{sort}(L(x))$; $N$ is the number of components in the L channel. As shown in Figure 7, the stretched image is transformed back to the RGB model and taken as the final output. Compared with the image stretched only in the RGB model, its brightness and contrast are increased. In the next section, the results of the proposed algorithm are evaluated quantitatively and qualitatively and compared with current algorithms.
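A sketch of the selective L-channel stretch, using OpenCV for the color-space conversions; note that OpenCV stores L scaled to [0, 255] for 8-bit images, so the sketch rescales it to [0, 100] and back:

```python
import cv2
import numpy as np

def stretch_L_channel(rgb: np.ndarray) -> np.ndarray:
    """Selective stretch of Eq. (13): L values between the 0.1% and 99.9%
    percentiles are mapped to [0, 100]; outliers saturate.

    rgb: H x W x 3 uint8 RGB image.
    """
    lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB).astype(np.float64)
    L = lab[..., 0] * (100.0 / 255.0)        # rescale OpenCV's L to [0, 100]
    L_min, L_max = np.percentile(L, [0.1, 99.9])
    L = np.clip((L - L_min) / (L_max - L_min + 1e-6) * 100.0, 0.0, 100.0)
    lab[..., 0] = L * (255.0 / 100.0)        # back to OpenCV's 8-bit scale
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2RGB)
```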

3. Results and Discussion

In this section, we compare the proposed approach with existing classical and state-of-the-art underwater restoration/enhancement techniques, namely, unsupervised color correction (UCM) [21], Rayleigh distribution (RD) [22], relative global histogram stretching (RGHS) [13], image blurriness and light absorption (IBLA) [7], global and local equalization of histogram and fusion (GLHDF) [14], and color balance and fusion (Fusion) [12]. The results of each method are evaluated qualitatively and quantitatively. In 2018, Mangeruga et al. [33] concluded that, even though quantitative metrics can provide a useful indication of image quality, they do not seem reliable enough to be blindly employed for an objective evaluation of the performance of an underwater image enhancement algorithm. They subsequently proposed guidelines for underwater image enhancement based on benchmarking different methods [34].
Qualitative evaluation mainly considers contrast, visibility, and color. Quantitative evaluation uses the underwater image quality measure (UIQM) [35], the patch-based contrast quality index (PCQI) [36], and underwater color image quality evaluation (UCIQE) [37]. UIQM and UCIQE are dedicated to underwater image assessment and address three important criteria: colorfulness, sharpness, and contrast. A high PCQI value indicates an enhanced image with high contrast. Finally, the execution time of each method is evaluated.
The proposed underwater image enhancement method places no limit on the resolution of input images, and the resolution of the output image matches that of the input. In the UIEBD, raw image resolutions vary widely, from 300 × 168 to 2180 × 1447. According to the type of image degradation, the 890 images in the UIEBD [19] are divided into three categories: Data A contains images with obvious blue/green color deviation (Figure 8), Data B contains shallow-water images with natural light (lines 1–4 in Figure 9), and Data C contains turbid-water images (lines 5–7 in Figure 9). Figures 8 and 9 present the experimental results for some randomly selected images from Data A/B/C, and Tables 1 and 2 show the corresponding quantitative results. Because of the poor qualitative performance of UCM [21], we do not include it in the quantitative analysis.
Figure 8 shows the qualitative results of the various algorithms on images with blue/green deviation (Data A). The results of RD [22] and GLHDF [14] are more colorful but exhibit unexpected oversaturation with a red hue. The results of UCM [21], RGHS [13], and IBLA [7] still show obvious color deviation. Fusion [12] and the proposed approach perform well on this kind of distortion; the results of the proposed algorithm are clearer and brighter than those of Fusion, with more pronounced color contrast.
As illustrated in Table 1, RD [22] and GLHDF [14] perform better on the UCIQE index, as the colors they produce are more saturated. The UCIQE value of the proposed method is significantly higher than that of Fusion [12]. On the UIQM and PCQI indices, Fusion achieves the highest average value, followed by the proposed method.
For Data B and Data C in Figure 9, the results of UCM [21] and IBLA [7] are dark, and RD [22] overcompensates the red channel. RGHS [13] and GLHDF [14] handle shallow-water images (Data B) well, but on turbid-water images (Data C) they show blue and red deviation, respectively. Fusion [12] and our approach effectively smooth the image color, and the proposed method enhances the brightness of underwater images effectively.
As Table 2 shows, our algorithm attains the highest average PCQI value for shallow-water images. The GLHDF algorithm [14], which performs well in color and saturation, obtains a higher UIQM value. Although IBLA [7] does not perform well in the qualitative evaluation, it has the highest UCIQE value.
Table 3 shows the average quantitative evaluation results over the 890 real underwater images of the underwater image enhancement benchmark dataset (UIEBD) [19]. The proposed method achieves the highest average PCQI value, indicating good performance in contrast enhancement.
For the execution time evaluation, we randomly selected 100 images of different sizes and measured execution time under the same hardware conditions. The processor used in this paper is an Intel(R) Pentium(R) CPU G620 @ 2.60 GHz, and the two software environments are MATLAB R2017b and PyCharm 2020.2.3. The average execution time of each algorithm on a single image is shown in Table 4.
Table 4 shows that GLHDF has the fastest average execution speed among the tested algorithms; the proposed algorithm is slower than both GLHDF and Fusion.
In the qualitative evaluation, the proposed algorithm and Fusion [12] have clear advantages in correcting color deviation and can effectively remove different degrees of green, blue, and yellow cast, whereas the performance of UCM [21], RGHS [13], and IBLA [7] is unsatisfactory. In cases of serious color deviation, RD [22] and GLHDF [14] suffer from incomplete deviation removal and excessive red compensation. Although quantitative results should not be employed blindly, they serve as a useful reference: the proposed algorithm achieves a higher PCQI value, meaning better contrast enhancement; RD [22] and GLHDF [14], with richer colors, obtain higher UCIQE values; and Fusion, with better color balance, has the highest UIQM value. Additionally, Fusion [12] and GLHDF [14] perform well in execution time. To shorten our execution time, a simplified version of the fusion step can be chosen that replaces multi-scale fusion at the price of sacrificing detail quality: the sharpened image and the gamma-corrected image are directly combined with the corresponding weight coefficients to obtain the fused result, as sketched below. In general, the proposed method deals effectively with a variety of underwater image distortion scenes.
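For concreteness, the single-scale shortcut can be sketched as follows, assuming the two inputs and their merged, normalized weight maps from Section 2.2 are available as arrays:

```python
import numpy as np

def single_scale_fusion(sharpened, gamma_corrected, w1, w2):
    """Single-scale fallback: blend the two inputs pixel-wise with their
    normalized weight maps instead of fusing Laplacian pyramids.

    sharpened, gamma_corrected: H x W x 3 float images in [0, 1]
    w1, w2                    : H x W weight maps with w1 + w2 = 1 per pixel
    """
    fused = w1[..., None] * sharpened + w2[..., None] * gamma_corrected
    return np.clip(fused, 0.0, 1.0)
```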

4. Conclusions

This paper proposes an enhancement method based on multi-scale fusion and global stretching of dual-model (MFGS). The method relies on simple pixel value redistribution and does not require an underwater optical imaging model. The main contributions can be summarized as follows: (1) To address the inability of the saliency weight map in the classical fusion algorithm to distinguish salient targets, we updated the saliency weight map by combining contrast and spatial cues so that the salient target is highlighted in the processed image. (2) To further enhance the brightness of the fused image, after stretching the histograms of the RGB channels, the pixel values in the L channel of the CIE-Lab model between the 0.1% and 99.9% percentiles are stretched to the range [0, 100].
Qualitative and quantitative analyses show that the proposed method handles a variety of underwater image distortion scenes effectively and has advantages in improving contrast and correcting color deviation compared with other algorithms. Compared with the latest algorithms, however, there are still deficiencies in the color richness of the results and in execution time. In future work, the structure of our algorithm will be adjusted to shorten the execution time, and optimization of the color compensation method under different color deviations will be a focus of research. With the wide application of underwater vision in different scientific fields, underwater image enhancement will play an increasingly important role in image and video processing for marine biology research and underwater archaeology. Most target images of current algorithms are shallow-water images; when artificial light sources are added to deep-water images, the raw images contain more diverse noise, and enhancement becomes more challenging. Effective enhancement of degraded images can facilitate many underwater studies.

Author Contributions

Conceptualization, H.S.; methodology, H.S. and R.W.; software, R.W.; validation, H.S. and R.W.; formal analysis, H.S. and R.W.; investigation, H.S.; resources, H.S.; data curation, R.W.; writing—original draft preparation, R.W.; writing—review and editing, R.W. and H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant No. 61602517, the Fundamental Research Funds for the Central Universities (grant No. 18CX02109A), the Natural Science Foundation of Shandong Province of China (ZR2020MD034), and the Key Program of Marine Economy Development Special Foundation of Department of Natural Resources of Guangdong Province (GDNRC [2020]012).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: [https://li-chongyi.github.io/proj_benchmark.html] (accessed on 26 February 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MFGS	Multi-scale fusion and global stretching of dual-model
RGB	Red, green, blue
CIE-Lab	Commission Internationale de l'Eclairage-Lab
UIEBD	Underwater image enhancement benchmark dataset
IBLA	Image blurriness and light absorption
GDCP	Generalized dark channel prior
RAHIM	Recursive adaptive histogram modification
HSV	Hue, saturation, value
RGHS	Relative global histogram stretching
GLHDF	Global and local equalization of the histogram and dual-image multi-scale fusion
CNN	Convolutional neural network
GAN	Generative adversarial network
UCM	Unsupervised color correction method
ICM-RD	Integrated color model-Rayleigh distribution
RD	Rayleigh distribution
PCA	Principal component analysis
UIQM	Underwater image quality measure
PCQI	Patch-based contrast quality index
UCIQE	Underwater color image quality evaluation

References

  1. Mazel, C.H. In situ measurement of reflectance and fluorescence spectra to support hyperspectral remote sensing and marine biology research. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006; pp. 1–4. [Google Scholar]
  2. Hou, G.J.; Luan, X.; Song, D.L.; Ma, X.Y. Underwater man-made object recognition on the basis of color and shape features. J. Coast. Res. 2016, 32, 1135–1141. [Google Scholar] [CrossRef]
  3. Foresti, G. Visual inspection of sea bottom structures by an autonomous underwater vehicle. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2001, 31, 691–705. [Google Scholar] [CrossRef] [PubMed]
  4. Lu, H.; Li, Y.; Zhang, L.; Serikawa, S. Contrast enhancement for images in turbid water. JOSA A 2015, 32, 886–893. [Google Scholar] [CrossRef] [Green Version]
  5. Lu, H.; Guna, J.; Zhou, Q. Preface: Optical imaging for extreme environment. Opt. Laser Technol. 2019, 110, 1. [Google Scholar] [CrossRef]
  6. Jaffe, J. Computer modeling and the design of optimal underwater imaging systems. IEEE J. Ocean. Eng. 1990, 15, 101–111. [Google Scholar] [CrossRef]
  7. Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
  8. Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the Dark Channel Prior for Single Image Restoration. IEEE Trans. Image Process. 2018, 27, 2856–2868. [Google Scholar] [CrossRef] [PubMed]
  9. Wang, Y.; Song, W.; Fortino, G.; Qi, L.-Z.; Zhang, W.; Liotta, A. An Experimental-Based Review of Image Enhancement and Image Restoration Methods for Underwater Imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  10. Schettini, R.; Corchs, S. Underwater image processing: State of the art of restoration and image enhancement methods. EURASIP J. Adv. Signal Process. 2010, 2010, 1–14. [Google Scholar] [CrossRef] [Green Version]
  11. Ghani, A.S.A.; Isa, N.A.M. Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification. Comput. Electron. Agric. 2017, 141, 181–195. [Google Scholar] [CrossRef] [Green Version]
  12. Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color Balance and Fusion for Underwater Image Enhancement. IEEE Trans. Image Process. 2017, 27, 379–393. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition. In International Conference on Multimedia Modeling; Springer: Cham, Switzerland, 2018; pp. 453–465. [Google Scholar]
  14. Bai, L.; Zhang, W.; Pan, X.; Zhao, C. Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion. IEEE Access 2020, 8, 128973–128990. [Google Scholar] [CrossRef]
  15. Han, F.; Yao, J.; Zhu, H.; Wang, C. Underwater Image Processing and Object Detection Based on Deep CNN Method. J. Sens. 2020, 2020, 1–20. [Google Scholar] [CrossRef]
  16. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394. [Google Scholar] [CrossRef] [Green Version]
  17. Wang, N.; Zhou, Y.; Han, F.; Zhu, H.; Zheng, Y. UWGAN: Underwater GAN for real-world underwater color restoration and dehazing. arXiv 2019, arXiv:1912.10269. [Google Scholar]
  18. Hou, G.; Zhao, X.; Pan, Z.; Yang, H.; Tan, L.; Li, J. Benchmarking Underwater Image Enhancement and Restoration, and Beyond. IEEE Access 2020, 8, 122078–122091. [Google Scholar] [CrossRef]
  19. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An Underwater Image Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 2020, 29, 4376–4389. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Anwar, S.; Li, C. Diving deeper into underwater image enhancement: A survey. Signal Process. Image Commun. 2020, 89, 115978. [Google Scholar] [CrossRef]
  21. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing the low quality images using unsupervised colour correction method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709. [Google Scholar]
  22. Ghani, A.S.A.; Isa, N.A.M. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015, 27, 219–230. [Google Scholar] [CrossRef]
  23. Fu, H.; Cao, X.; Tu, Z. Cluster-based co-saliency detection. IEEE Trans. Image Process. 2013, 22, 3766–3778. [Google Scholar] [CrossRef] [Green Version]
  24. Cheng, D.; Prasad, D.K.; Brown, M.S. Illuminant estimation for color constancy: Why spatial-domain methods work and the role of the color distribution. JOSA A 2014, 31, 1049–1058. [Google Scholar] [CrossRef]
  25. Ebner, M. White Patch Retinex. In Color Constancy; John Wiley & Sons Ltd.: Chichester, UK, 2007; pp. 104–105. ISBN 978-0-470-05829-9. [Google Scholar]
  26. Ebner, M. The Gray World Assumption. In Color Constancy; John Wiley & Sons: Chichester, UK, 2007; pp. 106–108. ISBN 978-0-470-05829-9. [Google Scholar]
  27. Grundland, M.; Vohra, R.; Williams, G.P.; Dodgson, N.A. Cross Dissolve Without Cross Fade: Preserving Contrast, Color and Salience in Image Compositing. Comput. Graph. Forum 2010, 25, 577–586. [Google Scholar] [CrossRef] [Green Version]
  28. Ancuti, C.; Bekaert, P. Effective Single Image Dehazing by Fusion. In Proceedings of the IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010. [Google Scholar]
  29. Bennett, E.P.; Mason, J.L.; Mcmillan, L. Multispectral Bilateral Video Fusion; IEEE Press: New York, NY, USA, 2007. [Google Scholar]
  30. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 81–88. [Google Scholar]
  31. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604. [Google Scholar]
  32. Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. In Readings in Computer Vision; Morgan Kaufmann: Burlington, MA, USA, 1987; pp. 671–679. [Google Scholar]
  33. Mangeruga, M.; Cozza, M.; Bruno, F. Evaluation of underwater image enhancement algorithms under different environmental conditions. J. Mar. Sci. Eng. 2018, 6, 10. [Google Scholar] [CrossRef] [Green Version]
  34. Mangeruga, M.; Bruno, F.; Cozza, M.; Agrafiotis, P.; Skarlatos, D. Guidelines for Underwater Image Enhancement Based on Benchmarking of Different Methods. Remote Sens. 2018, 10, 1652. [Google Scholar] [CrossRef] [Green Version]
  35. Panetta, K.; Gao, C.; Agaian, S. Human-Visual-System-Inspired Underwater Image Quality Measures. IEEE J. Ocean. Eng. 2016, 41, 541–551. [Google Scholar] [CrossRef]
  36. Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A Patch-Structure Representation Method for Quality Assessment of Contrast Changed Images. IEEE Signal Process. Lett. 2015, 22, 2387–2390. [Google Scholar] [CrossRef]
  37. Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Trans. Image Process. 2015, 24, 6062–6071. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The results of each stage.
Figure 2. Method overview.
Figure 3. The processing results of different white balance algorithms. (a) is the original input image. (b–e) show the processing results of principal component analysis (PCA), the white patch retinex method, the gray world method, and our method, respectively.
Figure 4. The two inputs are derived from the white-balanced image: Input 1 is its sharpened version and Input 2 is its gamma-corrected version. The remaining pictures show the three corresponding normalized weight maps and the merged weight maps for the two inputs. The last picture is the multi-scale fusion result.
Figure 5. The saliency weight maps of the two methods. In the first column, the two images of the same scene are the sharpened and gamma-corrected versions of the white-balanced image, respectively. The images in the second and third columns show the saliency weight maps obtained by Ancuti et al. [12] and by our method, respectively.
Figure 6. The stretching process in the RGB color model. (a) is the entire histogram distribution of the three channels (RGB). (b–d) are the histogram distributions of the individual channels in the RGB color model.
Figure 7. The stretching process in the CIE-Lab color model. Line (b) shows the histogram distribution of the L component of the corresponding image in line (a).
Figure 8. The qualitative evaluation results of 7 selected images from Data A.
Figure 9. The qualitative evaluation results of 7 selected images from Data B/C.
Table 1. The quantitative evaluation results of 7 selected images from Data A.

| Data A  | RD [22] |       |        | RGHS [13] |       |        | IBLA [7] |       |        |
|---------|---------|-------|--------|-----------|-------|--------|----------|-------|--------|
|         | UIQM    | PCQI  | UCIQE  | UIQM      | PCQI  | UCIQE  | UIQM     | PCQI  | UCIQE  |
| 1       | 1.986   | 1.340 | 34.383 | 2.273     | 1.110 | 31.028 | 2.386    | 1.073 | 13.808 |
| 2       | 2.952   | 1.156 | 34.328 | 3.091     | 1.190 | 31.589 | 2.419    | 1.056 | 28.186 |
| 3       | 2.983   | 1.042 | 34.117 | 3.142     | 1.121 | 30.180 | 2.676    | 1.009 | 20.676 |
| 4       | 2.666   | 1.287 | 32.500 | 2.048     | 1.304 | 31.315 | 1.914    | 1.113 | 13.040 |
| 5       | 2.500   | 1.260 | 35.688 | 2.254     | 1.315 | 31.859 | 1.636    | 1.108 | 17.175 |
| 6       | 2.745   | 1.058 | 35.087 | 2.034     | 0.967 | 31.491 | 2.502    | 0.952 | 17.076 |
| 7       | 1.336   | 1.240 | 38.017 | 1.629     | 1.295 | 32.355 | 1.713    | 1.078 | 8.190  |
| Average | 2.453   | 1.197 | 34.874 | 2.353     | 1.186 | 31.402 | 2.178    | 1.055 | 16.879 |

| Data A  | GLHDF [14] |       |        | Fusion [12] |       |        | Our Method |       |        |
|---------|------------|-------|--------|-------------|-------|--------|------------|-------|--------|
|         | UIQM       | PCQI  | UCIQE  | UIQM        | PCQI  | UCIQE  | UIQM       | PCQI  | UCIQE  |
| 1       | 2.549      | 1.270 | 33.709 | 2.494       | 1.227 | 15.905 | 2.499      | 1.237 | 31.477 |
| 2       | 2.826      | 1.193 | 32.767 | 3.139       | 1.216 | 18.166 | 3.213      | 1.260 | 31.189 |
| 3       | 2.799      | 1.101 | 32.099 | 3.048       | 1.162 | 14.192 | 3.135      | 1.134 | 30.207 |
| 4       | 2.505      | 1.314 | 32.871 | 3.201       | 1.273 | 13.343 | 3.190      | 1.329 | 29.357 |
| 5       | 2.588      | 1.284 | 34.755 | 2.992       | 1.314 | 12.406 | 2.750      | 1.303 | 29.697 |
| 6       | 2.624      | 1.148 | 28.952 | 2.897       | 1.153 | 13.667 | 2.880      | 1.113 | 29.289 |
| 7       | 2.670      | 1.301 | 36.889 | 1.756       | 1.309 | 16.604 | 1.660      | 1.261 | 34.998 |
| Average | 2.652      | 1.230 | 33.149 | 2.790       | 1.236 | 14.898 | 2.761      | 1.234 | 30.888 |
Table 2. The quantitative evaluation results of 7 selected images from Data B/C.

| Data B/C | RD [22] |       |        | RGHS [13] |       |        | IBLA [7] |       |        |
|----------|---------|-------|--------|-----------|-------|--------|----------|-------|--------|
|          | UIQM    | PCQI  | UCIQE  | UIQM      | PCQI  | UCIQE  | UIQM     | PCQI  | UCIQE  |
| 1        | 2.749   | 1.160 | 31.477 | 2.753     | 1.200 | 30.347 | 1.640    | 0.956 | 35.010 |
| 2        | 3.413   | 1.225 | 34.612 | 3.319     | 1.230 | 28.946 | 0.860    | 0.727 | 36.087 |
| 3        | 2.362   | 1.101 | 33.240 | 2.440     | 1.089 | 31.440 | 0.652    | 0.638 | 35.764 |
| 4        | 3.139   | 1.083 | 33.813 | 2.997     | 1.097 | 30.929 | 1.286    | 0.706 | 36.287 |
| 5        | 2.240   | 1.217 | 35.177 | 2.034     | 1.150 | 32.355 | 1.423    | 0.816 | 39.603 |
| 6        | 1.803   | 1.077 | 35.512 | 1.836     | 1.099 | 34.175 | 2.395    | 1.055 | 34.993 |
| 7        | 1.629   | 1.014 | 37.796 | 1.417     | 1.001 | 36.915 | 0.983    | 1.004 | 35.528 |
| Average  | 2.476   | 1.125 | 34.518 | 2.399     | 1.124 | 32.158 | 1.320    | 0.843 | 36.182 |

| Data B/C | GLHDF [14] |       |        | Fusion [12] |       |        | Our Method |       |        |
|----------|------------|-------|--------|-------------|-------|--------|------------|-------|--------|
|          | UIQM       | PCQI  | UCIQE  | UIQM        | PCQI  | UCIQE  | UIQM       | PCQI  | UCIQE  |
| 1        | 2.942      | 1.177 | 30.530 | 3.083       | 1.196 | 18.335 | 2.773      | 1.234 | 29.653 |
| 2        | 3.492      | 1.181 | 32.849 | 3.503       | 1.218 | 19.584 | 3.323      | 1.301 | 31.501 |
| 3        | 2.776      | 1.028 | 31.055 | 2.688       | 1.193 | 16.249 | 2.623      | 1.200 | 31.853 |
| 4        | 3.033      | 1.086 | 31.370 | 3.160       | 1.069 | 28.215 | 3.203      | 1.151 | 30.497 |
| 5        | 2.792      | 1.114 | 33.074 | 2.780       | 1.285 | 15.220 | 2.274      | 1.272 | 32.042 |
| 6        | 2.491      | 1.068 | 34.115 | 2.105       | 1.198 | 23.082 | 2.001      | 1.131 | 30.447 |
| 7        | 2.316      | 0.946 | 37.858 | 1.810       | 1.120 | 18.632 | 1.554      | 1.067 | 29.719 |
| Average  | 2.835      | 1.086 | 32.979 | 2.733       | 1.183 | 19.902 | 2.536      | 1.194 | 30.816 |
Table 3. The quantitative evaluation results over the entire UIEBD [19].

| Method     | UIQM  | PCQI  | UCIQE  |
|------------|-------|-------|--------|
| RD [22]    | 2.569 | 1.097 | 34.106 |
| RGHS [13]  | 2.110 | 1.063 | 31.950 |
| IBLA [7]   | 1.684 | 0.956 | 31.027 |
| GLHDF [14] | 2.747 | 1.075 | 31.958 |
| Fusion [12]| 2.816 | 1.113 | 25.016 |
| Our method | 2.691 | 1.143 | 31.177 |
Table 4. The execution time evaluation results.

| Method     | Total (s)  | Average (s) | Rank |
|------------|------------|-------------|------|
| UCM [21]   | 4698.966   | 46.990      | 6    |
| RD [22]    | 3232.541   | 32.325      | 5    |
| RGHS [13]  | 2490.714   | 24.907      | 4    |
| IBLA [7]   | 11,494.855 | 114.95      | 7    |
| GLHDF [14] | 787.181    | 7.872       | 1    |
| Fusion [12]| 885.323    | 8.853       | 2    |
| Our method | 1225.595   | 12.256      | 3    |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
