Article

BézierCE: Low-Light Image Enhancement via Zero-Reference Bézier Curve Estimation

Xianjie Gao, Kai Zhao, Lei Han and Jinming Luo
1 Department of Basic Sciences, Shanxi Agricultural University, Taigu 030801, China
2 Faculty of Engineering, University of New South Wales, Sydney, NSW 2052, Australia
3 School of Sciences, Harbin University of Science and Technology, Harbin 150080, China
4 School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(23), 9593; https://doi.org/10.3390/s23239593
Submission received: 23 October 2023 / Revised: 26 November 2023 / Accepted: 30 November 2023 / Published: 3 December 2023
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)

Abstract:
Due to factors such as poor lighting, viewing angle, and camera equipment, low-light images with low contrast, color distortion, high noise, and unclear details regularly appear in real scenes. Such low-light images not only hamper human observation but also significantly degrade the performance of computer vision algorithms. Low-light image enhancement technology can improve image quality and make images more useful in fields such as computer vision, machine learning, and artificial intelligence. In this paper, we propose a novel method to enhance images through Bézier curve estimation. We estimate a pixel-level Bézier curve by training a deep neural network (BCE-Net) to adjust the dynamic range of a given image. Because the Bézier curve is smooth, continuous, and differentiable everywhere, low-light image enhancement through Bézier curve mapping is effective. The brevity and zero-reference nature of BCE-Net make it generalizable to other low-light conditions. Extensive experiments show that our method outperforms existing methods both qualitatively and quantitatively.

1. Introduction

With the rapid development of information technology and deep learning, image processing has become an indispensable technology in artificial intelligence applications such as medical imaging [1], image recognition [2], agricultural research [3], traffic information systems [4], object detection [5], and image segmentation [6]. During image acquisition, large numbers of low-light images are easily produced under conditions such as dim environments, low-end devices, and improper camera configurations. Because of their color distortion, high noise, degraded quality, and low contrast, low-light images adversely affect both people's subjective visual experience and the performance of computer vision systems. Therefore, the study of low-light image enhancement has strong practical significance.
Image enhancement can be used in all areas with low-light image scenarios, e.g., object detection [7], underwater images [8], underground utilities [9], autonomous driving [10], and video surveillance [11]. It is difficult or even impossible to achieve low-light image enhancement by changing the shooting environment or improving the hardware of the shooting equipment. Therefore, it is necessary to process images through low-light image enhancement algorithms.
In this paper, we propose a novel method to enhance images through Bézier curve estimation. The core idea is to use a Bézier representation to estimate the light enhancement curve. Instead of learning an image-to-image mapping, our approach takes the low-light image as input and estimates the parameters of the light enhancement curve; thus, we can dynamically adjust pixels to enhance the image. Owing to the good properties of the Bézier curve, such as its smoothness, continuity, and differentiability, the enhancement quality can be guaranteed. Similar to the existing methods [12,13,14], our method involves unsupervised learning and zero-reference training; it does not require paired or even unpaired data, it can be used in various dark-light environments, and it generalizes well. The experimental results demonstrate that our method outperforms existing methods in both subjective impressions and objective indicators. The main contributions of this paper can be summarized as follows:
  • Based on the good properties of the Bézier curve, we use it as the output for the dynamic adjustment of pixels. Compared with Zero-DCE, we overcome the overexposure problem.
  • This paper proposes a zero-shot learning model with a short training time, which effectively avoids the risk of overfitting and improves the generalization ability.
  • Experiments on a number of low-light image datasets reveal that our method outperforms some of the current state-of-the-art methods.
The rest of the paper is organized as follows: In Section 2, we give an introduction to related works. In Section 3, we propose the zero-reference method to enhance images through Bézier curve estimation. Section 4 presents the experimental results, and the last section concludes the paper.

2. Related Works

In this section, we review the related works on low-light image enhancement, mainly including conventional methods (CMs) and deep learning methods.

2.1. Conventional Methods

Among conventional low-light image enhancement algorithms, the histogram equalization (HE) algorithms and algorithms based on the Retinex model are commonly used.

2.1.1. Histogram Equalization Algorithms

The HE algorithm uses the image histogram to adjust the contrast of the image, improving contrast by uniformly expanding the concentrated gray range to the entire gray range [15,16,17]. However, during HE processing, the contrast of the noise may be increased, and some useful signals may be lost. To overcome these shortcomings, many improved HE algorithms have been proposed [18,19,20,21,22,23].

2.1.2. Retinex Model-Based Methods

Retinex theory, grounded in scientific experiments and analysis, is commonly used in image enhancement [24]. Methods based on the Retinex model decompose the low-light image S into a reflectance component R and an illumination map I. Subsequently, many improved versions of the Retinex model appeared, e.g., the single-scale Retinex model (SSR) [25], the multiscale Retinex model (MSR) [26], variational Retinex models [27,28,29,30], and the maximum-entropy-based Retinex model [31]. However, Retinex-based algorithms are relatively slow, and the computational complexity of the variational Retinex model is high, so these methods cannot be applied to some real-time low-light image enhancement scenarios.

2.2. Deep Learning Methods

In recent years, low-light image enhancement based on deep learning methods has attracted widespread attention. Compared with conventional methods, deep-learning-based methods have better accuracy, better generalization ability, and faster computing speed. According to different learning strategies, low-light image enhancement methods based on deep learning can be divided into supervised learning (SL), unsupervised learning (UL), semi-supervised learning (SSL), reinforcement learning (RL), and zero-shot learning (ZSL). In the following subsections, we briefly review some representative approaches to these strategies.

2.2.1. Supervised Learning

Lore et al. [32] proposed a pioneering work on low-light image enhancement based on deep learning. Subsequently, a variety of supervised-learning-based low-light image enhancement methods have been studied, e.g., the convolutional neural network [33], residual convolutional neural network [34], Msr-net [35], Retinex-Net [36], LightenNet [37], DeepUPE [38], KinD [39], EEMEFN [40], luminance-aware pyramid network [41], deep lightening network [42], and the progressive–recursive image enhancement network [43].

2.2.2. Unsupervised Learning

To address the issue that training a deep model on paired data may lead to overfitting and limit the generalization ability, Jiang et al. [12] designed an unsupervised generative adversarial network that can be trained without paired images. Fu et al. [44] proposed LE-GAN, based on generative adversarial networks, using an attention module and an identity invariant loss. Xiong et al. [45] used decoupled networks for unsupervised low-light image enhancement. Han et al. [46] proposed an unsupervised dual-branch fusion low-light image enhancement algorithm.

2.2.3. Semi-Supervised Learning

In order to combine the advantages of supervised learning and unsupervised learning, semi-supervised learning was proposed. Yang et al. [47] proposed a low-light image enhancement method using a deep recursive band network (DRBN). Chen et al. [48] put forward a semi-supervised network framework (SSNF) to enhance low-light images. Malik and Soundararajan [49] proposed semi-supervised learning for low-light image restoration via quality-assisted pseudo-labeling.

2.2.4. Reinforcement Learning

Without paired training data, Yu et al. [50] proposed a method to enhance low-light images through reinforcement learning. Zhang et al. [51] presented a deep reinforcement learning method (ReLLIE) for customized low-light enhancement. Cotogni and Cusano [52] introduced a lightweight fully automatic and explainable method for low-light image enhancement.

2.2.5. Zero-Shot Learning

In order to make up for the shortcomings of supervised, reinforcement, and unsupervised learning methods, zero-shot learning methods were proposed. Zhang et al. [53] proposed a zero-shot scheme for backlit image restoration that does not rely on any prior image examples or prior training. Zhu et al. [54] proposed a three-branch convolutional neural network (RRDNet) for underexposed image restoration. Zhao et al. [55] combined Retinex-based and learning-based methods for low-light image enhancement. Liu et al. [56] proposed a Retinex-inspired unrolling framework with cooperative prior architecture search for low-light image enhancement. Zheng and Gupta [57] introduced a semantic-guided zero-shot low-light image enhancement network implemented through enhancement factor extraction, recurrent image enhancement, and unsupervised semantic segmentation. Gao et al. [58] proposed a low-light image enhancement method via Retinex-style decomposition of a denoised deep image prior. Xie et al. [59] proposed a zero-shot Retinex network (IRNet), composed of a Decom-Net and an Enhance-Net, to alleviate low brightness and low contrast. In addition, deep curve estimation networks based on image reconstruction have been proposed. Guo et al. [14] presented an approach that treats light enhancement as an image-specific curve estimation task using deep networks. Li et al. [60] proposed Zero-DCE++, a fast and lightweight version of Zero-DCE. The novel low-light image enhancement method proposed in this paper also belongs to the family of deep curve estimation networks.

3. Methodology

We present the framework of BézierCE in Figure 1. The Bézier Curve Estimation Network (BCE-Net) is devised to estimate the best fitting light enhancement curve given the input low-light image.

3.1. Decomposition

For a low-light image $L$, we first use a subnetwork, Decom-Net, to decompose the input image $L$ into reflectance $R$ and illumination $I$:
$(R, I) = \mathrm{DecomNet}(L).$
In practice, we use a CNN-based neural network to build the Decom-Net. Following prior knowledge in computer vision, we take the shallow layers' output as $R$ and the deep layers' output as $I$. We describe the network details in Table 1.
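For illustration, the following is a minimal PyTorch sketch of a Decom-Net following the layer layout in Table 1; the padding choices, activation placement, and the final split into R and I are our assumptions rather than the released implementation.

import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Sketch of the Decom-Net in Table 1 (padding choices are assumptions)."""
    def __init__(self):
        super().__init__()
        self.conv0 = nn.Conv2d(4, 64, kernel_size=9, padding=4)   # Cat(Input, Max) -> 64 channels
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(5)                                      # Conv1 ... Conv5
        ])
        self.conv6 = nn.Conv2d(64, 4, kernel_size=3, padding=1)   # 4 channels = R (3) + I (1)

    def forward(self, low):                                        # low: (B, 3, H, W) in [0, 1]
        max_c = low.max(dim=1, keepdim=True)[0]                    # channel-wise max, (B, 1, H, W)
        x = torch.relu(self.conv0(torch.cat([low, max_c], dim=1)))
        out = torch.sigmoid(self.conv6(self.body(x)))
        R, I = out[:, :3], out[:, 3:4]                             # reflectance and illumination
        return R, I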

3.2. Bézier Curve Estimation

The inspiration comes from the curve adjustment tools used in photo editing. We aim to design a parameter-controlled curve that automatically maps a low-brightness image to its enhanced version, where the adaptive curve parameters depend entirely on the input image. This curve design has three objectives: (1) each pixel value of the enhanced image should lie within the normalized range [0,1] to avoid information loss due to overflow or truncation; (2) the curve should be monotonic (unidirectional) to preserve the differences (contrast) between adjacent pixels; (3) the curve should belong to a class of parametric curves, and the control method should be as simple as possible.
As a parametric curve, the simplest choice in computational geometry is the Bézier curve. But how do we maintain monotonicity so that the differences (contrast) between adjacent pixels are preserved? To solve this problem, we design the curve to be controlled by $n$ parameters $\Delta_1, \Delta_2, \Delta_3, \ldots, \Delta_n$ that sum to 1, i.e., $\sum_i \Delta_i = 1$. These parameters implicitly define the control points of the Bézier curve:
$P_0 = (0, 0), \quad P_1 = \left(\tfrac{1}{n}, \Delta_1\right), \quad \ldots, \quad P_i = \left(\tfrac{i}{n}, \sum_{t=1}^{i} \Delta_t\right), \quad \ldots, \quad P_n = (1, 1).$
Then, the Bézier curve controlled by these control points can be formulated as
$P(t) = \sum_{i=0}^{n} C_n^i \, t^i (1-t)^{n-i} \, P_i, \quad t \in [0, 1].$
Based on this formulation, we design the Bézier Curve Estimation Network (BCE-Net) to estimate the Bézier curve parameters of each pixel, $\Delta_{x,y}^{t}$, where $\Delta_{x,y}^{t}$ denotes the parameter of the $t$-th control point for the pixel at position $(x, y)$. The adjusted illumination $\hat{I}$ can be formulated as
$\hat{I}(x, y) = \sum_{i=0}^{n} C_n^i \left( \sum_{t=1}^{i} \Delta_{x,y}^{t} \right) \big( I(x, y) \big)^{i} \big( 1 - I(x, y) \big)^{n-i},$
where $(I(x, y))^i$ denotes the $i$-th power of the illumination intensity $I$ at pixel coordinate $(x, y)$. The output enhanced image can then be expressed as
$H = \hat{I} \times R.$
Table 2 describes the network details.
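To make the per-pixel mapping concrete, the following is a minimal PyTorch sketch of the Bézier adjustment applied to the illumination map; here delta stands for the softmax output of BCE-Net (Table 2), and the function and variable names are ours.

import math
import torch

def bezier_illumination(I, delta):
    """Map the illumination I (B, 1, H, W) through the per-pixel Bézier curve.

    delta: (B, n, H, W), the softmax output of BCE-Net, so it sums to 1 over dim=1.
    The height of control point P_i is the cumulative sum of delta up to i (P_0 has height 0).
    """
    B, n, H, W = delta.shape
    heights = torch.cat([torch.zeros(B, 1, H, W, device=delta.device, dtype=delta.dtype),
                         torch.cumsum(delta, dim=1)], dim=1)            # (B, n+1, H, W)
    out = torch.zeros_like(I)
    for i in range(n + 1):
        bernstein = math.comb(n, i) * I.pow(i) * (1.0 - I).pow(n - i)   # Bernstein basis B_{i,n}(I)
        out = out + bernstein * heights[:, i:i + 1]
    return out

# enhanced = bezier_illumination(I, delta) * R   # H = adjusted illumination x reflectance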

3.3. Non-Reference Loss Functions

To enable zero-reference learning in BézierCE, we used Spatial Consistency Loss, Exposure Control Loss, Color Constancy Loss, and Illumination Smoothness Loss in our experiments. We briefly introduce these loss functions.

3.3.1. Spatial Consistency Loss

The spatial consistency loss ($L_{spa}$) encourages spatial coherence of the enhanced image by preserving the differences between adjacent regions in the input image and its enhanced version:
$L_{spa} = \frac{1}{K} \sum_{i=1}^{K} \sum_{j \in \Omega(i)} \left( \left| H_i - H_j \right| - \left| L_i - L_j \right| \right)^2,$
where $K$ denotes the number of local regions and $\Omega(i)$ denotes the four neighboring regions (top, down, left, and right) centered on region $i$. $H$ and $L$ denote the average intensity values of the local regions in the enhanced version and the input image, respectively. The size of the local region is empirically set to $4 \times 4$; the loss is stable for other region sizes as well.
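A simplified PyTorch sketch of this loss is shown below, assuming 4 × 4 average pooling for the region means and torch.roll for the four neighbours; border handling therefore differs slightly from a convolution-based implementation.

import torch
import torch.nn.functional as F

def spatial_consistency_loss(enhanced, low, region=4):
    """Sketch of L_spa: keep the contrast between neighbouring regions after enhancement."""
    H = F.avg_pool2d(enhanced.mean(dim=1, keepdim=True), region)   # region-average intensity, enhanced
    L = F.avg_pool2d(low.mean(dim=1, keepdim=True), region)        # region-average intensity, input
    loss = torch.zeros_like(H)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:              # top, down, left, right neighbours
        dH = (H - torch.roll(H, shifts=(dy, dx), dims=(2, 3))).abs()
        dL = (L - torch.roll(L, shifts=(dy, dx), dims=(2, 3))).abs()
        loss = loss + (dH - dL).pow(2)
    return loss.mean()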

3.3.2. Exposure Control Loss

In order to restrain underexposed/overexposed areas, the exposure control loss ($L_{exp}$) is designed to control the exposure level. In the experiments, $E$ is set to 0.6, and the loss can be expressed as
$L_{exp} = \frac{1}{M} \sum_{k=1}^{M} \left| H_k - E \right|,$
where M denotes the count of nonoverlapping local regions with a size of 16 × 16 . H has the same meaning as in the spatial consistency loss.
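A minimal sketch of this loss, assuming the patch averages are computed with 16 × 16 average pooling:

import torch.nn.functional as F

def exposure_control_loss(enhanced, E=0.6, region=16):
    """Sketch of L_exp: pull the mean intensity of 16x16 patches towards the exposure level E."""
    patch_mean = F.avg_pool2d(enhanced.mean(dim=1, keepdim=True), region)
    return (patch_mean - E).abs().mean()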

3.3.3. Color Constancy Loss

In order to correct the potential color deviations in the enhanced image and establish the relationship between the three adjustment channels, the color constancy loss ( L c o l ) can be mathematically represented as
$L_{col} = \sum_{(m,n) \in \varepsilon} \left( J_m - J_n \right)^2, \quad \varepsilon = \{ (R, G), (R, B), (G, B) \},$
where $J_m$ represents the average intensity value of the $m$-th channel in the enhanced image, and $(m, n)$ denotes a pair of channels.
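A minimal sketch of this loss using the per-channel means of the enhanced image:

def color_constancy_loss(enhanced):
    """Sketch of L_col: the per-channel means should agree across the three RGB pairs."""
    mean_rgb = enhanced.mean(dim=(2, 3))                            # (B, 3) channel means
    r, g, b = mean_rgb[:, 0], mean_rgb[:, 1], mean_rgb[:, 2]
    return ((r - g).pow(2) + (r - b).pow(2) + (g - b).pow(2)).mean()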

3.3.4. Illumination Smoothness Loss

To maintain the smoothness and monotonicity relationships between adjacent pixels, an illumination smoothness loss ($L_{tv}$) is added for each curve parameter map $A$, defined as
$L_{tv} = \frac{1}{N} \sum_{n=1}^{N} \sum_{c \in \xi} \left( \left| \nabla_x A_n^c \right| + \left| \nabla_y A_n^c \right| \right)^2, \quad \xi = \{ R, G, B \},$
where $N$ denotes the number of iterations, and $\nabla_x$ and $\nabla_y$ denote the horizontal and vertical gradient operations, respectively.
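A sketch of this smoothness penalty on the curve-parameter maps; for simplicity it averages squared horizontal and vertical differences, a common total-variation variant rather than a literal transcription of the formula above.

def illumination_smoothness_loss(param_maps):
    """Sketch of L_tv: penalize large gradients in the curve-parameter maps."""
    dx = param_maps[:, :, :, 1:] - param_maps[:, :, :, :-1]        # horizontal differences
    dy = param_maps[:, :, 1:, :] - param_maps[:, :, :-1, :]        # vertical differences
    return dx.pow(2).mean() + dy.pow(2).mean()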

3.3.5. Total Loss

The total loss is the combination of these four loss functions:
$L = \lambda_{spa} L_{spa} + \lambda_{exp} L_{exp} + \lambda_{col} L_{col} + \lambda_{tv} L_{tv},$
where $\lambda_{spa}$, $\lambda_{exp}$, $\lambda_{col}$, and $\lambda_{tv}$ are the weights of the four losses.
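Putting the sketched loss functions above together with the weights reported in Section 4.1:

def total_loss(enhanced, low, param_maps,
               w_spa=1.0, w_exp=10.0, w_col=5.0, w_tv=200.0):
    """Weighted sum of the four non-reference losses (weights as in Section 4.1)."""
    return (w_spa * spatial_consistency_loss(enhanced, low)
            + w_exp * exposure_control_loss(enhanced)
            + w_col * color_constancy_loss(enhanced)
            + w_tv * illumination_smoothness_loss(param_maps))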

4. Experiment

In this section, we present the performance results of our approach and five representative methods on five public low-light image datasets and also introduce the experimental setup and performance metrics.

4.1. Training Setting

To take full advantage of the capability of wide dynamic range adjustment, both low-light and over-exposed images were included in the training set, and the training procedure was consistent with reference [14]. BCE-Net was trained using 360 multi-exposure sequences from Part 1 of the SICE dataset [61]. The 3022 images of different exposure levels in the Part 1 subset [61] were randomly divided into two parts: 2422 images were used for training, and the rest were used for validation. Prior to training, the images were resized to 512 × 512 pixels.
We implemented our framework in PyTorch on an NVIDIA 2080Ti GPU. The batch size and the weights λspa, λexp, λcol, and λtv were set to 8, 1, 10, 5, and 200, respectively. Biases were initialized to a constant value, and the filter weights of each layer were initialized with a zero-mean Gaussian with a standard deviation of 0.02. Optimization was performed using the Adam optimizer with default parameters and a fixed learning rate of η = 0.0001.
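For reference, a sketch of the initialization and optimizer configuration described above; the constant bias value (0.0) and the way the two subnetworks are composed are our assumptions.

import torch.nn as nn
import torch.optim as optim

def init_weights(m):
    """Zero-mean Gaussian (std 0.02) for conv filters, constant bias (0.0 assumed)."""
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.constant_(m.bias, 0.0)

# decom_net, bce_net = DecomNet(), BCENet()   # BCENet: hypothetical module following Table 2
# for net in (decom_net, bce_net):
#     net.apply(init_weights)
# optimizer = optim.Adam(list(decom_net.parameters()) + list(bce_net.parameters()), lr=1e-4)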

4.2. Performance Criteria

In addition to assessing the experimental results by visual observation, we also used the following no-reference evaluation indicators.
Natural image quality evaluator (NIQE). The NIQE evaluation index is biased towards the evaluation of image naturalness, clarity, and noise [62]. A lower NIQE score indicates that the naturalness of the enhanced image is better preserved.
Colorfulness-based Patch-based Contrast Quality Index (CPCQI). CPCQI is a color-based contrast quality evaluation index [63]. A larger CPCQI value indicates a better enhancement effect.

4.3. Results

In this subsection, we compare our low-light image enhancement method with five representative methods: LIME [64], NPE [65], SRIE [66], KinD [39], and Zero-DCE [14] (three conventional methods and two deep learning methods). The enhancement effects of these methods are compared qualitatively and quantitatively on five low-light image datasets, namely DICM [21], LIME [64], MEF [67], NPE [65], and VV (https://sites.google.com/site/vonikakis/datasets, accessed on 5 September 2023).

4.3.1. Qualitative Evaluation

In order to compare the performance of the different methods more intuitively, we provide visualizations of the enhanced images in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. The boxes in the figures mark enlarged local areas to better show the differences. As can be seen in Figure 2, the brightness enhancement of the NPE and SRIE methods is not obvious, while the local over-brightness of the KinD, LIME, and Zero-DCE methods reduces the naturalness of the image and causes color distortion. As shown in Figure 3, the LIME and Zero-DCE results are overexposed, KinD shows color distortion, and the NPE and SRIE enhancement effects are not obvious. As can be seen from the circular structure of the roof in Figure 4, the image becomes blurred after enhancement by the KinD and NPE methods; the enhancement effect of SRIE is not obvious, and the colors are distorted by LIME and Zero-DCE. Corresponding conclusions can be drawn from Figure 5 and Figure 6. Overall, our method effectively enhances low-light images: the enhanced images have better naturalness and contrast, without overexposure or artifacts.

4.3.2. Quantitative Comparison

To verify the performance of the proposed algorithm, LIME, NPE, SRIE, KinD, and Zero-DCE were used as comparison methods. LIME, NPE, and SRIE are conventional methods, KinD is a supervised learning method, and Zero-DCE is a zero-shot learning method. The NIQE scores of the different methods on the five datasets are shown in Table 3, and the CPCQI scores are shown in Table 4. The red and magenta scores represent the top two results in the corresponding dataset, respectively. Our method achieved the best NIQE results on the MEF and NPE datasets and the second-best result on LIME. Except for the DICM dataset, the NIQE results of our method were better than those of Zero-DCE, and they were better than those of KinD on all datasets. Our method achieved the second-best CPCQI results on the LIME and MEF datasets, and its CPCQI results outperformed the two deep learning methods (KinD and Zero-DCE) on all datasets. It is worth noting that, as a zero-shot learning method, our method was almost always better than Zero-DCE. This shows that the images enhanced by our method maintain better naturalness and contrast.
As shown in Table 5 and Table 6, we also compared the variance in the NIQE and CPCQI for different methods. Our method also performed well in the variance comparisons.
We also calculated the relMSE between the low-light images and the enhanced images, as shown in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. RelMSE (relative mean square error) measures how well an enhancement algorithm reconstructs the original image. This metric shows that our method achieves a better image reconstruction quality than Zero-DCE; its relMSE is slightly larger than that of SRIE and smaller than those of the other methods in most cases. Combined with the visual results, this indicates that we do not over-enhance originally dark areas, and bright areas are not overexposed.

4.4. Ablation Study

We analyzed our approach by comparing models with varying numbers of control points, as shown in Figure 7. We observe that approximately five control points yield good performance on the different datasets. As the number of control points increases, the NIQE decreases on the different datasets, which is consistent with the intuition that more control points should lead to better model performance. We therefore chose five control points for the experiments. The NIQE increases slightly on the VV dataset because many of its images are overexposed.

4.5. Time Analysis

As shown in Table 7, we compared the running time of the different methods for five input image sizes. Our approach is the most efficient across all input sizes. Unlike conventional methods such as LIME, NPE, and SRIE, the running time of our method changes very little as the image resolution increases. Compared with KinD, the memory consumption of our method does not increase significantly with the image resolution.
We also compared the inference time of our method based on different numbers of control points. As shown in Figure 8, as the number of control points increases, the inference time of our method does not increase significantly. In Figure 8, the blue solid line is the inference time, and the blue dotted line is the trend.

5. Conclusions

In this paper, we proposed a novel method called BézierCE, which builds upon the Zero-DCE algorithm by introducing control points to manipulate the curve at different locations. To determine the parameters of these control points, we employed a neural network with a U-Net architecture to approximate their positions, allowing us to generate a Bézier curve from these control points. In our experiments, we observed significant improvements in mitigating overexposure compared to Zero-DCE. In addition, unlike the iterative adjustments in Zero-DCE, our method offers faster processing during testing because we adjust the curve parameters in a single regression step.
However, our approach has certain limitations. Compared to Retinex-based methods, our approach determines the gamma value for individual pixels based on prior information from the entire image. We acknowledge that our method currently struggles in noisy situations, and we plan to address this limitation in future work to improve its performance. In addition, the brightness enhancement may not be pronounced enough, especially when the global illumination is very dim; addressing this issue may require incorporating constraints based on our experience with dark images. Furthermore, we are also considering block-wise adjustments for images, which will be a focus of our future work.

Author Contributions

Conceptualization, X.G. and J.L.; methodology, X.G. and J.L.; validation, X.G., K.Z., and L.H.; resources, X.G.; writing—original draft preparation, X.G.; writing—review and editing, K.Z., L.H., and J.L.; supervision, K.Z. and L.H.; project administration, X.G.; funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (12101378), the Shanxi Provincial Research Foundation for Basic Research, China (20210302124548), and the Project of Science and Technology Innovation Fund of Shanxi Agricultural University (2021BQ10).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We are grateful to the anonymous reviewers and the editors for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shyni, H.M.; Chitra, E. A comparative study of X-ray and ct images in COVID-19 detection using image processing and deep learning techniques. Comput. Methods Programs Biomed. Update 2022, 2, 100054. [Google Scholar]
  2. Hu, H.; Zhang, Z.; Xie, Z.; Lin, S. Local relation networks for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3464–3473. [Google Scholar]
  3. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  4. Buch, N.; Velastin, S.A.; Orwell, J. A review of computer vision techniques for the analysis of urban traffic. IEEE Trans. Intell. Transp. Syst. 2011, 12, 920–939. [Google Scholar] [CrossRef]
  5. Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
  6. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  7. Cui, Z.; Qi, G.J.; Gu, L.; You, S.; Zhang, Z.; Harada, T. Multitask aet with orthogonal tangent regularity for dark object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 2553–2562. [Google Scholar]
  8. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 81–88. [Google Scholar]
  9. Su, Y.; Wang, J.; Wang, X.; Hu, L.; Yao, Y.; Shou, W.; Li, D. Zero-reference deep learning for low-light image enhancement of underground utilities 3d reconstruction. Autom. Constr. 2023, 152, 104930. [Google Scholar] [CrossRef]
  10. Li, G.; Yang, Y.; Qu, X.; Cao, D.; Li, K. A deep learning based image enhancement approach for autonomous driving at night. Knowl. Based Syst. 2021, 213, 106617. [Google Scholar] [CrossRef]
  11. Ai, S.; Kwon, J. Extreme low-light image enhancement for surveillance cameras using attention u-net. Sensors 2020, 20, 495. [Google Scholar] [CrossRef]
  12. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
  13. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  14. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar]
  15. Ibrahim, H.; Kong, N.S.P. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758. [Google Scholar] [CrossRef]
  16. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
  17. Arici, T.; Dikbas, S.; Altunbasak, Y. A histogram modification framework and its application for image contrast enhancement. IEEE Trans. Image Process. 2009, 18, 1921–1935. [Google Scholar] [CrossRef] [PubMed]
  18. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; Romeny, B.T.H.; Zimmerman, J.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  19. Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200. [Google Scholar] [CrossRef] [PubMed]
  20. Celik, T.; Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441. [Google Scholar] [CrossRef] [PubMed]
  21. Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2d histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef]
  22. Chen, S.D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319. [Google Scholar] [CrossRef]
  23. Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8. [Google Scholar]
  24. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef]
  25. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  26. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef]
  27. Kimmel, R.; Elad, M.; Shaked, D.; Keshet, R.; Sobel, I. A variational framework for retinex. Int. J. Comput. Vis. 2003, 52, 7–23. [Google Scholar] [CrossRef]
  28. Fu, X.; Zeng, D.; Huang, Y.; Ding, X.; Zhang, X.P. A variational framework for single low light image enhancement using bright channel prior. In Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 1085–1088. [Google Scholar]
  29. Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184. [Google Scholar] [CrossRef]
  30. Fu, G.; Duan, L.; Xiao, C. A hybrid L2-Lp variational model for single low-light image enhancement with bright channel prior. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, China, 22–25 September 2019; pp. 1925–1929. [Google Scholar]
  31. Zhang, Y.; Di, X.; Zhang, B.; Wang, C. Self-supervised image enhancement network: Training with low light images only. arXiv 2020, arXiv:2002.11300. [Google Scholar]
  32. Lore, K.G.; Akintayo, A.; Sarkar, S. LLnet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recogn. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  33. Tao, L.; Zhu, C.; Xiang, G.; Li, Y.; Jia, H.; Xie, X. LLcnn: A convolutional neural network for low-light image enhancement. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar]
  34. Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. Dslr-quality photos on mobile devices with deep convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3277–3285. [Google Scholar]
  35. Shen, L.; Yue, Z.; Feng, F.; Chen, Q.; Liu, S.; Ma, J. Msr-net: Low-light image enhancement using deep convolutional network. arXiv 2017, arXiv:1711.02488. [Google Scholar]
  36. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
  37. Li, C.; Guo, J.; Porikli, F.; Pang, Y. Lightennet: A convolutional neural network for weakly illuminated image enhancement. Pattern Recogn. Lett. 2018, 104, 15–22. [Google Scholar] [CrossRef]
  38. Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–19 June 2019; pp. 6849–6857. [Google Scholar]
  39. Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
  40. Zhu, M.; Pan, P.; Chen, W.; Yang, Y. Eemefn: Low-light image enhancement via edge-enhanced multi-exposure fusion network. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13106–13113. [Google Scholar]
  41. Li, J.; Li, J.; Fang, F.; Li, F.; Zhang, G. Luminance-aware pyramid network for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 3153–3165. [Google Scholar] [CrossRef]
  42. Wang, L.W.; Liu, Z.S.; Siu, W.C.; Lun, D.P. Lightening network for low-light image enhancement. IEEE Trans. Image Process. 2020, 29, 7984–7996. [Google Scholar] [CrossRef]
  43. Li, J.; Feng, X.; Hua, Z. Low-light image enhancement via progressive-recursive network. IEEE Trans. Circ. Syst. Video Technol. 2021, 31, 4227–4240. [Google Scholar] [CrossRef]
  44. Fu, Y.; Hong, Y.; Chen, L.; You, S. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl. Based Syst. 2022, 240, 108010. [Google Scholar] [CrossRef]
  45. Xiong, W.; Liu, D.; Shen, X.; Fang, C.; Luo, J. Unsupervised low-light image enhancement with decoupled networks. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022; pp. 457–463. [Google Scholar]
  46. Han, G.; Zhou, Y.; Zeng, F. Unsupervised learning based dual-branch fusion low-light image enhancement. Multimed. Tools Appl. 2023, 82, 37593–37614. [Google Scholar] [CrossRef]
  47. Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3063–3072. [Google Scholar]
  48. Chen, J.; Wang, Y.; Han, Y. A semi-supervised network framework for low-light image enhancement. Eng. Appl. Artif. Intell. 2023, 126, 107003. [Google Scholar] [CrossRef]
  49. Malik, S.; Soundararajan, R. Semi-supervised learning for low-light image restoration through quality assisted pseudo-labeling. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4105–4114. [Google Scholar]
  50. Yu, R.; Liu, W.; Zhang, Y.; Qu, Z.; Zhao, D.; Zhang, B. Deepexposure: Learning to expose photos with asynchronously reinforced adversarial learning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 2153–2163. [Google Scholar]
  51. Zhang, R.; Guo, L.; Huang, S.; Wen, B. Rellie: Deep reinforcement learning for customized low-light image enhancement. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual, 20–24 October 2021; pp. 2429–2437. [Google Scholar]
  52. Cotogni, M.; Cusano, C. Treenhance: A tree search method for low-light image enhancement. Pattern Recogn. 2023, 136, 109249. [Google Scholar] [CrossRef]
  53. Zhang, L.; Zhang, L.; Liu, X.; Shen, Y.; Zhang, S.; Zhao, S. Zero-shot restoration of back-lit images using deep internal learning. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1623–1631. [Google Scholar]
  54. Zhu, A.; Zhang, L.; Shen, Y.; Ma, Y.; Zhao, S.; Zhou, Y. Zero-shot restoration of underexposed images via robust retinex decomposition. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
  55. Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. Retinexdip: A unified deep framework for low-light image enhancement. IEEE Trans. Circ. Syst. Video Technol. 2021, 32, 1076–1088. [Google Scholar] [CrossRef]
  56. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar]
  57. Zheng, S.; Gupta, G. Semantic-guided zero-shot learning for low-light image/video enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 581–590. [Google Scholar]
  58. Gao, X.; Zhang, M.; Luo, J. Low-light image enhancement via retinex-style decomposition of denoised deep image prior. Sensors 2022, 22, 5593. [Google Scholar] [CrossRef]
  59. Xie, C.; Tang, H.; Fei, L.; Zhu, H.; Hu, Y. IRNet: An improved zero-shot retinex network for low-light image enhancement. Electronics 2023, 12, 3162. [Google Scholar] [CrossRef]
  60. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
  61. Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef]
  62. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  63. Gu, K.; Tao, D.; Qiao, J.F.; Lin, W. Learning a no-reference quality assessment model of enhanced images with big data. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 1301–1313. [Google Scholar] [CrossRef]
  64. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
  65. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef] [PubMed]
  66. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790. [Google Scholar]
  67. Lee, C.; Lee, C.; Lee, Y.Y.; Kim, C.S. Power-constrained contrast enhancement for emissive displays based on histogram equalization. IEEE Trans. Image Process. 2011, 21, 80–93. [Google Scholar] [PubMed]
Figure 1. Overview of our method.
Figure 2. Comparisons of enhanced images on the DICM dataset.
Figure 3. Comparisons of enhanced images on the LIME dataset.
Figure 4. Comparisons of enhanced images on the MEF dataset.
Figure 5. Comparisons of enhanced images on the NPE dataset.
Figure 6. Comparisons of enhanced images on the VV dataset.
Figure 7. NIQE comparisons of different numbers of control points.
Figure 8. Inference time comparisons of different numbers of control points.
Table 1. Decom-Net architecture.
Layer | Params      | Input Dim | Output Dim             | Activation Function | Input Layer
Max   | -           | (3,H,W)   | (1,H,W)                | -                   | Input
Conv0 | (4,64,9,9)  | (4,H,W)   | (64,H,W)               | ReLU                | Cat(Input, Max)
Conv1 | (64,64,3,3) | (64,H,W)  | (64,H,W)               | ReLU                | Conv0
Conv2 | (64,64,3,3) | (64,H,W)  | (64,H,W)               | ReLU                | Conv1
Conv3 | (64,64,3,3) | (64,H,W)  | (64,H,W)               | ReLU                | Conv2
Conv4 | (64,64,3,3) | (64,H,W)  | (64,H,W)               | ReLU                | Conv3
Conv5 | (64,64,3,3) | (64,H,W)  | (64,H,W)               | ReLU                | Conv4
Conv6 | (64,4,3,3)  | (64,H,W)  | (4,H,W)                | Sigmoid             | Conv5
Split | -           | (4,H,W)   | R: (3,H,W); I: (1,H,W) | -                   | Conv6
Table 2. BCE-Net architecture.
Layer   | Params      | Input Dim | Output Dim | Activation Function | Input Layer
Conv0   | (1,32,3,3)  | (1,H,W)   | (32,H,W)   | ReLU                | Input
Conv1   | (32,32,3,3) | (32,H,W)  | (32,H,W)   | ReLU                | Conv0
Conv2   | (32,32,3,3) | (32,H,W)  | (32,H,W)   | ReLU                | Conv1
Conv3   | (32,32,3,3) | (32,H,W)  | (32,H,W)   | ReLU                | Conv2
Conv4   | (64,32,3,3) | (64,H,W)  | (32,H,W)   | ReLU                | Cat(Conv2, Conv3)
Conv5   | (64,32,3,3) | (64,H,W)  | (32,H,W)   | ReLU                | Cat(Conv1, Conv4)
Conv6   | (64,t,3,3)  | (64,H,W)  | (t,H,W)    | -                   | Cat(Conv0, Conv5)
Softmax | -           | -         | -          | -                   | Conv6
Table 3. Comparison of the average NIQE on five datasets.
Learning | Method   | DICM   | LIME   | MEF    | NPE    | VV
CM       | LIME     | 3.5360 | 4.1423 | 3.7022 | 4.2625 | 2.7475
CM       | NPE      | 3.4530 | 3.9031 | 3.5155 | 3.9501 | 3.0290
CM       | SRIE     | 3.5768 | 3.7868 | 3.4742 | 3.9883 | 3.1357
SL       | KinD     | 4.2691 | 4.3525 | 4.1318 | 3.9589 | 3.4255
ZSL      | Zero-DCE | 3.6091 | 3.9354 | 3.4044 | 4.0944 | 3.2245
ZSL      | Ours     | 3.6334 | 3.8553 | 3.3939 | 3.9021 | 3.1680
Table 4. Comparison of the average CPCQI on five datasets.
Learning | Method   | DICM   | LIME   | MEF    | NPE    | VV
CM       | LIME     | 0.8986 | 1.0882 | 1.0385 | 0.9844 | 0.9555
CM       | NPE      | 0.9139 | 1.0812 | 1.0372 | 1.0228 | 0.9557
CM       | SRIE     | 0.9056 | 1.1121 | 1.0967 | 1.0258 | 0.9629
SL       | KinD     | 0.7459 | 0.8336 | 0.7877 | 0.8007 | 0.7418
ZSL      | Zero-DCE | 0.7818 | 0.9803 | 0.9461 | 0.8578 | 0.8396
ZSL      | Ours     | 0.8591 | 1.1016 | 1.0544 | 1.0135 | 0.9402
Table 5. Comparison of the variance of the NIQE on five datasets.
Learning | Method   | DICM   | LIME   | MEF    | NPE    | VV
CM       | LIME     | 1.6156 | 5.7242 | 0.8649 | 1.5840 | 0.4898
CM       | NPE      | 1.8238 | 3.8316 | 1.2291 | 1.5272 | 0.5843
CM       | SRIE     | 1.7111 | 2.6850 | 0.8935 | 1.0207 | 0.7014
SL       | KinD     | 1.2968 | 2.9960 | 0.5795 | 1.9315 | 0.5780
ZSL      | Zero-DCE | 2.1590 | 5.1292 | 1.0877 | 0.9627 | 0.5733
ZSL      | Ours     | 2.2209 | 2.5713 | 1.0417 | 1.1559 | 0.7503
Table 6. Comparison of the variance of the CPCQI on five datasets.
Learning | Method   | DICM   | LIME   | MEF    | NPE    | VV
CM       | LIME     | 0.0117 | 0.0138 | 0.0097 | 0.0040 | 0.0043
CM       | NPE      | 0.0064 | 0.0210 | 0.0154 | 0.0062 | 0.0045
CM       | SRIE     | 0.0034 | 0.0123 | 0.0082 | 0.0112 | 0.0052
SL       | KinD     | 0.0061 | 0.0153 | 0.0043 | 0.0093 | 0.0032
ZSL      | Zero-DCE | 0.0150 | 0.0209 | 0.0135 | 0.0108 | 0.0102
ZSL      | Ours     | 0.0022 | 0.0091 | 0.0035 | 0.0043 | 0.0032
Table 7. Runtime (RT) comparisons (in seconds).
Method   | 640 × 480 | 1280 × 960 | 1920 × 1440 | 2560 × 1920 | 3200 × 2400
LIME     | 0.1133    | 0.4196     | 1.0148      | 1.5713      | 2.3901
NPE      | 5.8861    | 26.6340    | 58.5019     | 104.8345    | 163.9938
SRIE     | 4.7643    | 33.6684    | 121.5802    | 343.9839    | 726.5981
KinD     | 0.1554    | 0.0464     | -           | -           | -
Zero-DCE | 0.12559   | 0.1390     | 0.2539      | 0.4051      | 0.83371
Ours     | 0.0301    | 0.0325     | 0.0724      | 0.1192      | 0.1882