Study on a Low-Illumination Enhancement Method for Online Monitoring Images Considering Multiple-Exposure Image Sequence Fusion
Abstract
1. Introduction
- First, unlike [11,12,13], principal component analysis (PCA) is used to fuse the images processed with logarithmic and non-complex exponential functions, because PCA expands the distinctive features of both images while preserving the features they share, which avoids the feature loss caused by directly weighting the two images. A weight based on the image's overall average lightness and its gradients raises the overall lightness while also boosting local lightness and restoring the details of the source image.
- Second, unlike [14,15], a fast guided filter is used to suppress the noise that appears in the processed image and to obtain a higher-quality output, because we find that applying this filter effectively suppresses the noise introduced during low-illumination image processing.
- Finally, we apply the algorithm to power-industry monitoring and obtain effective results: it improves the overall lightness of the image while keeping the restored image faithful to the scene.
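The fast guided filter named in the second contribution is an edge-preserving smoother. As a rough illustration of the underlying (non-subsampled) guided filter, the NumPy sketch below is our own; the function names and the default `r`/`eps` values are assumptions, not the paper's settings:

```python
import numpy as np

def _box1d(a, r, axis):
    """Sliding-window sum of radius r along one axis, via cumulative sums."""
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0]
    cum = np.cumsum(a, axis=0)
    out = np.empty_like(cum)
    out[:r + 1] = cum[r:2 * r + 1]                        # windows clipped at the top edge
    out[r + 1:n - r] = cum[2 * r + 1:] - cum[:n - 2 * r - 1]
    out[n - r:] = cum[-1] - cum[n - 2 * r - 1:n - r - 1]  # windows clipped at the bottom edge
    return np.moveaxis(out, 0, axis)

def box_filter(img, r):
    """2-D sliding-window sum with radius r."""
    return _box1d(_box1d(img.astype(np.float64), r, 0), r, 1)

def guided_filter(guide, src, r=8, eps=1e-3):
    """Edge-preserving smoothing of src, steered by guide."""
    N = box_filter(np.ones_like(guide, dtype=np.float64), r)  # true window sizes (handles borders)
    mean_I = box_filter(guide, r) / N
    mean_p = box_filter(src, r) / N
    cov_Ip = box_filter(guide * src, r) / N - mean_I * mean_p
    var_I = box_filter(guide * guide, r) / N - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # eps controls how strongly flat regions are smoothed
    b = mean_p - a * mean_I
    mean_a = box_filter(a, r) / N
    mean_b = box_filter(b, r) / N
    return mean_a * guide + mean_b
```

Using the noisy image as its own guide denoises it while keeping edges; a larger `eps` smooths more aggressively.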
2. Materials and Methods
2.1. Proposed Framework
2.2. Image Multi-Exposure
- 1. Construct an n-dimensional vector matrix X from the two source images I1 and I2.
- 2. Calculate the covariance matrix C of the data matrix X.
- 3. Calculate the eigenvalues λ1 ≥ λ2 ≥ … ≥ λn of the covariance matrix C, and the corresponding eigenvectors ξ1, ξ2, …, ξn.
- 4. Select the largest eigenvalue λ1, and calculate the weight coefficients w1 and w2 from the components of its corresponding eigenvector ξ1, normalized so that w1 + w2 = 1.
- 5. Based on the above, fuse the images I1 and I2: F = w1·I1 + w2·I2. The fused image F is obtained, which contains the features of images I1 and I2 while retaining the similarity between the two, solving the problem of missing features caused by direct weighting of the two images. However, the lightness of F after PCA fusion is still dark, and the lightness of the image needs to be enhanced.
2.3. Image Fusion
2.3.1. Weight Design Based on the Average Luminance
2.3.2. Weight Design Based on the Global Gradient
2.3.3. Fusion Based on Pyramid
Algorithm 1
Input: input image I
Do
Output: enhanced image Io
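The weighted multi-scale blend behind Sections 2.3.1–2.3.3 can be sketched as Mertens-style pyramid fusion: Laplacian pyramids of the exposures are combined under Gaussian pyramids of the weight maps. Everything below (helper names, the 5-tap kernel, the level count) is an illustrative assumption, not the paper's exact implementation:

```python
import numpy as np

_K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # 5-tap binomial (Gaussian-like) kernel

def _blur(img):
    """Separable 5-tap blur with edge padding."""
    h, w = img.shape
    p = np.pad(img, 2, mode="edge")
    v = sum(_K[i] * p[i:i + h, :] for i in range(5))      # vertical pass
    return sum(_K[j] * v[:, j:j + w] for j in range(5))   # horizontal pass

def _down(img):
    return _blur(img)[::2, ::2]

def _up(img, shape):
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return _blur(big)

def pyramid_fuse(imgs, weights, levels=4):
    """Blend exposures: Laplacian pyramids of images, Gaussian pyramids of weights."""
    total = sum(weights)
    weights = [w / (total + 1e-12) for w in weights]       # per-pixel normalization
    blended = None
    for img, w in zip(imgs, weights):
        gp_i, gp_w = [img], [w]
        for _ in range(levels - 1):
            gp_i.append(_down(gp_i[-1]))
            gp_w.append(_down(gp_w[-1]))
        # Laplacian pyramid: detail bands plus the coarsest Gaussian level.
        lp = [gp_i[k] - _up(gp_i[k + 1], gp_i[k].shape) for k in range(levels - 1)]
        lp.append(gp_i[-1])
        contrib = [gp_w[k] * lp[k] for k in range(levels)]
        blended = contrib if blended is None else [b + c for b, c in zip(blended, contrib)]
    # Collapse the blended pyramid back to a full-resolution image.
    out = blended[-1]
    for k in range(levels - 2, -1, -1):
        out = _up(out, blended[k].shape) + blended[k]
    return out
```

Blending in the pyramid domain avoids the seams and halos that per-pixel weighted averaging produces at strong edges.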
3. Experimental Results and Analysis
3.1. Dataset and Experimental Environment
3.2. Comparison with Others
3.2.1. Processing Extremely Dark Images
3.2.2. Processing Low-Light Natural Images
3.3. Ablation Experiment
3.4. Application in Electric Power Equipment
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yudita, S.I.; Mantoro, T.; Ayu, M.A. Deep Face Recognition for Imperfect Human Face Images on Social Media using the CNN Method. In Proceedings of the 4th International Conference of Computer and Informatics Engineering, IC2IE, Beijing, China, 23–24 August 2021; pp. 412–417.
- Paiva, S.; Santos, D.; Rossetti, R.J.F. A Methodological Approach for Inferring Urban Indicators Through Computer Vision. In Proceedings of the 4th IEEE International Smart Cities Conference, ISC2, Kansas City, MO, USA, 16–19 September 2018; pp. 1–7.
- Sun, Y.; Zhai, X.; He, Y.; Sun, Y.; Xing, Y.; Li, L. Research and Development of Integral Test System for Transformer Calibrator Based on Machine Vision and Servo Control. In Proceedings of the 2nd IEEE Conference on Energy Internet and Energy System Integration, EI2, Beijing, China, 21 October 2018; pp. 1–5.
- Sharma, M.; Sarma, K.K.; Mastorakis, N. AE and SAE Based Aircraft Image Denoising. In Proceedings of the 25th International Conference on Mathematics and Computers in Sciences and Industry, MCSI, Corfu, Greece, 25–27 August 2018; pp. 81–85.
- Park, S.; Kim, K.; Yu, S.; Paik, J. Contrast Enhancement for Low-light Image Enhancement: A Survey. IEEE Trans. Smart Process. Comput. 2018, 13, 36–48.
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Dong, X.; Wang, G.; Pang, Y.; Li, W.; Wen, J.; Meng, W.; Lu, Y. Fast Efficient Algorithm for Enhancement of Low Lighting Video. In Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6.
- Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.; Ding, X. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Las Vegas, NV, USA, 26–30 June 2016; pp. 2782–2790.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Wang, L.; Fu, G.; Jiang, Z.; Ju, G.; Men, A. Low-Light Image Enhancement with Attention and Multi-Level Feature Fusion. In Proceedings of the 5th IEEE International Conference on Multimedia & Expo Workshops, ICMEW, Shanghai, China, 8–12 July 2019; pp. 276–281.
- Le, S.-H.; Li, H. Fused logarithmic transform for contrast enhancement. Electron. Lett. 2008, 44, 19–20.
- Yamakawa, M.; Sugita, Y. Image enhancement using Retinex and image fusion techniques. Electron. Commun. Jpn. 2018, 8, 52–62.
- Wang, W.; Chen, Z.; Yuan, X.; Wu, X. Adaptive image enhancement method for correcting low-illumination images. Inf. Sci. 2019, 496, 25–41.
- Celebi, A.T.; Duvar, R.; Urhan, O. Fuzzy fusion based high dynamic range imaging using adaptive histogram separation. IEEE Trans. Consum. Electron. 2015, 61, 119–127.
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
- Wang, S.; Luo, G. Naturalness Preserved Image Enhancement Using a priori Multi-Layer Lightness Statistics. IEEE Trans. Image Process. 2018, 27, 938–948.
- Ma, K.; Li, H.; Yong, H.; Wang, Z.; Meng, D.; Zhang, L. Robust multi-exposure image fusion: A structural patch decomposition approach. IEEE Trans. Image Process. 2017, 26, 2519–2532.
- Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-Exposure Image Fusion by Optimizing A Structural Similarity Index. IEEE Trans. Comput. Imaging 2018, 4, 60–72.
- Ghosh, S.; Chaudhury, K.N. Fast Bright-Pass Bilateral Filtering for Low-Light Enhancement. In Proceedings of the 26th International Conference on Image Processing, ICIP, Taipei, Taiwan, China, 22–25 September 2019; pp. 205–209.
- Ying, Z.; Ge, L.; Wen, G. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement. arXiv 2017, arXiv:1711.00591.
- Ren, Y.; Ying, Z.; Li, T.; Li, G. LECARM: Low-Light Image Enhancement Using the Camera Response Model. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 968–981.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560.
- Xu, K.; Yang, X.; Yin, B.; Lau, R.W.H. Learning to Restore Low-Light Images via Decomposition-and-Enhancement. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Seattle, WA, USA, 13–19 June 2020.
- Atoum, Y.; Ye, M.; Ren, L.; Tai, Y.; Liu, X. Color-wise Attention Network for Low-light Image Enhancement. arXiv 2019, arXiv:1911.08681.
- Kwon, D.; Kim, G.; Kwon, J. DALE: Dark Region-Aware Low-light Image Enhancement. arXiv 2020, arXiv:2008.12493.
- Łoza, A.; Bull, D.R.; Hill, P.R.; Achim, A.M. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Digit. Signal Process. 2013, 23, 1856–1866.
- Buchsbaum, G.; Gottschalk, A. Trichromacy, Opponent Colours Coding and Optimum Colour Information Transmission in the Retina. Proc. R. Soc. Lond. B 1983, 220, 89–113.
- Fischer, L.J.; Vaughan, D. The Beta-hyperbolic secant distribution. Aust. J. Stat. 2010, 3, 245–258.
- Nilsson, M. SMQT-based Tone Mapping Operators for High Dynamic Range Images. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISIGRAPP 2013)–Volume 1: VISAPP, Barcelona, Spain, 21–24 February 2013; pp. 61–68.
- Lee, S.H.; Park, J.S.; Cho, N.I. A Multi-Exposure Image Fusion Based on the Adaptive Weights Reflecting the Relative Pixel Intensity and Global Gradient. In Proceedings of the 25th IEEE International Conference on Image Processing, ICIP, Athens, Greece, 7–10 October 2018.
- Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography; Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 28 September 2009; pp. 161–171.
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
- Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A Patch-Structure Representation Method for Quality Assessment of Contrast Changed Images. IEEE Signal Process. Lett. 2015, 22, 2387–2390.
- Cui, G.; Feng, H.; Xu, Z.; Li, Q.; Chen, Y. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Opt. Commun. 2015, 341, 199–209.
- Eskicioglu, A.M.; Fisher, P.S. Image quality measures and their performance. IEEE Trans. Commun. 1995, 43, 2959–2965.
| Method | AG | EI | FD | SF | SH |
|---|---|---|---|---|---|
| Ours | 10.21 | 96.51 | 14.54 | 32.80 | 7.78 |
| BIMEF | 6.95 | 65.87 | 9.92 | 21.17 | 7.48 |
| RRM | 7.76 | 73.88 | 10.98 | 22.68 | 7.53 |
| LECARM | 9.11 | 86.60 | 12.98 | 28.00 | 7.51 |
| BPBF | 6.97 | 65.98 | 9.95 | 20.31 | 7.47 |
| MF | 8.08 | 76.08 | 11.62 | 24.46 | 7.58 |
| SRIE | 6.07 | 58.49 | 8.49 | 18.89 | 7.38 |
| NPE | 6.57 | 62.34 | 9.35 | 19.19 | 7.37 |

| Method | AG | EI | FD | SF | SH |
|---|---|---|---|---|---|
| Ours | 10.41 | 105.52 | 13.19 | 28.22 | 7.78 |
| BIMEF | 7.80 | 80.61 | 9.64 | 21.76 | 7.28 |
| RRM | 8.32 | 88.03 | 10.85 | 22.39 | 7.48 |
| LECARM | 10.34 | 106.32 | 12.87 | 28.06 | 7.43 |
| BPBF | 8.32 | 85.13 | 10.44 | 21.14 | 7.71 |
| MF | 9.57 | 97.82 | 12.03 | 25.34 | 7.82 |
| SRIE | 7.82 | 81.66 | 9.44 | 22.36 | 7.62 |
| NPE | 8.57 | 87.65 | 10.79 | 22.53 | 7.59 |

| Method | AG | EI | FD | SF | SH |
|---|---|---|---|---|---|
| Ours | 6.99 | 74.36 | 8.25 | 20.79 | 7.39 |
| BIMEF | 4.11 | 43.47 | 4.88 | 11.88 | 6.48 |
| RRM | 6.26 | 66.48 | 7.36 | 17.98 | 6.55 |
| LECARM | 4.89 | 43.60 | 4.79 | 12.68 | 6.38 |
| BPBF | 4.22 | 44.77 | 4.97 | 11.50 | 6.26 |
| MF | 5.49 | 58.08 | 6.51 | 15.67 | 6.58 |
| SRIE | 3.33 | 35.47 | 3.89 | 10.26 | 6.07 |
| NPE | 5.54 | 58.83 | 6.54 | 15.50 | 6.51 |

| Experiment | Ours | Ex_I | Ex_II |
|---|---|---|---|
| PSNR | 31.09 | 29.24 | 29.79 |
| SSIM | 0.74 | 0.74 | 0.71 |
| PCQI | 0.62 | 0.60 | 0.59 |
| AG | 9.37 | 5.82 | 7.67 |
| EI | 76.03 | 56.02 | 67.29 |
| SH | 7.31 | 7.26 | 7.11 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhao, W.; Jiang, C.; An, Y.; Yan, X.; Dai, C. Study on a Low-Illumination Enhancement Method for Online Monitoring Images Considering Multiple-Exposure Image Sequence Fusion. Electronics 2023, 12, 2654. https://doi.org/10.3390/electronics12122654