Article

Detail Enhancement Multi-Exposure Image Fusion Based on Homomorphic Filtering

Yunxue Hu, Chao Xu, Zhengping Li, Fang Lei, Bo Feng, Lingling Chu, Chao Nie and Dou Wang
1 School of Integrated Circuits, Anhui University, Hefei 230601, China
2 Anhui Engineering Laboratory of Agro-Ecological Big Data, Hefei 230601, China
3 School of Humanities, Shanghai University of Finance and Economics, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(8), 1211; https://doi.org/10.3390/electronics11081211
Submission received: 21 March 2022 / Revised: 7 April 2022 / Accepted: 8 April 2022 / Published: 11 April 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

Due to the large dynamic range of real scenes, it is difficult for images taken by ordinary devices to represent high-quality real scenes. To obtain high-quality images, the exposure fusion of multiple exposure images of the same scene is required. The fusion of multiple images results in the loss of edge detail in areas with large exposure differences. Aiming at this problem, this paper proposes a new method for the fusion of multi-exposure images with detail enhancement based on homomorphic filtering. First, a fusion weight map is constructed using exposure and local contrast. The exposure weight map is calculated by threshold segmentation and an adaptively adjustable Gaussian curve. The algorithm can assign appropriate exposure weights to well-exposed areas so that the fused image retains more details. Then, the weight map is denoised using fast-guided filtering. Finally, a fusion method for the detail enhancement of Laplacian pyramids with homomorphic filtering is proposed to enhance the edge information lost by Laplacian pyramid fusion. The experimental results show that the method can generate high-quality images with clear edges and details as well as similar color appearance to real scenes and can outperform existing algorithms in both subjective and objective evaluations.

1. Introduction

In natural scenes, the dynamic range can reach six orders of magnitude, while the normal human eye can capture four. Dynamic range refers to the ratio between the maximum brightness and the minimum brightness in the same scene, and the dynamic range that can be captured by ordinary digital cameras [1] is only two orders of magnitude [2]; therefore, over-exposed and under-exposed areas in images captured by ordinary digital cameras suffer a loss of detail. High dynamic range (HDR) images can truly reflect natural scenes [3].
HDR images can be generated in two ways: by hardware or by software. HDR equipment can directly acquire images of natural scenes; however, dedicated equipment is expensive, and HDR images cannot be displayed on ordinary low dynamic range (LDR) equipment, which makes this route difficult to use [4]. The software approach uses HDR imaging technology to make the images displayed by LDR devices richer in detail. It includes two methods: tone mapping [5,6] and multi-exposure image fusion. The first requires estimating the camera response function (CRF) to construct an HDR image and then uses tone mapping to display the HDR image on a normal LDR display device; however, this has a disadvantage.
The calculation of CRF requires multiple exposure parameters, and these exposure parameters need to be calculated separately. Thus, it is time-consuming and limited [7]. In contrast, the computationally simple multi-exposure image fusion method is more efficient as it directly fuses multiple images of different exposures of an HDR scene into a high-quality image that can be displayed on an LDR device [8].
Existing traditional multi-exposure fusion methods can be divided into two categories: spatial-domain methods and transform-domain methods. Spatial-domain methods mainly analyze and operate on pixel values or structural blocks. The spatial information they operate on is rich [9], the methods are simple, and the calculation cost is low; however, the generated images can suffer from problems such as noise and information loss. For example, Gu et al. [10] used the structure tensor to fuse the input images and iteratively corrected the gradient field using quadratic mean filtering and multi-scale nonlinear compression; however, this method made the image noisy.
Li et al. [11] used local contrast, brightness, and color dissimilarity to construct a weight map, followed by recursive filtering for denoising and refinement, and finally, weighted fusion to obtain the result; however, this method produced unnatural artifacts. Huang et al. [12] decomposed the image into contrast extraction, structure preservation, and intensity adjustment, where structure preservation and intensity adjustment were calculated by local weights, global weights, and saliency weights, and finally reconstructed the resulting map. The details of the images generated by this method were well preserved, but the colors were quite different from the input images.
In the transform domain, the image is transformed into the frequency domain through discrete Fourier transform (DFT) [13], pyramid transform [14], etc., and the image color generated by these methods is very close to reality; however, it is easy to lose texture details [15]. For example, Mertens et al. [16] used contrast, saturation, and good exposure to construct a weight map, and finally used multi-resolution fusion, which can generate images with natural colors but cannot preserve the full edge texture details of the image.
Wang et al. [17] added local Laplacian filtering to the Mertens method to enhance details in local overexposed and underexposed regions and proposed discrete sampling and interpolation to speed up the results. With the development of machine learning, methods for multi-exposure image fusion with neural networks have begun to appear. Xu et al. [18] proposed a multi-exposure image fusion method based on generative adversarial networks, in which the generator network and the discriminator network are simultaneously trained to form an adversarial relationship, and a self-attention mechanism was introduced for the problem of large image exposure differences. In addition, some new fusion methods have gradually emerged.
For example, Yang et al. [19] first used the K-means-based K-SVD algorithm to compute a sparsity exposure dictionary (SED) and construct an exposure estimation map, then used the exposure estimation map and an adaptive guided filter to construct the final fusion decision map, and finally performed pyramid fusion. The method of Ulucan et al. [20] utilizes the histogram and a K-means-classified atlas to extract linear embedding weights and watershed masks for fusion and finally corrects unsatisfactory color intensities.
In this paper, the transform-domain pyramid fusion framework is used. Based on the Mertens method, threshold segmentation and an adaptively adjustable Gaussian curve are proposed to calculate the exposure weight, and a homomorphic filter is used to enhance the detail layers of the Laplacian pyramid. This method not only generates images with rich colors but also preserves the texture details of the images. The image fusion algorithm proposed in this paper has three main contributions:
  • This paper applies homomorphic filtering to the multi-exposure image fusion algorithm for the first time. Other detail enhancement algorithms lose some low-frequency signals when enhancing high-frequency details, while homomorphic filtering enhances details while retaining the low-frequency signals, so the original image information is preserved.
  • An exposure weighting algorithm based on threshold segmentation and adaptively adjustable Gaussian curve is proposed, which assigns more reasonable weights to well-exposed areas and retains more detailed information.
  • The Laplacian pyramid is improved based on homomorphic filtering, which enhances the edge details of the fused image and generates an image with obvious details.
The rest of this article is organized as follows. In the second part, we discuss the most common and state-of-the-art methods related to multi-exposure image fusion. Section 3 introduces the proposed method in detail. The fourth part analyzes the experimental results for subjective and objective evaluation. The last section discusses the conclusion of this paper and the next steps.

2. Related Works

In recent years, many studies have been conducted on multi-exposure fusion algorithms. Mertens et al. [16] first used contrast, saturation, and well-exposedness to construct a fused-image weight map, then decomposed the source image sequence and the weight map into a Laplacian pyramid and a Gaussian pyramid, respectively, and finally performed multi-resolution fusion. The result of this fusion method is very close to reality, and the naked eye can hardly perceive a large color difference; however, detail is lost.
Shen et al. [21] proposed a method for exposure fusion using enhanced Laplacian pyramids, the principle of which is to use local weights, global weights, and JND-based saliency weights to estimate the exposure weight map to enhance the detail signal and base signal of the image to construct a novel augmented Laplacian pyramid. This method effectively preserves the texture structure of the image but suffers from artifacts and color distortion due to excessive detail enhancement.
Li et al. [22] proposed extracting image details with a weighted structure tensor, using the Gaussian pyramid of the luminance component of the source image sequence as the guide image, smoothing the weighted Gaussian pyramids of all LDR images with the weighted guided image filter (WGIF) [23], and obtaining the final result map by multi-resolution fusion. The detail preservation of this method is relatively good; however, the sharpness is flawed where there are large differences between light and dark across the source image sequence.
Ma et al. [24] proposed a new structural block decomposition method, which first decomposes the source image sequence into three components: signal intensity, signal structure, and average intensity, then fuses each component of different images separately, and finally reconstructs into a fused image. The algorithm is capable of producing images with distinct color appearances but is prone to over-sharpening and local color distortion in areas with large differences in brightness.
Hayat et al. [25] proposed estimating the initial weights with three indicators, local contrast, brightness, and color dissimilarity, where the local contrast is calculated by the dense SIFT [26] descriptor; the weight map is then smoothed with a fast-guided filter, and pyramid fusion is finally used to generate the result. The algorithm maintains good global contrast but loses some details.
Qi et al. [27] proposed an accurate multi-exposure image fusion method based on low-order features. The principle is to use guided filters to decompose the source image into base layers and detail layers. For the base layer, the method of Ma et al. is used to decompose the image blocks for calculation, and the detail layer is weighted according to the average level of local brightness changes. Finally, the base layer and the detail layer are weighted and fused. This method performs well on image sharpness and color information but is prone to noise and halos.
Huang et al. [28] proposed a multi-exposure image fusion method based on adaptive factor feature evaluation. This method uses the adaptive factor exposure to evaluate the weight, uses the Sobel operator to calculate the texture change weight, and obtains the image by pyramid fusion. The obtained fusion image has better brightness and can retain certain details; however, the details will still be lost in the areas where the image exposure difference is too large. The detailed method comparison is shown in Table 1. Except for the methods of Ma and Qi, the above fusion methods all use pyramid fusion, and pyramid fusion has the defect of the loss of edge details.
Although Shen’s method improves the Laplacian pyramid to enhance image details, the generated image suffers from color distortion; its colors differ considerably from the actual scene. Ma’s results remain poorly exposed in badly exposed regions, and Qi’s method, which synthesizes based on pixel values or structural blocks, does not adequately remove noise interference in the fusion of the detail layers. To produce high-quality images with comfortable visual perception and clear edge details, this paper proposes a detail enhancement multi-exposure image fusion algorithm based on homomorphic filtering, built on the Mertens method.
In our method, threshold segmentation and adaptively adjustable Gaussian curve are first used to assign more appropriate weights to well-exposed regions, thereby, preserving more detailed information. Second, the detail layer of the Laplacian pyramid is enhanced by homomorphic filtering, which is used to enhance the edge information and finally to generate an image with realistic colors and rich edge details.

3. Proposed Method

In this paper, a detailed enhancement multi-exposure image fusion algorithm based on homomorphic filtering is proposed. When the average brightness of the image is high, there are more details in the darker areas, and thus a larger weight should be assigned to the low-brightness areas. Similarly, when it is low, a larger weight should be assigned to the place with high brightness. To better preserve image details, this paper proposes an algorithm for calculating exposure weights based on threshold segmentation and an adaptively adjustable Gaussian curve.
Since the multi-resolution fusion algorithm loses some of the detailed information of the image, we propose a detail enhancement algorithm based on homomorphic filtering to address this problem. The flow of our algorithm is as follows. First, the exposure weight and the local contrast weight are calculated to construct an initial weight map; then a fast-guided filter is used to denoise the weight map; and finally, multi-resolution fusion is performed with the improved Laplacian pyramid proposed in this paper. The specific calculation process is explained in the subsequent subsections, and the schematic diagram of the proposed method is shown in Figure 1. Algorithm 1 shows the detailed calculation process.
Algorithm 1: The proposed algorithm.
Parameter: $I_n^{\mathrm{gray}}$ denotes the grayscale image of $I_n$; $\hat{I}_n^{\mathrm{gray}}$ denotes $I_n^{\mathrm{gray}}$ after normalization; $T_0 \leftarrow 0.01$, $T_1 \leftarrow 0.5$
Input: Source image sequence $I_n$, $1 \le n \le N$
Output: The fused result $F$
1: for each image $\hat{I}_n^{\mathrm{gray}}$ do
2:  Calculate $T_2$ by Equation (3)
3:  while Equation (4) does not hold do
4:   $T_1 \leftarrow T_2$
5:   Calculate $T_2$ by Equation (3)
6:  end while
7:  The optimal threshold of the $n$th image: $T_{2n} \leftarrow T_2$
8: end for
9: for each image $\hat{I}_n^{\mathrm{gray}}$ do
10:  Calculate $\alpha_n$ and $\beta_n$ separately by Equations (5) and (6)
11:  Use Equations (7) and (8) to assign weights to $W_{1n}$
12: end for
13: for each pixel $I_n^{\mathrm{gray}}(i,j)$ do
14:  Calculate the local contrast value $A_n$ by Equation (9)
15:  Use Equation (11) to assign weights to $W_{2n}$
16: end for
17: Use Equation (12) to calculate the initial weight map $\hat{W}_n$
18: Use Equation (13) to denoise $\hat{W}_n$ and obtain $W_n$ after normalization
19: Use Equations (14)–(16) to calculate $L\{I_n\}^{(l)}$
20: Reconstruct the pyramid and fuse into $F$ by Equations (17)–(19)

3.1. Exposure Weight

The purpose of exposure weighting is to select better-exposed areas, which usually contain more image information. The exposure weight algorithm in the method of Mertens et al. [16] uses a Gaussian curve to assign the exposure weight after the image is normalized, as shown in Figure 2. Gray values close to 0.5 are considered moderately exposed and are given larger weights; the farther a value deviates from 0.5, the smaller the weight assigned to it. The specific formula is as follows:
$$W_{1n}(i,j) = \exp\!\left(-\frac{\left(\hat{I}_n^{\mathrm{gray}}(i,j) - 0.5\right)^{2}}{2\sigma^{2}}\right) \tag{1}$$
where $\hat{I}_n^{\mathrm{gray}}(i,j)$ represents the pixel value in the $i$th row and $j$th column of the normalized grayscale image of the $n$th input image, $W_{1n}(i,j)$ represents the exposure weight value of the $n$th image in the $i$th row and $j$th column, $n \in \{1, 2, \ldots, N\}$, where $N$ is the number of images in the input sequence, and $\sigma$ controls the amplitude of the curve and generally takes a value of 0.2.
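As an illustration, the following minimal NumPy sketch evaluates Equation (1); the function name and the array-based interface are our own choices for illustration, not the authors' implementation.

```python
import numpy as np

def mertens_exposure_weight(i_gray_norm, sigma=0.2):
    """Gaussian well-exposedness weight of Equation (1).

    i_gray_norm: normalized grayscale image with values in [0, 1].
    """
    return np.exp(-((i_gray_norm - 0.5) ** 2) / (2.0 * sigma ** 2))
```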
The method used by Mertens et al. [16] is suitable for images with moderate overall brightness, and this algorithm is not optimal for images with too high or low overall brightness. When the average brightness of the image is high, there are more details in the darker areas, and thus a larger weight should be assigned to the low-brightness areas. Similarly, when it is low, a larger weight should be assigned to the high-brightness areas. To this end, this paper proposes an algorithm for calculating exposure weights based on threshold segmentation and an adaptively adjustable Gaussian curve, which can adaptively adjust the weights according to the average brightness of the input image sequence. The algorithm is divided into two steps: threshold segmentation and the calculation of adaptive exposure weights.
First, we perform threshold segmentation and normalize the grayscale images $I_n^{\mathrm{gray}}$ of the sequence to the [0, 1] interval. In a well-exposed grayscale image, the darker regions have low gray values and the lighter regions have high gray values, yet both contain detailed information. To avoid mistaking the dark and light parts for badly exposed regions, we adopt an iterative threshold algorithm [30] that divides each image into a dark part and a light part with a threshold and calculates the exposure weight for each part separately. The threshold is calculated as follows:
$$\mathrm{mean}\{A(i,j)\} = \frac{\sum_{i=1}^{r}\sum_{j=1}^{c} A(i,j)}{r \times c} \tag{2}$$
$$T_2 = \frac{\mathrm{mean}\{\hat{I}_n^{\mathrm{gray}}(i,j) > T_1\} + \mathrm{mean}\{\hat{I}_n^{\mathrm{gray}}(i,j) \le T_1\}}{2} \tag{3}$$
$$\left| T_2 - T_1 \right| < T_0 \tag{4}$$
where $\mathrm{mean}\{\cdot\}$ denotes the mean-value calculation defined in Equation (2), $A(i,j)$ represents the area over which the mean is calculated, and the size of $A(i,j)$ is $r \times c$; the initial value of $T_1$ is 0.5, and $T_0$ is a very small number. If Equation (4) holds, then $T_2$ is the optimal threshold; otherwise, we assign the value of $T_2$ to $T_1$, and the above steps are repeated until the optimal threshold $T_2$ is obtained. The image is then divided into two parts, $G_1$ and $G_2$, according to the optimal threshold: the $G_1$ part is composed of pixels whose gray value is greater than $T_2$, and the $G_2$ part is composed of pixels whose gray value is less than or equal to $T_2$.
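A possible NumPy sketch of the iterative threshold selection of Equations (2)–(4) is shown below; the helper name and the fallback for empty partitions are assumptions made for illustration.

```python
def iterative_threshold(i_gray_norm, t0=0.01, t1=0.5):
    """Iterative threshold selection (Equations (2)-(4)).

    i_gray_norm: normalized grayscale image (NumPy array with values in [0, 1]).
    Returns the optimal threshold T2 separating the light part G1 from the dark part G2.
    """
    while True:
        bright = i_gray_norm[i_gray_norm > t1]
        dark = i_gray_norm[i_gray_norm <= t1]
        # Fall back to the current threshold if one side happens to be empty.
        mean_bright = bright.mean() if bright.size else t1
        mean_dark = dark.mean() if dark.size else t1
        t2 = 0.5 * (mean_bright + mean_dark)       # Equation (3)
        if abs(t2 - t1) < t0:                      # Equation (4)
            return t2
        t1 = t2
```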
Finally, the adaptive exposure weight is calculated. If the brightness of an image is low within the whole image sequence, then its high-brightness part is well exposed, and the weight of this part needs to be appropriately increased. Similarly, if the brightness of an image is high within the whole sequence, it is necessary to appropriately increase the weight of the low-brightness part. Therefore, we construct two adaptive variables, $\alpha_n$ and $\beta_n$, which reflect the offset between the exposure of the input image and 0.5. The calculation formulas are as follows:
$$\alpha_n = \sum_{m=1}^{N} \mathrm{mean}\{\hat{I}_m^{\mathrm{gray}}(i,j) > T_{2m}\} - \mathrm{mean}\{\hat{I}_n^{\mathrm{gray}}(i,j) > T_{2n}\} \tag{5}$$
$$\beta_n = \sum_{m=1}^{N} \mathrm{mean}\{\hat{I}_m^{\mathrm{gray}}(i,j) \le T_{2m}\} - \mathrm{mean}\{\hat{I}_n^{\mathrm{gray}}(i,j) \le T_{2n}\} \tag{6}$$
where $T_{2n}$ is the optimal threshold of the $n$th image, and $\alpha_n$ and $\beta_n$ are the adaptive variables for the $G_1$ and $G_2$ parts, respectively. The Gaussian curve is then used to assign the exposure weight $W_{1n}$. The formulas are as follows:
$$W_{1n}(i_1,j_1) = \exp\!\left(-\frac{\left(\hat{I}_n^{\mathrm{gray}}(i_1,j_1) - (0.5 + \alpha_n)\right)^{2}}{2\sigma^{2}}\right) \tag{7}$$
$$W_{1n}(i_2,j_2) = \exp\!\left(-\frac{\left(\hat{I}_n^{\mathrm{gray}}(i_2,j_2) - (0.5 + \beta_n)\right)^{2}}{2\sigma^{2}}\right) \tag{8}$$
where $\hat{I}_n^{\mathrm{gray}}(i_1,j_1) \in \{\hat{I}_n^{\mathrm{gray}}(i,j) > T_{2n}\}$ and $\hat{I}_n^{\mathrm{gray}}(i_2,j_2) \in \{\hat{I}_n^{\mathrm{gray}}(i,j) \le T_{2n}\}$, and $\sigma$ controls the amplitude of the curve and is set to 0.2. The process of calculating the exposure weight map is shown in Figure 3.
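The following sketch implements one literal reading of Equations (5)–(8), in which $\alpha_n$ and $\beta_n$ are the sequence-level sums of the partition means minus the corresponding mean of the current image; it assumes non-empty bright and dark partitions and is only an illustration of the weighting scheme, not the authors' code.

```python
import numpy as np

def adaptive_exposure_weights(gray_images, thresholds, sigma=0.2):
    """Adaptive exposure weights W_1n (Equations (5)-(8)).

    gray_images: list of normalized grayscale images (values in [0, 1]).
    thresholds:  per-image optimal thresholds T_2n from iterative_threshold().
    """
    bright_means = [img[img > t].mean() for img, t in zip(gray_images, thresholds)]
    dark_means = [img[img <= t].mean() for img, t in zip(gray_images, thresholds)]
    weights = []
    for img, t, bm, dm in zip(gray_images, thresholds, bright_means, dark_means):
        alpha = sum(bright_means) - bm    # Equation (5), as read here
        beta = sum(dark_means) - dm       # Equation (6), as read here
        w = np.empty_like(img)
        bright, dark = img > t, img <= t
        w[bright] = np.exp(-((img[bright] - (0.5 + alpha)) ** 2) / (2 * sigma ** 2))  # (7)
        w[dark] = np.exp(-((img[dark] - (0.5 + beta)) ** 2) / (2 * sigma ** 2))       # (8)
        weights.append(w)
    return weights
```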

3.2. Local Contrast Weight

The computation of local contrast can be used to preserve important details such as edges and textures. This edge and texture information is contained in the gradient changes; thus, a Laplacian filter, which has good edge-detection properties, is applied to each grayscale image to calculate the local contrast weight. The specific algorithm is as follows:
$$A_n(i,j) = \left| I_n^{\mathrm{gray}}(i,j) * h(i,j) \right| \tag{9}$$
where $A_n(i,j)$ is the local contrast value of the $n$th input image at row $i$, column $j$; $|\cdot|$ denotes the absolute value; $I_n^{\mathrm{gray}}(i,j)$ is the pixel value of the $n$th grayscale image at row $i$, column $j$; $*$ denotes the convolution operation; and $h$ is the Laplacian filter kernel, given by:
$$h = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \tag{10}$$
At each pixel position, the image with the maximum contrast value across the sequence is selected to determine the local contrast weight $W_{2n}(i,j)$; the specific rule is as follows:
$$W_{2n}(i,j) = \begin{cases} 1, & A_n(i,j) = \max\{A_m(i,j),\ m = 1, 2, \ldots, N\} \\ 0, & \text{otherwise} \end{cases} \tag{11}$$
where $W_{2n}(i,j)$ is the local contrast weight of the $n$th image at row $i$, column $j$.
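A compact sketch of Equations (9)–(11) using SciPy's convolution is given below; the winner-takes-all selection across the sequence follows Equation (11), while the border handling of the convolution is an implementation detail not specified in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN_KERNEL = np.array([[0, 1, 0],
                             [1, -4, 1],
                             [0, 1, 0]], dtype=float)

def local_contrast_weights(gray_images):
    """Binary local-contrast weights W_2n (Equations (9)-(11)).

    At each pixel, the image with the largest absolute Laplacian response
    receives weight 1; all other images receive weight 0.
    """
    responses = np.stack([np.abs(convolve(img, LAPLACIAN_KERNEL)) for img in gray_images])
    winner = np.argmax(responses, axis=0)
    return [(winner == n).astype(float) for n in range(len(gray_images))]
```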

3.3. Pyramid Fusion Based on Homomorphic Filter Detail Enhancement

The initial weight map is constructed from the two calculated indicators. The obtained initial weight map is noisy and discontinuous; therefore, it is important to refine it. The fast-guided filter refines the weight map without damaging its edges and removes noise very well. Therefore, we use the fast-guided filter to refine the weight map and then normalize it. The specific calculations are as follows:
$$\hat{W}_n(i,j) = W_{1n}(i,j) \times W_{2n}(i,j) \tag{12}$$
$$W_n(i,j) = \mathrm{GF}_{r,\varepsilon}\!\left(\hat{W}_n(i,j),\, \hat{W}_n(i,j)\right) \tag{13}$$
where $\mathrm{GF}_{r,\varepsilon}(I, G)$ denotes fast-guided filtering, $r$ is the filter radius, $\varepsilon$ controls the degree of blurring, $I$ is the input image, and $G$ is the guide image. The fast-guided filter parameters are set as in the algorithm of Hayat et al. [25].
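The sketch below combines the two weights and refines them with the guided filter from opencv-contrib (cv2.ximgproc.guidedFilter); the paper uses a fast-guided filter with the parameters of Hayat et al. [25], so the radius and epsilon values here are placeholders, and, as in Equation (13), the weight map itself is used as the guide image.

```python
import numpy as np
import cv2  # cv2.ximgproc requires the opencv-contrib-python package

def refine_weight_maps(exposure_w, contrast_w, radius=8, eps=1e-3):
    """Combine (Equation (12)) and refine (Equation (13)) the weight maps,
    then normalize them so the weights of all images sum to 1 per pixel."""
    refined = []
    for w1, w2 in zip(exposure_w, contrast_w):
        w = (w1 * w2).astype(np.float32)
        refined.append(cv2.ximgproc.guidedFilter(w, w, radius, eps))
    refined = np.stack(refined)
    refined /= refined.sum(axis=0) + 1e-12
    return list(refined)
```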
Direct weighted fusion can lead to the problem of seams and blurring in the output image, which can be solved by pyramid-based multi-resolution methods; however, pyramid fusion will lose some edge texture details. In order to preserve the detailed information of the image, a pyramid fusion based on homomorphic filter detail enhancement is proposed. Many detail enhancement algorithms lose some low-frequency signals when enhancing high-frequency details, while homomorphic filtering [31] can preserve low-frequency signals while enhancing details; therefore, we use homomorphic filtering for detail enhancement.
The principle of the method is to decompose the weight map and the input image with the Gaussian pyramid and the Laplacian pyramid, respectively. The Laplacian pyramid decomposition decomposes the input image into a base layer and detail layers: the highest layer is the base layer, and the other layers are detail layers. We enhance each detail layer with homomorphic filtering; the detailed calculation process of homomorphic filtering is given in Algorithm 2, and the pyramid construction and enhancement are given by Equations (14)–(16):
$$L = \mathrm{floor}\!\left(\log_2 \min(r, c)\right) - 2 \tag{14}$$
$$L\{I_n(i,j)\}^{(l)} = \begin{cases} I_n^{(l)}(i,j) - \mathrm{upsample}\!\left(I_n^{(l+1)}(i,j)\right), & l = 1, 2, \ldots, L-1 \\ I_n^{(l)}(i,j), & l = L \end{cases} \tag{15}$$
$$L\{I_n(i,j)\}^{(l)} \leftarrow \mathrm{homomorphic}\!\left(L\{I_n(i,j)\}^{(l)}\right), \quad l = 1, 2, \ldots, L-1 \tag{16}$$
where $\mathrm{floor}(\cdot)$ rounds towards negative infinity; $r$ and $c$ are the height and width of the input image, respectively; $\min(\cdot)$ takes the minimum value; $I_n^{(l)}(i,j)$ is the pixel value of the $l$th layer of the $n$th image at row $i$, column $j$; $\mathrm{upsample}(\cdot)$ is the up-sampling operation; $L\{\cdot\}^{(l)}$ is the $l$th layer of the Laplacian pyramid; and $\mathrm{homomorphic}(\cdot)$ is the homomorphic filtering operation. After the enhanced Laplacian pyramid is obtained, it is fused with the Gaussian pyramid of the weight map and reconstructed to obtain the final fused image. The equations are as follows:
$$L\{F(i,j)\}^{(l)} = \sum_{n=1}^{N} G\{W_n(i,j)\}^{(l)} \times L\{I_n(i,j)\}^{(l)} \tag{17}$$
$$L\{F(i,j)\}^{(L-l)} \leftarrow L\{F(i,j)\}^{(L-l)} + \mathrm{upsample}\!\left(L\{F(i,j)\}^{(L-l+1)}\right), \quad l = 1, 2, \ldots, L-1 \tag{18}$$
$$F(i,j) = L\{F(i,j)\}^{(1)} \tag{19}$$
where $G\{\cdot\}^{(l)}$ is the $l$th layer of the Gaussian pyramid, and $F(i,j)$ is the pixel value of the fused image at row $i$, column $j$, i.e., the final output image.
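For illustration, the following sketch carries out the fusion of Equations (14)–(19) with OpenCV's pyrDown/pyrUp as the down/up-sampling operators; it assumes float color images in [0, 1] and a callable homomorphic_enhance for the detail layers (one possible implementation is sketched after Algorithm 2). The resampling operators and the final clipping are our choices, not necessarily those of the authors.

```python
import numpy as np
import cv2

def fuse_pyramids(images, weights, homomorphic_enhance):
    """Multi-resolution fusion with detail-layer enhancement (Equations (14)-(19)).

    images:  list of color images as float arrays in [0, 1].
    weights: list of refined, normalized weight maps (one per image).
    """
    rows, cols = images[0].shape[:2]
    levels = int(np.floor(np.log2(min(rows, cols)))) - 2                  # Equation (14)

    fused = None
    for img, w in zip(images, weights):
        # Gaussian pyramids of the weight map and of the image.
        gw, gi = [w], [img]
        for _ in range(levels - 1):
            gw.append(cv2.pyrDown(gw[-1]))
            gi.append(cv2.pyrDown(gi[-1]))
        # Laplacian pyramid with enhanced detail layers (Equations (15)-(16)).
        lap = []
        for l in range(levels - 1):
            up = cv2.pyrUp(gi[l + 1], dstsize=(gi[l].shape[1], gi[l].shape[0]))
            lap.append(homomorphic_enhance(gi[l] - up))
        lap.append(gi[-1])                                                # base layer
        # Weighted accumulation over the sequence (Equation (17)).
        contrib = [lap[l] * gw[l][..., None] for l in range(levels)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]

    # Collapse the fused pyramid (Equations (18)-(19)).
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + cv2.pyrUp(out, dstsize=(fused[l].shape[1], fused[l].shape[0]))
    return np.clip(out, 0.0, 1.0)
```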
Algorithm 2: Homomorphic filtering algorithm.
Parameter: $n \leftarrow 0.1$, $D_0 \leftarrow 0$, $r_h \leftarrow 0.7$, $r_l \leftarrow 0.1$
Input: Laplacian pyramid of the source image $L\{I_n\}^{(l)}$, $1 \le l \le L-1$
Output: Laplacian pyramid with enhanced detail $L\{I_n\}^{(l)}$
1: for each layer of $L\{I_n\}^{(l)}$ do
2:  $\hat{L}\{I_n\}^{(l)} \leftarrow \log\!\left(L\{I_n\}^{(l)} + 1\right)$
3:  Fourier transform: $\hat{L}_{\mathrm{fft}}\{I_n\}^{(l)} \leftarrow \mathrm{fft2}\!\left(\hat{L}\{I_n\}^{(l)}\right)$
4:  $D_1 \leftarrow \mathrm{sqrt}(i^2 + j^2)$
5:  $H \leftarrow r_l + r_h / \left(1 + (D_0/D_1)^{2n}\right)$
6:  Filter the Fourier transform: $\hat{L}_{\mathrm{fft}}\{I_n\}^{(l)} \leftarrow \hat{L}_{\mathrm{fft}}\{I_n\}^{(l)} \cdot H$
7:  Inverse Fourier transform: $\hat{L}\{I_n\}^{(l)} \leftarrow \mathrm{ifft2}\!\left(\hat{L}_{\mathrm{fft}}\{I_n\}^{(l)}\right)$
8:  Exponentiation: $\hat{L}\{I_n\}^{(l)} \leftarrow \exp\!\left(\hat{L}\{I_n\}^{(l)}\right) - 1$
9:  Take the real part to obtain the enhanced pyramid: $L\{I_n\}^{(l)} \leftarrow \mathrm{real}\!\left(\hat{L}\{I_n\}^{(l)}\right)$
10: end for
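Below is a NumPy sketch of Algorithm 2 with the parameter values stated above ($n = 0.1$, $D_0 = 0$, $r_h = 0.7$, $r_l = 0.1$); the construction of the frequency-distance term $D_1$, the guard against non-positive values inside the logarithm, and taking the real part before exponentiation are our reading of the pseudocode rather than the authors' exact implementation.

```python
import numpy as np

def homomorphic_enhance(layer, n=0.1, d0=0.0, rh=0.7, rl=0.1):
    """Homomorphic enhancement of one Laplacian detail layer (Algorithm 2)."""
    rows, cols = layer.shape[:2]
    # Step 2: log domain; the clamp guards against values <= -1 in the detail layer.
    log_layer = np.log(np.maximum(layer + 1.0, 1e-6))
    # Step 3: 2-D FFT over the spatial axes (works for 2-D or per-channel 3-D layers).
    spectrum = np.fft.fft2(log_layer, axes=(0, 1))

    # Step 4: distance of every frequency sample from the zero-frequency origin.
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d1 = np.sqrt(u ** 2 + v ** 2)
    d1[0, 0] = 1e-6                       # avoid division by zero at DC

    # Step 5: transfer function H.
    h = rl + rh / (1.0 + (d0 / d1) ** (2 * n))
    if layer.ndim == 3:
        h = h[:, :, None]

    # Steps 6-9: filter, inverse FFT, take the real part, and exponentiate.
    filtered = np.fft.ifft2(spectrum * h, axes=(0, 1))
    return np.exp(np.real(filtered)) - 1.0
```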

4. Experimental Results and Analysis

In this section, to examine the performance of the proposed fusion method, seventeen different LDR multiple-exposure image sequences [24] were selected (which include different scenes) as listed in Table 2. We tested our method using these seventeen natural scenes with different exposure levels and compared them with the seven most popular existing algorithms [16,21,22,24,25,27,28]. Four representative sets of input images were selected for presentation in Figure 4, and Figure 5, Figure 6, Figure 7 and Figure 8 show the results of these four sets of images fused with different methods. All experiments were run with MATLAB R2019a on a PC with an Intel i5 6200U @ 2.40 GHz processor and 4.00 GB RAM.

4.1. Subjective Analysis

Figure 5, Figure 6, Figure 7 and Figure 8 show the overall results and partial magnified images of the four experiments. The method of Mertens [16] has adequate color vibrancy; however, there is a certain loss of details, as shown in Figure 7a and Figure 8a. The method of Shen [21] has rich texture details; however, there are artifacts at the junction of light and dark and serious color distortion of the entire image due to excessive detail enhancement. For example, there are obvious black shadows at the junction of light and dark at the entrance of the cave in Figure 5b, and the color of the lamp is severely distorted in Figure 6b.
The method of Li [22] uses a weighted structure tensor to extract details for detail enhancement; however, the brightness and clarity are not good in some places, for instance the branches and leaves in Figure 7c are missing. In Ma’s method [24], based on the structural block decomposition, the image block is decomposed into the signal intensity, signal structure and average intensity, which exhibit good global contrast but are prone to over-sharpening, resulting in local color distortion, as shown in Figure 5d, Figure 6d and Figure 7d.
Hayat [25] used the dense SIFT descriptor to calculate the contrast and the smoothing weight of the guided filter. The generated image maintains a good global contrast; however, it is easy to lose details. As shown in Figure 6e, the exposure of the desk lamp is too high, and in Figure 8e, the texture details of the cloud are lost and the clarity is poor.
The method of Qi [27] performs well in image clarity and color saturation but is prone to noise and halos at the junction of light and dark. For example, there are orange noise spots at the entrance of the cave in Figure 5f, there are obvious noise spots on the surface of the desk lamp in Figure 6f, and the leaves have unnatural halos in Figure 7f. The method of Huang [28] has moderate brightness and can retain certain details; however, it still loses details in areas where the image exposure difference is too large. For example, part of the leaves and branches in Figure 7g is missing.
The algorithm proposed in this paper shows advantages in all aspects. As shown in Figure 5h, the light–dark transition area of the cave entrance is rich in details, and the color is more realistic. The edge and texture of the lamp in Figure 6h can be seen, and the leaves and branches are visible in Figure 7h. The detailed texture of the cloud is richer without distortion in Figure 8h. To summarize, compared with the other seven methods, our method can preserve more details and edge information and exhibits a comfortable visual effect.

4.2. Objective Evaluation

The evaluation metrics objectively reflect the comprehensive performance of the algorithms. We used two objective evaluation indicators. The detail-preserving assessment ($DPA$) [39] evaluates detail preservation, and $Q^{AB/F}$ [40] uses gradients to measure the edge information transferred from the source images to the fused image. The $DPA$ and $Q^{AB/F}$ values of the seventeen groups of fused images obtained by the eight algorithms are listed in Table 3.
The higher the DPA score, the higher the detail retention rate. DPA is defined as follows:
$$DPA = -\log\!\left(\frac{\sum_{x=1}^{r}\sum_{y=1}^{c} \left| E_R(x,y) - \max\!\left(E_A(x,y)\right) \right|}{r \times c}\right)$$
where $E_R(x,y)$ is the exposure weight of the resulting image at row $x$, column $y$; $\max(E_A(x,y))$ is the maximum exposure weight of the input images at row $x$, column $y$; and the image size is $r \times c$.
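A small sketch of the $DPA$ computation as we read it is given below; the natural logarithm and the negative sign (so that a smaller mean difference yields a higher score) are assumptions consistent with the discussion that follows, and [39] should be consulted for the exact definition.

```python
import numpy as np

def dpa(result_exposure_w, input_exposure_ws):
    """Detail-preserving assessment between a fused result and its source images.

    result_exposure_w:  exposure weight map E_R of the fused image.
    input_exposure_ws:  list of exposure weight maps E_A of the source images.
    """
    max_input = np.max(np.stack(input_exposure_ws), axis=0)
    mean_abs_diff = np.mean(np.abs(result_exposure_w - max_input))
    return -np.log(mean_abs_diff)
```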
The $DPA$ score is calculated by comparing the resulting image with the input images; when the resulting image has more detail than all the input images, the numerator in the above equation becomes larger and the $DPA$ value becomes lower. Table 3 lists the numerical results of the proposed method and the seven other popular methods. Among the seventeen comparison groups, the proposed algorithm achieves the maximum $DPA$ value in ten; in the remaining seven groups it ranks second. The detail retention ability of the proposed algorithm is therefore excellent.
The larger the $Q^{AB/F}$ value, the better the ability to maintain edge information. As shown in Table 3, among the seventeen sets of comparison results, the $Q^{AB/F}$ value of the proposed algorithm is the largest in twelve groups. The remaining five groups are also good, and, in general, the proposed method has better edge preservation performance.
As shown in Figure 9, the mean $DPA$ of the proposed method ranks second, and its mean $Q^{AB/F}$ ranks first. The highest mean $DPA$ is achieved by the method of Shen et al. [21]; however, the mean $Q^{AB/F}$ of their method ranks last, indicating that its ability to preserve edge information is not superior. According to the line graph of the combined average of $DPA$ and $Q^{AB/F}$ in Figure 9, the method proposed in this paper ranks first. In general, compared with the tested methods, the proposed image fusion method better preserves the edge information of the source images and retains most of the details of the image.

4.3. The Comparative Experiment of Adaptive Exposure Weight Calculation

We conducted experiments with 10 images with and without our adaptive exposure weighting algorithm to show its effectiveness. We replaced only our exposure weighting algorithm with the exposure weighting algorithm of Mertens, keeping the other calculation steps unchanged. Figure 10 shows the $Q^{AB/F}$ comparison of the proposed and alternative exposure weighting algorithms. The $Q^{AB/F}$ value of the proposed adaptive exposure weight calculation is higher than that of the alternative; that is, our adaptive exposure weighting algorithm maintains edge information better.

5. Conclusions

In this paper, we proposed a multi-exposure image fusion method for detail enhancement based on homomorphic filtering. We noticed that high-exposure images have more details at low exposure values, and similarly, low-exposure images have more details at high exposure values. Therefore, we proposed an exposure-weighting algorithm based on threshold segmentation and an adaptively adjustable Gaussian curve. The initial weight map was constructed by using exposure weight and local contrast weight, and then the initial weight map was denoised and refined by fast-guided filtering to obtain a comprehensive weight map.
Finally, the input image was decomposed into the improved Laplacian pyramid, the comprehensive weight map was decomposed into a Gaussian pyramid, and the pyramids were fused at multiple resolutions into the result map. We tested seventeen sets of static natural image sequences with different exposure levels and compared and analyzed the algorithms from both subjective and objective aspects. The experimental results show that the method proposed in this paper can better preserve the edge details of the source images and generate images with a uniform illumination distribution. At present, the algorithm is only applicable to static scenes and cannot solve the ghosting problem of dynamic image fusion. In the future, we plan to study ghosting removal in dynamic scenes to enhance the practicability of the algorithm.

Author Contributions

Methodology, Y.H.; software, C.X.; validation, Z.L., F.L., B.F. and L.C.; resources, C.N.; data curation, D.W.; writing—original draft preparation, Y.H.; writing—review and editing, C.X.; visualization, Y.H.; supervision, C.X.; project administration, C.X.; and funding acquisition, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2019YFC0117800).

Acknowledgments

Input image sequences were provided by Kede Ma, Department of Computer Science, City University of Hong Kong.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akçay, Ö.; Erenoğlu, R.C.; Avşar, E.Ö. The effect of JPEG compression in close range photogrammetry. Int. J. Eng. Geosci. 2017, 2, 35–40.
  2. Chaurasiya, R.K.; Ramakrishnan, K. High dynamic range imaging. In Proceedings of the 2013 International Conference on Communication Systems and Network Technologies, Gwalior, India, 6–8 April 2013; pp. 83–89.
  3. Wang, S.; Zhao, Y. A Novel Patch-Based Multi-Exposure Image Fusion Using Super-Pixel Segmentation. IEEE Access 2020, 8, 39034–39045.
  4. Shao, H.; Jiang, G.; Yu, M.; Song, Y.; Jiang, H.; Peng, Z.; Chen, F. Halo-Free Multi-Exposure Image Fusion Based on Sparse Representation of Gradient Features. Appl. Sci. 2018, 8, 1543.
  5. Li, Z.; Zheng, J. Visual-Salience-Based Tone Mapping for High Dynamic Range Images. IEEE Trans. Ind. Electron. 2014, 61, 7076–7082.
  6. Yilmaz, I.; Bildirici, I.O.; Yakar, M.; Yildiz, F. Color calibration of scanners using polynomial transformation. In Proceedings of the XXth ISPRS Congress Commission V, Istanbul, Turkey, 12–23 July 2004; pp. 890–896.
  7. Grossberg, M.D.; Nayar, S.K. Determining the Camera Response from Images: What Is Knowable? IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1455–1467.
  8. Kou, F.; Li, Z.; Wen, C.; Chen, W. Edge-preserving smoothing pyramid based multi-scale exposure fusion. J. Vis. Commun. Image Represent. 2018, 53, 235–244.
  9. Sağlam, A.; Baykan, N.A. A new color distance measure formulated from the cooperation of the Euclidean and the vector angular differences for lidar point cloud segmentation. Int. J. Eng. Geosci. 2021, 6, 117–124.
  10. Gu, B.; Li, W.; Wong, J.; Zhu, M.; Wang, M. Gradient field multi-exposure images fusion for high dynamic range image visualization. J. Vis. Commun. Image Represent. 2012, 23, 604–610.
  11. Li, S.T.; Kang, X.D. Fast Multi-exposure Image Fusion with Median Filter and Recursive Filter. IEEE Trans. Consum. Electron. 2012, 58, 626–632.
  12. Huang, F.; Zhou, D.; Nie, R.; Yu, C. A color multi-exposure image fusion approach using structural patch decomposition. IEEE Access 2018, 6, 42877–42885.
  13. Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region based image fusion methods. Inf. Fusion 2018, 48.
  14. Burt, P.J.; Adelson, E.H. A multiresolution spline with application to image mosaics. ACM Trans. Graph. 1983, 2, 217–236.
  15. Singh, S.; Mittal, N.; Singh, H. Review of Various Image Fusion Algorithms and Image Fusion Performance Metric. Arch. Comput. Methods Eng. 2021, 28, 3645–3659.
  16. Mertens, T.; Kautz, J.; van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography. Comput. Graph. Forum 2009, 28, 161–171.
  17. Wang, C.M.; He, C.; Xu, M.F. Fast exposure fusion of detail enhancement for brightest and darkest regions. Vis. Comput. 2021, 37, 1233–1243.
  18. Xu, H.; Ma, J.; Zhang, X.-P. MEF-GAN: Multi-Exposure Image Fusion via Generative Adversarial Networks. IEEE Trans. Image Process. 2020, 29, 7203–7216.
  19. Yang, Y.; Wu, J.H.; Huang, S.Y.; Lin, P. Multiexposure Estimation and Fusion Based on a Sparsity Exposure Dictionary. IEEE Trans. Instrum. Meas. 2020, 69, 4753–4767.
  20. Ulucan, O.; Karakaya, D.; Turkan, M. Multi-exposure image fusion based on linear embeddings and watershed masking. Signal Process. 2021, 178.
  21. Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure fusion using boosting Laplacian pyramid. IEEE Trans. Cybern. 2014, 44, 1579–1590.
  22. Li, Z.; Wei, Z.; Wen, C.; Zheng, J. Detail-Enhanced Multi-Scale Exposure Fusion. IEEE Trans. Image Process. 2017, 26, 1243–1252.
  23. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted Guided Image Filtering. IEEE Trans. Image Process. 2014, 24, 120–129.
  24. Ma, K.; Li, H.; Yong, H.; Wang, Z.; Meng, D.; Zhang, L. Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach. IEEE Trans. Image Process. 2017, 26, 2519–2532.
  25. Hayat, N.; Imran, M. Ghost-free multi exposure image fusion technique using dense SIFT descriptor and guided filter. J. Vis. Commun. Image Represent. 2019, 62, 295–308.
  26. Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent. 2015, 31, 208–224.
  27. Qi, G.; Chang, L.; Luo, Y.; Chen, Y.; Zhu, Z.; Wang, S. A Precise Multi-Exposure Image Fusion Method Based on Low-level Features. Sensors 2020, 20, 1597.
  28. Huang, L.; Li, Z.; Xu, C.; Feng, B. Multi-exposure image fusion based on feature evaluation with adaptive factor. IET Image Process. 2021, 15, 3211–3220.
  29. Agrawal, A.; Raskar, R.; Nayar, S.K.; Li, Y.Z. Removing photography artifacts using gradient projection and flash-exposure sampling. ACM Trans. Graph. 2005, 24, 828–835.
  30. Chen, Y.B.; Chen, O.T.C. Image Segmentation Method Using Thresholds Automatically Determined from Picture Contents. EURASIP J. Image Video Process. 2009, 2009, 140492.
  31. Yugander, P.; Tejaswini, C.H.; Meenakshi, J.; Kumar, K.S.; Varma, B.V.N.S.; Jagannath, M. MR Image Enhancement using Adaptive Weighted Mean Filtering and Homomorphic Filtering. Procedia Comput. Sci. 2020, 167, 677–685.
  32. Okonek, B. HDR Photography Gallery Samples. Available online: http://www.easyhdr.com/examples (accessed on 8 March 2022).
  33. HDR Projects Software. Available online: http://www.projects-software.com/HDR (accessed on 8 March 2022).
  34. Cadik, M. Martin Cadik HDR Webpage. Available online: http://cadik.posvete.cz/tmo (accessed on 9 March 2022).
  35. HDRsoft Gallery. Available online: http://www.hdrsoft.com/gallery (accessed on 7 March 2022).
  36. Verma, C.S. Chaman Singh Verma HDR Webpage. Available online: http://pages.cs.wisc.edu//CS766_09/HDRI/hdr.html (accessed on 7 March 2022).
  37. HDR Pangeasoft. Available online: http://pangeasoft.net/pano/bracketeer/ (accessed on 7 March 2022).
  38. Hvdwolf. Enfuse HDR Webpage. Available online: http://www.photographers-toolbox.com/products/lrenfuse.php (accessed on 11 March 2022).
  39. Keerativittayanun, S.; Kondo, T.; Kotani, K.; Phatrapornnant, T.; Karnjana, J. Two-layer pyramid-based blending method for exposure fusion. Mach. Vis. Appl. 2021, 32, 1–18.
  40. Xydeas, C.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
Figure 1. Schematic of the proposed method. The fusion operation shown in the figure is defined in Equation (17).
Figure 2. Gaussian curve.
Figure 3. Process diagram of exposure weight map calculation.
Figure 4. Four sets of multiple exposure image sequences. (a) ‘Cave’ image sequence. (b) ‘Lamp’ image sequence. (c) ‘House’ image sequence. (d) ‘Tower’ image sequence.
Figure 5. Fusion results for Figure 4a. (a) Mertens et al.’s method [16]. (b) Shen et al.’s method [21]. (c) Li et al.’s method [22]. (d) Ma et al.’s method [24]. (e) Hayat et al.’s method [25]. (f) Qi et al.’s method [27]. (g) Huang et al.’s method [28]. (h) The proposed method.
Figure 6. Fusion results for Figure 4b. (a) Mertens et al.’s method [16]. (b) Shen et al.’s method [21]. (c) Li et al.’s method [22]. (d) Ma et al.’s method [24]. (e) Hayat et al.’s method [25]. (f) Qi et al.’s method [27]. (g) Huang et al.’s method [28]. (h) The proposed method.
Figure 7. Fusion results for Figure 4c. (a) Mertens et al.’s method [16]. (b) Shen et al.’s method [21]. (c) Li et al.’s method [22]. (d) Ma et al.’s method [24]. (e) Hayat et al.’s method [25]. (f) Qi et al.’s method [27]. (g) Huang et al.’s method [28]. (h) The proposed method.
Figure 8. Fusion results for Figure 4d. (a) Mertens et al.’s method [16]. (b) Shen et al.’s method [21]. (c) Li et al.’s method [22]. (d) Ma et al.’s method [24]. (e) Hayat et al.’s method [25]. (f) Qi et al.’s method [27]. (g) Huang et al.’s method [28]. (h) The proposed method.
Figure 9. Mean values of $DPA$, $Q^{AB/F}$, and their combination for the different methods.
Figure 10. $Q^{AB/F}$ comparison of the proposed and alternative exposure weighting algorithms.
Table 1. Detailed comparison of other methods.
Algorithm | Method | Application | Dataset | Result
Mertens [16] | Contrast, saturation, and well-exposedness are proposed to construct a fused image weight map, fused using Laplacian pyramids. | MATLAB | 15 static images by Jacques Joffre, Jesse Levinson, Agrawal [29], and themselves | Subjective comparison test with three algorithms
Shen [21] | Local weights, global weights, and JND-based saliency weights are used to estimate exposure weight maps for enhancing the detail and base signals of Laplacian pyramids. | MATLAB | 18 static images by themselves | Subjective comparison test with five algorithms
Li [22] | Image details are extracted with the weighted structure tensor; the Gaussian pyramid of the luminance component of the source image sequence is used as the guide image, weighted smoothing is applied to all weighted Gaussian pyramids, and finally multi-resolution fusion is performed. | MATLAB | 6 static images by Laurance Meylan, Dani Lischinski, Jacques Joffre, Martin Cadik, Erik Reinhard, and themselves | Subjective and MEF-SSIM comparative testing of three algorithms
Ma [24] | The image is decomposed into three components (signal intensity, signal structure, and average intensity); each component is fused separately, and a fused image is finally reconstructed. | MATLAB | 21 static scenes and 19 dynamic scenes by Bartlomiej Okonek, Erik Reinhard, Dani Lischinski, Jianbing Shen, Mertens, Orazio Gallo et al. | Subjective comparison with 12 algorithms; MEF-SSIM comparison with nine algorithms in the static scenario; computational complexity comparison with seven algorithms; average execution time comparison with six algorithms
Hayat [25] | Local contrast, brightness, and color dissimilarity features are used to estimate the initial weights, where the local contrast is calculated by DSIFT; pyramid fusion is used after denoising by a guided filter. | MATLAB | 8 images by Mertens, Jianbing Shen et al. | Subjective, $Q^{AB/F}$, MEF-SSIM, and NIQE comparative testing of three algorithms
Qi [27] | The source image is decomposed into a base layer and a detail layer with a guided filter; the base layer is calculated based on image blocks, and the detail layer is weighted according to the local average brightness change. | MATLAB | 24 static scenes and 15 dynamic scenes by Ma, Hu, and Sen | Subjective comparison with six algorithms and their mean IQA, $Q^{AB/F}$, and MI values
Huang [28] | Based on the Mertens method, an adaptive-factor exposure evaluation weight is proposed, the Sobel operator is used to calculate the texture change weight, and finally the pyramids are fused. | MATLAB | 20 static images by Ma | Subjective, MEF-SSIM, and NIQE comparative testing of eight algorithms
Proposed | Based on the Mertens method, the exposure weight is calculated by threshold segmentation and an adaptively adjustable Gaussian curve, and the detail layer of the Laplacian pyramid is enhanced by homomorphic filtering. | MATLAB | 17 static images by Ma | Subjective, $DPA$, and $Q^{AB/F}$ comparative testing with seven algorithms
Table 2. Information about the static source sequences.
Source Sequence | Size | Image Origin
Arno | 339 × 512 × 3 | Bartlomiej Okonek [32]
Cave | 512 × 384 × 4 | Bartlomiej Okonek [32]
Chinese Garden | 512 × 340 × 3 | Bartlomiej Okonek [32]
Church | 335 × 512 × 3 | Jianbing Shen [21]
Farmhouse | 512 × 341 × 4 | HDR projects [33]
house | 512 × 340 × 4 | Mertens [16]
Kluki | 512 × 341 × 3 | Bartlomiej Okonek [32]
Lamp | 512 × 384 × 15 | Martin Cadik [34]
Landscape | 512 × 341 × 3 | HDRsoft [35]
Laurenziana | 356 × 512 × 3 | Bartlomiej Okonek [32]
Madison Capitol | 512 × 384 × 30 | Chaman Singh Verma [36]
Mask | 512 × 341 × 3 | HDRsoft [35]
Ostrow | 341 × 512 × 3 | Bartlomiej Okonek [32]
Room | 512 × 340 × 3 | Pangeasoft [37]
Studio | 512 × 341 × 5 | HDRsoft [35]
Tower | 512 × 341 × 3 | Jacques Joffre [35]
Window | 384 × 512 × 3 | Hvdwolf [38]
Table 3. Test results of $DPA$ and $Q^{AB/F}$ (each cell reports $DPA$/$Q^{AB/F}$).
Images | Mertens [16] | Shen [21] | Li [22] | Ma [24] | Hayat [25] | Qi [27] | Huang [28] | Proposed
Arno | 1.745/0.616 | 2.105/0.369 | 1.746/0.57 | 1.476/0.593 | 1.63/0.573 | 1.462/0.579 | 1.689/0.629 | 2.296/0.622
Cave | 1.735/0.795 | 2.249/0.561 | 2.26/0.834 | 1.905/0.741 | 1.797/0.801 | 1.915/0.798 | 1.716/0.719 | 2.351/0.822
Chinese Garden | 1.427/0.812 | 2.173/0.475 | 1.628/0.825 | 1.135/0.826 | 1.455/0.822 | 1.095/0.812 | 1.335/0.825 | 2.07/0.831
Church | 1.788/0.846 | 1.823/0.634 | 2.054/0.85 | 1.817/0.852 | 2.053/0.844 | 1.742/0.829 | 1.62/0.853 | 2.125/0.871
Farmhouse | 2.572/0.8 | 2.481/0.645 | 2.544/0.809 | 2.357/0.804 | 2.622/0.804 | 2.357/0.78 | 2.39/0.804 | 2.76/0.823
house | 1.172/0.693 | 1.821/0.407 | 1.422/0.698 | 1.131/0.567 | 1.296/0.685 | 1.257/0.681 | 1.152/0.707 | 1.496/0.712
Kluki | 1.433/0.823 | 1.821/0.535 | 1.662/0.837 | 1.386/0.83 | 1.57/0.831 | 1.37/0.811 | 1.404/0.839 | 1.879/0.837
Lamp | 1.235/0.743 | 2.797/0.378 | 1.536/0.73 | 0.892/0.724 | 1.168/0.708 | 0.869/0.721 | 1.118/0.749 | 1.417/0.731
Landscape | 1.81/0.611 | 2.281/0.339 | 2.012/0.614 | 1.211/0.631 | 1.886/0.604 | 1.197/0.594 | 1.609/0.641 | 2.534/0.626
Laurenziana | 1.416/0.807 | 1.905/0.572 | 1.551/0.82 | 1.12/0.814 | 1.459/0.816 | 1.053/0.799 | 1.417/0.818 | 1.989/0.826
Madison Capitol | 1.02/0.817 | 2.384/0.443 | 1.082/0.819 | 0.838/0.781 | 1.046/0.787 | 0.83/0.766 | 0.964/0.822 | 1.527/0.816
Mask | 1.631/0.828 | 2.443/0.496 | 2.084/0.847 | 1.28/0.841 | 1.729/0.846 | 1.247/0.822 | 1.51/0.835 | 2.357/0.853
Ostrow | 2.139/0.6 | 2.035/0.37 | 2.17/0.539 | 1.863/0.569 | 2.136/0.545 | 1.869/0.576 | 2.102/0.606 | 2.337/0.641
Room | 1.968/0.789 | 2.376/0.478 | 2.11/0.815 | 1.833/0.806 | 1.874/0.808 | 1.839/0.792 | 1.956/0.798 | 2.348/0.817
Studio | 1.334/0.735 | 2.475/0.49 | 1.632/0.77 | 1.048/0.709 | 1.23/0.732 | 1.095/0.724 | 1.269/0.743 | 1.923/0.773
Tower | 1.78/0.781 | 2.316/0.504 | 2.23/0.811 | 1.467/0.81 | 1.932/0.81 | 1.442/0.787 | 1.683/0.782 | 2.56/0.819
Window | 1.775/0.779 | 1.989/0.569 | 2.066/0.793 | 1.707/0.787 | 1.899/0.792 | 1.803/0.772 | 1.665/0.768 | 2.196/0.804
Average | 1.646/0.757 | 2.19/0.486 | 1.87/0.764 | 1.439/0.746 | 1.693/0.753 | 1.438/0.744 | 1.564/0.761 | 2.127/0.778
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
