Article

Analysis of Different Image Enhancement and Feature Extraction Methods

by Lucero Verónica Lozano-Vázquez 1,†, Jun Miura 2,†, Alberto Jorge Rosales-Silva 1,†, Alberto Luviano-Juárez 3,*,† and Dante Mújica-Vargas 4,†

1 Sección de Estudios de Posgrado e Investigación, Instituto Politécnico Nacional—ESIME Zacatenco, Mexico City 07738, Mexico
2 LINCE Lab, Toyohashi University of Technology, Toyohashi 441-8580, Japan
3 Instituto Politécnico Nacional—UPIITA, Mexico City 07340, Mexico
4 Department of Computer Science, Tecnológico Nacional de México/CENIDET, Interior Internado Palmira S/N, Palmira, Cuernavaca 62490, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(14), 2407; https://doi.org/10.3390/math10142407
Submission received: 23 June 2022 / Revised: 5 July 2022 / Accepted: 7 July 2022 / Published: 9 July 2022
(This article belongs to the Special Issue Advances in Pattern Recognition and Image Analysis)

Abstract: This paper describes an image enhancement method for reliable image feature matching. Image features such as SIFT and SURF have been widely used in various computer vision tasks such as image registration and object recognition. However, the reliable extraction of such features is difficult in poorly illuminated scenes. One promising approach is to apply, before feature extraction, an image enhancement method that preserves the original characteristics of the scene. We thus propose to use the Multi-Scale Retinex algorithm, which is designed to emulate the human visual system and recovers more information from a poorly illuminated scene. We experimentally assessed various combinations of image enhancement methods (MSR, gamma correction, histogram equalization, and sharpening) and feature extraction methods (SIFT, SURF, ORB, AKAZE) using images of a large variety of scenes, demonstrating that the combination of the Multi-Scale Retinex and SIFT provides the best results in terms of the number of reliable feature matches.

1. Introduction

Image feature matching is one of the fundamental operations in image processing, used in various vision and robotic applications such as stereo matching [1], image mosaicking [2], specific object recognition [3], feature-based robot localization [4], and SLAM (simultaneous localization and mapping) [5], among others. Although many robust feature extraction algorithms have been proposed, such as the Scale-Invariant Feature Transform (SIFT) [6,7], Speeded-Up Robust Features (SURF) [8,9], and AKAZE [10], they do not work well for feature extraction in degraded images.
Image degradation is often observed in poorly illuminated environments due to, for example, darkness, fog, and pollution. Figure 1 shows several examples of degraded images. Since image details are missing in such images, image features are harder to extract and, even when they are extracted, their characteristics are not well recovered. In addition, by improving the subjective visual quality, this proposal helps in different fields such as medical imaging, robotics, industrial inspection, SLAM algorithms, and defect detection.
There are basically two approaches to dealing with this problem. One is to develop more robust feature extraction and description methods. The other is to enhance the inherent characteristics of the images so that feature extraction and description become more robust. We adopt the latter approach.
A promising approach to image modification is image enhancement. Examples of popular methods are gamma correction [11], image sharpening [12], and histogram equalization [13]. Furthermore, Retinex [14] is a color image enhancement method that emulates human vision and is usually used for improving the quality of images taken under low-illumination conditions. Since most image features are extracted and described based on image gradient information, Retinex is suitable for enhancing low-contrast regions, thereby making image feature extraction easier and more robust.
In this paper, we propose a method of robust feature extraction using Retinex-based image enhancement. The method is quantitatively evaluated on various real images in terms of feature extraction and matching performance, in comparison with other image enhancement methods. Furthermore, we propose to combine the SIFT and MSR algorithms to obtain more information from the different scenes and to generate correct matches; our proposal demonstrates better results than the alternatives under the Sensitivity, Specificity, ROC curve, and SRCC analysis criteria.

2. Related Work

2.1. Feature Extraction and Matching Algorithms

Various image feature extraction algorithms have been proposed over the years, including SIFT [6], SURF [8], ORB [15], and AKAZE [10], among others. Mistry et al. [16] compared SIFT and SURF, reporting that each algorithm presents good results in different circumstances. For example, SURF is better than SIFT in terms of rotation invariance, blur, and warp transform, while SIFT is better than SURF in terms of scale invariance. Ma et al. [17] proposed to use an improved ORB feature in a low-frequency domain obtained by the non-subsampled contourlet transform (NSCT) for remote sensing image matching. Alcantarilla et al. [10] proposed the AKAZE algorithm, a fast multiscale feature detection and description approach that exploits the benefits of nonlinear scale spaces. Lecca et al. [18] show that perceptual features such as image brightness, contrast, and regularity enable increases in the accuracy of SIFT and ORB. That study provides a scheme to evaluate image enhancement from an application viewpoint, demonstrating better results when an image enhancement method is used together with a feature extraction algorithm.
To establish correspondences between image features, a similarity measure between their descriptors is used. Karim et al. [19] proposed to combine SURF features with FAST [20] or BRISK [20] descriptors to provide an optimal solution for reliable and efficient feature matching. Since different image features may have similar descriptors, a robust matching algorithm needs to be adopted. RANSAC (Random Sample Consensus) [21] is one of the most powerful algorithms for outlier rejection. Lati et al. [22] developed an extension of RANSAC with bidirectional matching and fuzzy inference. Since image enhancement algorithms usually increase the number of features, the combination with such a robust matching algorithm is indispensable.

2.2. Image Enhancement

Image enhancement consists of modifying some characteristics of the original image, such as sharpness or noise content, so that the resulting image can be used in specific applications [23]. Since this paper deals with improving the extraction and matching of gradient-based image features, we focus on contrast enhancement, which provides a better extraction of features.
Xu et al. [24] presented an enhancement of features in images taken in low-light environments using multi-scale fusion. Using the high-dynamic-range imaging technique combined with weight maps, a pyramidal fusion is performed to obtain a layer-by-layer fusion of the different frequency bands; the method also extracts characteristics of the original image without generating color distortions. Sun et al. [25] reported a digital image correlation (DIC) method in which, first, a comparison experiment on numerically simulated speckle images acquired under different low-light environments is performed. An image correction algorithm based on multi-scale Retinex is then applied to eliminate or reduce non-uniform lighting effects. Finally, rigid-body rotation and uniaxial traction experiments are quantitatively evaluated to verify the feasibility of the image correction method. Deep learning is another option; for example, R. Zhang et al. [26] reported a feature transformation using a self-supervised feature extractor pre-trained on a Gaussian-like distribution, which reduces the mismatch in the distribution of features describing images taken in low-light environments, significantly benefiting the meta-training of the graph network. On the other hand, R. Zhang et al. [27] present an analysis using infrared images, working with a novel backbone called Deep-IRTarget.
Systems that use deep learning have the disadvantage of high computational cost, as well as the need for very large databases, which can reach sizes of 50 GB; training the neural networks can therefore take a long time, which is a drawback when we intend to work with systems requiring real-time responses.
To enhance image contrast, gray-level transformation methods such as gamma correction [11] and histogram equalization [13] are often used. These are effective in many cases, but some of them need parameter adjustment and may fail to effectively enhance local regions in gray and color images. Retinex [14] is an effective method for contrast enhancement in color images applied in real scenarios. These methods are discussed in more detail and evaluated in terms of their effectiveness for feature extraction and matching in Section 3 and Section 5.

2.3. Image Registration and Stitching

Image registration is the process of overlaying images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Zitova et al. [28] reviewed classical image registration methods such as the sequential similarity detection algorithm, cross-correlation, and the Hausdorff distance. Brown and Lowe developed a fully automatic panoramic image stitching method [29], which performs feature extraction and matching, bundle adjustment, and photometric adjustment and blending. Robust feature extraction and matching are keys to high-quality stitching. The quality of stitching is one of the evaluation criteria, as shown later.

3. Image Enhancement Methods

This section explains the Retinex algorithm and some others which are used for performance comparison.

3.1. Retinex

The Retinex algorithms are primarily intended for color recovery independently of illumination conditions. They can also improve visual qualities of images such as luminosity and contrast, especially when applied to images taken under low-illumination conditions [14].
The following equation defines the calculation of single-scale Retinex (SSR):

$$R(x, y) = \log I(x, y) - \log\left[ F(x, y) \ast I(x, y) \right], \quad (1)$$

where $I(x, y)$ is the intensity of the image pixel and $\ast$ is the convolution operator. $F(x, y)$ is a Gaussian function defined as

$$F(x, y) = z \exp\left( -\frac{x^{2} + y^{2}}{2\sigma^{2}} \right), \quad (2)$$

where $\sigma^{2}$ is the variance and $z$ is the normalization constant.
Retinex has several extensions, such as multi-scale Retinex (MSR) [30], multi-scale Retinex with color restoration (MSRCR) [31], and Retinex algorithms for high-dynamic-range (HDR) imaging [32]. As MSR calculates and combines Retinex values over several scales, it provides tonal rendition and a high dynamic range simultaneously, making its results favorable for our purpose. The MSR value for channel $c$ (R, G, or B) is defined as

$$R_{MSR}^{c} = \sum_{s=1}^{N_{s}} w_{s} R_{s}^{c}, \quad (3)$$

where $R_{s}^{c}$ is the SSR value obtained by Equation (1) at scale $s$ and $w_{s}$ is the scale-wise weight. According to [33], we choose $N_{s} = 3$, $[80, 154, 250]$ for the variances, and $w_{s} = 1/3$. Figure 2 shows the results of applying the MSR algorithm to the images shown in Figure 1. From the comparison of the histograms before and after the application of MSR, we can observe the improvements in dynamic range and perceivable detail.
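For illustration, a minimal NumPy/OpenCV sketch of the SSR/MSR computation in Equations (1)–(3) might look as follows. The helper names are our own, and we pass the values [80, 154, 250] (listed above as variances) directly as Gaussian sigmas for simplicity; this is an assumption, not the authors' exact implementation.

```python
import cv2
import numpy as np

def multi_scale_retinex(image, sigmas=(80, 154, 250), weights=(1/3, 1/3, 1/3)):
    """MSR, Equation (3): weighted sum of SSR outputs, Equation (1), per channel."""
    image = image.astype(np.float64)
    msr = np.zeros_like(image)
    for sigma, w in zip(sigmas, weights):
        # Gaussian surround F * I; GaussianBlur operates channel-wise.
        blurred = cv2.GaussianBlur(image, (0, 0), sigma)
        # log1p avoids log(0) on completely dark pixels.
        msr += w * (np.log1p(image) - np.log1p(blurred))
    # Stretch the log-domain result back to a displayable 8-bit range.
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

enhanced = multi_scale_retinex(cv2.imread("scene.jpg"))  # "scene.jpg" is a placeholder
```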

3.2. Gamma Correction

Gamma correction is usually used for adjusting differences in brightness and color rendering between monitors. The gamma coefficient characterizes the non-linear relationship between a pixel value and its actual luminance [34]. The higher the gamma value, the steeper the curve of this relationship, thereby increasing the contrast [11]. Gamma correction is defined as

$$I' = I^{\gamma}, \quad (4)$$

where $I$ is the original image, $I'$ is the correction result, and $\gamma$ is a positive exponent. An appropriate gamma value must be chosen for an effective conversion; in our case, it is necessary to adjust the value on an image-by-image basis due to the variety of illumination conditions across scenes.
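As a sketch, Equation (4) can be applied efficiently with a lookup table; the gamma value below is only a placeholder, since, as noted above, the value must be tuned per image.

```python
import cv2
import numpy as np

def gamma_correct(image, gamma):
    # Normalize to [0, 1], apply I' = I^gamma (Equation (4)), rescale to 8 bits.
    table = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(image, table)

brightened = gamma_correct(cv2.imread("scene.jpg"), gamma=0.5)  # gamma < 1 brightens
```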

3.3. Histogram Equalization

The objective of histogram equalization is to transform the image so that the cumulative distribution of pixel values becomes linear. This is achieved by mapping each pixel value to a new one so that the number of pixels in each bin of the intensity histogram becomes as similar as possible, without inverting the order of pixels in terms of intensity.
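A minimal sketch: OpenCV's equalizeHist implements this mapping for a single 8-bit channel. Equalizing only the luminance channel of a color image, as below, is a common choice but an assumption on our part, not a step prescribed by the text.

```python
import cv2

image = cv2.imread("scene.jpg")
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)
y = cv2.equalizeHist(y)  # equalize the luminance channel only
equalized = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```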

3.4. Sharpening with Unsharp Masking

Image sharpening with unsharp masking is another image enhancement method [12]. The procedure is to first blur the original image to obtain the unsharp mask, subtract the blurred image from the original, and add the resulting difference back to the original to emphasize edges. The method is effective for contrast enhancement.
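A sketch of unsharp masking under these definitions; the blur sigma and the sharpening amount (0.5) are illustrative choices, not values taken from the paper.

```python
import cv2

image = cv2.imread("scene.jpg")
blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=3)  # the "unsharp mask"
# Add the (original - blurred) difference back: 1.5*I - 0.5*blur = I + 0.5*(I - blur)
sharpened = cv2.addWeighted(image, 1.5, blurred, -0.5, 0)
```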

3.5. Gan-Based Low Light Enhancement Method

EnlightenGAN [35] is a method that stands out for improving images acquired in low-light environments, since it eliminates the dependence on paired training data and allows working with a wide variety of images from different domains.

3.6. Comparison of Image Enhancement Methods

Figure 3 shows a comparison between the image enhancement methods mentioned above. We can see the improvements achieved by the MSR, histogram equalization, and image sharpening methods. Moreover, MSR provides good results in most cases; even in well-illuminated scenarios, the perception of detail is noticeably improved by the MSR algorithm, as shown in Figure 4.
Although the original image may sometimes be the best option, contrast enhancement software runs regardless of the nature of the input image, so it is important that the proposed algorithm continues to perform proper processing and increases the amount of perceptible information.

4. Feature Extraction and Matching

4.1. Feature Extraction

Once the images are properly enhanced, the next step is to extract and describe feature points, which will then be matched between images to calculate the image-to-image transformation. In this paper, we use four representative image features: SIFT, SURF, ORB, and AKAZE, explained below.

4.1.1. SIFT 

SIFT is a method of obtaining invariant characteristics of a local image region as a feature vector called a descriptor. Each descriptor is invariant to translation, scaling, and rotation. Furthermore, it is robust to illumination changes [6].
The SIFT algorithm detects feature points (called keypoints) independently of scale variation by analyzing the response to the DoG (Difference of Gaussians) function, defined as

$$\psi(x, y, \sigma) = g(x, y, d\sigma) - g(x, y, \sigma), \quad (5)$$

in a scale space, which is obtained by repeatedly convolving the input image with a Gaussian kernel $g(x, y, \sigma)$, with $\sigma = 2$, at different scale factors $d$ of the Gaussian blur.
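The DoG response of Equation (5) can be sketched directly with two Gaussian blurs; the base scale of 1.6 and the step d = √2 below are illustrative choices, not values stated in this section.

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE).astype("float32")
sigma, d = 1.6, 2 ** 0.5                        # illustrative base scale and scale step
g1 = cv2.GaussianBlur(gray, (0, 0), sigma)      # g(x, y, sigma)
g2 = cv2.GaussianBlur(gray, (0, 0), d * sigma)  # g(x, y, d*sigma)
dog = g2 - g1                                   # psi(x, y, sigma), Equation (5)
```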

4.1.2. SURF

SURF is a feature detection method that uses the integral image to decrease the computation required to detect and describe interest points. The integral image makes it possible to calculate the sum of pixels inside a rectangular region of the input image with only three additions and four memory accesses [8].
Similar to SIFT, SURF is also based on scale-space theory. The difference is that SURF uses the DoH (Determinant of Hessian), with the Hessian matrix defined as

$$\mathcal{H}(\mathbf{x}, \sigma) = \begin{pmatrix} L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\ L_{yx}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma) \end{pmatrix}, \quad (6)$$

where $L_{xx}$, $L_{yy}$, and $L_{xy}$ indicate the convolutions of the Gaussian second-order partial derivatives, approximated with box-type filters based on the integral image, in the horizontal, vertical, and diagonal directions, respectively [36].
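For reference, SURF is available in OpenCV only through the contrib package, and recent builds additionally require the nonfree option; a minimal detection sketch, assuming such a build, is:

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
# hessianThreshold filters keypoints by their DoH response (Equation (6));
# 400 is an illustrative value, not one taken from the paper.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(gray, None)
```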

4.1.3. ORB

ORB is a feature detection and description algorithm realized by a combination of the Oriented FAST detector and the BRIEF descriptor. The orientation component for FAST is calculated using the intensity centroid [15]. The BRIEF feature is constructed from a set of $n$ binary tests. The binary test $\tau$ is defined as

$$\tau(p; x, y) = \begin{cases} 1 & \text{if } p(x) < p(y) \\ 0 & \text{otherwise} \end{cases}, \quad (7)$$

where $p$ is a smoothed image patch and $x$ and $y$ are the points to be compared. The feature is represented as an $n$-bit vector

$$f_{n}(p) = \sum_{1 \le i \le n} 2^{i-1} \tau(p; x_{i}, y_{i}), \quad (8)$$

which is rotated by the FAST orientation to achieve rotation invariance.
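Equations (7) and (8) amount to packing intensity comparisons into a bit string. A minimal NumPy sketch follows, with toy data and hand-picked test pairs (real BRIEF fixes a learned set of pairs in advance); the 0-based shift plays the role of the $2^{i-1}$ weights.

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """Pack the binary tests of Equation (7) into the integer of Equation (8)."""
    bits = 0
    for i, ((x1, y1), (x2, y2)) in enumerate(pairs):
        tau = 1 if patch[y1, x1] < patch[y2, x2] else 0  # binary test tau
        bits |= tau << i
    return bits

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(31, 31))                           # smoothed patch p (toy data)
pairs = [((5, 7), (20, 11)), ((3, 3), (28, 30)), ((15, 2), (9, 25))]  # n = 3 tests
descriptor = brief_descriptor(patch, pairs)
```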

4.1.4. AKAZE

AKAZE is a 2D feature detection and description method that operates completely in a nonlinear scale space [10]. The AKAZE detector is based on the determinant of the Hessian matrix, and the use of Scharr filters improves the quality of the rotational invariance. As a result, AKAZE features are invariant to scale, rotation, and limited affine transformations, and they have more distinctiveness at varying scales because of the nonlinear scale space [37].
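To give a concrete picture of how these detectors are invoked in practice, a short OpenCV sketch comparing the freely available ones is shown below (SURF is omitted here because of the contrib/nonfree caveat noted earlier); the 300-feature cap mirrors the limit used in Section 5.

```python
import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
detectors = {
    "SIFT": cv2.SIFT_create(nfeatures=300),
    "ORB": cv2.ORB_create(nfeatures=300),
    "AKAZE": cv2.AKAZE_create(),  # exposes no feature-count cap; trim afterwards if needed
}
for name, detector in detectors.items():
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    print(f"{name}: {len(keypoints)} keypoints detected")
```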

4.2. Feature Point Matching Using RANSAC

Once two sets of image features are obtained for an image pair, we determine feature matches based on the sum of squared differences (SSD) between the feature vectors. Let $f_{i}^{1}$ be the $i$th feature in image 1 and $f_{j}^{2}$ be the closest feature to $f_{i}^{1}$ in image 2. The feature match $(i, j)$ is accepted if it satisfies the following two conditions:

$$SSD_{i,j} \le th_{ASSD}, \quad (9)$$

$$\frac{SSD_{i,j}}{SSD_{i,k}} \le th_{RSSD}, \quad (10)$$

where $i$ and $j$ index the matched features, $k$ is the index of the second-closest feature in image 2, and $th_{ASSD}$ and $th_{RSSD}$ are thresholds. In the experiments shown below, we used $th_{ASSD} = 8$ and $th_{RSSD} = 0.6$, selected from the ranges $[7, 10]$ and $[0.4, 0.8]$, respectively, in a separate test.
These conditions help eliminate ambiguous matches. However, to further handle the case where multiple non-identical features have very similar descriptors, we adopt Random Sample Consensus (RANSAC) [21], one of the most popular robust matching algorithms in computer vision.
RANSAC first randomly selects the minimum number of feature pairs required to determine the transformation parameters. It then transforms the remaining features in one image to the other using the estimated parameters to find the set of matched points (i.e., inliers). The algorithm repeats these steps for a specified number of iterations and chooses the parameter set with the maximum number of inliers. In this paper, we consider the homography between images as the transformation; the number of parameters is then eight [22] and the number of required feature pairs is four.
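A sketch of this matching pipeline for SIFT-style float descriptors is shown below. OpenCV's matcher ranks candidates by L2 distance rather than the SSD itself (its square), so the ratio test is applied to distances here; the 0.6 ratio and the 1000 RANSAC iterations follow the values used in this paper, while the reprojection threshold of 3.0 px is our own illustrative choice.

```python
import cv2
import numpy as np

def match_and_estimate_homography(kp1, desc1, kp2, desc2):
    # Two nearest neighbors in image 2 per feature in image 1, for the ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(desc1, desc2, k=2)
            if m.distance < 0.6 * n.distance]        # ratio test, cf. Equation (10)
    if len(good) < 4:                                # a homography needs >= 4 pairs
        return None, None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC estimates the 8-parameter homography and flags inliers.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0, maxIters=1000)
    return H, inliers
```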

5. Assessment Results

5.1. Effect of Image Enhancement for Feature Extraction and Matching

The objective of image enhancement in this paper is to increase the number of correct feature matches for poorly illuminated scenes. We first qualitatively examine how image enhancement using MSR improves feature extraction and feature matching, using the SIFT algorithm, since its local descriptors are invariant to translation, rotation, and scaling, which helps to establish better correspondences between the images that make up the sequence of a scene. Figure 5 shows the detected image features (indicated as green and red points) in the original and the MSR-enhanced images. The number of detected features is larger for the original images because of the relatively high noise level in low-contrast images. Figure 6 shows the feature matches (indicated as yellow lines) between two images for both cases, obtained by the RANSAC-based homography estimation. Apparently, the enhanced image pair has a much larger number of correct matches. These results show the effect of image enhancement on feature extraction and matching.

5.2. Quantitative Evaluation for a Variety of Scenes

Figure 4 shows an example where the lighting condition is reasonably good and image enhancement is not necessarily effective; therefore, a quantitative evaluation of the effectiveness of MSR-based image enhancement was performed using a set of 40 color scenes. The image set was taken by ourselves at a variety of locations and under a variety of illumination conditions, using a 31-megapixel cellphone camera; the images were acquired in JPG format (see Figure 7). For each scene, we took five consecutive images while moving the camera, for a total of 200 images, so that they can be used for feature matching and image stitching experiments.
We limited the numbers of features and feature matches to 300 and 200, respectively, in order to reduce the computation time. The number of iterations in RANSAC is set to 1000.
We first examine feature detection and matching performance for all combinations of image enhancement and feature extraction methods in detail for one of the 40 scenes. Table 1 shows a comparison for the sequence of the first image, which is the leftmost image in the first row of Figure 7. In the table, the Im1 through Im5 columns indicate the number of detected features and, in parentheses, the number of matched pairs. The results demonstrate that the combination MSR+SIFT gives the best performance and MSR+AKAZE the second best; this is because the SIFT and AKAZE algorithms are more robust by virtue of the mathematical procedures that define them. This comparison concerns the image enhancement methods that do not use deep learning; when the GAN method is used, the best overall results are obtained, although obtaining them takes longer due to the training that must be carried out with the neural network. We can observe that applying an image pre-processing method makes it possible to generate a greater number of features and, therefore, a better stitching between images.
The same experiments were carried out over all 40 scenes. For each scene, the total numbers of detected and matched features were normalized. Then, we calculated the average and the standard deviation over all scenes for each combination. The results are summarized in Table 2. We also examined the ratio of the number of matched features to that of detected features, as summarized in Table 3. Again, the combination MSR+SIFT exhibits the best performance, demonstrating that it can detect not only a larger number of features but also more reliable ones.
Feature extraction and matching depend on the threshold values. If we set loose thresholds, more matched features are obtained, but more incorrect ones are included. If we set tight thresholds, fewer matched features are obtained, but most of them are correct. Therefore, we conducted an ROC analysis [38].
We calculated the sensitivity and the specificity for combinations of the thresholds $th_{ASSD}$ and $th_{RSSD}$ used in Equations (9) and (10). The tested values of $th_{ASSD}$ and $th_{RSSD}$ are $\{7, 8, 9, 10\}$ and $\{0.4, 0.5, 0.6, 0.7, 0.8\}$, respectively.
The sensitivity and the specificity are defined as

$$Sensitivity = \frac{TP}{TP + FN}, \quad (11)$$

$$Specificity = \frac{TN}{TN + FP}, \quad (12)$$

where $TP$, $TN$, $FP$, and $FN$ are the numbers of true positive, true negative, false positive, and false negative cases, respectively. In determining the ground-truth data, we use the matches obtained with the chosen threshold pair (i.e., $[th_{ASSD}, th_{RSSD}] = [8, 0.6]$) and the RANSAC-based outlier rejection.
Figure 8 shows the ROC curves of all combinations of image enhancement methods and features. Each value is averaged over all image sequences. Table 4 shows the numerical results for the thresholds we used. MSR+SIFT and MSR+AKAZE exhibit the best results for all threshold values.
In addition, a Spearman's rank correlation coefficient (SRCC [39]) analysis was performed. The SRCC indicates the level of correlation between two variables, in our case, the number of detected features and the number of correct matches. It is defined by Equation (13), where $\rho$ is the rank correlation coefficient, $d_{i}$ is the difference between the two ranks of each observation, and $n$ is the number of observations. To perform this evaluation, we used the values shown in the last column of Table 1.

$$\rho = 1 - \frac{6 \sum d_{i}^{2}}{n(n^{2} - 1)}. \quad (13)$$
In Table 5, the second column lists the number of features detected over the image sets, the third column the number of matches, and the fourth column $d_{i}^{2}$; with $n = 40$, the fifth column shows the value of $\rho$ for each case. The closer the value of $\rho$ is to one, the better the stitching between the images. It can be observed that the best stitching results were obtained when using the combination of MSR and SIFT.
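For reference, the rank correlation of Equation (13) can be reproduced with SciPy, which performs the ranking internally. The toy call below pairs the per-method totals of detected and matched features from the first rows of Table 1; it only illustrates the computation, since the paper's exact ranking setup (with n = 40 scenes) is not fully specified here.

```python
from scipy.stats import spearmanr

# (detected, matched) totals taken from the last column of Table 1.
detected = [842, 795, 837, 833, 995, 808, 828, 991]
matched = [731, 729, 739, 728, 956, 769, 798, 949]
rho, pvalue = spearmanr(detected, matched)
print(f"SRCC = {rho:.2f}")
```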

5.3. Comparison of Enhancement Methods in Image Stitching

Figure 9 compares the image enhancement methods in an image stitching scenario. For all five methods, we extract SIFT features and apply the RANSAC-based matching for image stitching. The quality of the stitching results changes depending on the number and the accuracy of feature matches. The MSR, histogram equalization, and sharpening cases show reasonable stitching, while the gamma correction case fails to correctly recover the geometry.
We also evaluate the accuracy of the estimated image-to-image transformation. To obtain the ground-truth transformation, we manually matched feature points and used the transformation estimated from that set of matches as the ground truth. The number of manually matched feature pairs is 20.
As a criterion for evaluating the matching accuracy between the images, we use the Sampson distance. The Sampson error can be roughly thought of as the squared distance between a point $x$ and the corresponding epipolar line [40]; it provides a first-order approximation of the reprojection error and is known to give better estimates than other criteria such as the Kanatani distance and the symmetric epipolar distance [41]. Table 6 shows the results on the Sampson distance. The combination of MSR and SIFT gives the best result, and that of MSR and AKAZE gives the second best. This is because a larger number of reliable feature matches are obtained for those combinations than for the others, as shown above.
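For completeness, a sketch of the classical first-order Sampson error for a point pair under a fundamental matrix F [40] is given below; whether the paper evaluates it against a fundamental matrix or adapts it to the estimated homography is not detailed in this section, so this is only the textbook formulation.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order Sampson error for homogeneous points x1, x2 (3-vectors),
    which would satisfy x2^T F x1 = 0 in the ideal, noise-free case [40]."""
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    numerator = float(x2 @ Fx1) ** 2
    denominator = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return numerator / denominator
```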

6. Conclusions and Future Work

This paper described several methods of image enhancement for robust feature matching in poorly illuminated environments. Among the various image enhancement methods, we proposed to use Retinex, more specifically, multi-scale Retinex (MSR). The quantitative evaluation using a large variety of 40 scene sequences shows that MSR, when combined with SIFT or AKAZE, gives the best performance in terms of the number of reliable feature matches as well as the accuracy of the recovered transformation for image stitching.
Although MSR performs best for almost all scenes, there are complicated scenes for which MSR does not work properly, for example, completely dark images taken at night. It is therefore future work to analyze and classify scenes based on the illumination condition so that we can select an appropriate image enhancement method, including keeping the original image as an option, depending on the characteristics of the scene. Likewise, we propose to use deep learning, more specifically the EnlightenGAN method described above, since it was verified to generate good results on completely dark images, which would allow applying it in specific systems such as the detection of forest fires.

Author Contributions

Conceptualization, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; methodology, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; software, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; validation, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; formal analysis, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; investigation, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; resources, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; data curation, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; writing—original draft preparation, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; writing—review and editing, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; visualization, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; supervision, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; project administration, L.V.L.-V., J.M., A.J.R.-S., A.L.-J. and D.M.-V.; funding acquisition, A.J.R.-S. and A.L.-J. All authors have read and agreed to the published version of the manuscript.

Funding

This article was supported by Secretaría de Investigación y Posgrado del Instituto Politécnico Nacional (SIP-IPN), under grant 20220623.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available on request from the First Author (L.V.L.-V.).

Acknowledgments

A.L.-J. wishes to express his gratitude to the Secretaría de Investigación y Posgrado del Instituto Politécnico Nacional for the financial support of this article and for the institutional support.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MSR: Multi-Scale Retinex
SIFT: Scale-Invariant Feature Transform
SURF: Speeded-Up Robust Features
AKAZE: Accelerated-KAZE
ORB: Oriented FAST and Rotated BRIEF
GC: Gamma Correction
HE: Histogram Equalization

References

1. Bhalerao, R.H.; Gedam, S.S.; Buddhiraju, K.M. Modified Dual Winner Takes All Approach for Tri-Stereo Image Matching Using Disparity Space Images. J. Indian Soc. Remote Sens. 2017, 45, 45–54.
2. Wang, Z.; Chen, Y.; Zhu, Z.; Zhao, W. An automatic panoramic image mosaic method based on graph model. Multimed. Tools Appl. 2015, 75, 2725–2740.
3. Zhang, J.; Yin, X.; Luan, J.; Liu, T. An improved vehicle panoramic image generation algorithm. Multimed. Tools Appl. 2019, 78, 27663–27682.
4. Valgren, C.; Lilienthal, A. SIFT, SURF and Seasons: Long-term Outdoor Localization Using Local Features. In Proceedings of the 3rd European Conference on Mobile Robots, Freiburg, Germany, 19–21 September 2007; pp. 253–258.
5. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163.
6. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
7. Hamid, N.; Yahya, A.; Badlishah, R.; Al-Qershi, O.M. A Comparison between Using SIFT and SURF for Characteristic Region Based Image Steganography. Int. J. Comput. Sci. Issues 2012, 9, 110–117.
8. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
9. Cheon, S.H.; Eom, I.K.; Ha, S.W.; Moon, Y.H. An enhanced SURF algorithm based on new interest point detection procedure and fast computation technique. J. Real-Time Image Process. 2019, 16, 1177–1187.
10. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. In Proceedings of the 12th European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012.
11. Rahaman, S.; Rahaman, M.M.; Abdullah-Al-Wadud, M.; Al-Quaderi, G.D.; Shoyaib, M. An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016, 35, 1–13.
12. Archana, J.; Aishwarya, P. A Review on the Image Sharpening Algorithms Using Unsharp Masking. Int. J. Eng. Sci. Comput. 2016, 6, 8729–8733.
13. Kansal, S.; Purwar, S.; Tripathi, R.K. Image contrast enhancement using unsharp masking and histogram equalization. Multimed. Tools Appl. 2018, 77, 26919–26938.
14. Sbert, C.; Morel, J.M.; Petro, A.B. A PDE formalization of Retinex theory. IEEE Trans. Image Process. 2010, 19, 2825–2837.
15. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT and SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011.
16. Mistry, D.; Banerjee, A. Comparison of Feature Detection and Matching Approaches: SIFT and SURF. Glob. Res. Dev. J. Eng. 2017, 2, 7–13.
17. Ma, D.; Lai, H. Remote Sensing Image Matching Based on Improved ORB in NSCT Domain. J. Indian Soc. Remote Sens. 2019, 47, 801–807.
18. Lecca, M.; Torresani, A.; Remondino, F. Comprehensive Evaluation of Image Enhancement for Unsupervised Image Description and Matching. IET Image Process. 2020, 14, 4329–4339.
19. Karim, S.; Zhang, Y.; Brohi, A.A.; Asif, M.R. Feature Matching Improvement through Merging Features for Remote Sensing Imagery. 3D Res. 2018, 9, 1–10.
20. Azimi, E.; Behrad, A.; Ghaznavi-Ghoushchi, M.B.; Shanbehzadeh, J. A fully pipelined and parallel hardware architecture for real-time BRISK salient point extraction. J. Real-Time Image Process. 2019, 16, 1859–1879.
21. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395.
22. Lati, A.; Belhocine, M.; Achour, N. Robust aerial image mosaicing algorithm based on fuzzy outliers rejection. Evol. Syst. 2020, 11, 717–729.
23. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 2001.
24. Xu, Y.; Yang, C.; Sun, B.; Yan, X.; Chen, M. A Novel Multi-scale Fusion Framework for Detail-preserving Low-light Image Enhancement. Inf. Sci. 2021, 548, 378–397.
25. Sun, L.; Tang, C.; Xu, M.; Lei, Z. Non-uniform illumination correction based on multi-scale Retinex in digital image correlation. Appl. Opt. 2021, 60, 5599–5609.
26. Zhang, R.; Yang, S.; Zhang, Q.; Xu, L.; He, Y.; Zhang, F. Graph-based few-shot learning with transformed feature propagation and optimal class allocation. Neurocomputing 2022, 470, 247–256.
27. Zhang, R.; Xu, L.; Yu, Z.; Shi, Y.; Mu, C.; Xu, M. Deep-IRTarget: An Automatic Target Detector in Infrared Imagery Using Dual-domain Feature Extraction and Allocation. IEEE Trans. Multimed. 2021, 24, 1735–1749.
28. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
29. Brown, M.; Lowe, D.G. Automatic Panoramic Image Stitching using Invariant Features. Int. J. Comput. Vis. 2006, 74, 59–73.
30. Mario, D.G.; Alberto, J.R.S.; Francisco, J.G.F.; Ponomaryov, V. Chromaticity Improvement in Images with Poor Lighting Using the Multiscale-Retinex MSR Algorithm. In Proceedings of the 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW), Kharkiv, Ukraine, 20–24 June 2016.
31. Londoño, N.D.; Bizai, G.; Drozdowicz, B. Implementation and application of Retinex algorithms to the preprocessing of retinography color images. Rev. Ing. Biomed. 2009, 3, 36–46.
32. McCann, J. Retinex Theory. In Encyclopedia of Color Science and Technology; Springer: New York, NY, USA, 2016.
33. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A Multiscale Retinex for Bridging the Gap between Color Images and the Human Observation of Scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
34. Hasan, M.M. A New PAPR Reduction Scheme for OFDM Systems Based on Gamma Correction. Circuits Syst. Signal Process. 2014, 33, 1655–1668.
35. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
36. Qi, F.; Weihong, X.; Qiang, L. Research of Image Matching Based on Improved SURF Algorithm. Indones. J. Electr. Eng. 2014, 12, 1395–1402.
37. Khan, S.A.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies, Sukkur, Pakistan, 3–4 March 2018.
38. Kerekes, J. Receiver Operating Characteristic Curve Confidence Intervals and Regions. IEEE Geosci. Remote Sens. Lett. 2008, 5, 251–255.
39. Okarma, K.; Chlewicki, W.; Kopytek, M.; Marciniak, B.; Lukin, V. Entropy-Based Combined Metric for Automatic Objective Quality Assessment of Stitched Panoramic Images. Entropy 2021, 23, 1525.
40. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
41. Fathy, M.E.; Hussein, A.S.; Tolba, M.F. Fundamental Matrix Estimation: A Study of Error Criteria. Pattern Recognit. Lett. 2011, 32, 383–391.
Figure 1. Example images taken under poorly illuminated environments.
Figure 2. Comparison between images and their respective histograms after applying the MSR.
Figure 3. Comparison between image enhancement methods.
Figure 4. Comparison between the analyzed methods.
Figure 5. Comparison of keypoint extraction with and without MSR-based image enhancement. First row: original images; second row: MSR-enhanced images.
Figure 6. Comparison of keypoint matches with and without MSR-based image enhancement. First row: original images; second row: MSR-enhanced images.
Figure 7. Proposed dataset used for the experiments.
Figure 8. ROC curves of all combinations of image enhancement methods and features.
Figure 9. Comparison of image stitching results.
Table 1. Comparison of combinations of image enhancement methods and image features in terms of the number of detected and matched features (matched pairs in parentheses).

Image Enhancement Method + Image Feature | Im1 | Im2 | Im3 | Im4 | Im5 | Total Number
SIFT (no image enhancement) | 170 (130) | 165 (145) | 168 (148) | 172 (150) | 167 (158) | 842 (731)
SURF (no image enhancement) | 165 (144) | 150 (135) | 160 (152) | 165 (158) | 155 (140) | 795 (729)
ORB (no image enhancement) | 167 (142) | 175 (157) | 160 (140) | 165 (145) | 170 (155) | 837 (739)
AKAZE (no image enhancement) | 170 (130) | 163 (147) | 165 (145) | 170 (150) | 165 (156) | 833 (728)
MSR+SIFT | 200 (190) | 198 (189) | 199 (195) | 200 (190) | 198 (192) | 995 (956)
MSR+SURF | 160 (158) | 165 (162) | 168 (164) | 165 (162) | 150 (150) | 808 (769)
MSR+ORB | 162 (156) | 170 (160) | 164 (159) | 168 (165) | 164 (158) | 828 (798)
MSR+AKAZE | 200 (188) | 198 (187) | 197 (194) | 198 (190) | 198 (190) | 991 (949)
Sharpening+SIFT | 175 (160) | 165 (152) | 175 (168) | 165 (156) | 168 (161) | 848 (797)
Sharpening+SURF | 155 (140) | 145 (132) | 158 (144) | 140 (132) | 144 (135) | 742 (683)
Sharpening+ORB | 158 (146) | 160 (152) | 161 (154) | 158 (150) | 162 (154) | 799 (756)
Sharpening+AKAZE | 172 (164) | 162 (150) | 168 (163) | 164 (154) | 163 (157) | 991 (925)
GC+SIFT | 120 (100) | 125 (123) | 130 (127) | 110 (108) | 128 (125) | 613 (583)
GC+SURF | 112 (96) | 120 (115) | 115 (105) | 100 (95) | 105 (100) | 552 (511)
GC+ORB | 118 (102) | 120 (115) | 125 (122) | 115 (110) | 128 (124) | 606 (573)
GC+AKAZE | 120 (108) | 127 (123) | 132 (127) | 106 (108) | 125 (122) | 610 (588)
HE+SIFT | 150 (135) | 162 (158) | 178 (172) | 164 (158) | 170 (165) | 824 (788)
HE+SURF | 150 (138) | 130 (124) | 145 (138) | 120 (116) | 130 (128) | 675 (644)
HE+ORB | 158 (153) | 160 (155) | 152 (148) | 155 (149) | 160 (156) | 785 (761)
HE+AKAZE | 150 (144) | 162 (154) | 180 (174) | 162 (154) | 167 (162) | 821 (788)
GAN+SIFT | 200 (194) | 198 (192) | 200 (195) | 200 (197) | 198 (195) | 996 (973)
GAN+SURF | 190 (186) | 192 (186) | 194 (184) | 188 (184) | 190 (186) | 954 (926)
GAN+ORB | 192 (188) | 194 (190) | 190 (184) | 188 (185) | 190 (186) | 954 (933)
GAN+AKAZE | 198 (194) | 200 (194) | 189 (185) | 196 (193) | 194 (191) | 977 (957)
Table 2. Average and standard deviation (in parentheses) of the normalized number of matches.

Enhancement Method | SIFT | SURF | ORB | AKAZE
No enhancement | 0.80 (1) | 0.80 (0.94) | 0.80 (0.90) | 0.80 (0.88)
MSR | 1 (0.39) | 0.83 (0.54) | 0.83 (0.51) | 0.99 (0.43)
Sharpening | 0.85 (0.65) | 0.76 (0.72) | 0.81 (0.55) | 0.86 (0.67)
Gamma correction | 0.62 (0.68) | 0.53 (0.81) | 0.61 (0.51) | 0.61 (0.71)
Histogram equalization | 0.81 (0.75) | 0.74 (0.76) | 0.80 (0.61) | 0.80 (0.73)
GAN | 1 (0.28) | 0.95 (0.46) | 0.88 (0.50) | 1 (0.29)
Table 3. Average and standard deviation (in parentheses) of the ratio of the number of matched features to that of detected features.

Enhancement Method | SIFT | SURF | ORB | AKAZE
No enhancement | 0.90 (0.02) | 0.92 (0.04) | 0.88 (0.01) | 0.92 (0.04)
MSR | 0.97 (0.01) | 0.94 (0.01) | 0.95 (0.02) | 0.96 (0.01)
Sharpening | 0.96 (0.01) | 0.94 (0.01) | 0.93 (0.01) | 0.95 (0.02)
Gamma correction | 0.85 (0.02) | 0.84 (0.01) | 0.83 (0.01) | 0.84 (0.01)
Histogram equalization | 0.97 (0.01) | 0.94 (0.01) | 0.94 (0.01) | 0.96 (0.01)
GAN | 0.98 (0.01) | 0.97 (0.01) | 0.96 (0.01) | 0.98 (0.01)
Table 4. Comparison in terms of the sensitivity (first four columns) and the specificity (last four columns).

Enhancement Method | SIFT | SURF | ORB | AKAZE | SIFT | SURF | ORB | AKAZE
No enhancement | 0.80 | 0.79 | 0.80 | 0.80 | 0.72 | 0.74 | 0.71 | 0.71
MSR | 0.93 | 0.90 | 0.91 | 0.92 | 0.92 | 0.88 | 0.88 | 0.92
Sharpening | 0.86 | 0.82 | 0.80 | 0.84 | 0.84 | 0.83 | 0.83 | 0.82
Gamma correction | 0.70 | 0.67 | 0.70 | 0.70 | 0.67 | 0.64 | 0.68 | 0.68
H.E. | 0.82 | 0.80 | 0.80 | 0.83 | 0.79 | 0.77 | 0.77 | 0.80
GAN | 0.95 | 0.92 | 0.90 | 0.95 | 0.94 | 0.90 | 0.88 | 0.94
Table 5. Spearman's rank correlation coefficient analysis.

Image Enhancement Method | Detectors | Matching | d_i^2 | ρ
SIFT (no image enhancement) | 842 | 731 | 36 | 0.87
SURF (no image enhancement) | 795 | 729 | 1 | 0.99
ORB (no image enhancement) | 837 | 739 | 49 | 0.77
AKAZE (no image enhancement) | 833 | 728 | 25 | 0.94
MSR+SIFT | 995 | 956 | 0 | 1
MSR+SURF | 808 | 769 | 16 | 0.97
MSR+ORB | 828 | 798 | 1 | 0.99
MSR+AKAZE | 991 | 949 | 4 | 0.99
Sharp.+SIFT | 848 | 797 | 16 | 0.97
Sharp.+SURF | 742 | 683 | 9 | 0.99
Sharp.+ORB | 799 | 756 | 9 | 0.99
Sharp.+AKAZE | 991 | 925 | 9 | 0.99
GC+SIFT | 613 | 583 | 16 | 0.97
GC+SURF | 552 | 511 | 9 | 0.99
GC+ORB | 606 | 573 | 9 | 0.99
GC+AKAZE | 610 | 588 | 16 | 0.97
HE+SIFT | 824 | 788 | 25 | 0.94
HE+SURF | 675 | 644 | 9 | 0.99
HE+ORB | 785 | 761 | 16 | 0.97
HE+AKAZE | 821 | 768 | 25 | 0.94
GAN+SIFT | 996 | 973 | 0 | 1
GAN+SURF | 954 | 973 | 0 | 1
GAN+ORB | 954 | 926 | 16 | 0.97
GAN+AKAZE | 977 | 957 | 9 | 0.99
Table 6. Average and standard deviation (in parentheses) of the Sampson distances.

Enhancement Method | SIFT | SURF | ORB | AKAZE
No enhancement | 4.72 (0.53) | 4.94 (0.47) | 4.82 (0.54) | 4.98 (0.56)
MSR | 0.59 (0.04) | 0.67 (0.06) | 0.62 (0.04) | 0.60 (0.05)
Sharpening | 0.68 (0.07) | 0.75 (0.10) | 0.72 (0.97) | 0.68 (0.07)
Gamma correction | 6.18 (0.73) | 6.77 (0.75) | 6.36 (0.69) | 6.22 (0.76)
Histogram equalization | 0.72 (0.05) | 0.81 (0.05) | 0.76 (0.04) | 0.74 (0.05)
GAN | 0.48 (0.03) | 0.55 (0.04) | 0.58 (0.04) | 0.48 (0.03)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
