Communication

Satellite-Borne Optical Remote Sensing Image Registration Based on Point Features

Xinan Hou, Quanxue Gao, Rong Wang and Xin Luo

1 School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
3 Yangtze Delta Region Institute (HuZhou), University of Electronic Science and Technology of China, Huzhou 313099, China
4 School of Resources and Environment, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(8), 2695; https://doi.org/10.3390/s21082695
Submission received: 11 March 2021 / Revised: 8 April 2021 / Accepted: 8 April 2021 / Published: 11 April 2021

Abstract

Technologies in image fusion, image splicing, and target recognition have developed rapidly, and since image registration is the basis of many such applications, its performance directly affects subsequent work. In this work, to exploit the rich features of satellite-borne optical imagery such as panchromatic and multispectral images, the Harris corner algorithm is combined with the scale-invariant feature transform (SIFT) operator for feature point extraction. Our rough matching strategy uses the K-D (K-Dimensional) tree combined with the BBF (Best Bin First) method, with the nearest-neighbor/second-nearest-neighbor distance ratio as the similarity measure. Finally, a triangle-area representation (TAR) algorithm is utilized to eliminate false matches and thus ensure registration accuracy. The performance of the proposed algorithm is compared with that of existing popular algorithms. The experimental results indicate that, for visible-light and multispectral satellite remote sensing images of different sizes and from different sources, the proposed algorithm is excellent in both accuracy and efficiency.

1. Introduction

The specific objective of image registration is to find the geometric correspondence between different images that contain the same content. It uses an accurate model to describe the internal relationship among pixels in two images and accurately match them together. Hence, a unified model is essential to find the commonalities of features in different images [1,2]. Image registration is a recurring research topic, and it possesses important value in multi-temporal and large-scale application scenarios. In the field of remote sensing, there are many types of satellite-borne optical image sensors, and the remote sensing images they acquire differ in resolution and band. Through registration, mosaicking, and fusion of these images, more complete and abundant data can be generated, laying the foundation for subsequent work such as change detection and scene expansion.
In recent years, with the continuous development of sensor technology, researchers in the satellite remote sensing field have paid more attention to the study of high-speed and stable information transmission [3,4,5,6,7]. As sensor types become increasingly diverse, remote sensing images with different spatial and spectral resolutions can be obtained from different satellite-borne platforms. At present, there are many types of observation satellites, including the Landsat and SPOT (Systeme Probatoire d’Observation de la Terre) series, as well as the Chinese Gaofen (GF) series and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). This work focuses on the registration of satellite-borne optical remote sensing imagery, i.e., panchromatic and multispectral images.
The remainder of this article is organized as follows. The second section introduces the principles of the algorithm, including the Harris corner algorithm, the scale-invariant feature transform (SIFT) algorithm, the Best Bin First (BBF) algorithm and the nearest/second-nearest neighbor ratio method. Then, the basic principle of the triangle-area representation (TAR) algorithm, which is adopted to eliminate false matches from rough matching and realize fine registration, is described in detail; the optimal affine transform parameters are then obtained from the matched points. In the third section, the proposed algorithm is evaluated using images from GF-1, GF-2 and ASTER. Finally, conclusions and discussion are given in the fourth section.

2. Remote Sensing Image Matching

This work uses the Harris algorithm to extract feature points from reference images and images to be registered, and then uses the SIFT operator to describe the feature points. The rough matching strategy relies on the K-D (K-Dimensional) tree combined with the BBF algorithm, with the first/second nearest neighbor ratio as the similarity measure. Finally, a TAR-based algorithm is used to eliminate falsely matched points in order to precisely register the images. Our algorithm procedure is demonstrated in Figure 1.

2.1. Remote Sensing Image Preprocessing

There are many factors that have an impact on optical remote sensing image quality. A major disadvantage is thermal noise or interference from other factors in the imaging processes. Another disadvantage is that the encoding mode of some image systems sacrifices grayscale representation to some degree, in order to achieve high compression ratios, which makes gray differences of pixels in adjacent regions smaller [8] and reduces the gradients of gray value between objects and backgrounds. Consequently, feature extraction for registration becomes difficult and it is necessary for these images to be filtered or enhanced, so as to improve their quality before deep processing [9,10,11].
For the purpose of widening the gray-scale range and improving contrast, linear stretching is adopted in this work. Generally, a linear stretch of 0.02 achieves an acceptable visual appearance in remote sensing images. That is, the portion of the image histogram between the 2% and 98% percentiles is linearly stretched to the whole gray space, extending the dynamic range of the pixels so that the image carries more abundant gray information. Figure 2 shows an example of 0.02 linear stretching; the processed image has stronger contrast and better visual appearance, and is more amenable to subsequent feature extraction than the original one.
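A 2% linear stretch of this kind takes only a few lines; the sketch below is a minimal NumPy version, assuming a single-band image array and an 8-bit output range (our choices for illustration, not settings taken from the paper):

```python
import numpy as np

def linear_stretch(band, lower_pct=2.0, upper_pct=98.0):
    """Linearly stretch the 2%-98% histogram range to the full 8-bit gray space."""
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    scaled = (band.astype(np.float64) - lo) / max(hi - lo, 1e-12)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)
```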

2.2. Feature Point Extraction

2.2.1. Harris Feature Points Extraction

The Harris corner detection and extraction algorithm builds a rectangular window of a certain size and tests every pixel in an image. The quantity tested is the average energy of the pixels in the window, which serves as the metric for judging whether a pixel is a corner point; namely, when the average energy at a point in the window is greater than a preset threshold, the point is regarded as a feature point [12].
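As a sketch of this windowed corner test, OpenCV’s Harris implementation can be thresholded directly; the file name, window size, and threshold fraction below are placeholders rather than the authors’ settings:

```python
import cv2
import numpy as np

gray = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Corner response computed over a sliding window (blockSize) at every pixel.
response = cv2.cornerHarris(gray, blockSize=3, ksize=3, k=0.04)

# A pixel counts as a feature point when its response exceeds a preset
# threshold, here 1% of the maximum response (an illustrative choice).
corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
```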

2.2.2. SIFT Feature Point Description

Since it does not involve the construction of a multi-scale space, the time complexity of the Harris operator is rather low. It has good robustness to illumination, scale, rotation and viewpoint changes [13], but its detection performance on smooth images is not satisfactory [14]. In spite of its higher time complexity, the SIFT operator can capture more feature points. Taking the main characteristics of optical remote sensing images into consideration, we use the Harris algorithm to extract feature points and apply the SIFT descriptor to describe them, in order to satisfy both accuracy and time requirements [15].
Generally, the SIFT algorithm consists of two parts: determining the feature points of an image and describing them [16,17]. The determination of image feature points is similar to the perception of point information by human vision: regardless of optical image resolution, human eyes can always distinguish valid features [18]. Hence, stable feature points are detected at different scales, and information obtained during detection is used to construct a multi-dimensional descriptor for the Harris feature points. In terms of feature point extraction, the SIFT operator is robust to scaling, rotation and other transforms of images, and it is also resistant to external factors such as illumination and noise [19].
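The division of labor described here — Harris for detection, SIFT only for description — might be wired together in OpenCV as follows; the detector parameters and the fixed keypoint size are illustrative assumptions:

```python
import cv2

gray = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)

# Detect corners with the Harris measure (useHarrisDetector=True)...
pts = cv2.goodFeaturesToTrack(gray, maxCorners=2000, qualityLevel=0.01,
                              minDistance=5, useHarrisDetector=True, k=0.04)

# ...then wrap them as keypoints and let SIFT compute 128-D descriptors,
# skipping SIFT's own multi-scale detection stage.
kps = [cv2.KeyPoint(float(x), float(y), 7.0) for [[x, y]] in pts]
kps, desc = cv2.SIFT_create().compute(gray, kps)
```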

2.3. Rough Matching Strategy

2.3.1. BBF Search Strategy

The K-D tree is essentially a binary tree structure. Data are divided into a binary tree according to their spatial positions, and searches are then carried out according to binary tree search rules. The kernel idea of the K-D tree is to split the data, from top to bottom, into balanced left and right halves of the space: all data are split into a left subtree and a right subtree according to their spatial position, and the same operation is repeated on the subtrees until the data cannot be subdivided. During this division, it is vital to keep the left and right subtrees as balanced as possible; otherwise, search efficiency may decline. The Best Bin First (BBF) search algorithm was developed for K-D tree structures, and it outperforms the standard K-D tree search when processing high-dimensional features [20]. It pushes the nodes still to be examined into a priority queue sorted by their distances from the splitting hyperplanes; the closest node has the highest priority, and the queue is traversed in priority order until it becomes empty. In addition, BBF imposes a restriction on its search time: once the running time exceeds a preset limit, the algorithm directly outputs the current closest point as the result. Therefore, in this work we chose the K-D tree to organize the feature points and the BBF algorithm to search them.
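This organization is close to what OpenCV’s FLANN matcher exposes: a K-D tree index whose bounded search approximates BBF by capping the number of bins examined. A minimal setup, with illustrative parameter values, could be:

```python
import cv2

# K-D tree index over the SIFT descriptors; `checks` caps how many tree
# leaves the best-bin-first style search may visit before it must return
# the best candidate found so far.
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=4)
search_params = dict(checks=64)
flann = cv2.FlannBasedMatcher(index_params, search_params)
```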

2.3.2. Similarity Measure

The first/second nearest neighbor ratio method is chosen as the similarity metric in registration, in order to reduce computational complexity [21]. First, the point in the image to be registered that is closest to a search point in the reference image is found, and their distance is denoted as Dis1. Then, the second-closest point to the search point is found, and that distance is denoted as Dis2. The ratio Dis1/Dis2 is compared to a given threshold; when the ratio is less than the threshold, the point pair is considered a probable true match.
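Continuing the FLANN sketch above, the ratio test is one comparison per query descriptor; the descriptor array names and the 0.7 threshold are assumptions for illustration:

```python
# desc_sen and desc_ref are assumed to be the float32 SIFT descriptor
# arrays of the sensed and reference images from Section 2.2.
matches = flann.knnMatch(desc_sen, desc_ref, k=2)

# m.distance is Dis1 (nearest), n.distance is Dis2 (second nearest); keep a
# pair only when Dis1/Dis2 falls below the threshold.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
```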

2.4. Fine Matching Strategy

Since satellite-borne optical remote sensing images are acquired at high altitude, the view differences among images of the same target are slight, and the changes between images only involve transforms such as translation, scaling, and rotation. Affine transform models can therefore satisfy the geometric transform requirements in our cases [22,23], so an affine transform based on triangle-area representation (TAR) is utilized in fine matching [24].
TAR is a framework in which features at every scale, i.e., edge lengths of a triangle, are normalized locally according to their scales. Among shape attributes at different scales, local normalization features are more distinct and can more accurately describe shapes in remote sensing images. Unlike some other matching methods that use a limited number of boundary points (such as corners or knee points), TAR is an exhaustive method for all boundary points.
The TAR value is calculated from a triangle region formed by points on the shape boundary. Each contour point is represented by its coordinates (x, y), and a discrete sequence $(x_n, y_n)$ ($n = 1, \ldots, N$) represents the $N$ points obtained by resampling a shape. The curvature of each point is then represented using the TAR value defined below. Consider three sequential points $(x_{n-t_s}, y_{n-t_s})$, $(x_n, y_n)$ and $(x_{n+t_s}, y_{n+t_s})$, where $n \in [1, N]$, $t_s \in [1, T_s]$ represents an arbitrary edge length of a triangle, and $T_s$ is the longest distance between any two sample points. The TAR value formed by these points can be expressed as:

$$\mathrm{TAR}(n, t_s) = \frac{1}{2}\begin{vmatrix} x_{n-t_s} & y_{n-t_s} & 1 \\ x_n & y_n & 1 \\ x_{n+t_s} & y_{n+t_s} & 1 \end{vmatrix} \tag{1}$$
Assuming the contour is traversed counterclockwise, the TAR value is positive when the local contour denoted by the three sample points is convex, negative when it is concave, and zero when the contour is straight. Figure 3 depicts triangular regions at different positions of a closed contour.
The above figure is a closed hammer-shaped contour. Region 1 is a convex shape, so its TAR value is greater than 0; Region 2 is a concave shape, so its TAR value is less than 0; Region 3 is a straight line, so its TAR value is 0. The curvature function of the discrete points can be rewritten in terms of the TAR value as:

$$c(n) = \frac{\dot{x}_n \ddot{y}_n - \ddot{x}_n \dot{y}_n}{\left(\dot{x}_n^2 + \dot{y}_n^2\right)^{3/2}} = \frac{\mathrm{TAR}(n, n+1)}{(ds_n)^3}, \tag{2}$$

where $\mathrm{TAR}(n, n+1)$ is the TAR value at $t_s = 1$, and $ds_n = \sqrt{\dot{x}_n^2 + \dot{y}_n^2}$ corresponds to the first edge length of the triangle, that is, the distance between the first and second vertices of the triangle formed by the points $(x_n, y_n)$, $(x_{n+1}, y_{n+1})$, and $(x_{n+2}, y_{n+2})$. This equation clearly expresses the relationship between the curvature and the TAR value of a shape. It is known that the zero crossings of a curvature function are invariant under a general affine transformation [26], while points with non-zero curvature are not [27]. Thus, considering the contour sequences $x_n$ and $y_n$ of a two-dimensional shape, if an affine transform is performed, the relationship between the original and transformed sequences is:
$$\begin{bmatrix} \hat{x}_n \\ \hat{y}_n \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix} \begin{bmatrix} x_n \\ y_n \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \tag{3}$$
where $\hat{x}_n$ and $\hat{y}_n$ are the transformed sequences after the affine transform, $b_1$ and $b_2$ are translation parameters, and $a_1$, $a_2$, $a_3$ and $a_4$ are scaling and rotation factors. The influence of the translation parameters is easily eliminated by normalizing the shape boundary with respect to its centroid; this normalization is accomplished by subtracting the average value from each boundary sequence. Substituting Equation (3) into Equation (1), we obtain:
$$\widehat{\mathrm{TAR}}(n, t_s) = (a_1 a_4 - a_2 a_3)\,\mathrm{TAR}(n, t_s), \tag{4}$$

where $\widehat{\mathrm{TAR}}$ is the TAR value after the affine transformation. Evidently, $\widehat{\mathrm{TAR}}$ differs from $\mathrm{TAR}$ only by the constant factor $(a_1 a_4 - a_2 a_3)$, so the TAR value is invariant to affine transformation up to this common factor.
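A direct NumPy transcription of Equation (1), assuming the boundary has already been resampled to N points and is closed (so the indexing wraps around), might look like this:

```python
import numpy as np

def tar_values(contour, ts):
    """TAR value of Equation (1) for every point of a closed, resampled contour.

    contour: (N, 2) array of (x, y) boundary points; ts: triangle edge length.
    """
    prev_p = np.roll(contour, ts, axis=0)    # (x_{n-ts}, y_{n-ts})
    next_p = np.roll(contour, -ts, axis=0)   # (x_{n+ts}, y_{n+ts})
    # Half the 3x3 determinant: positive on convex (counterclockwise) spans,
    # negative on concave ones, zero on straight segments.
    return 0.5 * (prev_p[:, 0] * (contour[:, 1] - next_p[:, 1])
                  + contour[:, 0] * (next_p[:, 1] - prev_p[:, 1])
                  + next_p[:, 0] * (prev_p[:, 1] - contour[:, 1]))
```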

2.5. Elimination of False Matches

The difficulty of feature point matching lies in noise, intensive affine transformation, etc. For instance, some feature points are shifted from their original positions and become abnormal points in an image. Many point matching algorithms exist, most of which are based on the similarity of local features, spatial relationships, or both. Some existing algorithms use affine-invariant operators and global information to detect whether a matching point is abnormal [28]. For example, the RANSAC (Random Sample Consensus) algorithm establishes a model for the correspondence between point pairs to estimate the transform parameters; as long as false matches make up no more than 50% of the pairs, the algorithm can eliminate them effectively [29]. In this work, we use the affine invariance of TAR to eliminate false matches. The procedure consists of three steps: constructing KNN-TAR (K-Nearest Neighbor-Triangle-Area Representation) operators [30], processing candidate outliers and removing the remaining false matches.
Most of the outliers can be found by KNN-TAR, but a few outliers share the same nearest neighbors. The removal of such outliers is very important, and it directly affects the registration performance of the proposed algorithm. The outlier removal in this study thus includes three parts (a sketch of the core test follows the list below). That is:
  • Constructing KNN-TAR operators. Supposing that the nearest neighbors of outliers show greater structural dissimilarity, the TAR value is used to construct an affine-invariant variable, calculated over the K nearest neighbors (KNN), in order to find outliers.
  • Dealing with candidate outliers. Whether the suspected outliers sifted out by KNN-TAR are real false matches is determined by the local structure of each matching pair and the global transform error.
  • Removing false matches. The parameter setting of KNN-TAR is adjusted so as to eliminate the outliers that share the same KNN.
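The exact KNN-TAR operator follows [30]; the sketch below only illustrates the affine-invariance test that drives it — by Equation (4), triangles built from a matched point and its K nearest matched neighbors should have TAR values that differ between the two images by one common factor, so pairs whose ratio strays from the consensus become candidate outliers. The neighborhood size, tolerance, and median-based test are our assumptions, not the authors’ exact procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_candidate_outliers(p1, p2, k=5, tol=0.3):
    """Flag matched pairs whose local TAR ratio deviates from the consensus.

    p1, p2: (N, 2) arrays of matched point coordinates in the two images.
    """
    def tri_area(a, b, c):  # signed triangle area, cf. Equation (1)
        return 0.5 * ((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                      - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

    # K nearest matched neighbors of each point in image 1, self excluded.
    nbr = cKDTree(p1).query(p1, k=k + 1)[1][:, 1:]
    a1 = tri_area(p1, p1[nbr[:, 0]], p1[nbr[:, 1]])
    a2 = tri_area(p2, p2[nbr[:, 0]], p2[nbr[:, 1]])
    # By Equation (4), inlier triangles share one ratio, det(A); large
    # deviations from the median ratio mark candidate outliers.
    ratio = a2 / np.where(np.abs(a1) < 1e-9, 1e-9, a1)
    med = np.median(ratio)
    return np.abs(ratio - med) > tol * np.abs(med)
```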

3. Experimental Results

The hardware environment is an Intel Core i5-4570 processor at 3.20 GHz with 4.00 GB RAM. The operating system is 64-bit Windows 7, and the programming software is MATLAB R2014a. Here, we compare the proposed algorithm with other existing algorithms using four pairs of satellite-borne optical remote sensing images.
The first pair of experimental images was acquired by the GF-1 Panchromatic and Multispectral sensor (PMS) over Mao County; each is a multispectral image with 3000 × 1012 pixels. The preprocessed images are shown in Figure 4.
Figure 5 and Figure 6 display the extracted feature points and the rough matching result for the GF-1 image pair, respectively. It can be seen that the proposed algorithm detects a sufficient number of feature points.
Rough matching yields 251 point pairs. The straight lines in Figure 6 connect the corresponding points in the two images; some of the lines cross, indicating obvious false matches. After the RANSAC algorithm eliminated one point pair, the resulting affine transform matrix for registration is presented in Equation (5):
$$H_{\mathrm{RANSAC}} = \begin{bmatrix} 1.0292 & 0.0006 & 363.8828 \\ 0.0003 & 1.0308 & 50.3609 \\ 0 & 0 & 1 \end{bmatrix} \tag{5}$$
After 16 point pairs were eliminated by the proposed KNN-TAR algorithm, the obtained affine transform matrix for registration is given in Equation (6).
$$H_{\mathrm{TAR}} = \begin{bmatrix} 1.0293 & 0.0007 & 363.9619 \\ 0.0004 & 1.0308 & 50.3757 \\ 0 & 0 & 1 \end{bmatrix} \tag{6}$$
Figure 7 exhibits the GF-1 image pairs after eliminating false matches using the two methods. The registration results for GF-1 are listed in Table 1. The RMSE (Root Mean Square Error) of our algorithm is 0.8619 pixels, which is better than that of the RANSAC algorithm.
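For reference, the RANSAC baseline and an RMSE figure of this kind can be reproduced with OpenCV along the following lines; the point-array names and the reprojection threshold are assumptions:

```python
import numpy as np
import cv2

# pts_sen / pts_ref: matched point arrays of shape (N, 2), float32 (assumed
# to come from the rough matching stage).
A, inliers = cv2.estimateAffine2D(pts_sen, pts_ref, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)

# Residuals of the inliers under the fitted 2x3 affine matrix A
# (the top two rows of a matrix like Equation (5)).
proj = pts_sen @ A[:, :2].T + A[:, 2]
err = np.linalg.norm(proj - pts_ref, axis=1)[inliers.ravel() == 1]
rmse = float(np.sqrt(np.mean(err ** 2)))
```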
Moreover, according to the obtained transformation parameters, the bilinear interpolation method is utilized to realize image mosaicking. The final stitched GF-1 image is shown in Figure 8.
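A sketch of that warping step, assuming `H` is the 3 × 3 matrix from Equation (6) and `img_sen`, `img_ref` hold the sensed and reference images; the overwrite compositing and canvas padding are illustrative only:

```python
import cv2

# Resample the sensed image into the reference frame; INTER_LINEAR selects
# bilinear interpolation. The extra canvas width is an arbitrary choice.
h, w = img_ref.shape[:2]
warped = cv2.warpPerspective(img_sen, H, (w + 400, h), flags=cv2.INTER_LINEAR)

# Naive mosaic: paste the reference image over the resampled canvas.
mosaic = warped.copy()
mosaic[:h, :w] = img_ref
```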
The second pair of experimental images was acquired by the GF-2 PMS sensor over Mao County; each is a multispectral image with 2400 × 1800 pixels. The preprocessed and stitched GF-2 images are shown in Figure 9. The registration results for GF-2 are listed in Table 2. The RMSE of our algorithm is 5.7423 pixels, which is also better than that of RANSAC.
The third pair of experimental images was acquired by the visible/near-infrared subsystem of the ASTER sensor, with a spatial resolution of 15 m. The preprocessed and stitched ASTER images, which are pseudo-color synthetic images, are shown in Figure 10. The registration results for ASTER are listed in Table 3. The RMSE of our algorithm is 0.5362 pixels, which is also better than that of the RANSAC algorithm.
The fourth pair of experimental images is from GF-1 and GF-2. The preprocessed and stitched images are shown in Figure 11. The registration results for this multi-source pair are listed in Table 4. The RMSE of our algorithm is 2.9001 pixels, which is also better than that of the RANSAC algorithm.
From the above experimental results, it can be seen that, as the number of rough matching points increases, the matching time of the KNN-TAR algorithm grows slightly relative to that of the RANSAC algorithm. However, for satellite-borne optical remote sensing images from different sources, the proposed algorithm can eliminate false matches and realize accurate registration, and the comparative experiments show that its registration accuracy is better than that of the RANSAC algorithm.
Moreover, we also compare the proposed method with three popular image registration methods: SIFT, SURF (Speeded-Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF). These methods likewise combine BBF with the RANSAC algorithm for feature point matching, and the SIFT method here still uses the Harris operator for feature extraction instead of the global matching of the original version. The comparison results are presented in Table 5. As revealed in Table 5, the proposed registration method outperforms the other methods for the GF-1, ASTER and multi-source images. Even for the GF-2 images, its registration result is acceptable.

4. Conclusions

Because the view differences between satellite remote sensing images are small, a registration method based on point features is designed in this study. The Harris operator, with its fast detection speed, is chosen to extract image features, and the SIFT operator is used to describe the features in order to ensure accuracy. After that, the BBF algorithm combined with the first/second-nearest neighbor method is adopted to realize rough matching of feature points. Then, a TAR method is introduced into false match elimination in order to enhance matching accuracy. The experimental results indicate that the proposed method has better registration accuracy than RANSAC and some other existing registration algorithms. However, because it combines several optimization methods, the proposed algorithm has no significant advantage in time efficiency, and registration time will grow as image resolution and size increase. Therefore, other effective optimization algorithms may be utilized to accelerate the parameter fitting process and improve registration speed. In addition, the proposed strategy is only effective in two-dimensional space and cannot perform well for images with large view differences; in the future, it can be modified to adapt to three-dimensional space.

Author Contributions

Conceptualization, Q.G. and X.L.; Data curation, X.H. and R.W.; Formal analysis, Q.G. and X.L.; Methodology, Q.G. and X.L.; Software, X.H. and R.W.; Validation, X.H.; Writing—original draft, X.H. and Q.G.; Writing—review and editing, R.W. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Program of Sichuan (Grant Nos. 2017GZ0327 and 2020YFG0240), the Science and Technology Program of Hebei (Grant Nos. 20355901D and 19255901D), the National Defense Science and Technology Key Laboratory of Remote Sensing Information and Image Analysis Technology of China (Grant No. 6142A010301), and the Chinese Air-Force Equipment Pre-Research Project (Grant No. 10305***02).

Informed Consent Statement

Not applicable.

Data Availability Statement

The download website of the ASTER data images used in this work is https://e4ftl01.cr.usgs.gov/, accessed on 10 April 2021. GF-1 and GF-2 images are available at the following link: http://www.cresda.com/CN/sjfw/zxsj/index.shtml, accessed on 10 April 2021.

Acknowledgments

We would like to thank the journal’s editors and reviewers for their kind comments and valuable suggestions to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gu, Y.; Chen, H.Y. A remote sensing image registration algorithm based on multiple constraints and a variational Bayesian framework. Remote Sens. Lett. 2021, 12, 296–305.
  2. Zhang, Q.; Cao, Z.G.; Hu, Z.W.; Jia, Y.H.; Wu, X.L. Joint image registration and fusion for panchromatic and multispectral images. IEEE Trans. Geosci. Remote Sens. 2015, 12, 467–471.
  3. Öfverstedt, J.; Lindblad, J.; Sladoje, N. Fast and robust symmetric image registration based on distances combining intensity and spatial information. IEEE Trans. Image Process. 2019, 28, 3584–3597.
  4. Ma, W.P.; Zhang, J.; Wu, Y.; Jiao, L.C.; Zhu, H.; Zhao, W. A novel two-step registration method for remote sensing images based on deep and local features. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4834–4843.
  5. Li, Y.S.; Xu, J.J.; Xia, R.J.; Huang, Q.H.; Xie, W.X.; Li, X.L. Extreme-constrained spatial-spectral corner detector for image-level hyperspectral image classification. Pattern Recognit. Lett. 2018, 109, 110–119.
  6. Ma, W.P.; Wen, Z.L.; Wu, Y.; Jiao, L.C.; Gong, M.G.; Zheng, Y.F.; Liu, L. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci. Remote Sens. Lett. 2017, 14, 3–7.
  7. Fernández, F.J.; Gonzalez, E.A.; Llanos, D.R. Distributed programming of a hyperspectral image registration algorithm for heterogeneous GPU clusters. J. Parallel Distrib. Comput. 2021, 151, 86–93.
  8. Sedaghat, A.; Mohammadi, N. Uniform competency-based local feature extraction for remote sensing images. ISPRS J. Photogramm. Remote Sens. 2018, 135, 142–157.
  9. Li, Z.G.; Zheng, J.H.; Zhu, Z.J.; Yao, W.; Wu, S.Q. Weighted guided image filtering. IEEE Trans. Image Process. 2015, 24, 120–129.
  10. Versaci, M.; Morabito, F.C. Image edge detection: A new approach based on fuzzy entropy and fuzzy divergence. Int. J. Fuzzy Syst. 2021, 1–10.
  11. Dhanachandra, N.; Chanu, Y.J. An image segmentation approach based on fuzzy c-means and dynamic particle swarm optimization algorithm. Multimed. Tools Appl. 2020, 79, 18839–18858.
  12. Xiang, Y.; Wang, F.; You, H.J. OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3078–3090.
  13. Etezadifar, P.; Farsi, H. A new sample consensus based on sparse coding for improved matching of SIFT features on remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5254–5263.
  14. Liu, G.; Gousseau, Y.; Tupin, F. A contrario comparison of local descriptors for change detection in very high spatial resolution satellite images of urban areas. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3904–3918.
  15. Fan, J.W.; Wu, Y.; Wang, F.; Zhang, Q.; Liao, G.S.; Li, M. SAR image registration using phase congruency and nonlinear diffusion-based SIFT. IEEE Geosci. Remote Sens. Lett. 2015, 12, 562–566.
  16. Yu, W.; Sun, X.S.; Yang, K.Y.; Rui, Y.; Yao, H.X. Hierarchical semantic image matching using CNN feature pyramid. Comput. Vis. Image Underst. 2018, 169, 40–51.
  17. Qian, X.L.; Lin, S.; Cheng, G.; Yao, X.W.; Ren, H.L.; Wang, W. Object detection in remote sensing images based on improved bounding box regression and multi-level features fusion. Remote Sens. 2020, 12, 143.
  18. Feng, R.; Du, Q.; Li, X.; Shen, H. Robust registration for remote sensing images by combining and localizing feature- and area-based methods. ISPRS J. Photogramm. Remote Sens. 2019, 151, 15–26.
  19. Lv, X.; Ming, D.; Chen, Y.Y.; Wang, M. Very high resolution remote sensing image classification with SEEDS-CNN and scale effect analysis for superpixel CNN classification. Int. J. Remote Sens. 2018, 40, 506–531.
  20. Du, J.; Bian, F. A privacy-preserving and efficient k-nearest neighbor query and classification scheme based on k-dimensional tree for outsourced data. IEEE Access 2020, 8, 69333–69345.
  21. Li, S.; Wang, J.; Liang, Z.; Su, L. Tree point clouds registration using an improved ICP algorithm based on kd-tree. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 4545–4548.
  22. Zhang, J.; Ma, W.P.; Wu, Y.; Jiao, L.C. Multimodal remote sensing image registration based on image transfer and local features. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1210–1214.
  23. Xie, Y.H.; Shen, J.; Wu, C.D. Affine geometrical region CNN for object tracking. IEEE Access 2020, 8, 68638–68648.
  24. Tran, N.T.; Tan, D.K.L.; Doan, A.D.; Do, T.T.; Bui, T.A.; Tan, M.X.; Cheung, N.M. On-device scalable image-based localization via prioritized cascade search and fast one-many RANSAC. IEEE Trans. Image Process. 2019, 28, 1675–1690.
  25. Shi, L.; Zhao, H.Q.; Zakharov, Y.; Chen, B.D.; Yang, Y.R. Variable step-size widely linear complex-valued affine projection algorithm and performance analysis. IEEE Trans. Signal Process. 2020, 68, 5940–5953.
  26. Ferrer, M.; Gonzalez, A.; Diego, M.D.; Pinero, G. Distributed affine projection algorithm over acoustically coupled sensor networks. IEEE Trans. Signal Process. 2017, 65, 6423–6434.
  27. Li, H.; Qin, J.H.; Xiang, X.Y.; Pan, L.L.; Ma, W.T.; Xiong, N.N. An efficient image matching algorithm based on adaptive threshold and RANSAC. IEEE Access 2018, 6, 66963–66971.
  28. Tong, X.H.; Ye, Z.; Xu, Y.S.; Liu, S.J.; Li, L.Y.; Xie, H.; Li, T.P. A novel subpixel phase correlation method using singular value decomposition and unified random sample consensus. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4143–4156.
  29. Morley, D.; Foroosh, H. Improving RANSAC-based segmentation through CNN encapsulation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2661–2670.
  30. Zhang, K.; Li, X.Z.; Zhang, J.X. A robust point-matching algorithm for remote sensing image registration. IEEE Geosci. Remote Sens. Lett. 2014, 11, 469–473.
  31. Wang, X.; Lv, X.; Li, L.; Cui, G.; Zhang, Z. A new method of speeded up robust features image registration based on image preprocessing. In Proceedings of the 2018 International Conference on Information Systems and Computer Aided Education (ICISCAE), Changchun, China, 6–8 July 2018; pp. 317–321.
  32. Yeh, C.; Chang, Y.; Hsu, P.; Hsien, C. GPU acceleration of UAV image splicing using oriented FAST and rotated BRIEF combined with PCA. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5700–5703.
Figure 1. The proposed image registration algorithm flow chart.
Figure 2. Stretching of a pseudo-color image synthesized from a GF-1 (Gaofen-1) multispectral image, acquired at 38° N and 117.7° E on 9 September 2014: (a) the original image; (b) the corresponding enhanced image.
Figure 3. Triangular regions of a closed contour [25].
Figure 4. A GF-1 image pair: (a) acquired on 19 February 2015; (b) acquired on 11 May 2015.
Figure 5. The extracted feature points in the GF-1 image pair: (a) acquired on 19 February 2015; (b) acquired on 11 May 2015.
Figure 6. The rough matching result of the GF-1 image pair.
Figure 7. The GF-1 image pairs after eliminating false matches: (a) Finely matched by the RANSAC (Random Sample Consensus) algorithm; (b) finely matched by the KNN-TAR (K-Nearest Neighbor-Triangle-Area Representation) algorithm.
Figure 8. The stitched GF-1 image.
Figure 9. A GF-2 image pair: (a) Acquired on 19 February 2015; (b) acquired on 24 February 2015; (c) the stitched GF-2 image.
Figure 10. An ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) image pair: (a) Acquired on 15 August 2020; (b) acquired on 20 October 2020; (c) the stitched ASTER image.
Figure 11. A multi-source image pair: (a) Acquired by GF-1 on 19 February 2015 (800 × 600); (b) acquired by GF-2 on 24 February 2015 (2400 × 1800); (c) the stitched multi-source image.
Table 1. Comparison of the KNN-TAR and RANSAC algorithms for GF-1 images.

Match Methods | Number of Pairs by Rough Matching | Number of Pairs by Fine Matching | Time (ms) | RMSE (Pixels)
RANSAC        | 251                               | 250                              | 44        | 0.8818
KNN-TAR       | 251                               | 235                              | 51        | 0.8619
Table 2. Comparison of the KNN-TAR and RANSAC algorithms for GF-2 images.

Match Methods | Number of Pairs by Rough Matching | Number of Pairs by Fine Matching | Time (ms) | RMSE (Pixels)
RANSAC        | 1150                              | 1150                             | 96        | 5.8743
KNN-TAR       | 1150                              | 1135                             | 123       | 5.7423
Table 3. Comparison of the KNN-TAR and RANSAC algorithms for ASTER images.

Match Methods | Number of Pairs by Rough Matching | Number of Pairs by Fine Matching | Time (ms) | RMSE (Pixels)
RANSAC        | 111                               | 111                              | 20        | 0.5666
KNN-TAR       | 111                               | 103                              | 23        | 0.5362
Table 4. Comparison of the KNN-TAR and RANSAC algorithms for multi-source images.

Match Methods | Number of Pairs by Rough Matching | Number of Pairs by Fine Matching | Time (ms) | RMSE (Pixels)
RANSAC        | 26                                | 25                               | 33        | 3.0044
KNN-TAR       | 26                                | 24                               | 28        | 2.9001
Table 5. Registration performance comparison of different methods (matched point pairs / RMSE in pixels).

Image Sensors | SIFT          | SURF [31]     | ORB [32]    | Proposed Algorithm
GF-1          | 250 / 0.8818  | 114 / 1.0976  | 41 / 1.2530 | 235 / 0.8619
GF-2          | 1150 / 5.8743 | 1221 / 5.5924 | 63 / 1.6358 | 1135 / 5.8423
ASTER         | 111 / 0.5666  | 64 / 0.7330   | 17 / 1.3132 | 103 / 0.5362
GF-1 & GF-2   | 23 / 3.0044   | 70 / 5.2689   | 47 / 3.6736 | 24 / 2.9001