# A Fast Registration Method for Optical and SAR Images Based on SRAWG Feature Description


## Abstract


## 1. Introduction

- A 3D dense feature description based on SRAWG is proposed, combined with non-maximum suppression template search, to further improve registration accuracy. The single-scale Sobel and ROEWA operators are used to calculate the consistent gradients of optical and SAR images, respectively, which have better speckle noise suppression ability than the differential gradient operator. The gradient of each pixel is projected to its adjacent quantized gradient direction by means of angle weighting, and the weighted gradient amplitudes in the 3 × 3 neighborhood of each pixel are fused to form the feature description of each pixel. SRAWG can more accurately describe the structural features of optical and SAR images. We introduce a non-maximum suppression method to address the multi-peak phenomenon of search surfaces in frequency-domain-based template search methods. This further improves registration accuracy.
- In order to maintain high registration efficiency, we modify the original multi-scale Sobel and ROEWA operators into single-scale versions for computing image gradients. To address the repeated feature construction problem in template matching, we adopt an overlapping template merging method and analyze the efficiency improvement it brings.

## 2. Method

### 2.1. Proposed Fast Registration Method

### 2.2. Implementation Details of Each Step

#### 2.2.1. Feature Point Extraction

- The detector relies on the differential gradient of the image, so when it is applied directly to SAR images, many false corner points are often obtained.
- Due to the lack of spatial location constraints, the detected corner points are unevenly distributed in the image space, which degrades the final image registration accuracy.

#### 2.2.2. 3D Dense Feature Descriptions Based on SRAWG

Suppose the coordinates of two feature points are (x_{1}, y_{1}) and (x_{2}, y_{2}), respectively. The red boxes in the figure correspond to the template regions of the two feature points, the blue box corresponds to the overlapping region of the two templates, and the yellow dotted-line box corresponds to the minimum circumscribed rectangle R of the two templates. The coordinates of the four vertices A, B, C, and D of this rectangle are (x_{1} − w/2, y_{2} − w/2), (x_{1} − w/2, y_{1} + w/2), (x_{2} + w/2, y_{1} + w/2), and (x_{2} + w/2, y_{2} − w/2), where w is the template window size.

If a feature description is constructed separately for each template region, the number of feature description vectors that need to be constructed is 2w^{2}. However, if the feature description is constructed over the coverage region of the circumscribed rectangle R, the number of feature description vectors that need to be constructed is s = (x_{2} − x_{1} + w) × (y_{2} − y_{1} + w). When s < 2w^{2} is satisfied, constructing the feature description over the circumscribed rectangle R requires less computation than constructing it for each template region separately. We find all sets of feature points in the image that satisfy this relationship, construct the minimum circumscribed rectangle of each set, and then construct feature descriptions over these circumscribed rectangles.
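The merging criterion above can be sketched as a small helper; the function name and the use of absolute coordinate differences (so that the point order does not matter) are our assumptions, not code from the paper:

```python
def should_merge(p1, p2, w):
    """Return True when building descriptors over the minimum circumscribed
    rectangle R of the two templates is cheaper than building them for the
    two w x w template regions separately (s < 2 * w^2)."""
    (x1, y1), (x2, y2) = p1, p2
    # descriptor count over R: s = (|x2 - x1| + w) * (|y2 - y1| + w)
    s = (abs(x2 - x1) + w) * (abs(y2 - y1) + w)
    return s < 2 * w * w
```

For two feature points 5 and 2 pixels apart with a 100 × 100 template, s = 105 × 102 = 10,710 < 2 × 100² = 20,000, so one descriptor pass over R is cheaper than two separate passes.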

For the optical image, the multi-scale Sobel gradients are computed as

$$S_{h,\beta_m} = I * \left(\mathcal{G}_{\beta_m} \times H_{\beta_m}\right), \qquad S_{v,\beta_m} = I * \left(\mathcal{G}_{\beta_m} \times V_{\beta_m}\right) \tag{1}$$

where S_{h,β_m} and S_{v,β_m} represent the horizontal and vertical gradients, respectively. ${\mathcal{G}}_{{\beta}_{m}}$ denotes a Gaussian kernel with standard deviation β_{m}. H_{β_m} and V_{β_m} denote horizontal and vertical Sobel windows of length β_{m}, respectively, and × represents the matrix dot product. Using formula (1) to convolve the optical image, we can obtain its gradient amplitude and direction as follows:

$$G_{Om,\beta_m} = \sqrt{S_{h,\beta_m}^{2} + S_{v,\beta_m}^{2}}, \qquad G_{Oo,\beta_m} = \arctan\!\left(\frac{S_{v,\beta_m}}{S_{h,\beta_m}}\right)$$
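As a rough illustration, the single-scale optical gradient can be sketched with standard `scipy.ndimage` routines, using Gaussian smoothing followed by Sobel filtering as a stand-in for the Gaussian-weighted Sobel windows (the function name and this particular decomposition are our assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def optical_gradients(img, beta=2.0):
    """Single-scale Sobel gradient of an optical image (sketch).
    `beta` plays the role of the scale parameter beta_m = 2."""
    smoothed = gaussian_filter(img.astype(float), sigma=beta)
    gh = sobel(smoothed, axis=1)   # horizontal gradient S_h
    gv = sobel(smoothed, axis=0)   # vertical gradient S_v
    mag = np.hypot(gh, gv)         # gradient magnitude
    ori = np.rad2deg(np.arctan2(gv, gh)) % 180.0  # direction restricted to [0, 180)
    return mag, ori
```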

For the SAR image, the ROEWA operator computes the gradient as the logarithm of the ratio of exponentially weighted means of the two half-windows on either side of each pixel, e.g., for the horizontal direction:

$$GR_{x,\alpha_n}(x,y) = \log \frac{\displaystyle\sum_{j=-N}^{N}\sum_{i=1}^{M} I(x+i,\,y+j)\, e^{-\left(|i|+|j|\right)/\alpha_n}}{\displaystyle\sum_{j=-N}^{N}\sum_{i=-M}^{-1} I(x+i,\,y+j)\, e^{-\left(|i|+|j|\right)/\alpha_n}}$$

where GR_{x,α_n} and GR_{y,α_n} represent the horizontal and vertical gradients of the image, respectively, with GR_{y,α_n} obtained analogously in the vertical direction. M and N are the half-window sizes determined by the scale parameter α_{n}, usually taken as M = N = 2α_{n}; (x, y) represents the pixel position of the gradient to be calculated; I represents the pixel intensity of the image.
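A minimal sketch of a ROEWA-style gradient, assuming exponentially weighted half-window means combined through a log-ratio (the exact window layout and normalization in the paper may differ):

```python
import numpy as np
from scipy.ndimage import correlate1d

def roewa_gradients(img, alpha=2.0):
    """ROEWA-style gradient (sketch): log-ratio of exponentially weighted
    means of the half-windows on either side of each pixel."""
    half = int(2 * alpha)                     # half-window size M = N = 2 * alpha
    w = np.exp(-np.arange(1, half + 1) / alpha)
    w /= w.sum()                              # normalized one-sided weights
    k = np.exp(-np.abs(np.arange(-half, half + 1)) / alpha)
    k /= k.sum()                              # symmetric smoothing along the other axis
    img = img.astype(float) + 1e-6            # guard against log(0)

    def half_mean(a, axis, side):
        # weighted mean of pixels strictly before/after each pixel along `axis`
        kern = np.zeros(2 * half + 1)
        if side == "after":
            kern[half + 1:] = w
        else:
            kern[:half] = w[::-1]
        return correlate1d(a, kern, axis=axis, mode="nearest")

    sm_rows = correlate1d(img, k, axis=0, mode="nearest")
    sm_cols = correlate1d(img, k, axis=1, mode="nearest")
    gr_x = np.log(half_mean(sm_rows, 1, "after") / half_mean(sm_rows, 1, "before"))
    gr_y = np.log(half_mean(sm_cols, 0, "after") / half_mean(sm_cols, 0, "before"))
    mag = np.hypot(gr_x, gr_y)
    ori = np.rad2deg(np.arctan2(gr_y, gr_x)) % 180.0
    return mag, ori
```

The log-ratio makes the operator multiplicative rather than additive, which is what gives it its robustness to the multiplicative speckle noise of SAR images.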

To cope with the gradient reversals caused by the nonlinear intensity differences between the two modalities, the gradient directions G_{So,α_n} (SAR) and G_{Oo,β_m} (optical) are restricted to [0°, 180°).

To obtain consistent gradients, the scale parameters α_{n} and β_{m} of the ROEWA and multi-scale Sobel operators need to satisfy α_{n} = β_{m}. In this paper, considering registration efficiency and the fact that coarse registration has already eliminated large-scale differences, we modify the multi-scale Sobel and ROEWA operators to the single-scale case and set the scale parameter to α_{n} = β_{m} = 2.

Suppose the gradient direction of a pixel is G_{o}(x,y) and the corresponding magnitude is G_{m}(x,y); then, the gradient projections of this pixel onto its right and left feature directions are:

$$G_{mr}(x,y) = G_m(x,y)\left(1 - \frac{\alpha_r(x,y)}{\Delta\theta}\right), \qquad G_{ml}(x,y) = G_m(x,y)\left(1 - \frac{\alpha_l(x,y)}{\Delta\theta}\right)$$

where G_{mr}(x,y) and G_{ml}(x,y) are the right-weighted and left-weighted gradient magnitudes of G_{m}(x,y), respectively; α_{r}(x,y) and α_{l}(x,y) are the angles between the gradient direction of the pixel and its right and left feature directions, respectively; and Δθ is the gradient quantization angle interval. The angles are calculated as follows:

$$\alpha_r(x,y) = \left\lceil \frac{G_o(x,y)}{\Delta\theta} \right\rceil \Delta\theta - G_o(x,y), \qquad \alpha_l(x,y) = G_o(x,y) - \left\lfloor \frac{G_o(x,y)}{\Delta\theta} \right\rfloor \Delta\theta$$
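The projection step can be sketched as follows, assuming quantized feature directions evenly spaced over [0°, 180°) so that Δθ = 180°/n_dirs (function and variable names are ours):

```python
import numpy as np

def angle_weighted_projection(ori_deg, mag, n_dirs=8):
    """Project each pixel's gradient magnitude onto its two adjacent
    quantized directions by linear angle weighting (sketch).
    ori_deg: orientations in [0, 180); mag: gradient magnitudes."""
    dtheta = 180.0 / n_dirs
    left = np.floor(ori_deg / dtheta).astype(int) % n_dirs  # left feature direction
    right = (left + 1) % n_dirs                             # right direction (wraps at 180)
    a_l = ori_deg % dtheta                # angle to the left feature direction
    g_ml = mag * (1.0 - a_l / dtheta)     # left-weighted magnitude
    g_mr = mag * (a_l / dtheta)           # right-weighted magnitude (= 1 - a_r / dtheta)
    chans = np.zeros((n_dirs,) + np.asarray(mag).shape, dtype=float)
    for k in range(n_dirs):
        chans[k] = np.where(left == k, g_ml, 0.0) + np.where(right == k, g_mr, 0.0)
    return chans
```

Because the two weights sum to one, the projected magnitudes in the two adjacent channels always add back up to the original gradient magnitude.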

After the left-weighted gradient magnitude G_{ml}(x,y) and the right-weighted gradient magnitude G_{mr}(x,y) of the image are obtained, a 3D dense feature description is constructed by characterizing each pixel according to the process shown in Figure 6. The weighted gradient magnitudes in the 3 × 3 neighborhood of each pixel are fused into a per-pixel feature vector, which is then normalized with the L2 norm to obtain the final 3D dense feature description. This completes the construction of the SRAWG feature description.
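The per-pixel fusion and normalization can be sketched as below, assuming simple 3 × 3 averaging as the neighborhood fusion rule (the paper may weight the neighborhood differently):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def srawg_descriptor(chans, eps=1e-8):
    """Fuse each pixel's weighted orientation channels over its 3x3
    neighborhood and L2-normalize the per-pixel feature vector (sketch).
    chans: (n_dirs, H, W) stack of angle-weighted gradient magnitudes."""
    fused = np.stack([uniform_filter(c, size=3, mode="nearest") for c in chans])
    norm = np.sqrt((fused ** 2).sum(axis=0)) + eps   # per-pixel vector norm
    return fused / norm
```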

#### 2.2.3. Feature Matching

For each feature point P_{i}, the SSD between the 3D feature descriptions of its template window on the optical image and a window in the SAR search region is as follows:

$$\mathrm{SSD}_i(\Delta x, \Delta y) = \sum_{x,y} D^{i}_{Opt}(x,y)^{2} + \sum_{x,y} D^{i}_{SAR}(x+\Delta x,\, y+\Delta y)^{2} - 2\sum_{x,y} D^{i}_{Opt}(x,y)\, D^{i}_{SAR}(x+\Delta x,\, y+\Delta y)$$

where D^{i}_{Opt}(x,y) and D^{i}_{SAR}(x,y) represent the corresponding template-region 3D feature descriptions on the optical and SAR images, respectively, and SSD_{i}(Δx, Δy) is the SSD similarity measure when the template window corresponding to feature point P_{i} is offset by (Δx, Δy) from the search center in the search area of the SAR image.

The position where SSD_{i}(Δx, Δy) reaches its minimum value is the offset of the matching point relative to the center of the search region. The first term on the right side of the above formula is obviously a constant. In addition, the values of the second term at different positions of the image have little effect on the value of the third term [49]. Minimizing the SSD is therefore approximately equivalent to maximizing the third (cross-correlation) term.

The cross-correlation term can be computed efficiently in the frequency domain:

$$\mathcal{F}^{-1}\left\{\mathcal{F}\!\left[D^{i}_{Opt}\right]^{*} \odot \mathcal{F}\!\left[D^{i}_{SAR}\right]\right\}$$

where F and F^{−1} represent the Fourier transform and inverse Fourier transform, respectively, and [∙]^{*} represents the complex conjugate. Although this method can quickly find the coordinates of the corresponding matching point on the SAR image, the large imaging differences between optical and SAR images often produce multiple peaks in the search surface, and mismatches often occur when the largest peak alone is selected.
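A sketch of this frequency-domain search, evaluating the cross-correlation term channel by channel and summing over the orientation channels (zero padding and circular correlation are our simplifications):

```python
import numpy as np

def fft_correlation_surface(d_opt, d_sar):
    """Correlation term of the SSD via FFT (sketch).
    d_opt: (n_dirs, h, w) template descriptor; d_sar: (n_dirs, H, W)
    search-region descriptor with H >= h and W >= w."""
    n, H, W = d_sar.shape
    surf = np.zeros((H, W))
    for c in range(n):
        t = np.zeros((H, W))
        t[: d_opt.shape[1], : d_opt.shape[2]] = d_opt[c]  # zero-pad the template
        # circular cross-correlation via conjugate product in the frequency domain
        surf += np.real(np.fft.ifft2(np.conj(np.fft.fft2(t)) * np.fft.fft2(d_sar[c])))
    return surf  # the SSD is (approximately) minimized where this surface peaks
```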

On the search surface, the N_{s} values with the largest intensities are selected as candidate points, and the point with the largest intensity value is taken as the main peak point (x_{m}, y_{m}). For each of the remaining N_{s} − 1 candidate points, we take its coordinates (x_{i}, y_{i}) as the upper-left corner to construct a square search window of width w_{s}, as shown in Figure 8. We then calculate the region overlap ratio between the search window of each candidate point and the fixed window of the main peak point, and set a threshold R_{t}. Candidate points whose region overlap ratio is greater than R_{t} are eliminated (as shown in Figure 8a); the other candidate points are kept (as shown in Figure 8b). After this step, the interference points around the main peak point are eliminated. Then, the secondary peak point (if any) is found among the remaining candidate points, and the main-to-secondary-peak ratio is calculated. If there is no secondary peak point, the ratio is considered equal to the set threshold and the main peak is directly output as a matching point.

In our experiments, N_{s} is set to 1% of the number of template pixels; the region overlap ratio threshold is R_{t} = 0.9; the search window size w_{s} equals the template window size; and the main-to-secondary-peak ratio threshold is T = 1/0.9. It can be clearly seen that after non-maximum suppression, the interfering pixels around the main peak are eliminated, and the secondary peak is easier to obtain.
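The suppression procedure can be sketched as follows; the window bookkeeping and the acceptance rule (main/secondary ratio at least T) are our reading of the description above:

```python
import numpy as np

def nms_match(surface, n_s, w_s, r_t, t_ratio):
    """Non-maximum suppression on a search surface (sketch). Select the
    n_s strongest candidates, drop those whose w_s x w_s window overlaps
    the main peak's window by more than r_t, then accept the main peak
    only if the main-to-secondary-peak ratio reaches t_ratio."""
    order = np.argsort(surface, axis=None)[::-1][:n_s]
    cands = np.column_stack(np.unravel_index(order, surface.shape))
    main = cands[0]                       # main peak (x_m, y_m)
    kept = []
    for c in cands[1:]:
        # overlap area of the two axis-aligned w_s x w_s windows
        dy = max(0, w_s - abs(int(c[0]) - int(main[0])))
        dx = max(0, w_s - abs(int(c[1]) - int(main[1])))
        if dy * dx / float(w_s * w_s) <= r_t:
            kept.append(c)                # far enough from the main peak: keep
    if not kept:                          # no secondary peak: output directly
        return tuple(main)
    secondary = kept[0]                   # strongest remaining candidate
    ratio = surface[tuple(main)] / surface[tuple(secondary)]
    return tuple(main) if ratio >= t_ratio else None
```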

## 3. Results

### 3.1. Effectiveness of Consistent Gradient Computation for Optical and SAR Images

### 3.2. Comparison of SSD Search Surfaces with Different Feature Descriptions

### 3.3. Overlapping Template Merging Performance Analysis

### 3.4. Image Registration Experiments

#### 3.4.1. Experimental Data and Parameter Settings

#### 3.4.2. Evaluation Criteria

The registration accuracy is evaluated with the root mean square error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left\| H\!\left(x_r^{\,i},\, y_r^{\,i}\right) - \left(x_s^{\,i},\, y_s^{\,i}\right)\right\|^{2}}$$

where (x_{r}^{i}, y_{r}^{i}) and (x_{s}^{i}, y_{s}^{i}) represent the i-th corresponding point pair on the reference image and the sensed image, respectively, output after the gross error elimination algorithm. H represents the transformation model from the reference image to the sensed image, which is calculated from 50 pairs of checkpoints manually selected and evenly distributed over the reference and sensed images. Among all M output corresponding point pairs, the number of point pairs whose distance after the H transformation is less than a threshold of 1.5 pixels is counted as the number of correct matches (NCM).
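These two criteria can be sketched jointly; the 3 × 3 homogeneous matrix form of H and the function names are our assumptions:

```python
import numpy as np

def evaluate_matches(ref_pts, sen_pts, H, thresh=1.5):
    """Compute NCM and RMSE for M corresponding point pairs (sketch).
    ref_pts, sen_pts: (M, 2) arrays of (x, y); H: 3x3 homogeneous
    transformation from the reference image to the sensed image."""
    ones = np.ones((len(ref_pts), 1))
    proj = np.hstack([ref_pts, ones]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]          # back to Cartesian coordinates
    err = np.linalg.norm(proj - sen_pts, axis=1)
    ncm = int((err < thresh).sum())            # pairs within the 1.5-pixel threshold
    rmse = float(np.sqrt((err ** 2).mean()))
    return ncm, rmse
```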

#### 3.4.3. Results and Analysis

## 4. Discussion

The more candidate points N_{s} are selected and the smaller the search window area overlap ratio threshold R_{t}, the larger the suppressed range around the main peak. Increasing the main-to-secondary-peak ratio threshold reduces the number of corresponding point pairs in the final output.

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Kulkarni, S.C.; Rege, P.P. Pixel level fusion techniques for SAR and optical images: A review. Inf. Fusion **2020**, 59, 13–29.
- Yu, Q.; Ni, D.; Jiang, Y.; Yan, Y.; An, J.; Sun, T. Universal SAR and optical image registration via a novel SIFT framework based on nonlinear diffusion and a polar spatial-frequency descriptor. ISPRS J. Photogramm. Remote Sens. **2021**, 171, 1–17.
- Wurm, M.; Stark, T.; Zhu, X.X.; Weigand, M.; Taubenböck, H. Semantic segmentation of slums in satellite images using transfer learning on fully convolutional neural networks. ISPRS J. Photogramm. Remote Sens. **2019**, 150, 59–69.
- Jia, L.; Li, M.; Wu, Y.; Zhang, P.; Liu, G.; Chen, H.; An, L. SAR Image Change Detection Based on Iterative Label-Information Composite Kernel Supervised by Anisotropic Texture. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 3960–3973.
- Wan, L.; Zhang, T.; You, H. Multi-sensor remote sensing image change detection based on sorted histograms. Int. J. Remote Sens. **2018**, 39, 3753–3775.
- Hartmann, W.; Havlena, M.; Schindler, K. Recent developments in large-scale tie-point matching. ISPRS J. Photogramm. Remote Sens. **2016**, 115, 47–62.
- Xiang, Y.; Wang, F.; You, H. OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas. IEEE Trans. Geosci. Remote Sens. **2018**, 56, 3078–3090.
- Zhang, X.; Leng, C.; Hong, Y.; Pei, Z.; Cheng, I.; Basu, A. Multimodal Remote Sensing Image Registration Methods and Advancements: A Survey. Remote Sens. **2021**, 13, 5128.
- Zhao, F.; Huang, Q.; Gao, W. Image Matching by Normalized Cross-Correlation. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, 14–19 May 2006.
- Shi, W.; Su, F.; Wang, R.; Fan, J. A visual circle based image registration algorithm for optical and SAR imagery. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012.
- Suri, S.; Reinartz, P. Mutual-Information-Based Registration of TerraSAR-X and Ikonos Imagery in Urban Areas. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 939–949.
- Siddique, M.A.; Sarfraz, S.M.; Bornemann, D.; Hellwich, O. Automatic registration of SAR and optical images based on mutual information assisted Monte Carlo. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012.
- Zhuang, Y.; Gao, K.; Miu, X.; Han, L.; Gong, X. Infrared and visual image registration based on mutual information with a combined particle swarm optimization–Powell search algorithm. Optik Int. J. Light Electron Opt. **2016**, 127, 188–191.
- Yu, L.; Zhang, D.; Holden, E.J. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images. Comput. Geosci. **2008**, 34, 838–848.
- Shi, X.; Jiang, J. Automatic registration method for optical remote sensing images with large background variations using line segments. Remote Sens. **2016**, 8, 426.
- Li, H.; Manjunath, B.; Mitra, S.K. A contour-based approach to multisensor image registration. IEEE Trans. Image Process. **1995**, 4, 320–334.
- Dai, X.; Khorram, S. A feature-based image registration algorithm using improved chain-code representation combined with invariant moments. IEEE Trans. Geosci. Remote Sens. **1999**, 37, 2351–2362.
- Wu, Y.; Ma, W.; Gong, M.; Su, L.; Jiao, L. A Novel Point-Matching Algorithm Based on Fast Sample Consensus for Image Registration. IEEE Geosci. Remote Sens. Lett. **2017**, 12, 43–47.
- Wu, B.; Zhou, S.; Ji, K. A novel method of corner detector for SAR images based on Bilateral Filter. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 10–15 July 2016.
- Sedaghat, A.; Ebadi, H. Remote Sensing Image Matching Based on Adaptive Binning SIFT Descriptor. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 5283–5293.
- Liao, S.; Chung, A. Nonrigid Brain MR Image Registration Using Uniform Spherical Region Descriptor. IEEE Trans. Image Process. **2012**, 21, 157–169.
- Lindeberg, T. Feature detection with automatic scale selection. Int. J. Comput. Vis. **1998**, 30, 79–116.
- Fan, B.; Huo, C.; Pan, C.; Kong, Q. Registration of Optical and SAR Satellite Images by Exploring the Spatial Relationship of the Improved SIFT. IEEE Geosci. Remote Sens. Lett. **2013**, 10, 657–661.
- Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-like algorithm for SAR images. IEEE Trans. Geosci. Remote Sens. **2014**, 53, 453–466.
- Gong, M.; Zhao, S.; Jiao, L.; Tian, D.; Wang, S. A Novel Coarse-to-Fine Scheme for Automatic Image Registration Based on SIFT and Mutual Information. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 4328–4338.
- Xu, C.; Sui, H.; Li, H.; Liu, J. An automatic optical and SAR image registration method with iterative level set segmentation and SIFT. Int. J. Remote Sens. **2015**, 36, 3997–4017.
- Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote Sensing Image Registration With Modified SIFT and Enhanced Feature Matching. IEEE Geosci. Remote Sens. Lett. **2017**, 14, 3–7.
- Ma, W.; Wu, Y.; Liu, S.; Su, Q.; Zhong, Y. Remote Sensing Image Registration Based on Phase Congruency Feature Detection and Spatial Constraint Matching. IEEE Access **2018**, 6, 77554–77567.
- Fan, J.; Wu, Y.; Li, M.; Liang, W.; Cao, Y. SAR and optical image registration using nonlinear diffusion and phase congruency structural descriptor. IEEE Trans. Geosci. Remote Sens. **2018**, 56, 5368–5379.
- Liu, X.; Ai, Y.; Zhang, J.; Wang, Z. A Novel Affine and Contrast Invariant Descriptor for Infrared and Visible Image Registration. Remote Sens. **2018**, 10, 658.
- Li, J.; Hu, Q.; Ai, M. RIFT: Multi-Modal Image Matching Based on Radiation-Variation Insensitive Feature Transform. IEEE Trans. Image Process. **2020**, 29, 3296–3310.
- Ye, Y.; Shan, J.; Bruzzone, L.; Shen, L. Robust Registration of Multimodal Remote Sensing Images Based on Structural Similarity. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 2941–2958.
- Ye, Y.; Bruzzone, L.; Shan, J.; Bovolo, F.; Zhu, Q. Fast and robust matching for multimodal remote sensing image registration. IEEE Trans. Geosci. Remote Sens. **2019**, 57, 9059–9070.
- Li, S.; Lv, X.; Ren, J.; Li, J. A Robust 3D Density Descriptor Based on Histogram of Oriented Primary Edge Structure for SAR and Optical Image Co-Registration. Remote Sens. **2022**, 14, 630.
- Ye, F.; Su, Y.; Xiao, H.; Zhao, X.; Min, W. Remote Sensing Image Registration Using Convolutional Neural Network Features. IEEE Geosci. Remote Sens. Lett. **2018**, 15, 232–236.
- Ma, W.; Zhang, J.; Wu, Y.; Jiao, L.; Zhu, H.; Zhao, W. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Trans. Geosci. Remote Sens. **2019**, 57, 4834–4843.
- Li, Z.; Zhang, H.; Huang, Y. A Rotation-Invariant Optical and SAR Image Registration Algorithm Based on Deep and Gaussian Features. Remote Sens. **2021**, 13, 2628.
- Merkle, N.; Auer, S.; Müller, R.; Reinartz, P. Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2018**, 11, 1811–1820.
- Zhang, H.; Ni, W.; Yan, W.; Xiang, D.; Wu, J.; Yang, X.; Bian, H. Registration of multimodal remote sensing image based on deep fully convolutional neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2019**, 12, 3028–3042.
- Zhang, H.; Lei, L.; Ni, W.; Tang, T.; Wu, J.; Xiang, D.; Kuang, G. Optical and SAR Image Matching Using Pixelwise Deep Dense Features. IEEE Geosci. Remote Sens. Lett. **2020**, 19, 6000705.
- Quan, D.; Wang, S.; Liang, X.; Wang, R.; Fang, S.; Hou, B.; Jiao, L. Deep generative matching network for optical and SAR image registration. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018.
- Bürgmann, T.; Koppe, W.; Schmitt, M. Matching of TerraSAR-X derived ground control points to optical image patches using deep learning. ISPRS J. Photogramm. Remote Sens. **2019**, 158, 241–248.
- Cui, S.; Ma, A.; Zhang, L.; Xu, M.; Zhong, Y. MAP-Net: SAR and Optical Image Matching via Image-Based Convolutional Network with Attention Mechanism and Spatial Pyramid Aggregated Pooling. IEEE Trans. Geosci. Remote Sens. **2022**, 60, 1–13.
- Hughes, L.H.; Schmitt, M. A semi-supervised approach to SAR-optical image matching. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. **2019**, 4, 1–8.
- Jia, H. Research on Automatic Registration of Optical and SAR Images. Master's Thesis, Chang'an University, Xi'an, China, 2020.
- Fjørtoft, R.; Lopès, A.; Marthon, P.; Cubero-Castan, E. An optimal multiedge detector for SAR image segmentation. IEEE Trans. Geosci. Remote Sens. **1998**, 36, 793–802.
- Wei, Q.; Feng, D. An efficient SAR edge detector with a lower false positive rate. Int. J. Remote Sens. **2015**, 36, 3773–3797.
- Fan, Z.; Zhang, L.; Wang, Q.; Liu, S.; Ye, Y. A fast matching method of SAR and optical images using angular weighted orientated gradients. Acta Geod. Cartogr. Sin. **2021**, 50, 1390–1403.
- Ye, Y.; Bruzzone, L.; Shan, J.; Shen, L. Fast and Robust Structure-based Multimodal Geospatial Image Matching. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017.

**Figure 2.** Comparison of corner extraction results. (**a**) Block–Harris extraction result; (**b**) Harris extraction result.

**Figure 7.** Similarity surface multi-peak phenomenon. (**a**) Optical image template window; (**b**) SAR image search window; (**c**) search surface map.

**Figure 8.** Schematic diagram of non-maximum suppression. (**a**) Non-extreme point that needs to be suppressed; (**b**) extreme point that needs to be preserved.

**Figure 9.** Non-maximum suppression results. (**a1**) Point 1 before suppression; (**a2**) Point 1 after suppression; (**b1**) Point 2 before suppression; (**b2**) Point 2 after suppression; (**c1**) Point 3 before suppression; (**c2**) Point 3 after suppression; (**d1**) Point 4 before suppression; (**d2**) Point 4 after suppression.

**Figure 11.** Gradient calculation results of the differential, single-scale Sobel, and ROEWA operators. (**a1**,**a2**) Differential gradient magnitude and direction for the optical image; (**b1**,**b2**) differential gradient magnitude and direction for the SAR image; (**c1**,**c2**) single-scale Sobel gradient magnitude and orientation for the optical image; (**d1**,**d2**) ROEWA gradient magnitude and direction for the SAR image.

**Figure 13.** Template merging effect. (**a**) Before merging; (**b**) after merging. The yellow points are feature points, and their corresponding red boxes represent their respective template areas.

**Figure 17.**Mosaic map for Case 2. The geometric stitching effect of the two images can be observed from the center of the area circled by circles 1 and 2.

**Figure 18.** The effect of parameter settings on the mean error. (**a**) Template size changes; (**b**) feature neighborhood size changes; (**c**) feature directions.

| Case | Reference Image | Sensed Image | Image Characteristic |
|---|---|---|---|
| 1 | Sensor: Google Earth; Original Resolution: 5 m; Central Incidence Angle: 0°; Size: 1000 × 1000; GSD: 5 m | Sensor: GF-3; Original Resolution: 10 m; Central Incidence Angle: 24.71°; Size: 1000 × 1000; GSD: 5 m | The image covers the river region on the outskirts, and the terrain is relatively flat. The edge of the river is clear on the SAR image, and the speckle noise is relatively weak. However, there are significant nonlinear intensity differences compared to the optical image. |
| 2 | Sensor: Google Earth; Original Resolution: 1 m; Central Incidence Angle: 0°; Size: 1000 × 1000; GSD: 3 m | Sensor: GF-3; Original Resolution: 3 m; Central Incidence Angle: 43.28°; Size: 1000 × 1000; GSD: 3 m | The image covers an urban region with tall buildings and has local geometric distortions. There is significant speckle noise on the SAR image. |
| 3 | Sensor: Google Earth; Original Resolution: 5 m; Central Incidence Angle: 0°; Size: 1500 × 1500; GSD: 10 m | Sensor: GF-3; Original Resolution: 10 m; Central Incidence Angle: 20.23°; Size: 1500 × 1500; GSD: 10 m | The image covers the airport region, which has clear geometry in both images. There is a clear intensity inversion between the SAR image and the optical image, and significant speckle noise on the SAR image. |
| 4 | Sensor: Google Earth; Original Resolution: 1 m; Central Incidence Angle: 0°; Size: 1500 × 1500; GSD: 1 m | Sensor: Airborne SAR; Original Resolution: 0.5 m; Central Incidence Angle: 81.24°; Size: 1500 × 1500; GSD: 1 m | The image covers suburban regions with complex structures such as buildings, reservoirs, and farmland. On the SAR image, some shadow areas are generated due to the large incidence angle, and the image shows some defocusing. |
| 5 | Sensor: Google Earth; Original Resolution: 1 m; Central Incidence Angle: 0°; Size: 1500 × 1500; GSD: 1 m | Sensor: Airborne SAR; Original Resolution: 1 m; Central Incidence Angle: 80.89°; Size: 1500 × 1500; GSD: 1 m | The image covers a large region of rural farmland with relatively flat terrain. Compared with the optical image, there is a significant intensity inversion in the SAR image. |
| 6 | Sensor: Google Earth; Original Resolution: 5 m; Central Incidence Angle: 0°; Size: 2570 × 1600; GSD: 10 m | Sensor: Sentinel-1; Original Resolution: 10 m; Central Incidence Angle: 38.94°; Size: 2570 × 1600; GSD: 10 m | The image covers the outskirts of a city, which is flat and has complex features such as buildings, farmland, lakes, and road networks. |
| 7 | Sensor: Google Earth; Original Resolution: 5 m; Central Incidence Angle: 0°; Size: 1274 × 1073; GSD: 10 m | Sensor: Sentinel-1; Original Resolution: 10 m; Central Incidence Angle: 39.13°; Size: 1274 × 1073; GSD: 10 m | The image covers the hilly area around the river. The terrain in this area is greatly undulating, and the geometric difference between the optical image and the SAR image is large. |

| Methods | Parameter Space |
|---|---|
| SRAWG | n = 5, k = 8, N = 9, σ = 0.8, α_{n} = 2, β_{m} = 2, R_{t} = 0.9, T = 1/0.9, N_{s} = 1% of template pixels, template size: 100 × 100, radius of search region: 20 pixels |
| CFOG | n = 5, k = 8, N = 9, σ = 0.8, template size: 100 × 100, radius of search region: 20 pixels |
| HOPC | n = 5, k = 8, N = 9, template size: 100 × 100, radius of search region: 20 pixels |
| HOPES | n = 5, k = 8, N = 9, σ = 0.8, N_{scale} = 3, σ_{MSG} = 1.4, k_{MSG} = 1.4, γ_{MSG} = 6, R_{t} = 0.9, T = 1/0.9, N_{s} = 1% of template pixels, template size: 100 × 100, radius of search region: 20 pixels |

**Table 3.** NCM, CMR, RMSE, and run time of the five methods. For each pair of experimental images, four metrics are used to evaluate the algorithms being compared. The numbers labeled in bold represent the optimal values among all algorithms under each metric.

| Case | Criteria | SRAWG | HOPES | CFOG | HOPC | RIFT |
|---|---|---|---|---|---|---|
| 1 | NCM | 177 | **178** | 156 | 167 | 147 |
| | CMR | 90.31 | **91.28** | 79.59 | 84.34 | 51.22 |
| | RMSE | **0.9257** | 1.0170 | 1.2618 | 1.4919 | 1.7086 |
| | Run Time | **5.5** | 16.4 | 6.1 | 43.68 | 12.99 |
| 2 | NCM | **171** | 139 | 159 | 145 | 92 |
| | CMR | **86.36** | 71.65 | 80.3 | 73.23 | 49.73 |
| | RMSE | **1.0413** | 1.4455 | 1.4143 | 1.4632 | 1.8324 |
| | Run Time | **5.6** | 16.5 | 6.2 | 43.01 | 12.74 |
| 3 | NCM | **166** | 152 | 118 | 106 | 14 |
| | CMR | **90.71** | 85.88 | 67.05 | 66.67 | 28 |
| | RMSE | **0.9109** | 1.0575 | 1.5540 | 1.5148 | 2.5998 |
| | Run Time | 8.3 | 16.8 | **6.2** | 48.59 | 16.39 |
| 4 | NCM | 112 | 96 | **114** | 103 | 13 |
| | CMR | **68.71** | 68.57 | 65.9 | 60.95 | 15.85 |
| | RMSE | 1.9126 | **1.7773** | 2.0346 | 2.1014 | 4.5432 |
| | Run Time | 8.2 | 16.6 | **6.3** | 48.86 | 17.21 |
| 5 | NCM | 92 | 89 | 62 | **99** | 39 |
| | CMR | **92.93** | 85.58 | 59.62 | 68.28 | 41.05 |
| | RMSE | **0.9936** | 1.1819 | 1.6501 | 1.5363 | 1.8114 |
| | Run Time | 7.8 | 16.6 | **6.2** | 48.64 | 16.93 |
| 6 | NCM | **183** | 175 | 163 | 171 | 34 |
| | CMR | **95.31** | 91.15 | 89.07 | 88.14 | 38.2 |
| | RMSE | **0.7376** | 0.9189 | 1.0536 | 1.3901 | 2.0999 |
| | Run Time | 9.70 | 16.92 | **6.34** | 53.41 | 22.23 |
| 7 | NCM | **72** | 66 | 51 | 42 | 15 |
| | CMR | **60** | 45.52 | 58.62 | 34.71 | 25 |
| | RMSE | **1.8506** | 2.2699 | 2.0861 | 2.2457 | 2.5719 |
| | Run Time | 6.38 | 15.48 | **5.66** | 48.09 | 14.75 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, Z.; Yu, A.; Zhang, B.; Dong, Z.; Chen, X.
A Fast Registration Method for Optical and SAR Images Based on SRAWG Feature Description. *Remote Sens.* **2022**, *14*, 5060.
https://doi.org/10.3390/rs14195060
