# An SRTM-Aided Epipolar Resampling Method for Multi-Source High-Resolution Satellite Stereo Observation


## Abstract


## 1. Introduction

## 2. Related Works

For frame images, the epipolar geometry is encapsulated by the fundamental matrix **F**, which can be recovered with only eight conjugate points, or even fewer under certain conditions [12,13]. The epipolar geometry of frame images is based on the coplanarity of the two perspective centers, the two conjugate image points, and the corresponding object-space point. The intersections of the epipolar plane with the two image planes yield two straight corresponding epipolar lines. Thus, epipolar resampling of frame images can be implemented with very simple geometric transformations.
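The eight-point recovery of **F** can be sketched as a linear null-space solve; this is a generic illustration (function name and test geometry are mine, not from the paper), and a production version would add Hartley's coordinate normalization [13] for numerical stability:

```python
import numpy as np

def fundamental_matrix_8pt(x1, x2):
    """Linear eight-point estimate of F from conjugate points.

    x1, x2: (N, 2) arrays of image coordinates (N >= 8), satisfying
    [u2, v2, 1] @ F @ [u1, v1, 1]^T = 0 for each pair.
    """
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each conjugate pair gives one homogeneous linear equation in the 9 entries of F.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones(len(x1))])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)          # null vector of A, up to scale
    # Enforce rank 2 so that all epipolar lines pass through the epipoles.
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```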

## 3. Epipolar Geometry of Pushbroom Imagery

#### 3.1. Property of Epipolar-Line-Direction

Define (**p**, **p′**) as a pair of conjugate points on two-view pushbroom stereo images, where **p** is on the left image **I** and **p′** is on the right image **I′**. Define **P** as the corresponding object-space point of **p** and **p′**. The ELD at the position of **p**, denoted as $\stackrel{\rightharpoonup}{ed}\left(p\right)$, is defined as the direction of the left-image projection of a small segment which lies on the light ray of **p′** and crosses the object-space point **P**. When the two-view images are relatively well oriented, the point **p′** will lie on the right-image projection of the light ray of **p**. For frame images, the ELD $\stackrel{\rightharpoonup}{ed}\left(p\right)$ can be obtained without knowing the actual location of **p′**, because the coplanarity conveyed by the perspective projection ensures that $\stackrel{\rightharpoonup}{ed}\left(p\right)$ does not change with different **p′**s on the right-image projection of the light ray of **p**. However, no similar property holds for two-view pushbroom images. Figure 1 illustrates four typical types of geometric relationships of pushbroom stereo images. For convenience of analysis, it is assumed that the satellites move in straight lines with constant velocities and that the sensor attitudes do not change during the acquisitions.

The condition in Figure 1a is called cross-track coplanar stereo, where different positions on the light ray of **p**, denoted as $\stackrel{\rightharpoonup}{l}\left(p\right)$, correspond to right-image points sharing the same perspective center. Thus, the points **O**_{0}, **p**, **P**_{1}, **P**_{2}, **p**_{1}, **p**_{2}, and **O**_{1} (**O**_{2}) lie in a single plane, and the ELD is fixed at the direction of the x-axis on both images. The condition in Figure 1b is called along-track stereo, where only one track exists and it lies on the plane formed by the points **O**_{0}, **p**, **P**_{1}, **P**_{2}, **p**_{1}, **p**_{2}, **O**_{1}, and **O**_{2}. As a result, the ELD is fixed at the direction of the y-axis in the image-space. Apparently, under the cross-track coplanar stereo condition and the along-track stereo condition, the ELD is not influenced by different corresponding points or, in other words, by different heights. This phenomenon explains why methods that do not consider the influence of different heights have still achieved relatively good results with some of the cross-track stereo images and all of the along-track stereo images. Stereo observation with images captured by satellites that obtain nadir images with a fixed pitch angle (e.g., multi-track ZY3-NAD, GF-1, or GF-2) follows the cross-track coplanar stereo condition. The along-track stereo condition corresponds to the most widely used along-track stereo images captured by multi-line-array sensors (e.g., ZY3-TLC and TH1-TLC), or by a single-line-array sensor with different pitch angles (e.g., IKONOS, GeoEye, and WorldView stereo).

#### 3.2. Obtaining Epipolar-Line-Directions

Suppose the two images **I** (the left image) and **I′** (the right image) have been relatively oriented accurately; then the light rays of the two conjugate points **p** and **p′** will intersect somewhere in the object-space. As a result, the epipolar-curve **ec(p)** corresponding to the left-image point **p** can be easily obtained by projecting the light ray of **p** onto the right image-space, as **ec(p)** will cross the point **p′**. Similarly, the epipolar-curve **ec(p′)**, which crosses the point **p**, can be obtained by projecting the light ray of **p′**.

The light ray of **p** or **p′** can be obtained by the process of backprojection with the RFM model [23]. The RFM model projects the geographic coordinates (L, P, H) to the image-space coordinates (x, y) with two fractions of third-order polynomials, where L is latitude, P is longitude, H is height, x is the column of the image (the sample coordinate), and y is the row of the image (the number of the scanning line). When the height H is fixed, the RFM can also be used to calculate the horizontal coordinates (latitude and longitude) from the image-space coordinates (x, y) by solving a set of non-linear equations, which is known as the backprojection process. For convenience of presentation, the process of projection is denoted as:
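As a sketch of the two operations just described (assuming a generic 20-coefficient layout per third-order polynomial; real RPC files fix their own monomial ordering and coordinate normalization, so `cubic_terms` and `coef` here are illustrative), the forward projection and the fixed-height backprojection can be written as:

```python
import numpy as np

def cubic_terms(L, P, H):
    """All 20 monomials of (L, P, H) up to total degree 3.
    The ordering is illustrative; real RPC files define their own order."""
    return np.array([(L ** i) * (P ** j) * (H ** k)
                     for i in range(4)
                     for j in range(4 - i)
                     for k in range(4 - i - j)])

def rfm_project(coef, L, P, H):
    """Forward RFM: (L, P, H) -> (x, y). coef is a (4, 20) array whose rows
    hold the numerator/denominator coefficients of x, then of y."""
    t = cubic_terms(L, P, H)
    return np.array([coef[0] @ t / (coef[1] @ t),
                     coef[2] @ t / (coef[3] @ t)])

def rfm_backproject(coef, x, y, H, LP0=(0.0, 0.0), eps=1e-4, iters=20):
    """Fixed-height backprojection: solve rfm_project(L, P, H) = (x, y)
    for (L, P) by Newton iteration with a numerical Jacobian."""
    LP = np.array(LP0, float)
    target = np.array([x, y], float)
    for _ in range(iters):
        f = rfm_project(coef, LP[0], LP[1], H) - target
        if np.linalg.norm(f) < 1e-12:
            break
        J = np.empty((2, 2))
        for c in range(2):
            d = np.zeros(2)
            d[c] = eps
            # finite-difference column of the Jacobian w.r.t. (L, P)
            J[:, c] = (rfm_project(coef, LP[0] + d[0], LP[1] + d[1], H)
                       - (f + target)) / eps
        LP = LP - np.linalg.solve(J, f)
    return LP
```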

With the aid of the SRTM, the approximate corresponding point of **p**, denoted as **p′**, is determined in two steps. First, backproject the point **p** onto the elevation plane of the SRTM through an iterative process [34] (called the ray-tracing process in computer graphics); thus, the ground point **P** and its elevation H are obtained. Second, project the ground point **P** onto the right image **I′**; thus, the corresponding point **p′** is obtained. The iterative process of backprojecting **p** onto the elevation plane is denoted as:
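This iteration can be sketched generically; `backproject` (the fixed-height RFM backprojection) and `dem` (the SRTM height lookup) are assumed callables standing in for the components named in the text:

```python
def ray_trace_to_dem(backproject, dem, x, y, h0=0.0, tol=0.01, max_iter=50):
    """Iteratively intersect the light ray of image point (x, y) with the terrain:
    backproject at the current height, look up the terrain height at the resulting
    ground position, and repeat until the height converges [34].

    backproject: callable (x, y, H) -> (L, P)
    dem:         callable (L, P) -> height
    """
    h = h0
    L = P = None
    for _ in range(max_iter):
        L, P = backproject(x, y, h)
        h_new = dem(L, P)
        if abs(h_new - h) < tol:
            return L, P, h_new
        h = h_new
    return L, P, h  # may not have converged on very steep terrain
```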

The ELD $\stackrel{\rightharpoonup}{ed}\left(p\right)$ at the location of **p** is obtained by projecting a small segment of the light ray corresponding to **p′** around the ground point **P** (denoted as $\overline{{P}_{-}^{\prime}{P}_{+}^{\prime}}$) onto the left image **I** (see Figure 2). The segment $\overline{{P}_{-}^{\prime}{P}_{+}^{\prime}}$ is obtained through the backprojection process (Equation (2)) on **p′** with the heights (H − $\Delta$H) and (H + $\Delta$H). $\Delta$H, the height interval between point **P** and ${P}_{-}^{\prime}$ or ${P}_{+}^{\prime}$, can be set to 20–100 m. Conversely, the ELD $\stackrel{\rightharpoonup}{ed}\left({p}^{\prime}\right)$ at the location of **p′** is obtained by steps similar to those for the point **p**. The process of obtaining the ELD of **p** is denoted as:
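Under the same assumptions, computing the ELD amounts to backprojecting **p′** at the two bracketing heights and projecting both ground points onto the left image; `project_left` and `backproject_right` are hypothetical stand-ins for the two sensors' RFMs:

```python
import numpy as np

def epipolar_line_direction(project_left, backproject_right, p_right, H, dH=50.0):
    """ELD at the left-image position conjugate to p_right.

    project_left:      callable (L, P, H) -> (x, y) on the left image
    backproject_right: callable (x, y, H) -> (L, P) for the right image
    dH: height interval (the Delta-H of the text, typically 20-100 m)
    """
    x, y = p_right
    lo = backproject_right(x, y, H - dH)   # ground point P'_-
    hi = backproject_right(x, y, H + dH)   # ground point P'_+
    a = np.asarray(project_left(lo[0], lo[1], H - dH), float)
    b = np.asarray(project_left(hi[0], hi[1], H + dH), float)
    d = b - a                              # projected segment on the left image
    return d / np.linalg.norm(d)
```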

## 4. Proposed Approach

#### 4.1. Workflow

- Prepare the global rotation matrix and the block-wise homography transformation models.
- Evaluate the prepared transformation model through virtual corresponding points.
- Perform epipolar resampling.

The global rotation transformations of the left image **I** and the right image **I′** are:

The translations **t** and **t′** are included to ensure that the entirety of the rotated overlapping areas lies in a proper image-coordinate range for resampling.

#### 4.2. Block-Wise Transformation of the Left Image

The overlapping area of the rotated left image is divided into a grid of square blocks. The side length of the squares is denoted as h. The homography **H**_{i,j} within the square $s{q}_{i,j}$ is computed with the four pairs of grid points (${g}_{i,j}^{\left(rot\right)}$, ${g}_{i,j}^{\left(epi\right)}$), (${g}_{i+1,j}^{\left(rot\right)}$, ${g}_{i+1,j}^{\left(epi\right)}$), (${g}_{i,j+1}^{\left(rot\right)}$, ${g}_{i,j+1}^{\left(epi\right)}$), and (${g}_{i+1,j+1}^{\left(rot\right)}$, ${g}_{i+1,j+1}^{\left(epi\right)}$).
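Each block-wise homography can be solved from its four grid-point pairs with the standard direct linear transform (DLT); this is a generic sketch with my own function names, not the authors' code:

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (in the homogeneous
    sense) from four point pairs via the direct linear transform (DLT).
    src, dst: (4, 2) arrays, e.g. rotated-grid and epipolar-grid corners."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each pair contributes two homogeneous linear equations in the 9 entries of H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to an (N, 2) array of points."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```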

#### 4.3. Block-Wise Transformation of the Right Image

The block-wise transformation of the right image serves two purposes: one is to eliminate the y-disparities of the conjugate points (**p**, **p′**), and the other is to balance the ground resolution in the x-direction and the y-direction. The homography **H′**_{i,j} is applied within the polygon **P′**_{i,j} formed by the grid points ${{g}^{\prime}}_{i,j}^{\left(rot\right)}$, ${{g}^{\prime}}_{i+1,j}^{\left(rot\right)}$, ${{g}^{\prime}}_{i,j+1}^{\left(rot\right)}$, and ${{g}^{\prime}}_{i+1,j+1}^{\left(rot\right)}$. **H′**_{i,j} is computed with the four pairs of grid points on the right image: (${{g}^{\prime}}_{i,j}^{\left(rot\right)}$, ${{g}^{\prime}}_{i,j}^{\left(epi\right)}$), (${{g}^{\prime}}_{i+1,j}^{\left(rot\right)}$, ${{g}^{\prime}}_{i+1,j}^{\left(epi\right)}$), (${{g}^{\prime}}_{i,j+1}^{\left(rot\right)}$, ${{g}^{\prime}}_{i,j+1}^{\left(epi\right)}$), and (${{g}^{\prime}}_{i+1,j+1}^{\left(rot\right)}$, ${{g}^{\prime}}_{i+1,j+1}^{\left(epi\right)}$).

## 5. Experiments

#### 5.1. Data and Process

First, project the point **p** to the right image **I′** through the process in Equation (4) and denote the result as **p″**. Second, obtain the ELD through the process in Equation (5) at the location of **p′**, which is the matched point of **p** on the right image **I′**. Third, compute the point-to-line distance between the point **p″** and the straight line that crosses the point **p′** and follows the direction of the ELD [36]. The TPs with point-to-line distances of less than 0.5 pixels were used for evaluation. It should be noted that the images with larger GSDs were set as the left images in the epipolar resampling, to avoid up-sampling the images. The y-disparities were computed in units of left-image pixels.
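The point-to-line distance in the third step is the perpendicular distance from **p″** to the line through **p′** along the unit ELD vector, which a 2D cross product gives directly (hypothetical helper):

```python
import numpy as np

def epipolar_point_to_line_distance(p2, p1, ed):
    """Distance from the projected point p2 (p'' in the text) to the straight
    line through the matched point p1 (p') along the ELD vector ed."""
    d = np.asarray(ed, float)
    d = d / np.linalg.norm(d)              # unit line direction
    v = np.asarray(p2, float) - np.asarray(p1, float)
    # magnitude of the component of v perpendicular to the line direction
    return abs(v[0] * d[1] - v[1] * d[0])
```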

#### 5.2. Experimental Results

#### 5.3. Analysis and Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. *Int. J. Comput. Vis.* **2002**, *47*, 7–42.
2. Zhang, Y.; Zhang, Y.; Mo, D.; Zhang, Y.; Li, X. Direct digital surface model generation by semi-global vertical line locus matching. *Remote Sens.* **2017**, *9*, 214.
3. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. *IEEE Trans. Pattern Anal. Mach. Intell.* **2010**, *32*, 1362–1376.
4. Garg, R.; Vijay Kumar, B.G.; Carneiro, G.; Reid, I. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Berlin, Germany, 2016; pp. 740–756.
5. Godard, C.; Mac Aodha, O.; Brostow, G.J. Unsupervised monocular depth estimation with left-right consistency. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 270–279.
6. Cao, Y.; Wu, Z.; Shen, C. Estimating depth from monocular images as classification using deep fully convolutional residual networks. *IEEE Trans. Circuits Syst. Video Technol.* **2018**, *28*, 3174–3182.
7. Hirschmuller, H. Stereo processing by semiglobal matching and mutual information. *IEEE Trans. Pattern Anal. Mach. Intell.* **2008**, *30*, 328–341.
8. Toutin, T.; Schmitt, C.; Wang, H. Impact of no GCP on elevation extraction from WorldView stereo data. *ISPRS J. Photogramm. Remote Sens.* **2012**, *72*, 73–79.
9. Wang, T.; Zhang, G.; Li, D.; Tang, X.; Jiang, Y.; Pan, H.; Zhu, X.; Fang, C. Geometric accuracy validation for ZY-3 satellite imagery. *IEEE Geosci. Remote Sens. Lett.* **2014**, *11*, 1168–1171.
10. Zbontar, J.; LeCun, Y. Stereo matching by training a convolutional neural network to compare image patches. *J. Mach. Learn. Res.* **2016**, *17*, 2.
11. Zbontar, J.; LeCun, Y. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1592–1599.
12. Zhang, Z. Determining the epipolar geometry and its uncertainty: A review. *Int. J. Comput. Vis.* **1998**, *27*, 161–195.
13. Hartley, R.; Zisserman, A. *Multiple View Geometry in Computer Vision*; Cambridge University Press: Cambridge, UK, 2003.
14. Gupta, R.; Hartley, R.I. Linear pushbroom cameras. *IEEE Trans. Pattern Anal. Mach. Intell.* **1997**, *19*, 963–975.
15. Kim, T. A study on the epipolarity of linear pushbroom images. *Photogramm. Eng. Remote Sens.* **2000**, *66*, 961–966.
16. Al-Rousan, N.; Cheng, P.; Petrie, G.; Toutin, T.; Zoej, M.V. Automated DEM extraction and orthoimage generation from SPOT level 1B imagery. *Photogramm. Eng. Remote Sens.* **1997**, *63*, 965–974.
17. Bang, K.I.; Jeong, S.; Kim, K.-O.; Cho, W. Automatic DEM generation using IKONOS stereo imagery. In Proceedings of the 2003 IEEE International Geoscience and Remote Sensing Symposium (IGARSS'03), Toulouse, France, 21–25 July 2003; pp. 4289–4291.
18. Morgan, M.; Kim, K.-O.; Jeong, S.; Habib, A. Epipolar resampling of space-borne linear array scanner scenes using parallel projection. *Photogramm. Eng. Remote Sens.* **2006**, *72*, 1255–1263.
19. Morgan, M.F. *Epipolar Resampling of Linear Array Scanner Scenes*; University of Calgary: Calgary, AB, Canada, 2004.
20. Hu, F.; Wang, M.; Li, D. A novel epipolarity model of satellite stereo-imagery based on virtual horizontal plane of object-space. In Proceedings of the International Conference on Earth Observation Data Processing and Analysis (ICEODPA), Wuhan, China, 28–30 December 2008; International Society for Optics and Photonics: Bellingham, WA, USA, 2008; p. 72851D.
21. Wang, M.; Hu, F.; Li, J. Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model. *ISPRS J. Photogramm. Remote Sens.* **2011**, *66*, 347–355.
22. Dowman, I.J.; Michalis, P. Generic rigorous model for along track stereo satellite sensors. In Proceedings of the ISPRS Workshop on High Resolution Mapping from Space, Hanover, Germany, 9–13 September 2003.
23. Fraser, C.S.; Dial, G.; Grodecki, J. Sensor orientation via RPCs. *ISPRS J. Photogramm. Remote Sens.* **2006**, *60*, 182–194.
24. Idrissa, M.; Beumier, C. Generic epipolar resampling method for perspective frame camera and linear push-broom sensor. *Int. J. Remote Sens.* **2016**, *37*, 3494–3504.
25. Jannati, M.; Valadan Zoej, M.J.; Mokhtarzade, M. Epipolar resampling of cross-track pushbroom satellite imagery using the rigorous sensor model. *Sensors* **2017**, *17*, 129.
26. Koh, J.-W.; Yang, H.-S. Unified piecewise epipolar resampling method for pushbroom satellite images. *EURASIP J. Image Video Process.* **2016**, *2016*, 11.
27. Toutin, T. Spatiotriangulation with multisensor VIR/SAR images. *IEEE Trans. Geosci. Remote Sens.* **2004**, *42*, 2096–2103.
28. Toutin, T. Spatiotriangulation with multisensor HR stereo-images. *IEEE Trans. Geosci. Remote Sens.* **2006**, *44*, 456–462.
29. Jeong, J.; Kim, T. Analysis of dual-sensor stereo geometry and its positioning accuracy. *Photogramm. Eng. Remote Sens.* **2014**, *80*, 653–661.
30. Jeong, J.; Kim, T. Comparison of positioning accuracy of a rigorous sensor model and two rational function models for weak stereo geometry. *ISPRS J. Photogramm. Remote Sens.* **2015**, *108*, 172–182.
31. Jeong, J.; Yang, C.; Kim, T. Geo-positioning accuracy using multiple-satellite images: IKONOS, QuickBird, and KOMPSAT-2 stereo images. *Remote Sens.* **2015**, *7*, 4549–4564.
32. Jeong, J.; Kim, T. Quantitative estimation and validation of the effects of the convergence, bisector elevation, and asymmetry angles on the positioning accuracies of satellite stereo pairs. *Photogramm. Eng. Remote Sens.* **2016**, *82*, 625–633.
33. Sun, G.; Ranson, K.J.; Kharuk, V.I.; Kovacs, K. Validation of surface height from shuttle radar topography mission using shuttle laser altimeter. *Remote Sens. Environ.* **2003**, *88*, 401–411.
34. Sheng, Y. Theoretical analysis of the iterative photogrammetric method to determining ground coordinates from photo coordinates and a DEM. *Photogramm. Eng. Remote Sens.* **2005**, *71*, 863–871.
35. Ling, X.; Zhang, Y.; Xiong, J.; Huang, X.; Chen, Z. An image matching algorithm integrating global SRTM and image segmentation for multi-source satellite imagery. *Remote Sens.* **2016**, *8*, 672.
36. Wan, Y.; Zhang, Y. The P2L method of mismatch detection for push broom high-resolution satellite images. *ISPRS J. Photogramm. Remote Sens.* **2017**, *130*, 317–328.
37. Zhang, Y.; Wan, Y.; Huang, X.; Ling, X. DEM-assisted RFM block adjustment of pushbroom nadir viewing HRS imagery. *IEEE Trans. Geosci. Remote Sens.* **2016**, *54*, 1025–1034.

**Figure 1.** Four typical types of geometric relationships of pushbroom stereo images: (**a**) cross-track coplanar stereo; (**b**) along-track stereo; (**c**) cross-track non-coplanar stereo; and (**d**) cross-multi-directional-track stereo.

**Figure 3.**The process of transforming points between the original image-space and the epipolar image-space in the proposed approach.

**Figure 4.**The process of transforming the grid from the rotated image-space to the epipolar image-space in the proposed approach.

**Figure 5.** The relative positions of the tested satellite images: (**a**) the Guangdong test area; (**b**) the Jiangsu test area; and (**c**) the South Korea test area.

**Figure 6.** The ${\theta}_{c}^{\left(org\right)}$~H curves of: (**a**) a cross-track coplanar stereo image pair (JS-Z3N/JS-GF2-1); (**b**) an along-track stereo image pair (GD-Z3B/GD-Z3N-1); (**c**) a cross-track non-coplanar stereo image pair (SK-P1A/SK-W3-2); and (**d**) a CMDT stereo image pair (GD-Z3N-2/GD-IK-2).

| Image Title | Satellite | GSD (m) | Product Level | Acquisition Date | Roll (°) | Pitch (°) | Yaw (°) |
|---|---|---|---|---|---|---|---|
| **Test area near the city of Guangzhou, Guangdong Province, China** | | | | | | | |
| GD-Z3N-1 | ZY-3-NAD | 2.1 | Level-1A | 14 April 2015 | 0.00 | 0.22 | 0.19 |
| GD-Z3F | ZY-3-FWD | 3.2 | Level-1A | 14 April 2015 | −0.00 | −23.59 | 0.13 |
| GD-Z3B | ZY-3-BWD | 3.2 | Level-1A | 14 April 2015 | −0.18 | 24.09 | 0.25 |
| GD-Z3N-2 | ZY-3-NAD | 2.1 | Level-1A | 14 April 2015 | 0.00 | 0.22 | 0.19 |
| GD-GF2 | GF-2-PMS | 0.8 | Level-1A | 23 January 2015 | 1.45 | −0.36 | 0.01 |
| GD-IK-1 | IKONOS-2 | 1.0 | SGC ^{1} | 18 September 2001 | 12.17 | 22.21 | 0.00 |
| GD-IK-2 | IKONOS-2 | 1.0 | SGC | 23 November 2001 | 5.98 | 4.93 | 0.00 |
| GD-W2-1 | WorldView-2 | 0.5 | Level-1B | 28 December 2013 | −1.14 | 0.38 | 0.01 |
| GD-W2-2 | WorldView-2 | 0.5 | Level-1B | 28 December 2013 | −8.45 | 29.76 | 4.87 |
| **Test area in Southern Jiangsu Province, China** | | | | | | | |
| JS-Z3N | ZY-3-NAD | 2.1 | Level-1A | 17 October 2012 | 2.43 | 0.10 | 0.18 |
| JS-GF1 | GF-1-PMS | 2.0 | Level-1A | 22 January 2014 | 1.71 | −0.47 | 0.03 |
| JS-GF2-1 | GF-2-PMS | 0.8 | Level-1A | 15 October 2015 | 15.88 | −1.18 | 0.28 |
| JS-GF2-2 | GF-2-PMS | 0.8 | Level-1A | 15 October 2015 | 15.88 | −1.18 | 0.27 |
| **Test area near the city of Seoul, South Korea** | | | | | | | |
| SK-W3-1 | WorldView-3 | 0.4 | Level-1B | 7 August 2015 | 26.82 | 0.01 | 0.01 |
| SK-W3-2 | WorldView-3 | 0.4 | Level-1B | 7 August 2015 | 21.18 | 33.01 | 12.26 |
| SK-P1A | Pleiades-HR 1A | 0.5 | Sensor | 18 November 2017 | −11.58 | 4.52 | 0.89 |
| SK-P1B | Pleiades-HR 1B | 0.5 | Sensor | 28 September 2017 | 1.56 | 13.78 | 0.28 |

^{1} SGC, Standard Geometrically Corrected.

| Left Image | Right Image | Approach | TPs RMS | TPs Min | TPs Max | VCPs RMS | VCPs Min | VCPs Max |
|---|---|---|---|---|---|---|---|---|
| JS-Z3N | JS-GF2-1 | RBH | 0.26 | −0.53 | 0.53 | 0.01 | −0.02 | 0.02 |
| | | PRP | 0.24 | −0.50 | 0.50 | 0.03 | −0.04 | 0.04 |
| | | TPP | 0.31 | −0.77 | 0.67 | 0.11 | −0.31 | 0.19 |
| JS-GF1 | JS-GF2-2 | RBH | 0.28 | −0.50 | 0.52 | 0.02 | −0.01 | 0.04 |
| | | PRP | 0.25 | −0.51 | 0.51 | 0.04 | −0.07 | 0.06 |
| | | TPP | 0.33 | −0.63 | 0.90 | 0.17 | −0.22 | 0.44 |

| Left Image | Right Image | Approach | TPs RMS | TPs Min | TPs Max | VCPs RMS | VCPs Min | VCPs Max |
|---|---|---|---|---|---|---|---|---|
| GD-W2-1 | GD-W2-2 | RBH | 0.23 | −0.53 | 0.50 | 0.02 | −0.05 | 0.04 |
| | | PRP | 0.26 | −0.72 | 0.76 | 0.14 | −0.25 | 0.25 |
| | | TPP | 0.24 | −0.64 | 0.61 | 0.08 | −0.18 | 0.15 |
| GD-Z3B | GD-Z3N-1 | RBH | 0.19 | −0.50 | 0.50 | 0.00 | −0.01 | 0.01 |
| | | PRP | 0.23 | −0.70 | 0.71 | 0.12 | −0.23 | 0.22 |
| | | TPP | 0.26 | −0.85 | 0.70 | 0.12 | −0.38 | 0.21 |
| GD-Z3F | GD-Z3N-1 | RBH | 0.19 | −0.50 | 0.50 | 0.00 | −0.01 | 0.01 |
| | | PRP | 0.19 | −0.58 | 0.56 | 0.04 | −0.10 | 0.10 |
| | | TPP | 0.26 | −0.73 | 0.72 | 0.12 | −0.27 | 0.24 |
| SK-W3-1 | SK-W3-2 | RBH | 0.29 | −0.59 | 0.51 | 0.05 | −0.08 | 0.03 |
| | | PRP | 0.53 | −1.42 | 1.34 | 0.42 | −0.82 | 0.76 |
| | | TPP | 0.41 | −0.66 | 1.46 | 0.32 | −0.29 | 1.00 |

| Left Image | Right Image | Approach | TPs RMS | TPs Min | TPs Max | VCPs RMS | VCPs Min | VCPs Max |
|---|---|---|---|---|---|---|---|---|
| GD-Z3B | GD-GF2 | RBH | 0.25 | −0.50 | 0.50 | 0.01 | −0.05 | 0.07 |
| | | PRP | 0.26 | −0.81 | 0.80 | 0.26 | −0.46 | 0.41 |
| | | TPP | 0.36 | −1.06 | 1.04 | 0.37 | −0.74 | 0.70 |
| GD-Z3F | GD-GF2 | RBH | 0.24 | −0.50 | 0.50 | 0.01 | −0.03 | 0.07 |
| | | PRP | 0.22 | −0.71 | 0.78 | 0.20 | −0.35 | 0.37 |
| | | TPP | 0.31 | −0.91 | 1.05 | 0.32 | −0.58 | 0.74 |
| GD-IK-1 | GD-W2-1 | RBH | 0.26 | −0.50 | 0.48 | 0.00 | −0.00 | 0.01 |
| | | PRP | 0.39 | −0.79 | 0.86 | 0.22 | −0.36 | 0.46 |
| | | TPP | 0.63 | −1.20 | 1.59 | 0.50 | −0.94 | 1.18 |
| SK-P1A | SK-W3-2 | RBH | 0.27 | −0.54 | 0.53 | 0.01 | −0.02 | 0.01 |
| | | PRP | 0.35 | −0.90 | 1.01 | 0.25 | −0.49 | 0.68 |
| | | TPP | 1.02 | −2.58 | 2.81 | 1.00 | −2.06 | 2.45 |

| Left Image | Right Image | Approach | TPs RMS | TPs Min | TPs Max | VCPs RMS | VCPs Min | VCPs Max |
|---|---|---|---|---|---|---|---|---|
| GD-GF2 | GD-W2-2 | RBH | 0.28 | −0.67 | 0.51 | 0.06 | −0.11 | 0.04 |
| | | PRP | 0.40 | −1.21 | 1.14 | 0.35 | −0.69 | 0.68 |
| | | TPP | 0.73 | −1.90 | 1.99 | 0.73 | −1.40 | 1.63 |
| GD-Z3N-2 | GD-IK-2 | RBH | 0.27 | −0.51 | 0.51 | 0.01 | −0.04 | 0.04 |
| | | PRP | 1.60 | −3.15 | 3.24 | 1.57 | −2.85 | 2.84 |
| | | TPP | 3.18 | −5.74 | 5.99 | 3.16 | −5.58 | 5.59 |
| GD-Z3N-1 | GD-W2-2 | RBH | 0.26 | −0.65 | 0.50 | 0.11 | −0.32 | 0.11 |
| | | PRP | 0.49 | −1.14 | 1.62 | 0.38 | −0.74 | 0.71 |
| | | TPP | 0.92 | −2.02 | 2.00 | 0.84 | −1.66 | 1.56 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Hu, J.; Xia, G.-S.; Sun, H.
An SRTM-Aided Epipolar Resampling Method for Multi-Source High-Resolution Satellite Stereo Observation. *Remote Sens.* **2019**, *11*, 678.
https://doi.org/10.3390/rs11060678
