# GNSS-Denied Semi-Direct Visual Navigation for Autonomous UAVs Aided by PI-Inspired Inertial Priors


## Abstract


## 1. Mathematical Notation

## 2. Introduction and Outline

## 3. Objective, Novelty, and Application

## 4. Pose Optimization within Visual Odometry

- The ECEF terrain 3D coordinates of all features $j$ visible in the image ($\mathbf{p}_{j}^{\mathrm{E}}$), obtained by the structure optimization phase (Appendix C) corresponding to the previous image. These terrain 3D coordinates are known as the terrain map, and constitute a side product generated by VO pipelines.
- The 2D position of the same features $j$ within the current image $i$ ($\mathbf{p}_{ij}^{\mathrm{IMG}}$), supplied by the previous feature alignment phase (Appendix C).
- The rough estimation of the ECEF to camera pose $\stackrel{\circ}{\boldsymbol{\zeta}}_{\mathrm{EC}i}^{\circ}$ for the current frame $i$ provided by the sparse image alignment phase (Appendix C), which acts as the initial value for the camera pose ($\stackrel{\circ}{\boldsymbol{\zeta}}_{\mathrm{EC}i0}$) to be refined by iteration.

## 5. Proposed Pose Optimization within Visual Inertial Odometry

#### 5.1. Rationale for the Introduction of Priors

#### 5.2. Prior-Based Pose Optimization

The proposed prior-based pose optimization obtains the ECEF to camera pose $\stackrel{\circ}{\boldsymbol{\zeta}}_{\mathrm{EC}i}$ that minimizes the reprojection error $\mathrm{E}_{\mathrm{RP}i}$ discussed in Appendix C combined with the weighted attitude adjustment error $\mathrm{E}_{\mathrm{q},i}$. The specific weight $\mathrm{f}_{\mathrm{q}}$ is discussed in Section 5.3. Inspired by [10], the main goal of the optimization algorithm is to minimize the reprojection error of the different terrain 3D points while simultaneously remaining close to the attitude and altitude targets derived from the inertial filter.
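The qualitative effect of the weighted attitude term can be sketched with a one-degree-of-freedom illustration. The function names, the scalar pitch state, and the quadratic stand-ins for $\mathrm{E}_{\mathrm{RP}i}$ and $\mathrm{E}_{\mathrm{q},i}$ are assumptions made for clarity; the article's actual optimizer iterates over the full pose.

```python
def combined_cost(pitch, pitch_visual_opt, pitch_inertial, f_q):
    """Illustrative combined cost: a quadratic stand-in for the
    reprojection error (minimal at the visual-only optimum) plus the
    attitude adjustment error toward the inertial target, weighted by f_q."""
    e_rp = (pitch - pitch_visual_opt) ** 2
    e_q = (pitch - pitch_inertial) ** 2
    return e_rp + f_q * e_q

def optimal_pitch(pitch_visual_opt, pitch_inertial, f_q):
    """Closed-form minimizer of the quadratic combined cost: a weighted
    average that moves from the visual optimum toward the inertial
    target as the weight f_q grows."""
    return (pitch_visual_opt + f_q * pitch_inertial) / (1.0 + f_q)
```

With `f_q = 0` the result coincides with the reprojection-only optimum, while growing `f_q` pulls the solution toward the inertial prior, which is the behavior that the activation logic of Section 5.3 modulates.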

#### 5.3. PI Control-Inspired Pose Adjustment Activation

#### 5.3.1. Pitch Adjustment Activation

- The pitch adjustment due to altitude, $\mathit{\Delta}\stackrel{\circ}{\theta}_{\mathrm{h}}^{\circ \circ}$, varies linearly from zero when the adjustment error is below the threshold $\mathit{\Delta}\mathrm{h}_{\mathrm{LOW}}$ to $\mathit{\Delta}\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ \circ}$ when the error is twice the threshold, as shown in (26). The adjustment is bounded at this value to avoid destabilizing SVO with pose adjustments that differ too much from their reprojection-only optimum $\stackrel{\circ}{\boldsymbol{\zeta}}_{\mathrm{EB}i}^{\circ \circ}$ (9).
- The pitch adjustment due to pitch, $\mathit{\Delta}\stackrel{\circ}{\theta}_{\theta}^{\circ \circ}$, works similarly but employs $\mathit{\Delta}\theta$ instead of $\mathit{\Delta}\mathrm{h}$ and $\mathit{\Delta}\theta_{\mathrm{LOW}}$ instead of $\mathit{\Delta}\mathrm{h}_{\mathrm{LOW}}$, while relying on the same limit $\mathit{\Delta}\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ \circ}$. In addition, $\mathit{\Delta}\stackrel{\circ}{\theta}_{\theta}^{\circ \circ}$ is set to zero if its sign differs from that of $\mathit{\Delta}\stackrel{\circ}{\theta}_{\mathrm{h}}^{\circ \circ}$, and reduced so the combined effect of both targets does not exceed the limit ($|\mathit{\Delta}\stackrel{\circ}{\theta}_{\mathrm{h}}^{\circ \circ}+\mathit{\Delta}\stackrel{\circ}{\theta}_{\theta}^{\circ \circ}|\le \mathit{\Delta}\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ \circ}$).
- The pitch adjustment due to rate of climb, $\mathit{\Delta}\stackrel{\circ}{\theta}_{\mathrm{ROC}}^{\circ \circ}$, also follows a similar scheme but employs $\mathit{\Delta}\mathrm{ROC}$ instead of $\mathit{\Delta}\mathrm{h}$, $\mathit{\Delta}\mathrm{ROC}_{\mathrm{LOW}}$ instead of $\mathit{\Delta}\mathrm{h}_{\mathrm{LOW}}$, and $\mathit{\Delta}\stackrel{\circ}{\theta}_{2,\mathrm{MAX}}^{\circ \circ}$ instead of $\mathit{\Delta}\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ \circ}$. Additionally, it is multiplied by the ratio between $\mathit{\Delta}\stackrel{\circ}{\theta}_{\mathrm{h}}^{\circ \circ}$ and $\mathit{\Delta}\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ \circ}$ to limit its effects when the estimated altitude error $\mathit{\Delta}\mathrm{h}$ is small. This adjustment can act in both directions, imposing larger pitch adjustments if the altitude error is increasing or smaller ones if it is already diminishing.
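The activation logic above can be sketched as follows. The function and parameter names are illustrative assumptions, and the ramp of (26) is approximated by a generic piecewise-linear activation; this is a sketch of the described behavior, not the article's implementation.

```python
def ramp_adjustment(error, thr_low, adj_max):
    """Piecewise-linear activation: zero while |error| is below thr_low,
    growing linearly until it saturates at adj_max (sign preserved) when
    |error| reaches twice the threshold."""
    mag = abs(error)
    if mag <= thr_low:
        return 0.0
    frac = min((mag - thr_low) / thr_low, 1.0)  # reaches 1.0 at 2 * thr_low
    return (1.0 if error > 0 else -1.0) * frac * adj_max

def pitch_adjustments(dh, dtheta, droc,
                      dh_low, dtheta_low, droc_low, max1, max2):
    """Combine the three pitch adjustment targets described above."""
    adj_h = ramp_adjustment(dh, dh_low, max1)
    adj_t = ramp_adjustment(dtheta, dtheta_low, max1)
    if adj_t * adj_h < 0:                 # discard the pitch term if signs differ
        adj_t = 0.0
    combined = max(-max1, min(max1, adj_h + adj_t))
    adj_t = combined - adj_h              # keep |adj_h + adj_t| <= max1
    # rate-of-climb term, attenuated when the altitude error is small
    adj_roc = ramp_adjustment(droc, droc_low, max2) * abs(adj_h) / max1
    return adj_h, adj_t, adj_roc
```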

#### 5.3.2. Pitch and Bank Adjustment Activation

#### 5.3.3. Attitude Adjustment Activation

The superscript “$\mathrm{B}i$” indicates that it is viewed in the pose optimized body frame. This perturbation can be decoupled into a rotating direction and an angular displacement [1,3], resulting in $\mathit{\Delta}\stackrel{\circ}{\mathbf{r}}^{\mathrm{B}i,\circ \circ}=\stackrel{\circ}{\mathbf{n}}^{\mathrm{B}i,\circ \circ}\,\mathit{\Delta}\stackrel{\circ}{\varphi}^{\circ \circ}$.
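The axis-angle decoupling is a direct normalization of the rotation vector; a minimal sketch (`decouple_rotation` is an illustrative helper, not the article's code):

```python
import math

def decouple_rotation(r):
    """Split a small rotation perturbation vector r into its unit
    rotating direction n and its angular displacement phi, so r = n * phi."""
    phi = math.sqrt(sum(c * c for c in r))
    if phi < 1e-12:               # zero perturbation: direction is undefined
        return (0.0, 0.0, 0.0), 0.0
    return tuple(c / phi for c in r), phi
```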

#### 5.4. Additional Modifications to SVO

## 6. Testing: High-Fidelity Simulation and Scenarios

#### 6.1. Camera

The images viewed by the onboard camera are generated with the `Earth Viewer` library, a modification to `osgEarth` [18] (which, in turn, relies on `OpenSceneGraph` [19]) capable of generating realistic Earth images as long as the camera height over the terrain is significantly higher than the vertical relief present in the image. A more detailed explanation of the image generation process is provided in [17].

#### 6.2. Scenarios

Most existing VIO implementations are evaluated on the `EuRoC` Micro Air Vehicle (MAV) datasets [21], and so are independent comparisons such as [22]. These datasets contain perfectly synchronized stereo images, Inertial Measurement Unit (IMU) measurements, and laser-based ground truth readings for 11 different indoor trajectories flown with a MAV, each with a duration on the order of two minutes and a total distance on the order of $100\ \mathrm{m}$. This fact by itself indicates that the target application of existing VIO implementations differs significantly from the main focus of this article, the long-term flight of a fixed wing UAV in GNSS-Denied conditions: there may exist accumulating errors that are completely indiscernible after such short periods of time, but that grow non-linearly and are capable of inducing significant pose errors when the aircraft remains aloft for long periods.

- Scenario #1 has been defined with the objective of adequately representing the challenges faced by an autonomous fixed wing UAV that suddenly cannot rely on GNSS and hence changes course to reach a predefined recovery location situated at approximately one hour of flight time. In the process, in addition to executing an altitude and airspeed adjustment, the autonomous aircraft faces significant weather and wind field changes that make its GNSS-Denied navigation even more challenging.

  With respect to the mission, the stochastic parameters include the initial airspeed, pressure altitude, and bearing ($\mathrm{v}_{\mathrm{TAS,INI}}$, $\mathrm{H}_{\mathrm{P,INI}}$, $\varnothing_{\mathrm{INI}}$), their final values ($\mathrm{v}_{\mathrm{TAS,END}}$, $\mathrm{H}_{\mathrm{P,END}}$, $\varnothing_{\mathrm{END}}$), and the time at which each of the three maneuvers is initiated (turns are executed with a bank angle of $\xi_{\mathrm{TURN}}=\pm 10^{\circ}$, altitude changes employ an aerodynamic path angle of $\gamma_{\mathrm{TAS,CLIMB}}=\pm 2^{\circ}$, and airspeed modifications are automatically executed by the control system as set-point changes). The scenario lasts for $\mathrm{t}_{\mathrm{END}}=3800\ \mathrm{s}$, while the GNSS signals are lost at $\mathrm{t}_{\mathrm{GNSS}}=100\ \mathrm{s}$.

  The wind field is also defined stochastically, as its two parameters (speed and bearing) are constant both at the beginning ($\mathrm{v}_{\mathrm{WIND,INI}}$, $\varnothing_{\mathrm{WIND,INI}}$) and conclusion ($\mathrm{v}_{\mathrm{WIND,END}}$, $\varnothing_{\mathrm{WIND,END}}$) of the scenario, with a linear transition in between. The specific times at which the wind change starts and concludes also vary stochastically among the different simulation runs. As described in [15], the turbulence remains strong throughout the whole scenario, but its specific values also vary stochastically from one execution to the next.

  A similar linear transition occurs with the temperature and pressure offsets that define the atmospheric properties [23], as they are constant both at the start ($\mathit{\Delta}\mathrm{T}_{\mathrm{INI}}$, $\mathit{\Delta}\mathrm{p}_{\mathrm{INI}}$) and end ($\mathit{\Delta}\mathrm{T}_{\mathrm{END}}$, $\mathit{\Delta}\mathrm{p}_{\mathrm{END}}$) of the flight. In contrast with the wind field, the specific times at which the two transitions start and conclude are not only stochastic but also different from each other.
- Scenario #2 represents the challenges involved in continuing with the original mission upon the loss of the GNSS signals, executing a series of continuous turn maneuvers over a relatively short period of time with no atmospheric or wind variations. As in scenario $\#1$, the GNSS signals are lost at $\mathrm{t}_{\mathrm{GNSS}}=100\ \mathrm{s}$, but the scenario duration is shorter ($\mathrm{t}_{\mathrm{END}}=500\ \mathrm{s}$). The initial airspeed and pressure altitude ($\mathrm{v}_{\mathrm{TAS,INI}}$, $\mathrm{H}_{\mathrm{P,INI}}$) are defined stochastically and do not change throughout the whole scenario; the bearing, however, changes a total of eight times between its initial and final values, with all intermediate bearing values, as well as the time for each turn, varying stochastically from one execution to the next. Although the same turbulence is employed as in scenario $\#1$, the wind and atmospheric parameters ($\mathrm{v}_{\mathrm{WIND,INI}}$, $\varnothing_{\mathrm{WIND,INI}}$, $\mathit{\Delta}\mathrm{T}_{\mathrm{INI}}$, $\mathit{\Delta}\mathrm{p}_{\mathrm{INI}}$) remain constant throughout scenario $\#2$.
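The piecewise-linear schedules used above for the wind and atmospheric parameters can be sketched as follows; the function name is an illustrative assumption, and the actual stochastic draws of transition times and values are described in [15].

```python
def scheduled_value(t, t_start, t_end, v_ini, v_end):
    """Scenario parameter schedule: constant at v_ini before t_start,
    linear transition between t_start and t_end, constant at v_end after."""
    if t <= t_start:
        return v_ini
    if t >= t_end:
        return v_end
    frac = (t - t_start) / (t_end - t_start)
    return v_ini + frac * (v_end - v_ini)
```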

## 7. Results: Navigation System Error in GNSS-Denied Conditions

- The results obtained with the INS under the same two GNSS-Denied scenarios are described in detail in [6], a previous article by the same authors. It proves that it is possible to take advantage of sensors already present onboard fixed wing aircraft (accelerometers, gyroscopes, magnetometers, Pitot tube, air vanes, thermometer, and barometer), the particularities of fixed wing flight, and the atmospheric and wind estimations that can be obtained before the GNSS signals are lost, to develop an EKF (Extended Kalman Filter)-based INS that results in bounded (drift-free) estimations for attitude (ensuring that the aircraft can remain aloft in GNSS-Denied conditions for as long as there is fuel available), for altitude (the estimation error depends on the change in atmospheric pressure offset $\mathit{\Delta}\mathrm{p}$ [23] from its value at the time the GNSS signals are lost, which is bounded by atmospheric physics), and for ground velocity (the estimation error depends on the change in wind velocity from its value at the time the GNSS signals are lost, also bounded by atmospheric physics), together with an unavoidable drift in horizontal position caused by integrating the ground velocity without absolute observations. Note that of the six $\mathbb{SE}\left(3\right)$ degrees of freedom of the aircraft pose (three for attitude, two for horizontal position, one for altitude), the INS is hence capable of successfully estimating four in GNSS-Denied conditions. Figure 5 graphically depicts that the INS inputs include all sensor measurements $\tilde{\mathbf{x}}={\mathbf{x}}_{\mathrm{SENSED}}$ with the exception of the camera images $\mathbf{I}$.

- Visual navigation systems (either VNS or IA-VNS) are only necessary to reduce the estimation error in the two remaining degrees of freedom (the horizontal position). Although both of them estimate the complete six-dimensional aircraft pose, their attitude and altitude estimations shall only be understood as a means to provide an accurate horizontal position estimation, which represents their sole objective. Figure 6 shows that the VNS relies exclusively on the images $\mathbf{I}$ without the use of any other sensors; on the other hand, the IA-VNS represented in Figure 7 complements the images with the $\widehat{\mathbf{x}}={\mathbf{x}}_{\mathrm{EST}}$ outputs of the INS.

- As it does not rely on absolute references, visual navigation slowly accumulates error (drifts) not only in horizontal position, but also in attitude and altitude. The main focus of this article is on how the addition of INS based priors enables the IA-VNS to reduce the drift in all six dimensions, with the resulting horizontal position IA-VNSE being just a fraction of the INSE. The attitude and altitude IA-VNSEs, although improved when compared to the VNSEs, are qualitatively inferior to the driftless INSEs, but note that their purpose is just to enable better horizontal position IA-VNS estimations, not to replace the attitude and altitude INS outputs.
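The horizontal drift mechanism of dead-reckoning can be made concrete with a back-of-the-envelope sketch; the 1 m/s residual velocity error below is a hypothetical figure chosen only for illustration.

```python
def horizontal_drift(velocity_error_mps, gnss_denied_time_s):
    """Dead-reckoning drift: a constant ground-velocity estimation error
    integrates linearly into horizontal position error when no absolute
    position observations are available."""
    return velocity_error_mps * gnss_denied_time_s

# Scenario #1 spends t_END - t_GNSS = 3800 - 100 = 3700 s without GNSS,
# so a hypothetical 1 m/s residual wind-induced velocity error would
# accumulate into horizontal_drift(1.0, 3700.0) = 3700 m of drift.
```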

#### 7.1. Body Attitude Estimation

- After a short transition period following the introduction of GNSS-Denied conditions at $\mathrm{t}_{\mathrm{GNSS}}=100\ \mathrm{s}$, the body attitude inertial navigation system error or INSE (blue lines) does not experience any drift with time in either scenario, and is bounded by the quality of the onboard sensors and the inertial navigation algorithms [6].

- With respect to the visual navigation system error or VNSE (red lines), most of the scenario $\#1$ error is incurred during the turn maneuver at the beginning of the scenario (refer to $\mathrm{t}_{\mathrm{TURN}}$ within [15]), with only a slow accumulation during the rest of the trajectory, which consists of a long straight flight with isolated changes in altitude and speed. Additional error growth would certainly accumulate if more turns were to occur, although this is not tested in the simulation. This statement seems to contradict the results obtained with scenario $\#2$, in which the error grows with the initial turns but then stabilizes during the rest of the scenario, even though the aircraft is executing continuous turn maneuvers. This lack of error growth occurs because the scenario $\#2$ trajectories are so twisted (refer to [15]) that terrain zones previously mapped reappear in the camera field of view during the consecutive turns, and are hence employed by the pose optimization phase as absolute references, resulting in a much better attitude estimation than would occur under more widely spaced turns. A more detailed analysis (not shown in the figures) reveals that the estimation error is not incurred during the whole duration of the turns, but only during the roll-in and final roll-out maneuvers, where the optical flow is highest and hence most difficult for SVO to track (for the two evaluated scenarios, the optical flow during the roll-in and roll-out maneuvers is significantly higher than that induced by straight flight, pull-up, and push-down maneuvers, and even by the turning maneuvers themselves once the bank angle is no longer changing).

- The inertially assisted VNSE or IA-VNSE results (green lines) show that the introduction of priors in Section 5 works as intended, and that there is a clear benefit to using an IA-VNS when compared with the standalone VNS described in Appendix C. The IA-VNSE values at the beginning of both scenarios are nearly double those of the VNSE (refer to Figure 8 and Figure 9), caused by the initial pitch adjustment required to improve the fit between the homography output and the inertial estimations (Section 5.4); however, the balance between both errors quickly flips as soon as the aircraft starts maneuvering, resulting in body attitude IA-VNSE values significantly lower than those of the VNSE for the remainder of both scenarios. This improvement is more significant in the case of scenario $\#1$, as the prior-based pose optimization is by design a slow adjustment that requires significant time to correct attitude and altitude deviations between the visual and inertial estimations.

#### 7.2. Vertical Position Estimation

- The geometric altitude INSE (blue lines) is bounded by the change in atmospheric pressure offset since the time the GNSS signals are lost. Refer to [6] for additional information.
- The VNS estimation of the geometric altitude (red lines) is worse than that of the INS both qualitatively and quantitatively, even though the results are optimistic because of the ideal image generation process employed in the simulation. A continuous drift or error growth with time is present, resulting in final errors much higher than those obtained with the GNSS-Denied inertial filter. These errors are logically bigger for scenario $\#1$ because of its much longer duration.

  A small percentage of this drift can be attributed to the slow accumulation of error inherent to the SVO motion thread algorithms introduced in Appendix C, but most of it results from adding the estimated relative pose between two consecutive images to a pose (that of the previous image) whose attitude already possesses a small pitch error (refer to the attitude estimation analysis in Section 7.1). Note that even a fraction of a degree of pitch deviation can result in hundreds of meters of vertical error when applied to the total distance flown in scenario $\#1$, as SVO can be very precise when estimating pose changes between consecutive images, but lacks any absolute reference with which to avoid slowly accumulating these errors over time. This fact is precisely the reason why the vertical position VNSE grows more slowly in the second half of scenario $\#2$, as shown in Figure 12. As explained in Section 7.1 above, continuous turn maneuvers cause previously mapped terrain points to reappear in the camera field of view, stopping the growth in the attitude error (pitch included), which indirectly slows the growth in altitude estimation error.
- The benefits of introducing priors to limit the differences between the visual and inertial altitude estimations are reflected in the IA-VNSE (green lines). The error reduction is drastic in the case of scenario $\#1$, where the extended duration allows the small pitch adjustments of the pose optimization to accumulate into significant altitude corrections over time, and less pronounced but nevertheless significant for scenario $\#2$, where the VNSE (and hence also the IA-VNSE) results already benefit from previously mapped terrain points reappearing in the camera field of view as a result of the continuous maneuvers. The magnitude of the improvement is worth remarking: the final standard deviation $\sigma_{\mathrm{END,h}}$ diminishes from $287.58$ to $49.17\ \mathrm{m}$ for scenario $\#1$, and from $20.56$ to $13.01\ \mathrm{m}$ for scenario $\#2$.
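The sensitivity of the vertical error to a small pitch bias noted above can be checked with a one-line computation; the 0.1° bias and 200 km distance are hypothetical figures chosen only for illustration.

```python
import math

def vertical_error(pitch_bias_deg, distance_m):
    """Altitude error accumulated when the along-track displacement is
    integrated with a constant small pitch estimation bias."""
    return distance_m * math.tan(math.radians(pitch_bias_deg))

# A 0.1 deg pitch bias sustained over 200 km of flight yields roughly
# 349 m of vertical error, consistent with the hundreds-of-meters drift
# observed for the standalone VNS in scenario #1.
```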

#### 7.3. Horizontal Position Estimation

## 8. Influence of Terrain Type

- The “desert” (DS) zone (left image within Figure 15) is located in the Sonoran desert of southern Arizona (USA) and northern Mexico. It is characterized by a combination of bajadas (broad slopes of debris) and isolated very steep mountain ranges. There is virtually no human infrastructure or flat terrain, as the bajadas have sustained slopes of up to $7^{\circ}$. The altitude of the bajadas ranges from $300$ to $800\ \mathrm{m}$ above MSL, and the mountains reach up to $800\ \mathrm{m}$ above the surrounding terrain. Texture is abundant because of the cacti and the vegetation along the dry creeks.
- The “farm” (FM) zone (right image within Figure 15) is located in the fertile farmland of southeastern Illinois and southwestern Indiana (USA). A significant percentage of the terrain is made of regular plots of farmland, but there also exists some woodland, farm houses, rivers, lots of little towns, and roads. It is mostly flat with an altitude above MSL between $100$ and $200\phantom{\rule{0.166667em}{0ex}}\mathrm{m}$, and altitude changes are mostly restricted to the few forested areas. Texture is non-existent in the farmlands, where extracting features is often impossible.
- The “forest” (FR) zone (left image within Figure 16) is located in the deciduous forestlands of Vermont and New Hampshire (USA). The terrain is made up of forests and woodland, with some clearcuts, small towns, and roads. There are virtually no flat areas, as the land is made up by hills and small to medium size mountains that are never very steep. The valleys range from $100$ to $300\phantom{\rule{0.166667em}{0ex}}\mathrm{m}$ above MSL, while the tops of the mountains reach $500$ to $900\phantom{\rule{0.166667em}{0ex}}\mathrm{m}$. Features are plentiful in the woodlands.
- The “mix” (MX) zone (right image within Figure 16) is located in northern Mississippi and extreme southwestern Tennessee (USA). Approximately half of the land consists of woodland in the hills, and the other half is made up by farmland in the valleys, with a few small towns and roads. Altitude changes are always present and the terrain is never flat, but they are smaller than in the DS and FR zones, with the altitude oscillating between $100$ and $200\phantom{\rule{0.166667em}{0ex}}\mathrm{m}$ above MSL.

- The “prairie” (PR) zone (left image within Figure 17) is located in the Everglades floodlands of southern Florida (USA). It consists of flat grasslands, swamps, and tree islands located a few meters above MSL, with the only human infrastructure being a few dirt roads and landing strips, but no settlements. Features may be difficult to obtain in some areas due to the lack of texture.
- The “urban” (UR) zone (right image within Figure 17) is located in the Los Angeles metropolitan area (California, USA). It is composed of a combination of single family houses and commercial buildings separated by freeways and streets. There is some vegetation but no natural landscapes, and the terrain is flat and close to MSL.

## 9. Summary of Results

- The **body attitude** estimation shows significant quantitative improvements over a standalone Visual Navigation System (VNS) in both pitch and bank angle estimations, with no negative influence on the yaw angle estimations. A small amount of drift with time is present and cannot be fully eliminated. Body pitch and bank angle estimations do not deviate in excess from their INS counterparts, while the body yaw angle visual estimation is significantly more accurate than that obtained by the INS.
- The **vertical position** estimation shows major improvements over that of a standalone VNS, not only quantitatively but also qualitatively, as drift is fully eliminated. The visual estimation does not deviate in excess from the inertial one, which is bounded by atmospheric physics.
- The **horizontal position** estimation, whose improvement is the main objective of the proposed algorithm, shows major gains when compared to either the standalone VNS or the INS, although drift is still present.

Although the **terrain** texture (or lack thereof) and its elevation relief are key factors for the visual odometry algorithms, their influence on the aircraft pose estimation results is slim, and the accuracy of the IA-VNS does not vary significantly among the various evaluated terrain types.

## 10. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
| --- | --- |
| BRIEF | Binary Robust Independent Elementary Features |
| DS | DeSert terrain type |
| DSO | Direct Sparse Odometry |
| ECEF | Earth Centered Earth Fixed |
| EKF | Extended Kalman Filter |
| FAST | Features from Accelerated Segment Test |
| FM | FarM terrain type |
| FR | FoRest terrain type |
| GNSS | Global Navigation Satellite System |
| IA-VNS | Inertially Assisted VNS |
| IA-VNSE | Inertially Assisted Visual Navigation System Error |
| IMU | Inertial Measurement Unit |
| INS | Inertial Navigation System |
| INSE | Inertial Navigation System Error |
| iSAM | Incremental Smoothing And Mapping |
| ISO | International Organization for Standardization |
| LSD | Large Scale Direct |
| MAV | Micro Air Vehicle |
| MSCKF | Multi State Constraint Kalman Filter |
| MSF | Multi-Sensor Fusion |
| MSL | Mean Sea Level |
| MX | MiX terrain type |
| NED | North East Down |
| NSE | Navigation System Error |
| OKVIS | Open Keyframe Visual Inertial SLAM |
| ORB | Oriented FAST and Rotated BRIEF |
| PI | Proportional Integral |
| PR | PRairie terrain type |
| RANSAC | RANdom SAmple Consensus |
| ROC | Rate Of Climb |
| ROVIO | Robust Visual Inertial Odometry |
| SLAM | Simultaneous Localization And Mapping |
| SLERP | Spherical Linear Interpolation |
| SVO | Semi-Direct Visual Odometry |
| SWaP | Size, Weight, and Power |
| TAS | True Air Speed |
| UAV | Unmanned Aerial Vehicle |
| UR | URban terrain type |
| USA | United States of America |
| VINS | Visual Inertial Navigation System |
| VIO | Visual Inertial Odometry |
| VNS | Visual Navigation System |
| VNSE | Visual Navigation System Error |
| VO | Visual Odometry |
| WGS84 | World Geodetic System 1984 |

## Appendix A. Optical Flow

## Appendix B. Introduction to GNSS-Denied Navigation

#### Appendix B.1. Possible Approaches to GNSS-Denied Navigation

#### Appendix B.2. Visual Navigation

#### Appendix B.3. Visual Inertial Navigation

## Appendix C. Semi-Direct Visual Odometry

## References

- Gallo, E. The SO(3) and SE(3) Lie Algebras of Rigid Body Rotations and Motions and their Application to Discrete Integration, Gradient Descent Optimization, and State Estimation. arXiv **2022**, arXiv:2205.12572v1.
- Sola, J. Quaternion Kinematics for the Error-State Kalman Filter. arXiv **2017**, arXiv:1711.02508v1.
- Sola, J.; Deray, J.; Atchuthan, D. A Micro Lie Theory for State Estimation in Robotics. arXiv **2018**, arXiv:1812.01537v9.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast Semi-Direct Monocular Visual Odometry. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2014.
- Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems. IEEE Trans. Robot. **2016**, 33, 249–265.
- Gallo, E.; Barrientos, A. Reduction of GNSS-Denied Inertial Navigation Errors for Fixed Wing Autonomous Unmanned Air Vehicles. Aerosp. Sci. Technol. **2022**, 120.
- Baker, S.; Matthews, I. Lucas-Kanade 20 Years On: A Unifying Framework. Int. J. Comput. Vis. **2004**, 56, 221–255.
- Huber, P.J. Robust Statistics; John Wiley & Sons: New York, NY, USA, 1981.
- Fox, J.; Weisberg, S. Robust Regression. 2013. Available online: http://users.stat.umn.edu/~sandy/courses/8053/handouts/robust.pdf (accessed on 10 January 2023).
- Baker, S.; Gross, R.; Matthews, I. Lucas-Kanade 20 Years On: A Unifying Framework: Part 4; Technical Report CMU-RI-TR-04-14; Carnegie Mellon University: Pittsburgh, PA, USA, 2004.
- Ogata, K. Modern Control Engineering, 4th ed.; Prentice Hall, 2002. Available online: https://scirp.org/reference/referencespapers.aspx?referenceid=123554 (accessed on 10 January 2023).
- Skogestad, S.; Postlethwaite, I. Multivariable Feedback Control: Analysis and Design, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2005.
- Stevens, B.L.; Lewis, F.L. Aircraft Control and Simulation, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2003.
- Franklin, G.F.; Powell, J.D.; Workman, M. Digital Control of Dynamic Systems, 3rd ed.; Ellis-Kagle Press: Sunnyvale, CA, USA, 1998.
- Gallo, E. Stochastic High Fidelity Simulation and Scenarios for Testing of Fixed Wing Autonomous GNSS-Denied Navigation Algorithms. arXiv **2021**, arXiv:2102.00883v3.
- Gallo, E. High Fidelity Flight Simulation for an Autonomous Low SWaP Fixed Wing UAV in GNSS-Denied Conditions. C++ Open Source Code. 2020. Available online: https://github.com/edugallogithub/gnssdenied_flight_simulation (accessed on 10 January 2023).
- Gallo, E.; Barrientos, A. Customizable Stochastic High Fidelity Model of the Sensors and Camera onboard a Fixed Wing Autonomous Aircraft. Sensors **2022**, 22, 5518.
- osgEarth. Available online: http://osgearth.org (accessed on 10 January 2023).
- Open Scene Graph. Available online: http://openscenegraph.org (accessed on 10 January 2023).
- Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S.S. An Invitation to 3-D Vision, From Images to Geometric Models; Imaging, Vision, and Graphics; Springer: Berlin, Germany, 2001.
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC MAV Datasets. Int. J. Robot. Res. **2016**.
- Delmerico, J.; Scaramuzza, D. A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2502–2509.
- Gallo, E. Quasi Static Atmospheric Model for Aircraft Trajectory Prediction and Flight Simulation. arXiv **2021**, arXiv:2101.10744v1.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003.
- Heeger, D.J. Notes on Motion Estimation. 1998. Available online: https://www.cns.nyu.edu/csh/csh04/Articles/carandinifix.pdf (accessed on 10 January 2023).
- Hassanalian, M.; Abdelkefi, A. Classifications, Applications, and Design Challenges of Drones: A Review. Prog. Aerosp. Sci. **2017**, 91, 99–131.
- Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned Aerial Vehicles (UAVs): A Survey on Civil Applications and Key Research Challenges. IEEE Access **2019**, 7, 48572–48634.
- Bijjahalli, S.; Sabatini, R.; Gardi, A. Advances in Intelligent and Autonomous Navigation Systems for Small UAS. Prog. Aerosp. Sci. **2020**, 115, 100617.
- Farrell, J.A. Aided Navigation, GPS with High Rate Sensors; Electronic Engineering Series; McGraw-Hill: New York, NY, USA, 2008.
- Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; GNSS Technology and Application Series; Artech House: Norwood, MA, USA, 2008.
- Chatfield, A.B. Fundamentals of High Accuracy Inertial Navigation; American Institute of Aeronautics and Astronautics, Progress in Astronautics and Aeronautics: Reston, VA, USA, 1997; Volume 174.
- Elbanhawi, M.; Mohamed, A.; Clothier, R.; Palmer, J.; Simic, M.; Watkins, S. Enabling Technologies for Autonomous MAV Operations. Prog. Aerosp. Sci. **2017**, 91, 27–52.
- Sabatini, R.; Moore, T.; Ramasamy, S. Global Navigation Satellite Systems Performance Analysis and Augmentation Strategies in Aviation. Prog. Aerosp. Sci. **2017**, 95, 45–98.
- Tippitt, C.; Schultz, A.; Procino, W. Vehicle Navigation: Autonomy Through GPS-Enabled and GPS-Denied Environments; State of the Art Report DSIAC-2020-1328; Defense Systems Information Analysis Center: Belcamp, MD, USA, 2020.
- Gyagenda, N.; Hatilima, J.V.; Roth, H.; Zhmud, V. A Review of GNSS Independent UAV Navigation Techniques. Robot. Auton. Syst. **2022**, 152, 104069.
- Kapoor, R.; Ramasamy, S.; Gardi, A.; Sabatini, R. UAV Navigation using Signals of Opportunity in Urban Environments: A Review. Energy Procedia **2017**, 110, 377–383.
- Coluccia, A.; Ricciato, F.; Ricci, G. Positioning Based on Signals of Opportunity. IEEE Commun. Lett. **2014**, 18, 356–359.
- Goh, S.T.; Abdelkhalik, O.; Zekavat, S.A. A Weighted Measurement Fusion Kalman Filter Implementation for UAV Navigation. Aerosp. Sci. Technol. **2013**, 28, 315–323.
- Couturier, A.; Akhloufi, M.A. A Review on Absolute Visual Localization for UAV. Robot. Auton. Syst. **2020**, 135, 103666.
- Goforth, H.; Lucey, S. GPS-Denied UAV Localization using Pre Existing Satellite Imagery. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019.
- Ziaei, N. Geolocation of an Aircraft using Image Registration Coupling Modes for Autonomous Navigation. arXiv **2019**, arXiv:1909.02875v1.
- Wang, T. Augmented UAS Navigation in GPS Denied Terrain Environments using Synthetic Vision. Ph.D. Thesis, Iowa State University, Ames, IA, USA, 2018.
- Scaramuzza, D.; Fraundorfer, F. Visual Odometry Part 1: The First 30 Years and Fundamentals. IEEE Robot. Autom. Mag.
**2011**, 18, 80–92. [Google Scholar] [CrossRef] - Fraundorfer, F.; Scaramuzza, D. Visual Odometry Part 2: Matching, Robustness, Optimization, and Applications. IEEE Robot. Autom. Mag.
**2012**, 19, 78–90. [Google Scholar] [CrossRef][Green Version] - Scaramuzza, D. Tutorial on Visual Odometry; Robotics & Perception Group, University of Zurich: Zurich, Switzerland, 2012. [Google Scholar]
- Scaramuzza, D. Visual Odometry and SLAM: Past, Present, and the Robust Perception Age; Robotics & Perception Group, University of Zurich: Zurich, Switzerland, 2017. [Google Scholar]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Towards the Robust Perception Age. IEEE Trans. Robot.
**2016**, 32, 1309–1332. [Google Scholar] [CrossRef][Green Version] - Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell.
**2018**, 40, 611–625. [Google Scholar] [CrossRef] [PubMed] - Engel, J.; Schops, T.; Cremers, D. LSD-SLAM: Large Scale Direct Monocular SLAM. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; pp. 834–849. [Google Scholar] [CrossRef][Green Version]
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot.
**2015**, 31, 1147–1163. [Google Scholar] [CrossRef][Green Version] - Mur-Artal, R.; Tardos, J.D. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Trans. Robot.
**2017**, 33, 1255–1262. [Google Scholar] [CrossRef][Green Version] - Mur-Artal, R. Real-Time Accurate Visual SLAM with Place Recognition. Ph.D. Thesis, University of Zaragoza, Zaragoza, Spain, 2017. [Google Scholar]
- Scaramuzza, D.; Zhang, Z. Visual-Inertial Odometry of Aerial Robots. arXiv
**2019**, arXiv:1906.03289v2. [Google Scholar] - Huang, G. Visual-Inertial Navigation: A Concise Review. arXiv
**2019**, arXiv:1906.02650v1. [Google Scholar] - von Stumberg, L.; Usenko, V.; Cremers, D. Chapter 7—A Review and Quantitative Evaluation of Direct Visual Inertial Odometry. In Multimodal Scene Understanding; Yang, M.Y., Rosenhahn, B., Murino, V., Eds.; Academic Press: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
- Feng, X.; Jiang, Y.; Yang, X.; Du, M.; Li, X. Computer Vision Algorithms and Hardware Implementations: A Survey. Integr. VLSI J.
**2019**, 69, 309–320. [Google Scholar] [CrossRef] - Al-Kaff, A.; Martin, D.; Garcia, F.; de la Escalera, A.; Maria, J. Survey of Computer Vision Algorithms and Applications for Unmanned Aerial Vehicles. Expert Syst. Appl.
**2017**, 92, 447–463. [Google Scholar] [CrossRef] - Mourikis, A.I.; Roumeliotis, S.I. A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 3565–3572. [Google Scholar] [CrossRef]
- Leutenegger, S.; Furgale, P.; Rabaud, V.; Chli, M.; Konolige, K.; Siegwart, R. Keyframe Based Visual Inertial SLAM Using Nonlinear Optimization. In Proceedings of the International Conference on Robotics: Robotics: Science and Systems IX, Berlin, Germany, 24–28 June 2013. [Google Scholar] [CrossRef]
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe Based Visual Inertial SLAM Using Nonlinear Optimization. Int. J. Robot. Res.
**2015**, 34, 314–334. [Google Scholar] [CrossRef][Green Version] - Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust Visual Inertial Odometry Using a Direct EKF Based Approach. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304. [Google Scholar] [CrossRef]
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A Robust and Versatile Monocular Visual Inertial State Estimator. IEEE Trans. Robot.
**2018**, 34, 1004–1020. [Google Scholar] [CrossRef][Green Version] - Lynen, S.; Achtelik, M.W.; Weiss, S.; Chli, M.; Siegwart, R. A Robust and Modular Multi Sensor Fusion Approach Applied to MAV Navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 3923–3929. [Google Scholar] [CrossRef][Green Version]
- Faessler, M.; Fontana, F.; Forster, C.; Mueggler, E.; Pizzoli, M.; Scaramuzza, D. Autonomous, Vision Based Flight and Live Dense 3D Mapping with a Quadrotor Micro Aerial Vehicle. J. Field Robot.
**2015**, 33, 431–450. [Google Scholar] [CrossRef] - Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On Manifold Pre Integration for Real Time Visual Inertial Odometry. IEEE Trans. Robot.
**2017**, 33, 1–21. [Google Scholar] [CrossRef][Green Version] - Kaess, M.; Johannsson, H.; Roberts, R.; Ila, V.; Leonard, J.; Dellaert, F. iSAM2: Incremental Smoothing and Mapping Using the Bayes Tree. Int. J. Robot. Res.
**2012**, 31, 216–235. [Google Scholar] [CrossRef] - Mur-Artal, R.; Montiel, J.M.M. Visual Inertial Monocular SLAM with Map Reuse. IEEE Robot. Autom. Lett.
**2017**, 2, 796–803. [Google Scholar] [CrossRef][Green Version] - Clark, R.; Wang, S.; Wen, H.; Markham, A.; Trigoni, N. VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem. Proc. AAAI Conf. Artif. Intell.
**2017**. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/11215 (accessed on 10 January 2023). [CrossRef] - Paul, M.K.; Wu, K.; Hesch, J.A.; Nerurkar, E.D.; Roumeliotis, S.I. A Comparative Analysis of Tightly Coupled Monocular, Binocular, and Stereo VINS. In Proceedings of the EEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 165–172. [Google Scholar] [CrossRef]
- Song, Y.; Nuske, S.; Scherer, S. A Multi Sensor Fusion MAV State Estimation from Long Range Stereo, IMU, GPS, and Barometric Sensors. Sensors
**2017**, 17, 11. [Google Scholar] [CrossRef][Green Version] - Solin, A.; Cortes, S.; Rahtu, E.; Kannala, J. PIVO: Probabilistic Inertial Visual Odometry for Occlusion Robust Navigation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; pp. 616–625. [Google Scholar] [CrossRef][Green Version]
- Houben, S.; Quenzel, J.; Krombach, N.; Behnke, S. Efficient Multi Camera Visual Inertial SLAM for Micro Aerial Vehicles. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 1616–1622. [Google Scholar] [CrossRef]
- Eckenhoff, K.; Geneva, P.; Huang, G. Direct Visual Inertial Navigation with Analytical Preintegration. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May 2017–3 June 2017; pp. 1429–1435. [Google Scholar] [CrossRef]
- Strasdat, H.; Montiel, J.M.M.; Davison, A.J. Real Time Monocular SLAM: Why Filter? In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 2657–2664. [Google Scholar] [CrossRef][Green Version]
- Fischler, M.A.; Bolles, R.C. RANSAC Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM
**1981**, 24, 381–395. [Google Scholar] [CrossRef]

**Figure 1.** ECEF ($\mathrm{F}_{\mathrm{E}}$), NED ($\mathrm{F}_{\mathrm{N}}$), and body ($\mathrm{F}_{\mathrm{B}}$) reference frames.

Symbol | Meaning | Symbol | Meaning |
---|---|---|---|
$\gamma_{\mathrm{TAS}}$ | Aerodynamic path angle | $\mathbf{g}$ | Lie group action (transformation) |
$\delta$ | Error threshold | $\mathrm{h}$ | Geometric altitude |
$\boldsymbol{\delta}_{\mathrm{CNTR}}$ | Throttle and control surfaces position | $\mathrm{H}_{\mathrm{P}}$ | Pressure altitude |
$\boldsymbol{\delta}_{\mathrm{TARGET}}$ | Control targets | $\mathbf{I}$ | Camera image |
$\Delta$ | Estimation error, increment | $\mathbf{J}$ | Jacobian |
$\Delta\mathrm{p}$ | Atmospheric pressure offset | $\mathcal{M}$ | $\mathbb{SE}\left(3\right)$ Lie group element |
$\Delta\mathrm{T}$ | Atmospheric temperature offset | $\mathbf{p}$ | Point, feature |
$\theta$ | Body pitch angle | $\mathbf{q}$ | Attitude, unit quaternion |
$\lambda$ | Longitude | $\mathbf{r}$ | Attitude, rotation vector |
$\boldsymbol{\zeta}$ | Pose, unit dual quaternion | $\mathbf{R}$ | Attitude, rotation matrix |
$\mu$ | Mean or expected value | $\mathcal{R}$ | $\mathbb{SO}\left(3\right)$ Lie group element |
$\xi$ | Body bank angle | $\mathrm{s}_{\mathrm{PX}}$ | Pixel size |
$\boldsymbol{\xi}$ | Motion ($\mathbb{SE}\left(3\right)$) velocity or twist | $\mathrm{S}$ | Sensor dimension |
$\Pi$ | Camera projection | $\mathrm{t}$ | Time |
$\varrho_{\mathrm{TUK}}$ | Tukey error function | $\mathbf{T}$ | Displacement |
$\sigma$ | Standard deviation | $\mathbf{T}^{\mathrm{E},\mathrm{GDT}}$ | Geodetic coordinates |
$\boldsymbol{\tau}$ | Pose, transform vector | $\mathrm{v}$ | Speed |
$\phi$ | Latitude | $\mathbf{v}$ | Velocity |
$\boldsymbol{\varphi}$ | Attitude, Euler angles | $\mathrm{w}_{\mathrm{TUK}}$ | Tukey weight function |
$\varnothing$ | Bearing | $\mathrm{x}$ | Horizontal distance |
$\psi$ | Heading or body yaw angle | $\mathbf{x}$ | Position |
$\boldsymbol{\omega}$ | Angular ($\mathbb{SO}\left(3\right)$) velocity | $\widehat{\mathbf{x}}=\mathbf{x}_{\mathrm{EST}}$ | Inertial estimated trajectory |
$\mathrm{E}_{\mathrm{PO}}$ | Pose optimization error | $\stackrel{\circ}{\mathbf{x}}=\mathbf{x}_{\mathrm{IMG}}$ | Visual estimated trajectory |
$\mathrm{E}_{\mathrm{q}}$ | Attitude adjustment error | $\mathbf{x}_{\mathrm{REF}}$ | Reference objectives |
$\mathrm{E}_{\mathrm{RP}}$ | Reprojection error | $\tilde{\mathbf{x}}=\mathbf{x}_{\mathrm{SENSED}}$ | Sensed trajectory |
$\mathrm{f}$ | Focal length | $\mathbf{x}=\mathbf{x}_{\mathrm{TRUTH}}$ | Real trajectory |
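The notation above lists the Tukey error and weight functions ($\varrho_{\mathrm{TUK}}$, $\mathrm{w}_{\mathrm{TUK}}$) used to robustify the reprojection error against feature outliers. For reference, a minimal NumPy sketch of the standard Tukey biweight; the cutoff value c is a common textbook default, not necessarily the one used in this work:

```python
import numpy as np

# Tukey biweight: rho_TUK saturates for residuals beyond the cutoff c, and
# w_TUK = rho'(e)/e down-weights (eventually zeroes) outlier residuals in
# iteratively reweighted least squares.
def tukey_rho(e, c=4.685):
    """Tukey error function rho_TUK(e): bounded robust loss."""
    e = np.asarray(e, dtype=float)
    inside = np.abs(e) <= c
    rho = np.full(e.shape, c**2 / 6.0)  # saturated value for outliers
    rho[inside] = (c**2 / 6.0) * (1.0 - (1.0 - (e[inside] / c) ** 2) ** 3)
    return rho

def tukey_weight(e, c=4.685):
    """Tukey weight function w_TUK(e): 1 near zero, 0 beyond the cutoff."""
    e = np.asarray(e, dtype=float)
    w = np.zeros(e.shape)
    inside = np.abs(e) <= c
    w[inside] = (1.0 - (e[inside] / c) ** 2) ** 2
    return w

# Inlier residuals keep a weight near 1; gross outliers are rejected entirely.
print(tukey_weight([0.0, 2.0, 10.0]))
```

Unlike the Huber loss, the Tukey loss is bounded, so a single gross outlier contributes at most a constant to the total error.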

Variable | Value | Unit | Variable | Value | Unit |
---|---|---|---|---|---|
$\Delta\mathrm{h}_{\mathrm{LOW}}$ | 25.0 | m | $\Delta\stackrel{\circ}{\theta}_{1,\mathrm{MAX}}^{\circ\circ}$ | 0.0005 | $^{\circ}$ |
$\Delta\theta_{\mathrm{LOW}}$ | 0.2 | $^{\circ}$ | $\Delta\stackrel{\circ}{\theta}_{2,\mathrm{MAX}}^{\circ\circ}$ | 0.0003 | $^{\circ}$ |
$\Delta\mathrm{ROC}_{\mathrm{LOW}}$ | 0.01 | m/s | $\Delta\stackrel{\circ}{\xi}_{1,\mathrm{MAX}}^{\circ\circ}$ | 0.0003 | $^{\circ}$ |
$\Delta\xi_{\mathrm{LOW}}$ | 0.2 | $^{\circ}$ | | | |
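The thresholds above gate when the inertial priors may influence the pose optimization. A minimal sketch of such threshold-based gating, in which the names, units, and the all-below-threshold rule are illustrative assumptions rather than the paper's exact activation logic:

```python
# Hypothetical gating of the inertial prior: activate only while flight is
# quasi-steady, i.e., altitude, pitch, bank, and rate-of-climb deviations from
# their inertial-filter references all stay below the tabulated thresholds.
DELTA_H_LOW = 25.0      # [m]   altitude deviation threshold
DELTA_THETA_LOW = 0.2   # [deg] pitch deviation threshold
DELTA_XI_LOW = 0.2      # [deg] bank deviation threshold
DELTA_ROC_LOW = 0.01    # [m/s] rate-of-climb deviation threshold

def prior_active(d_h, d_theta, d_xi, d_roc):
    """True when every deviation is below its activation threshold."""
    return (abs(d_h) < DELTA_H_LOW
            and abs(d_theta) < DELTA_THETA_LOW
            and abs(d_xi) < DELTA_XI_LOW
            and abs(d_roc) < DELTA_ROC_LOW)
```

The design intent, analogous to anti-windup in PI control, is to withhold the prior whenever the aircraft is clearly maneuvering, so the visual solution is not dragged toward stale inertial targets.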

Discrete Time | Frequency | Period | Variables | Systems |
---|---|---|---|---|
$\mathrm{t}_{\mathrm{t}}=\mathrm{t}\cdot\Delta\mathrm{t}_{\mathrm{TRUTH}}$ | $500\ \mathrm{Hz}$ | $0.002\ \mathrm{s}$ | $\mathbf{x}=\mathbf{x}_{\mathrm{TRUTH}}$ | Flight physics |
$\mathrm{t}_{\mathrm{s}}=\mathrm{s}\cdot\Delta\mathrm{t}_{\mathrm{SENSED}}$ | $100\ \mathrm{Hz}$ | $0.01\ \mathrm{s}$ | $\tilde{\mathbf{x}}=\mathbf{x}_{\mathrm{SENSED}}$ | Sensors |
$\mathrm{t}_{\mathrm{n}}=\mathrm{n}\cdot\Delta\mathrm{t}_{\mathrm{EST}}$ | $100\ \mathrm{Hz}$ | $0.01\ \mathrm{s}$ | $\widehat{\mathbf{x}}=\mathbf{x}_{\mathrm{EST}}$ | Inertial navigation |
$\mathrm{t}_{\mathrm{c}}=\mathrm{c}\cdot\Delta\mathrm{t}_{\mathrm{CNTR}}$ | $50\ \mathrm{Hz}$ | $0.02\ \mathrm{s}$ | $\boldsymbol{\delta}_{\mathrm{TARGET}},\,\boldsymbol{\delta}_{\mathrm{CNTR}}$ | Guidance and control |
$\mathrm{t}_{\mathrm{i}}=\mathrm{i}\cdot\Delta\mathrm{t}_{\mathrm{IMG}}$ | $10\ \mathrm{Hz}$ | $0.1\ \mathrm{s}$ | $\stackrel{\circ}{\mathbf{x}}=\mathbf{x}_{\mathrm{IMG}}$ | Visual navigation and camera |
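Because each rate in the table divides the 500 Hz flight-physics rate evenly, the five subsystems can be scheduled from a single truth-time counter. A minimal scheduling sketch; the callback names are placeholders, not the simulation's actual interfaces:

```python
# Multi-rate simulation loop: each subsystem fires when the 500 Hz truth
# counter t is a multiple of its period ratio, mirroring the table's counters
# t (truth), s (sensors), n (inertial estimate), c (control), i (image).
DT_TRUTH = 0.002  # [s] 500 Hz flight-physics step

def run(duration_s, physics, sensors, inertial, control, vision):
    steps = int(round(duration_s / DT_TRUTH))
    for t in range(steps):
        physics(t * DT_TRUTH)           # 500 Hz flight physics
        if t % 5 == 0:
            sensors(t * DT_TRUTH)       # 100 Hz sensor sampling
            inertial(t * DT_TRUTH)      # 100 Hz inertial navigation
        if t % 10 == 0:
            control(t * DT_TRUTH)       # 50 Hz guidance and control
        if t % 50 == 0:
            vision(t * DT_TRUTH)        # 10 Hz visual navigation and camera
```

Over one simulated second this yields 500 physics steps, 100 sensor/inertial updates, 50 control updates, and 10 camera frames, matching the frequencies in the table.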

**Table 4.** Aggregated MX final body attitude INSE, VNSE, and IA-VNSE (100 runs). The most important metrics appear in bold.

 | INSE | INSE | INSE | INSE | VNSE | VNSE | VNSE | VNSE | IA-VNSE | IA-VNSE | IA-VNSE | IA-VNSE |
---|---|---|---|---|---|---|---|---|---|---|---|---|
$\left[^{\circ}\right]$ | $\Delta\widehat{\psi}$ | $\Delta\widehat{\theta}$ | $\Delta\widehat{\xi}$ | $\Vert\Delta\widehat{\mathbf{r}}_{\mathrm{NB}}^{\mathrm{B}}\Vert$ | $\Delta\stackrel{\circ}{\psi}$ | $\Delta\stackrel{\circ}{\theta}$ | $\Delta\stackrel{\circ}{\xi}$ | $\Vert\Delta\stackrel{\circ}{\mathbf{r}}_{\mathrm{NB}}^{\mathrm{B}}\Vert$ | $\Delta\stackrel{\circ}{\psi}$ | $\Delta\stackrel{\circ}{\theta}$ | $\Delta\stackrel{\circ}{\xi}$ | $\Vert\Delta\stackrel{\circ}{\mathbf{r}}_{\mathrm{NB}}^{\mathrm{B}}\Vert$ |
Scenario #1 MX $\left(\mathrm{t}_{\mathrm{END}}\right)$ | | | | | | | | | | | | |
mean | +0.03 | $-$0.03 | $-$0.00 | 0.158 | +0.03 | +0.08 | +0.00 | 0.296 | +0.03 | $-$0.01 | $-$0.03 | 0.218 |
std | 0.18 | 0.05 | 0.06 | 0.114 | 0.13 | 0.23 | 0.21 | 0.158 | 0.11 | 0.16 | 0.14 | 0.103 |
max | $-$0.61 | $-$0.27 | $-$0.23 | 0.611 | +0.63 | +0.74 | +0.78 | 0.791 | +0.55 | $-$0.37 | $-$0.51 | 0.606 |
Scenario #2 MX $\left(\mathrm{t}_{\mathrm{END}}\right)$ | | | | | | | | | | | | |
mean | $-$0.02 | +0.01 | +0.00 | 0.128 | +0.02 | $-$0.02 | +0.00 | 0.253 | +0.02 | $-$0.00 | +0.01 | 0.221 |
std | 0.13 | 0.05 | 0.05 | 0.078 | 0.08 | 0.21 | 0.20 | 0.161 | 0.08 | 0.16 | 0.19 | 0.137 |
max | +0.33 | $-$0.15 | +0.15 | 0.369 | +0.22 | $-$0.65 | $-$0.73 | 0.730 | +0.24 | +0.62 | +0.74 | 0.788 |

**Table 5.** Aggregated MX final vertical position INSE, VNSE, and IA-VNSE (100 runs). The most important metrics appear in bold.

Scenario MX $\left(\mathrm{t}_{\mathrm{END}}\right)$ | | INSE | VNSE | IA-VNSE |
---|---|---|---|---|
$\left[\mathrm{m}\right]$ | | $\Delta\widehat{\mathrm{h}}$ | $\Delta\stackrel{\circ}{\mathrm{h}}$ | $\Delta\stackrel{\circ}{\mathrm{h}}$ |
#1 | mean | $-$4.18 | +82.91 | +22.86 |
 | std | 25.78 | 287.58 | 49.17 |
 | max | $-$70.49 | +838.32 | +175.76 |
#2 | mean | +0.76 | +3.45 | +3.59 |
 | std | 7.55 | 20.56 | 13.01 |
 | max | $-$19.86 | +72.69 | +71.64 |

**Table 6.** Aggregated MX final horizontal position INSE, VNSE, and IA-VNSE (100 runs). The most important metrics appear in bold.

Scenario MX $\left(\mathrm{t}_{\mathrm{END}}\right)$ | | Distance | INSE $\Delta\widehat{\mathbf{x}}_{\mathrm{HOR}}$ | INSE | VNSE $\Delta\stackrel{\circ}{\mathbf{x}}_{\mathrm{HOR}}$ | VNSE | IA-VNSE $\Delta\stackrel{\circ}{\mathbf{x}}_{\mathrm{HOR}}$ | IA-VNSE |
---|---|---|---|---|---|---|---|---|
 | | $\left[\mathrm{m}\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ |
#1 | mean | 107,873 | 7276 | 7.10 | 4179 | 3.82 | 488 | 0.46 |
 | std | 19,756 | 4880 | 5.69 | 3308 | 2.73 | 350 | 0.31 |
 | max | 172,842 | 25,288 | 32.38 | 21,924 | 14.22 | 1957 | 1.48 |
#2 | mean | 14,198 | 216 | 1.52 | 251 | 1.77 | 33 | 0.23 |
 | std | 1176 | 119 | 0.86 | 210 | 1.48 | 26 | 0.18 |
 | max | 18,253 | 586 | 4.38 | 954 | 7.08 | 130 | 0.98 |

**Table 7.** Influence of terrain type on final horizontal position IA-VNSE for scenario #1 (100 runs). The most important metrics appear in bold.

Scenario #1 Zone | | MX | MX | FR | FR | FM | FM | DS | DS |
---|---|---|---|---|---|---|---|---|---|
$\Delta\stackrel{\circ}{\mathbf{x}}_{\mathrm{HOR}}\left(\mathrm{t}_{\mathrm{END}}\right)$ | | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ |
IA-VNSE | mean | 488 | 0.46 | 566 | 0.53 | 489 | 0.45 | 514 | 0.48 |
 | std | 350 | 0.31 | 406 | 0.38 | 322 | 0.28 | 352 | 0.31 |
 | max | 1957 | 1.48 | 2058 | 1.71 | 1783 | 1.34 | 1667 | 1.37 |

**Table 8.** Influence of terrain type on final horizontal position IA-VNSE for scenario #2 (100 runs). The most important metrics appear in bold.

Scenario #2 Zone | | MX | MX | FR | FR | FM | FM | DS | DS | UR | UR | PR | PR |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
$\Delta\stackrel{\circ}{\mathbf{x}}_{\mathrm{HOR}}\left(\mathrm{t}_{\mathrm{END}}\right)$ | | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ | $\left[\mathrm{m}\right]$ | $\left[\%\right]$ |
IA-VNSE | mean | 33 | 0.23 | 40 | 0.28 | 33 | 0.23 | 31 | 0.22 | 32 | 0.23 | 31 | 0.22 |
 | std | 26 | 0.18 | 35 | 0.24 | 24 | 0.17 | 24 | 0.17 | 25 | 0.18 | 25 | 0.17 |
 | max | 130 | 0.98 | 188 | 1.29 | 117 | 0.85 | 114 | 0.86 | 128 | 0.96 | 119 | 0.90 |

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Gallo, E.; Barrientos, A.
GNSS-Denied Semi-Direct Visual Navigation for Autonomous UAVs Aided by PI-Inspired Inertial Priors. *Aerospace* **2023**, *10*, 220.
https://doi.org/10.3390/aerospace10030220
