# A Technique to Navigate Autonomous Underwater Vehicles Using a Virtual Coordinate Reference Network during Inspection of Industrial Subsea Structures

## Abstract

## 1. Introduction

## 2. Problem Statement: Description of the General Approach

- a stereo-pair of images taken with the camera at this position of the AUV trajectory;
- coordinates of this AUV position in the CS of the initial position of the AUV trajectory (with VNM used);
- matrix of geometric transformation of coordinates from the AUV CS at the initial trajectory position into the $P{R}^{1}$ CS (with VNM used);
- filming parameters that determine the part of the bottom visible to the camera;
- a measure of the localization accuracy of this $P{R}^{1}$.
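The data set stored in a $P{R}^{1}$ can be sketched as a simple container; the field names and types below are illustrative assumptions, not the paper's actual data layout:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ReferencePoint1:
    """Container for the data set stored in a first type reference point PR^1.
    Field names and types are illustrative assumptions, not the paper's layout."""
    stereo_pair: tuple         # stereo pair of images taken at this trajectory position
    position_wcs1: np.ndarray  # AUV coordinates in the CS of the initial trajectory position (VNM)
    H_auv_to_pr1: np.ndarray   # 4x4 transform from the AUV CS at the initial position into the PR^1 CS
    camera_params: dict        # filming parameters defining the part of the bottom visible to the camera
    accuracy: float            # measure of the localization accuracy of this PR^1
```

A consumer of the network would populate one such record per reference position during the survey pass.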

## 3. Methods

- ${S}_{2}^{WCS2}$ is the trajectory 2 starting point in the WCS2;
- ${S}_{2}^{WCS1}$ is the trajectory 2 (working) starting point in the WCS1;
- ${P}_{i}^{\mathrm{AUV}}$ is the AUV coordinates in the AUV CS at the i-th position of trajectory 2, i.e., ${P}_{i}^{\mathrm{AUV}}\left(0,0,0,1\right)$;
- ${P}_{i}^{SPS}$ is the AUV coordinates at the i-th position of trajectory 2 in the SPS CS;
- ${P}_{i}^{WCS1}$ is the coordinates of point ${P}_{i}$ at the i-th position of trajectory 2 in the WCS1;
- ${P}_{i}^{WCS2}$ is the coordinates of point ${P}_{i}$ at the i-th position of trajectory 2 in the WCS2;
- $ob{j}_{n}$ SPS is the SPS object with identifier n;
- $P{R}_{s}^{t}$ is the reference point of type t at position s of trajectory 1 (t = 1: first type; t = 2: second type; s is the position number; s = cur denotes the current position);
- ${H}_{i,P{R}_{s}^{t}}$ is the coordinate transformation matrix from the AUV CS at the i-th position of trajectory 2 into the CS of $P{R}_{s}^{t}$ of the VCRN at position s of trajectory 1;
- ${H}_{WCS1,WCS2}$ is the coordinate transformation matrix from the WCS2 to the WCS1;
- ${H}_{i,j}$ is the coordinate transformation matrix from the i-th position to the j-th position of trajectory 2 in the WCS2 (obtained by the VNM);
- ${H}_{1,j}$ is the coordinate transformation matrix from the start point of trajectory 2 into the j-th position of the AUV trajectory (in the WCS2);
- ${H}_{P{R}_{s}^{2},ob{j}_{n}SPS}$ is the coordinate transformation matrix from the CS of $P{R}^{2}$ (trajectory 1) into the CS of the SPS object n, stored in $P{R}^{2}$.

#### 3.1. Forming a Virtual Coordinate Reference Network

- comparison of feature points in the stereo-pair images taken from the current position of the AUV trajectory with those in the stereo-pair images stored in the data set associated with a virtual referencing point;
- calculation of the spatial coordinates of the two corresponding 3D point clouds from the resulting set of matched features;
- calculation of the local coordinate transformation matrix relating the CS of the current AUV position and the CS of the reference point;
- extraction of the stored transformation matrix into the required CS; in the case of referencing to an SPS object, application of the object recognition algorithm with calculation of the referencing matrix.
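The third step, estimating the local transformation between the two matched 3D clouds, can be sketched with the closed-form SVD (Kabsch) solution on already-matched point pairs. The paper itself uses ICP for this alignment, so this is a simplified stand-in, not the authors' implementation:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform H (4x4) mapping matched points P onto Q.
    P, Q: (N, 3) arrays of corresponding 3D points from the compared stereo features."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered clouds, then SVD (Kabsch method).
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H
```

In the full pipeline, ICP would iterate matching and alignment; with matches already fixed by the feature comparison step, a single closed-form solve suffices for the sketch.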

#### 3.1.1. Formation of First Type Reference Points

Here, ${H}_{j,P{R}_{i}^{1}}$ is the coordinate transformation matrix from the CS of the j-th position of the new AUV trajectory into the CS of $P{R}_{i}^{1}$ (the i-th position of the initial trajectory), and ${H}_{P{R}_{i}^{1},WCS1}$ is the coordinate transformation matrix from the CS of the i-th position of the initial trajectory into the WCS1.

#### 3.1.2. Formation of Second Type Reference Points

When forming a $P{R}^{2}$, the already calculated coordinate conversion matrix is used. $P{R}^{2}$ stores a set of data similar to that stored in $P{R}^{1}$, which allows for referencing the AUV to $P{R}^{2}$.

#### 3.2. Referencing of AUV to Reference Points during a Working Mission

The condition for referencing is the overlap between the camera visibility area at the i-th position of the trajectory and the corresponding visibility area for $P{R}^{1}$. The overlap is calculated using a threshold distance that guarantees the necessary degree of overlap of the visibility areas. Here, the predicted rate of accumulation of the AUV navigation error through dead reckoning is also taken into account. In the case of multiple AUV runs over SPS objects, all coordinate calculations should be performed in a single coordinate space, which is provided by the presence of a geometric transformation between WCS1 and WCS2. The WCS1 and WCS2 are related by the visual odometry technique.

The calculation of the transformation H between the AUV CS at the i-th position and the $P{R}^{1}$ CS, when the condition of overlapping visibility areas is satisfied, is performed by the ICP algorithm.

The AUV position estimate at the i-th position is adjusted by using the stored data. The adjustment is performed as follows: the chain of local transformations accumulated from the initial position to the i-th position is replaced by a shorter one belonging to $P{R}^{1}$, plus the transformation H relating the current position with $P{R}^{1}$. In Figure 4, the segment of the trajectory from the start position to the $P{R}^{1}$ position corresponding to this short chain is highlighted as a bold line. Thus, the error corresponding to the trajectory segment from $P{R}^{1}$ to the i-th position (highlighted in the figure as a thin line) is reset, which leads to an abrupt decrease in the error.
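A small numeric sketch of this error reset (with hypothetical, purely translational motion and a deliberately biased odometry error) shows the abrupt drop: replacing the 20-step noisy chain by the accurate 15-step chain stored in $P{R}^{1}$ plus the short residual segment leaves only the last segment's error:

```python
import numpy as np

def translation(t):
    H = np.eye(4)
    H[:3, 3] = t
    return H

def chain(Hs):
    H = np.eye(4)
    for h in Hs:
        H = H @ h
    return H

# True motion: 20 unit steps along x; the VNM estimate of each step carries a
# small systematic error along y (a stand-in for odometry drift).
true_locals  = [translation([1.0, 0.00, 0.0]) for _ in range(20)]
noisy_locals = [translation([1.0, 0.02, 0.0]) for _ in range(20)]

H_true = chain(true_locals)
H_dead = chain(noisy_locals)             # dead reckoning: all 20 noisy steps accumulate

# Suppose PR^1 lies at position 15 and stores an accurate transform from the
# start to itself; ICP relates PR^1 to the current position, so only the last
# 5 noisy steps remain in the corrected chain.
H_start_PR = chain(true_locals[:15])     # short chain stored in PR^1 (assumed accurate)
H_PR_cur   = chain(noisy_locals[15:])    # residual segment from PR^1 to the current position
H_corr = H_start_PR @ H_PR_cur

err_dead = np.linalg.norm(H_dead[:3, 3] - H_true[:3, 3])   # 20 biased steps: 0.4
err_corr = np.linalg.norm(H_corr[:3, 3] - H_true[:3, 3])   # only 5 biased steps: 0.1
```
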

#### 3.3. Calculation of the AUV Inspection Trajectory Using Virtual Coordinate Referencing Net

- the AUV localization relative to the referencing points of the virtual network VCRN is checked continuously (at a certain frequency). For each referencing point, a square neighborhood is outlined (with rough coordinate setting in the external CS), and a test is performed to determine whether the AUV position belongs to this neighborhood;
- after confirming the AUV's entry into the neighborhood, the possibility of referencing the AUV to the referencing point is tested, i.e., the availability of a common visibility area is checked (based on the known camera parameters and the calculated trajectory parameters). Upon confirming the possibility of referencing, the AUV is referenced to the virtual point. If referencing was done to a $P{R}^{1}$, the current position is corrected, which leads to a step-like increase in navigation accuracy. If referencing was done to a $P{R}^{2}$, the AUV coordinates at subsequent positions are calculated using the matrix of referencing to the SPS object stored in the $P{R}^{2}$.
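The two checks above can be sketched as simple predicates: a square-neighborhood test and a scalar distance threshold standing in for the visibility-overlap test (function names and criteria are illustrative assumptions):

```python
import numpy as np

def in_neighborhood(p_auv, pr_center, half_side):
    """Rough square-neighborhood test around a referencing point
    (x, y coordinates in the external CS)."""
    d = np.abs(np.asarray(p_auv[:2]) - np.asarray(pr_center[:2]))
    return bool(np.all(d <= half_side))

def visibility_overlaps(p_auv, pr_pos, threshold):
    """Overlap test reduced to a threshold distance that guarantees the
    necessary degree of overlap of the two camera visibility areas."""
    return bool(np.linalg.norm(np.asarray(p_auv) - np.asarray(pr_pos)) <= threshold)
```

In practice, the threshold would be derived from the camera footprint and the predicted dead-reckoning error growth mentioned in Section 3.2.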

- the inspection trajectory is conditionally divided into segments determined by the points of AUV referencing to $P{R}^{1}$ and to $P{R}^{2}$;
- the trajectory is calculated for each of the segments, taking into account the previous AUV referencing;
- the calculation of AUV motion within a segment is performed by the VNM method (visual odometry);
- the calculation of the AUV coordinates in the SPS object CS and/or in the WCS1 (initial trajectory CS) on the current segment is provided by the joint use of the VNM method and the data stored in the involved virtual network reference points.
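The segment-wise calculation can be sketched as a loop that dead-reckons with the VNM inside a segment and resets the accumulated pose at each successful referencing event (the `references` mapping is a hypothetical structure, not the paper's data layout):

```python
import numpy as np

def compute_trajectory(local_transforms, references):
    """Segment-wise sketch: dead-reckon with the VNM within a segment; at a
    position where referencing succeeded, restart from the stored pose.
    references: dict mapping position index -> 4x4 pose in the target CS."""
    poses, H = [], np.eye(4)
    for i, H_local in enumerate(local_transforms):
        H = H @ H_local                  # visual odometry step within the segment
        if i in references:              # referencing to PR^1 / PR^2 at position i
            H = references[i].copy()     # accumulated odometry error is reset here
        poses.append(H.copy())
    return poses
```

Each entry of `references` would come from the referencing procedure of Section 3.2 (ICP against a $P{R}^{1}$, or the stored SPS-object matrix of a $P{R}^{2}$).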

#### 3.4. Demo Example of the Trajectory Calculation Algorithm Using VCRN

- the starting point of the working trajectory is located in the neighborhood of the initial trajectory beginning;
- the starting point of the working trajectory is located outside the neighborhood of the initial trajectory beginning.

#### 3.4.1. Close Location of the Starting Points of the Trajectories

After referencing to $P{R}_{cur}^{2}$ at the i-th position, the AUV coordinates at an arbitrary position (i + d) can be calculated using the VNM and the coordinate transformation into the CS of the SPS object stored in $P{R}_{cur}^{2}$. Note that, according to the SPS model described above, the CS of each SPS object is related to the SPS CS; therefore, the resulting conversion from the AUV CS into the CS of the SPS object also means referencing to the SPS CS. The VNM technique provides the transformation ${H}_{i,i+d}$ from the CS at the i-th position (at which the referencing to $P{R}_{cur}^{2}$ was performed) into the CS of the current position (i + d). Then, the desired transformation ${H}_{i+d,ob{j}_{n}SPS}$ from the AUV CS at the (i + d) position of trajectory 2 into the CS of $ob{j}_{n}$ SPS is calculated as follows:
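The transformation chain just described can be written as follows, under the assumption (consistent with the notation of Section 3) that ${H}_{a,b}$ converts coordinates from CS $a$ into CS $b$; this is a reconstruction from the surrounding definitions, not a verbatim formula from the source:

```latex
H_{i+d,\,obj_{n}SPS} \;=\; H_{PR_{cur}^{2},\,obj_{n}SPS}\; H_{i,\,PR_{cur}^{2}}\; H_{i,\,i+d}^{-1}
```

That is, a point in the AUV CS at position (i + d) is first carried back to the CS at position i (the inverse of the VNM transform), then into the $P{R}_{cur}^{2}$ CS, and finally into the SPS object CS via the stored matrix.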

Upon referencing to $P{R}_{i}^{1}$ at the j-th position, the position of the AUV in the WCS1 can be refined due to this reference:
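Using the notation of Section 3, the refinement can plausibly be written as (a reconstruction under the same matrix-convention assumption as above):

```latex
P_{j}^{WCS1} \;=\; H_{PR_{i}^{1},\,WCS1}\; H_{j,\,PR_{i}^{1}}\; P_{j}^{\mathrm{AUV}}
```

Here ${P}_{j}^{\mathrm{AUV}} = (0,0,0,1)$, so the product of the two stored/estimated matrices directly yields the refined AUV position in the WCS1.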

#### 3.4.2. The Far Location of the Starting Points of the Trajectories

1. Calculation of trajectory 2 in the segment $\left[S2,{P}_{j1}\right]$;
2. Calculation of trajectory 2 in the segment $\left[{P}_{j1},{P}_{j2}\right]$;
3. Calculation of trajectory 2 in the segment $\left[{P}_{j2},{P}_{j3}\right]$;
4. Calculation of trajectory 2 in the segment $\left[{P}_{j3},E2\right]$.

## 4. Results

- a virtual scene based on the simulator [45];
- a manually operated Karmin2 stereo camera under laboratory conditions.

#### 4.1. Simulator

- simulation of the robot's mission;
- modeling of the external environment;
- simulation of the operation of the onboard sensor equipment.

- use the simulator as a training complex for AUV operators;
- test the operability of the AUV equipment and onboard software when connected to the virtual environment of the simulation complex in the HIL mode (hardware-in-the-loop: real equipment in the simulation loop);
- visualize simulation results for any moment of the mission.

#### 4.2. Virtual Scene

- in the initial segment of the preliminary trajectory (trajectory 1), $P{R}^{1}$ of the VCRN were formed;
- for the working trajectory (trajectory 2), the AUV navigation error was calculated in two variants: (a) with the use of visual odometry only; (b) using, in addition to visual odometry, two types of coordinate references: referencing to the $P{R}^{1}$ of the VCRN and direct referencing to the SPS object using the above-mentioned authors' algorithm [43].
Both types of coordinate referencing (to the $P{R}^{1}$ of the VCRN and to the SPS CS) reduce the cumulative error (characteristic of visual odometry during long-distance AUV movements) when calculating the AUV's trajectory 2.

- for the technique of direct AUV referencing to SPS, an error of 5.4 cm was obtained in this scene;
- the error of referencing to VCRN in this case is determined by referencing to the two above-indicated $P{R}^{1}$.

#### 4.3. Experiment with a Karmin2 Camera

- the traditional visual odometry technique;
- the proposed technique, i.e., using coordinate referencing to the $P{R}^{1}$ and $P{R}^{2}$ of VCRN.

## 5. Discussion

- neutralization of the accumulated visual odometry error when the AUV moves between reference points, and a guaranteed level of navigation accuracy in the object's coordinate space due to the use of a previously obtained matrix of AUV coordinate referencing to the object (the reference points are formed when the AUV passes along the survey trajectory). The calculation of the referencing matrix is based on the authors' algorithm for recognizing an object by its geometric model [42,43];
- reduction of computational costs due to the use of the aforementioned pre-computed and stored transition matrix to the coordinate space of the inspected object.

The mathematical modeling method is used to confirm the correctness of the theory presented in the article. A limitation of the method is that the accuracy error when processing model scenes is smaller than for real scenes, because an ideal camera calibration is used and the influence of the aquatic environment is not taken into account. However, in our case, we are comparing the standard method for calculating the trajectory with the proposed method using the virtual referencing network, all other things being equal. Therefore, the advantage of the method, confirmed for virtual scenes, should be maintained at a qualitative level for real scenes.

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Mai, C.; Hansen, L.; Jepsen, K.; Yang, Z. Subsea Infrastructure Inspection: A Review Study. In Proceedings of the 6th International Conference on Underwater System Technology: Theory and Applications, Penang, Malaysia, 13–14 December 2016.
- Zhang, Y.; Zheng, M.; An, C.; Seo, J.K. A review of the integrity management of subsea production systems: Inspection and monitoring methods. Ships Offshore Struct. **2019**, 14, 1–15.
- Kowalczyk, M.; Claus, B.; Donald, C. AUV Integrated Cathodic Protection iCP Inspection System—Results from a North Sea Survey. In Proceedings of the Offshore Technology Conference, Houston, TX, USA, 6–9 May 2019.
- Maurelli, F.; Carreras, M.; Salvi, J.; Lane, D.; Kyriakopoulos, K.; Karras, G.; Fox, M.; Long, D.; Kormushev, P.; Caldwell, D. The PANDORA project: A success story in AUV autonomy. In Proceedings of the OCEANS 2016, Shanghai, China, 10–13 April 2016.
- Manley, J.E.; Halpin, S.; Radford, N.; Ondler, M. Aquanaut: A New Tool for Subsea Inspection and Intervention. In Proceedings of the OCEANS 2018 MTS/IEEE Conference, Charleston, SC, USA, 22–25 October 2018.
- Watson, S.; Duecker, D.A.; Groves, K. Localisation of Unmanned Underwater Vehicles (UUVs) in Complex and Confined Environments: A Review. Sensors **2020**, 20, 6203.
- Jacobi, M. Autonomous inspection of underwater structures. Robot. Auton. Syst. **2015**, 67, 80–86.
- Jacobi, M.; Karimanzira, D. Guidance of AUVs for Autonomous Underwater Inspection. Automatisierungstechnik **2015**, 63, 380–388.
- Santos, M.M.; Zaffari, G.B.; Ribeiro, P.O.C.S.; Drews-Jr, P.L.J.; Botelho, S.S.C. Underwater place recognition using forward-looking sonar images: A topological approach. J. Field Robot. **2018**, 36, 355–369.
- Vidal, E.; Palomeras, N.; Istenič, K.; Gracias, N.; Carreras, M. Multisensor online 3D view planning for autonomous underwater exploration. J. Field Robot. **2020**, 37, 1123–1147.
- Palomer, A.; Ridao, P.; Ribas, D. Inspection of an underwater structure using point-cloud SLAM with an AUV and a laser scanner. J. Field Robot. **2019**, 36, 1333–1344.
- Bao, J.; Li, D.; Qiao, X.; Rauschenbach, T. Integrated navigation for autonomous underwater vehicles in aquaculture: A review. Inf. Process. Agric. **2019**, 7, 139–151.
- Noyer, J.; Lanvin, P.; Benjelloun, M. Model-based tracking of 3D objects based on a sequential Monte-Carlo method. In Proceedings of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 7–10 November 2004; pp. 1744–1748.
- Wirth, S.; Carrasco, P.L.N.; Codina, G.O. Visual odometry for autonomous underwater vehicles. In Proceedings of the MTS/IEEE OCEANS, Bergen, Norway, 10–14 June 2013; pp. 1–6.
- Ferrera, M.; Moras, J.; Trouvé-Peloux, P.; Creuze, V. Real-Time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments. Sensors **2019**, 19, 687.
- Lwin, K.N.; Myint, M.; Mukada, N.; Yamada, D.; Matsuno, T.; Saitou, K.; Godou, W.; Sakamoto, T.; Minami, M. Sea Docking by Dual-eye Pose Estimation with Optimized Genetic Algorithm Parameters. J. Intell. Robot. Syst. **2019**, 96, 245–266.
- Zacchini, L.; Bucci, A.; Franchi, M.; Costanzi, R.; Ridolfi, A. Mono visual odometry for Autonomous Underwater Vehicles navigation. In Proceedings of the 2019 MTS/IEEE Oceans, Marseille, France, 17–20 June 2019.
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Proceedings of the 9th European Conference on Computer Vision (ECCV 2006), Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
- Sivic, J.; Zisserman, A. Video Google: A text retrieval approach to object matching in videos. In Proceedings of the International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 1470–1477.
- Cummins, M.; Newman, P. FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance. Int. J. Robot. Res. **2008**, 27, 647–665.
- Cadena, C.; Gálvez-López, D.; Tardós, J.; Neira, J. Robust place recognition with stereo sequences. IEEE Trans. Robot. **2012**, 28, 871–885.
- McDonald, J.; Kaess, M.; Cadena, C.; Neira, J.; Leonard, J. Real-time 6-DOF multi-session visual SLAM over large-scale environments. Robot. Auton. Syst. **2013**, 61, 1144–1158.
- Zhou, W.; Shiju, E.; Cao, Z.; Dong, Y. Review of SLAM Data Association Study. In Proceedings of the 6th International Conference on Sensor Network and Computer Engineering, Xi’an, China, 8–10 July 2016.
- Kavitha, M.L.; Deepambika, V.A. Review on loop closure detection of visual SLAM. Int. J. Curr. Eng. Sci. Res. **2018**, V5, 81–86.
- Sánchez, J.; Perronnin, F.; Mensink, T.; Verbeek, J. Image Classification with the Fisher Vector: Theory and Practice. Int. J. Comput. Vis. **2013**, 105, 222–245.
- Huang, Y.; Fuchun, S.; Yao, G. VLAD-based loop closure detection for monocular SLAM. In Proceedings of the IEEE International Conference on Information and Automation (ICIA), Ningbo, China, 1–3 August 2016.
- Jung, J.; Li, J.-H.; Choi, H.-T.; Myung, H. Localization of AUVs using visual information of underwater structures and artificial landmarks. Intell. Serv. Robot. **2016**, 10, 67–76.
- Xu, Z.; Haroutunian, M.; Murphy, A.J.; Neasham, J.; Norman, R. An Underwater Visual Navigation Method Based on Multiple ArUco Markers. J. Mar. Sci. Eng. **2021**, 9, 1432.
- Hou, G.; Shao, Q.; Zou, B.; Dai, L.; Zhang, Z.; Mu, Z.; Zhang, Y.; Zhai, J. A Novel Underwater Simultaneous Localization and Mapping Online Algorithm Based on Neural Network. ISPRS Int. J. Geo-Inf. **2019**, 9, 5.
- Hou, Y.; Zhang, H.; Zhou, S. Convolutional neural network-based image representation for visual loop closure detection. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 8–10 August 2015; pp. 2238–2245.
- Alsulami, M.; Elfouly, R.; Ammar, R. Underwater Wireless Sensor Networks: A Review. In Proceedings of the 11th International Conference on Sensor Networks (SENSORNETS 2022), Online, 7–8 February 2022; pp. 202–214.
- Shermin, A.S.; Dhongdi, S.C. Review of Underwater Mobile Sensor Network for ocean phenomena monitoring. J. Netw. Comput. Appl. **2022**, 205, 103418.
- Ullah, I.; Gao, M.S.; Kamal, M.M.; Khan, Z. A survey on underwater localization, localization techniques and its algorithms. In Proceedings of the 3rd Annual International Conference on Electronics, Electrical Engineering and Information Science (EEEIS), Guangzhou, China, 8–10 September 2017.
- Sozer, E.; Stojanovic, M.; Proakis, J. Underwater acoustic networks. IEEE J. Ocean. Eng. **2000**, 25, 72–83.
- Groves, P.D. Principles of GNSS, Inertial and Multisensor Integrated Navigation Systems, 2nd ed.; Artech House: Boston, MA, USA, 2008.
- Cario, G.; Casavola, A.; Gagliardi, G.; Lupia, M.; Severino, U. Accurate Localization in Acoustic Underwater Localization Systems. Sensors **2021**, 21, 762.
- Petritoli, E.; Cagnetti, M.; Leccese, F. Simulation of Autonomous Underwater Vehicles (AUVs) Swarm Diffusion. Sensors **2020**, 20, 4950.
- Zang, W.; Yao, P.; Song, D. Standoff tracking control of underwater glider to moving target. Appl. Math. Model. **2021**, 102, 1–20.
- Wu, H.; Niu, W.; Wang, S.; Yan, S.; Liu, T. Sensitivity analysis of input errors to motion deviations of underwater glider based on optimized response surface methodology. Ocean Eng. **2021**, 209, 107400.
- Bobkov, V.A.; Kudryashov, A.P.; Mel’man, S.V.; Shcherbatyuk, A.F. Autonomous Underwater Navigation with 3D Environment Modeling Using Stereo Images. Gyroscopy Navig. **2018**, 9, 67–75.
- Bobkov, V.A.; Kudryashov, A.P.; Inzartsev, A.V. Technology of AUV High-Precision Referencing to Inspected Object. Gyroscopy Navig. **2019**, 10, 322–329.
- Bobkov, V.; Kudryashov, A.; Inzartsev, A. Method for the Coordination of Referencing of Autonomous Underwater Vehicles to Man-Made Objects Using Stereo Images. J. Mar. Sci. Eng. **2021**, 9, 1038.
- Inzartsev, A.; Eliseenko, G.; Panin, M.; Pavin, A.; Bobkov, V.; Morozov, M. Underwater pipeline inspection method for AUV based on laser line recognition: Simulation results. In Proceedings of the IEEE OES International Symposium on Underwater Technology 2019 (UT’19), Kaohsiung, Taiwan, 16–19 April 2019.
- Bobkov, V.; Melman, S.; Inzartsev, A.; Pavin, A. Distributed Simulation Framework for Investigation of Autonomous Underwater Vehicles’ Real-Time Behavior. In Proceedings of the OCEANS’15 MTS/IEEE, Washington, DC, USA, 19–22 October 2015.
- Himri, K.; Ridao, P.; Gracias, N. 3D Object Recognition Based on Point Clouds in Underwater Environment with Global Descriptors: A Survey. Sensors **2019**, 19, 4451.
- Fan, S.; Liu, C.; Li, B.; Xu, Y.; Xu, W. AUV docking based on USBL navigation and vision guidance. J. Mar. Sci. Technol. **2018**, 24, 673–685.
- Bucci, A.; Zacchini, L.; Franchi, M.; Ridolfi, A.; Allotta, B. Comparison of feature detection and outlier removal strategies in a mono visual odometry algorithm for underwater navigation. Appl. Ocean Res. **2022**, 118, 102961.
- Kondo, H.; Maki, T.; Ura, T.; Nose, Y.; Sakamaki, T.; Inaishi, M. Relative navigation of an autonomous underwater vehicle using a light-section profiling system. In Proceedings of the International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; pp. 1103–1108.
- Marini, S.; Gjeci, N.; Govindaraj, S.; But, A.; Sportich, B.; Ottaviani, E.; Márquez, F.P.G.; Sanchez, P.J.B.; Pedersen, J.; Clausen, C.V.; et al. ENDURUNS: An Integrated and Flexible Approach for Seabed Survey Through Autonomous Mobile Vehicles. J. Mar. Sci. Eng. **2020**, 8, 633.

**Figure 2.** Used coordinate systems: WCS—world coordinate system (external CS); WCS1—CS of the AUV at the start position of the survey/initial trajectory (trajectory 1); WCS2—CS of the AUV at the start position of the inspection/working trajectory (trajectory 2); CS ${\mathrm{AUV}}_{j1}$—CS of the AUV at position j1 of the inspection trajectory; CS ${\mathrm{AUV}}_{j2}$—CS of the AUV at position j2 of the inspection trajectory; CS $P{R}_{i1}^{1}$—CS of the first type reference point of the virtual network VCRN on the survey trajectory at position i1; CS $P{R}_{i2}^{2}$—CS of the second type reference point of the virtual network VCRN on the survey trajectory at position i2. Relationships between coordinate systems are indicated by dashed lines.

**Figure 3.** VCRN formation with two trajectories: (**a**) relating the trajectories at starting points with overlapping visibility areas; (**b**) relating a new trajectory with the initial one via $P{R}^{1}$ of the initial trajectory.

**Figure 4.** Referencing the AUV to $P{R}^{1}$: calculation of the transformation H from the CS of position i to the CS of $P{R}^{1}$.

**Figure 5.** Flowchart of the algorithm for calculating the AUV coordinates in the SPS CS using the virtual network VCRN.

**Figure 6.** Calculation of the inspection trajectory using virtual points of the VCRN (starting points S1 and S2 are nearby).

**Figure 7.** Calculation of the inspection trajectory using virtual points of the VCRN (starting points S1 and S2 are not close). White circles and squares are reference points of the VCRN. The black squares indicate the positions on the inspection trajectory where binding to the reference points of the VCRN is performed.

**Figure 9.** The virtual scene: (**a**) position numbers 9 and 18 of two $P{R}^{1}$ of the VCRN are indicated on the AUV preliminary pass trajectory (trajectory 1); on the working trajectory (trajectory 2), the respective numbers 45 and 57 of the positions of AUV binding to these $P{R}^{1}$ and the number 51 of the binding position to the SPS object are indicated; the trajectories are shown in the plane of the seabed; (**b**) the graph of the error of the preliminary run (trajectory 1); points $P{R}_{1}^{1}$ and $P{R}_{2}^{1}$ of the VCRN are formed at positions 9 and 18, respectively.

**Figure 10.** Evaluation of the effectiveness of the proposed technique: a comparison of the “standard” technique for calculating the AUV trajectory with the proposed technique.

**Figure 11.** The real scene: (**a**) the preliminary run trajectory and the “working trajectory” with the specified points of referencing to the two $P{R}^{1}$ and to the object on the floor (in the XY plane); (**b**) the graph of the error of the preliminary trajectory (trajectory 1); points $P{R}_{1}^{1}$ and $P{R}_{2}^{1}$ of the VCRN are formed at positions 32 and 68, respectively.

**Figure 12.** Evaluation of the effectiveness of the proposed technique: comparison of the “standard” technique for calculating the AUV trajectory (black curve) with the proposed technique (gray curve) in the experiment with a Karmin2 camera. The X-axis shows frame numbers; the Y-axis shows localization error values.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Bobkov, V.; Kudryashov, A.; Inzartsev, A.
A Technique to Navigate Autonomous Underwater Vehicles Using a Virtual Coordinate Reference Network during Inspection of Industrial Subsea Structures. *Remote Sens.* **2022**, *14*, 5123.
https://doi.org/10.3390/rs14205123
