Article

Photogrammetry: Linking the World across the Water Surface

1
Aix Marseille Université, CNRS, ENSAM, Université de Toulon, LIS UMR 7020, Domaine Universitaire de Saint-Jérome, Bâtiment Polytech, Avenue Escadrille Normandie Niemen, 13397 Marseille, France
2
3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), via Sommarive 18, 38123 Trento, Italy
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2020, 8(2), 128; https://doi.org/10.3390/jmse8020128
Submission received: 23 January 2020 / Revised: 7 February 2020 / Accepted: 11 February 2020 / Published: 17 February 2020
(This article belongs to the Special Issue Underwater Imaging)

Abstract

Three-dimensional (3D) surveying and modelling of the underwater environment is challenging; it becomes even more arduous when the scene or asset to be measured extends from above to below the water surface. While this is a topic of high interest for a number of application fields (engineering, geology, archaeology), few solutions are available; they are usually expensive and offer no guarantee of homogeneous accuracy and resolution in the two media. This paper focuses on a procedure to survey and link the above- and underwater worlds based on photogrammetry. The two parts of the asset, above and underwater, are separately surveyed and then linked through two possible analytical procedures: (1) independent model adjustment or (2) relative orientation constraints. In the first case, rigid pre-calibrated rods are installed across the waterline on the object to be surveyed; in the second approach, a synchronized stereo-camera rig, with one camera in the water and the other above the water, is employed. The theoretical foundation for the two approaches is provided and their effectiveness is proved through two challenging case studies: (1) the 3D survey of the leak of the Costa Concordia shipwreck and (2) the 3D modelling of Grotta Giusti, a complex semi-submerged cave environment in Italy.

1. Introduction

Measuring natural and man-made structures in three dimensions (3D) across the water surface is a very challenging task, yet it is necessary for a large number of applications ranging from marine infrastructure and civil engineering to cultural heritage, biology, civil protection, and forensics, among many others.
While on land 3D surveying techniques are routinely and successfully applied to meet different accuracy requirements, underwater, because of the physical characteristics of the water medium and the complexity of the related operations, systematic mapping has been achieved only partially and with more relaxed accuracy requirements compared to land measurements. While on land a large variety of traditional surveying techniques can be used interchangeably to carry out similar 3D measurement tasks, underwater few solutions, if any, are most often available.
Underwater, radio signals do not propagate well, ruling out the ubiquitous global navigation satellite system (GNSS) positioning sensors and techniques; depending on water turbidity, active and passive optical techniques such as laser scanning and photogrammetry may not be feasible, while acoustic systems may not reach the necessary accuracy and resolution.
For this reason, 3D surveys of subsea environments require an extensive analysis of the project requirements, often finding a trade-off between required tolerances and costs to reduce the technical complexity of measurement tasks and operations.
Whenever environmental conditions allow for the use of available optical and acoustic techniques, and depending on the application field (e.g., archaeological reconnaissance, mapping, subsea metrology, high precision structural monitoring), underwater surveys can be accomplished from different platforms, ranging from airplanes and helicopters to vessels on the water surface, divers, submarines, remotely operated vehicles (ROVs), and autonomous underwater vehicles (AUVs).
A very peculiar and special 3D surveying case concerns the 3D modelling of objects that are partially emerged and partially submerged. This pertains for example to objects of different nature like those shown in Figure 1.
Currently, high-resolution 3D measurement techniques available as commercial industrial solutions rely on a combination of optical and acoustic methods, such as LIDAR and multibeam echo sounders (MBES) mounted onboard surface vessels, or static optical and acoustic scanning devices (such as the tripod-mounted SL1 deep-sea time-of-flight (ToF) laser scanner and the BV5000, respectively) [7,8]. Static scanning devices are capable of better metrological performance but require significantly longer acquisition times and more effort to move the devices around the object to be surveyed. A high-resolution through-water laser scanning solution has been proposed by [9], under the constraint of a static, flat, and wave-less water surface.
This paper deals with high precision photogrammetry-based techniques to perform 3D measurements of objects that are partially submerged in water, under the assumption that the asset, structure, or environment to be measured is a rigid body (it does not undergo any deformation during the survey). The presented approaches do not require any acoustic, GNSS, or inertial navigation system (INS) based positioning system, underwater or above the water, and use (low-cost) passive sensors to photogrammetrically determine both the submerged and emerged parts of the object in 3D and with color information. The methods also work if the object is not still, for example in the case of floating objects such as pontoons, ships, and floating wind turbines. Two different procedures based on photogrammetry are presented:
  • Using rigid pre-calibrated rods installed across the waterline on the object to be surveyed;
  • Using a synchronized underwater stereo-camera rig.
Figure 2 shows a simulated scenario where a boat, damaged in the underwater part of the hull, needs to be repaired. The boat would have to be docked for surveying and inspecting the damaged parts before proceeding with their replacement and repair.
The photogrammetric methods presented here would significantly reduce the overall time and related costs through much more efficient damage assessment and repair operations. Method 1 was presented by the authors in [10] and applied to maritime engineering as well as archaeology applications [11]. The second method was used by the authors for the 3D survey of a semi-submerged cave system [12], but technical details about the technique have not been provided thus far. In the following, the manuscript provides a complete overview of the photogrammetric methods and principles used by the two techniques, providing unpublished technical details and results. Two relevant applications are presented and used to highlight the main benefits and drawbacks of the two methods: the semi-submerged gash of the Costa Concordia shipwreck, surveyed using method one, and the semi-submerged cave system Grotta Giusti in Italy, surveyed using method two. An accuracy assessment as well as comparative analyses are provided for the two above-mentioned case studies.
The novelties of the current work can be summarized as follows:
  • The theoretical framework for two different possible approaches has been unified;
  • The processing of the Costa Concordia survey has been updated and refined, following a fully automatic pipeline which merges the advantages of state-of-the-art structure from motion (SfM)–multi-view stereo (MVS) procedures with the rigor of photogrammetry; moreover, the results of the newly implemented independent model adjustment to link the part below the water with the part above the water are reported for the first time;
  • Details of the relative orientation procedure for joining the above and underwater surveys of the ‘Grotta Giusti’ semi-submerged cave system based on a synchronized stereo camera are reported and discussed.

2. Materials and Methods

2.1. Overview of the Two Methods

The methods presented here assume that the object to be surveyed can be considered a rigid body, thus not undergoing any significant shape deformation during the 3D survey.
For both methods the 3D survey of the object entails a three-step procedure consisting of:
  • An underwater photogrammetry survey of the submerged part;
  • An above the water photogrammetric survey of the emerged part;
  • An optimized analytical process to link the two photogrammetric coordinate systems together for a seamless 3D model generation.
At the end of the first two steps, two independent photogrammetric models are derived, one for the emerged and one for the submerged part of the asset. Here, independent means that each photogrammetric model is defined in its own datum or, in other words, has its own coordinate system. The merging of the two parts, or the transformation into a common reference system, is carried out using linking devices (Figure 2) consisting, respectively, of:
(1) A number of pre-calibrated rigid rods, as shown in Figure 2a, attached to the object to be surveyed, with target plates below and above the water;
(2) A synchronized stereo camera rig (Figure 2b) used with one camera underwater and the other one above the water.
The mathematical methods behind the two delineated approaches are hereafter detailed.

2.2. Method 1: Pre-Calibrated Linking Targets

Method 1 is based on rigid rods firmly attached to the partially submerged object and visible both above and underwater. They serve as hinges or joints between the two independent photogrammetric models. Each rod features at least one target plate above and one target plate below the water, each with a minimum of three coded targets. Before being attached to and after being removed from the object, the rods are calibrated, so that the relative position between the targets on the different plates is measured. The plates of each rod visible either above or underwater allow the computation of a similarity transformation that brings each rod into the coordinate system of each of the two separate photogrammetric models. It should be noted that, while the rods are roto-translated into the photogrammetric coordinate systems, their scale is preserved and ‘transferred’ to the photogrammetric models. Once all the rods are roto-translated into both photogrammetric coordinate systems, they constitute the common elements used to compute a further transformation and merge the two photogrammetric models into a common coordinate system.
A minimum configuration would require one plate with three targets above the water and one plate below the water. Theoretically, a single rod would then be sufficient, but the solution would be weak. At least three rods well distributed over the surveyed area are advised.
While here we consider a simplified linking target consisting of a straight rod, for objects with rounded shapes the targets should be designed accordingly. For example, two rods connected with a joint can be fitted to the shape of the asset, provided that the joint is properly tightened to maintain its calibrated structure throughout the survey.
From a theoretical/mathematical point of view, to converge to the final solution where the two originally independent photogrammetric models are merged into a unique coordinate system, the procedure can be summarized in three sequential steps:
  • Each single linking target is roto-translated, one at a time, through a rigid similarity transformation, and the target coordinates of the rod plates are determined in each of the two photogrammetric models (Figure 3a,b);
  • The two photogrammetric models of the above- and underwater parts are oriented in the same coordinate system, choosing one of them as reference;
  • In the last step, a refinement of the alignment is performed by re-computing simultaneously the similarity transformations for the models and the coordinates of all the target plates on the rods (Figure 3c).
The first two steps produce a coarse alignment between the two photogrammetric models above and underwater. The alignment is then refined through the last step, which recalls the semi-analytical or independent model aerial triangulation method [13], where photogrammetric stereo models are firstly relatively oriented and then transformed in the absolute datum defined by ground control points (GCPs).
The implemented free-network independent models adjustment estimates the optimal reference system for the two photogrammetric models, minimizing the mean variance of the target 3D coordinates. Each observation, i.e., the target coordinates known in each photogrammetric model, contributes to the final result according to the estimated uncertainty derived from the photogrammetric processing. This implies that the final errors of the transformation depend on the quality of the observations themselves. Assuming that no outliers are present, this last step improves the alignment, leading to smaller uncertainties. Not many iterations are usually required to converge to the optimal solution and the complexity of the problem is not high. Therefore, the computational time of the independent model adjustment is not significantly greater than the coarse alignment step.

2.2.1. Coarse Alignment between the Two Photogrammetric Models

This is a very common problem in all surveying disciplines, i.e., the transformation of point coordinates from a coordinate system $CS_a$ to another coordinate system $CS_b$. If a set of points is known in both coordinate systems, the mathematical relation or transformation can be computed and applied to transform point coordinates from one system to the other. The transformation parameters are calculated as the solution of a system of equations where the point coordinates in one system are regarded as the observation vector and the transformation parameters as the unknown vector.
Let $CS_a = \{O_a, X_a, Y_a, Z_a\}$ and $CS_b = \{O_b, x_b, y_b, z_b\}$ define two spatial Cartesian coordinate systems:
  • $CS_a$ being the target, global, reference, or higher-order coordinate system, i.e., the final coordinate system in which the coordinates of the points must be known;
  • $CS_b$ being the local or lower-order coordinate system, i.e., the initial coordinate system in which the coordinates of the points have been measured.
Equation (1) provides the transformation between $CS_a$ and $CS_b$, i.e., the transformation to obtain, in the target system $CS_a$, the coordinates of points originally known in system $CS_b$:
$$\begin{bmatrix} X_P \\ Y_P \\ Z_P \end{bmatrix}_a = \begin{bmatrix} X_{0_b} \\ Y_{0_b} \\ Z_{0_b} \end{bmatrix}_a + \Lambda\, R_b^a \begin{bmatrix} x_P \\ y_P \\ z_P \end{bmatrix}_b \qquad (1)$$
where
  • $[X_P\ Y_P\ Z_P]_a^T = P(X_a, Y_a, Z_a)$ are the coordinates of a generic point $P$ in the target system $CS_a$ (the subscript indicates the system in which the coordinates are defined);
  • $[X_{0_b}\ Y_{0_b}\ Z_{0_b}]_a^T$ are the coordinates of the origin $O_b$ of system $CS_b$ known in system $CS_a$;
  • $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3)$ is the diagonal matrix containing the scale factors in the three directions; usually $\lambda_1 = \lambda_2 = \lambda_3 = \lambda$, i.e., an isotropic scale factor exists between the two systems, so that $\Lambda$ reduces to:
    $$\Lambda = \begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} \qquad (2)$$
  • $R_b^a$ is the $3 \times 3$ rotation matrix from system $CS_b$ to system $CS_a$:
    $$R_b^a = \begin{bmatrix} c\varphi\, c\kappa & -c\varphi\, s\kappa & s\varphi \\ c\omega\, s\kappa + s\omega\, s\varphi\, c\kappa & c\omega\, c\kappa - s\omega\, s\varphi\, s\kappa & -s\omega\, c\varphi \\ s\omega\, s\kappa - c\omega\, s\varphi\, c\kappa & s\omega\, c\kappa + c\omega\, s\varphi\, s\kappa & c\omega\, c\varphi \end{bmatrix}_b^a \qquad (3)$$
    where $c = \cos(\cdot)$, $s = \sin(\cdot)$ and $\omega, \varphi, \kappa$ are the three rotation angles between the two coordinate systems. It is noteworthy that the rotation matrix $R_b^a$ contains the sequential rotations that, applied to the axes of the target system $CS_a$, make system $CS_a$ parallel to system $CS_b$:
    $$R_b^a = R_{X_a}\{\omega_b^a\} \cdot R_{Y_a}\{\varphi_b^a\} \cdot R_{Z_a}\{\kappa_b^a\} \qquad (4)$$
  • $[x_P\ y_P\ z_P]_b^T = P(x_b, y_b, z_b)$ are the coordinates of point $P$ in system $CS_b$.
The inverse transformation, which permits the computation of the coordinates of points originally known in the target system $CS_a$ into system $CS_b$, is given by:
$$\begin{bmatrix} x_P \\ y_P \\ z_P \end{bmatrix}_b = \Lambda^{-1} R_a^b \begin{bmatrix} X_P - X_{0_b} \\ Y_P - Y_{0_b} \\ Z_P - Z_{0_b} \end{bmatrix}_a \qquad (5)$$
where
$$R_a^b = (R_b^a)^{-1} = (R_b^a)^T \qquad (6)$$
An equivalent but more concise form of the similarity transformation is achieved with the so-called 4 × 4 homogeneous transformation matrix:
$$\begin{bmatrix} X_P \\ Y_P \\ Z_P \\ 1 \end{bmatrix}_a = T_b^a \begin{bmatrix} x_P \\ y_P \\ z_P \\ 1 \end{bmatrix}_b = \begin{bmatrix} \Lambda R_b^a & t_b^a \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_P \\ y_P \\ z_P \\ 1 \end{bmatrix}_b \qquad (7)$$
where:
  • $[X_P\ Y_P\ Z_P\ 1]_a^T$ and $[x_P\ y_P\ z_P\ 1]_b^T$ are the homogeneous coordinates of point $P$ with respect to system $CS_a$ and system $CS_b$, respectively;
  • the transformation matrix is given by:
    $$\begin{bmatrix} \Lambda R_b^a & t_b^a \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \lambda\, r_{11} & \lambda\, r_{12} & \lambda\, r_{13} & X_{0_b}^a \\ \lambda\, r_{21} & \lambda\, r_{22} & \lambda\, r_{23} & Y_{0_b}^a \\ \lambda\, r_{31} & \lambda\, r_{32} & \lambda\, r_{33} & Z_{0_b}^a \\ 0 & 0 & 0 & 1 \end{bmatrix}_b^a \qquad (8)$$
    where $r_{ij}$, $i, j = 1, \ldots, 3$, are the elements of the rotation matrix (3) $R_b^a$.
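As a minimal illustration of Equations (1)–(8), the following Python sketch (not part of the original implementation; all numeric values are arbitrary) builds the rotation matrix of Equation (4), assembles the 4 × 4 homogeneous similarity matrix of Equation (8), and applies it to points known in $CS_b$:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R_b^a = R_X(omega) @ R_Y(phi) @ R_Z(kappa), as in Equations (3)-(4)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def homogeneous_similarity(scale, omega, phi, kappa, t):
    """4x4 matrix T_b^a of Equations (7)-(8): [scale*R | t; 0 0 0 1]."""
    T = np.eye(4)
    T[:3, :3] = scale * rotation_matrix(omega, phi, kappa)
    T[:3, 3] = t
    return T

# Example: transform two points from CS_b to CS_a (arbitrary parameter values).
points_b = np.array([[1.0, 2.0, 0.5],
                     [0.0, -1.0, 3.0]])
T = homogeneous_similarity(1.002, 0.01, -0.02, 1.57, np.array([10.0, 5.0, -2.0]))
points_h = np.hstack([points_b, np.ones((len(points_b), 1))])  # homogeneous coordinates
points_a = (T @ points_h.T).T[:, :3]                           # coordinates in CS_a
```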
For the generic case of a 3D transformation, the minimum configuration providing a solution for the seven unknown parameters is, for example, two complete 3D points (all three coordinates known in both systems) and a third point with one coordinate known in the two systems. To avoid singularities in the computation, the simplest solution is to provide two 3D points not lying on the same axis of the coordinate system and a third point not aligned with the others. In general, however, more than the minimum number of common points is used in order to increase the reliability of the computed transformation parameters. When the number of observations is larger than the number of unknown parameters (seven), the problem becomes an estimation process, where the “best” solution for the unknowns is to be inferred from the observations.
The most commonly used method in modern surveying for the estimation of parameters is the least squares (LS) approach. An adjustment problem requires the definition of the mathematical model, a combination of (i) a functional model, which relates the parameters to be estimated to the observations, and (ii) a stochastic model, which is a statistical description that defines the random fluctuations in the measurements and, consequently, the parameters. In Appendix A, the full derivation of the LS approach for the case of interest is reported.
The first step of the developed procedure requires that a similarity transformation be computed to roto-translate each individual rod into both photogrammetric models, above and underwater, separately. In other words, considering one of the photogrammetric models as the reference coordinate system (i.e., system $CS_a$ in Figure 1 and Equation (1)) and taking one rod at a time (which constitutes system $CS_b$ in Figure 1 and Equation (1)), an LS adjustment is performed to compute the transformation parameters that bring the rod into the photogrammetric coordinate system. This procedure is repeated for all the rods with the first photogrammetric model and, similarly, with the second photogrammetric model. Thus, if, for example, four rods are employed, 4 × 2 separate adjustments are computed.
At the end of this analytical procedure, the coordinates of the targets on the rods are known in both distinct photogrammetric models (above and underwater). It is worth noting that, after this set of transformations, the underwater targets are also known in the photogrammetric model of the emerged part and, analogously, the in-air targets are estimated in the submerged model. This means that the two separate photogrammetric models now share enough common points to compute a further transformation, choosing one model as the reference coordinate system.
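The two-stage assembly just described can be sketched in Python as follows. The closed-form similarity estimator used here (centroid alignment plus SVD, in the spirit of the Procrustes initialization mentioned in Appendix A) stands in for the rigorous least-squares adjustment of Appendix A, and the data structure and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_similarity(src, dst, with_scale=True):
    """Closed-form similarity transformation mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points. Returns (scale, R, t) such
    that dst ~ scale * R @ src + t. Illustrative substitute for the LS adjustment.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - cs, dst - cd
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (A ** 2).sum() if with_scale else 1.0
    t = cd - s * R @ cs
    return s, R, t

def transform(s, R, t, pts):
    return s * (R @ pts.T).T + t

def link_models(rods):
    """rods: list of dicts, one per rod, with (hypothetical) fields:
       'all_calib'     - all rod targets in the rod's calibrated frame;
       'above'/'under' - per model: {'calib': ..., 'meas': ...}, the plate targets
                         visible in that model (calibrated frame / model frame)."""
    common_above, common_under = [], []
    for rod in rods:
        for model, bucket in (("above", common_above), ("under", common_under)):
            # Step 1: roto-translate the rod into each model (scale fixed to 1,
            # so the calibrated scale of the rod is transferred to the model).
            s, R, t = estimate_similarity(rod[model]["calib"], rod[model]["meas"],
                                          with_scale=False)
            bucket.append(transform(s, R, t, rod["all_calib"]))
    # Step 2: the rod targets are now known in both models; estimate the
    # transformation bringing the underwater model into the above-water one.
    return estimate_similarity(np.vstack(common_under), np.vstack(common_above))
```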

2.2.2. Refinement of the Alignment through Independent Models Adjustment

The final step of the outlined procedure consists in refining the coarse alignment between the two photogrammetric models achieved in the previous phase. This is performed through a procedure inspired by the adjustment of independent models employed in aerial triangulation.
A variety of approaches have been proposed in the literature to solve the problem [14]; nevertheless, the mathematics behind the independent models method essentially recalls the formalism introduced for the computation of a similarity transformation. The definition provided by Kraus [13] is particularly explicative: a chain of spatial similarity transformations, where the transformation parameters of each independent model, which in turn provide the exterior orientation parameters, and the coordinates of all the tie points are simultaneously computed in the ground coordinate system. Additionally, in the case of block adjustment by independent models, the problem is non-linear and a linearization procedure must be followed (Appendix B).

2.3. Method 2: Pre-Calibrated Stereo Camera

Method 2 is based on the use of a synchronized stereo camera rig consisting of two cameras, each placed in an underwater housing firmly mounted on a rigid pole (Figure 4). The stereo camera rig allows the semi-submerged object to be fully surveyed simultaneously, without the need for external scale bars or for measuring distances between signalized points on its surface. A system calibration is required to estimate the interior orientation and lens distortion parameters of each camera, both under and above the water, and to estimate the relative orientation between the two cameras [15]. This procedure has been exploited for photogrammetric applications both above and underwater [15,16].
The relative orientation between two cameras viewing the same scene is the procedure of finding the five degrees of freedom of one camera relative to the other (one degree of freedom is considered known or fixed). Assuming that the cameras are pre-calibrated, their relative orientation can be estimated either by observing at least five homologous points between the two cameras and solving for the five unknowns by means of the coplanarity equations, or by determining the absolute orientation (six degrees of freedom) relative to an external reference frame for both cameras independently. In the latter case, a roto-translation is then computed to make the external reference frame coincident with one of the two camera reference systems (generally the left one).
In the following equations, the letters G, R, and L indicate, respectively, the global, right camera, and left camera reference frames. The superscripts specify the reference frame in which the quantity is defined, e.g., $P^G$ is the position vector of a generic point $P$ known in the global frame, $R_G^R$ is the rotation matrix from the global to the right camera frame, and $O_R^G$ represents the origin of the right camera frame expressed in the global reference frame. According to this notation, the coordinates of the point $P^G$ known in the global frame can be expressed in the right camera reference frame as a function of the exterior orientation parameters of the right camera:
$$P^R = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}^R = R_G^R \cdot \begin{bmatrix} X - X_{o_R} \\ Y - Y_{o_R} \\ Z - Z_{o_R} \end{bmatrix}^G = R_G^R \cdot \left(P^G - O_R^G\right) \qquad (9)$$
$$R_G^R = \begin{bmatrix} c\varphi\, c\kappa & c\omega\, s\kappa + s\omega\, s\varphi\, c\kappa & s\omega\, s\kappa - c\omega\, s\varphi\, c\kappa \\ -c\varphi\, s\kappa & c\omega\, c\kappa - s\omega\, s\varphi\, s\kappa & s\omega\, c\kappa + c\omega\, s\varphi\, s\kappa \\ s\varphi & -s\omega\, c\varphi & c\omega\, c\varphi \end{bmatrix} \qquad (10)$$
$$O_R^G = \begin{bmatrix} X_{o_R} \\ Y_{o_R} \\ Z_{o_R} \end{bmatrix}^G \qquad (11)$$
where (10) and (11) are, respectively, the rotation matrix containing the orientation angles and the position vector of the right camera perspective center in the global frame. Performing a relative orientation between the two cameras is equivalent to re-orienting the global reference system to be coincident with the left camera frame. This transformation is expressed as:
$$P^R = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}^R = R_L^R \cdot \begin{bmatrix} X - X_{o_R} \\ Y - Y_{o_R} \\ Z - Z_{o_R} \end{bmatrix}^L = R_L^R \cdot \left(P^L - O_R^L\right) \qquad (12)$$
$$R_L^R = R_G^R \cdot \left(R_G^L\right)^T \qquad (13)$$
$$O_R^L = R_G^L \cdot \left(O_R^G - O_L^G\right) \qquad (14)$$
where (13) and (14) are, respectively, the rotation matrix (i.e., the orientation or boresight angles) and the coordinates of the right camera in the left camera reference frame. In other words, Equation (14) represents the components of the baseline (or lever arm) between the two cameras in the reference system centered on the left camera. After the above transformation, the exterior orientation parameters of the chosen reference camera become null and the exterior orientation of the second camera relative to the first is obtained. This procedure is convenient when the exterior orientation of the two cameras is known, for example from a bundle adjustment process. Typically, a self-calibration with coded targets is performed to recover the interior orientation parameters of the two cameras, and the exterior orientations of the cameras are determined as a result of the bundle adjustment. For a stereo-triangulation camera system, composed of two cameras fixed to each other, the relative orientation should in theory be the same for each camera pose. In practice, the presence of errors in the image measurements or an inappropriate mathematical model (i.e., missing parameters to correct lens distortions) results in different relative orientations for each camera pose; thermal influences and mechanical instability of the frame to which the cameras are fixed might also contribute significantly to these differences. Because several camera poses are generally used in a self-calibration procedure, the most probable values for the relative orientation (three positions and three orientation angles of the second camera with respect to the first) can be obtained either through simple averaging with statistical removal of outliers (e.g., median filtering) or through a more rigorous approach using relative orientation constraints in a simultaneous bundle adjustment [16,17].
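A minimal numeric sketch of Equations (13) and (14): given the exterior orientation of the two cameras from a bundle adjustment (rotation from the global frame to each camera frame, and perspective-center position in the global frame), the relative orientation of the right camera with respect to the left one is obtained; the simple averaging over several poses shown below is only one of the strategies mentioned above, and all variable names are illustrative:

```python
import numpy as np

def relative_orientation(R_G_L, O_L_G, R_G_R, O_R_G):
    """Relative orientation of the right camera w.r.t. the left one (Eqs. 13-14).

    R_G_X: rotation from the global frame to camera X;
    O_X_G: perspective center of camera X in the global frame.
    """
    R_L_R = R_G_R @ R_G_L.T           # Equation (13)
    O_R_L = R_G_L @ (O_R_G - O_L_G)   # Equation (14): baseline in the left camera frame
    return R_L_R, O_R_L

def mean_baseline(poses):
    """Crude rig-calibration estimate: average the baseline vectors obtained from
    several synchronized poses (outlier rejection, e.g. a median filter, omitted)."""
    return np.mean([relative_orientation(*p)[1] for p in poses], axis=0)
```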
In method two, the photogrammetric survey of the semi-submerged object is carried out after the stereo camera rig calibration, following the procedure described above. Let $CS_a = \{O_a, X_a, Y_a, Z_a\}$ and $CS_b = \{O_b, x_b, y_b, z_b\}$ be the two spatial Cartesian coordinate systems above and below the water, respectively. The linking between the two coordinate systems can be performed in a way very similar to that described in method one, but this time exploiting the stereo camera relative orientation as the link between the above-the-water and below-the-water worlds.
The survey is then accomplished in three steps: the stereo camera is used to acquire the above-the-water and underwater parts by carrying out two separate image acquisitions with the rig (i) completely above the water and (ii) completely underwater; (iii) a third image acquisition is performed with the stereo camera rig placed across the water surface, so that one camera acquires the part of the object below the water and the other camera the part above the water. This last step completes the procedure, allowing the two separate surveys carried out below and above the water to be photogrammetrically linked together by exploiting the relative orientation of the two cameras measured during calibration. Theoretically, a single image pair acquired with the stereo camera placed across the water surface would suffice to provide the link between the above- and below-water coordinate systems. From simple geometric considerations, however, redundant and more spatially distributed stereo camera positions are necessary, and an average transformation can be computed.
Alternatively, using the transformation parameters computed with Equations (9)–(14), for each camera position computed below the water it is possible to compute the camera position above the water (or vice versa). This process produces the corresponding positions of the cameras in the other medium, which can be used to estimate a similarity transformation between the positions expected from the relative orientation and those obtained from the image orientation step. Additionally, a constrained bundle adjustment can be run to adjust the camera positions so that they comply with the relative orientation.
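The pose transfer across the water surface can be sketched as follows: starting from an oriented camera pose in one medium (here the left camera, assumed to be underwater) and the calibrated relative orientation, the expected pose of the other camera above the water is predicted; these predicted poses can then be matched with the poses estimated in the above-water block to compute the linking similarity transformation. This is an illustrative sketch consistent with Equations (12)–(14), not the authors' code:

```python
import numpy as np

def predict_other_camera_pose(R_G_L, O_L_G, R_L_R, O_R_L):
    """Predict the right-camera exterior orientation from the left-camera pose
    and the calibrated relative orientation (inverting Equations (13)-(14))."""
    R_G_R = R_L_R @ R_G_L             # Equation (13) solved for R_G_R
    O_R_G = O_L_G + R_G_L.T @ O_R_L   # calibrated baseline brought back to the global frame
    return R_G_R, O_R_G
```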

3. Results

3.1. Case Study 1: Photogrammetric Survey of Semi-Submerged Object Using Linking Targets on Pre-Calibrated Rods. The Case Study of Costa Concordia Shipwreck

On the 12th of January 2012, the 290 m long Italian cruise ship Costa Concordia partially sank off the coast of Isola del Giglio after she struck her port side on a reef known as Le Scole (Figure 5a). After the collision, the ship, without steering control or propulsive power, ran aground close to the Giglio Porto harbor entrance, leaning with the starboard side against the seafloor at a final inclination angle of about 70 degrees (Figure 5b). An investigation to measure the dimensions and determine the characteristics of the leak produced by the collision of the ship with the rocks was required, posing non-trivial problems: (i) after the grounding, the leak was situated on the above-the-water side of the stranded ship and extended approximately 6 m above and 4 m below the current waterline; to achieve a complete inspection of the area of the ship affected by the impact, a survey both above and below the water was necessary; (ii) the leak was positioned on the ship's side facing the open sea and hence not visible from the shoreline; it was therefore not possible either to measure the ship directly from the coast or to signalize and measure control points on the object with topographic methods; (iii) the area to be surveyed had an elongated shape, approximately 60 m long and 10 m high.
The project requirement of surveying both submerged and emerged parts with the same accuracy did not allow the integration of different techniques, e.g., optical methods (laser scanning, photogrammetry) for the upper part and acoustic equipment for the underwater area, due to the limits of currently available technologies. Moreover, the collision against the rocks had heavily deformed the ship's steel plates, with further small cracks potentially occluded and hence not visible from a sea-level point of view. Therefore, an integrated and unified photogrammetric survey of both parts was deemed the most feasible solution to satisfy the stringent inspection needs. Method one, described in Section 2.2, consisting of two separate photogrammetric surveys carried out in the two different media (one in air above the sea level and one in water below the sea level) and linked through targets, was then applied.
The lack of ground control elements required careful planning and a strict acquisition protocol. It is well known that photogrammetric networks formed by bundles of rays may be distorted and twisted in areas free of control points [13], and that this is more evident in the image triangulation of elongated strips, where systematic errors accumulate and can cause a twist of the model [20].
Indeed, in the absence of external control points, deformation and twisting of the photogrammetric model can occur in the presence of systematic errors that are not correctly modelled. In such a case, the unmodelled errors propagate into the solution, causing the typical dome shape shown in recent publications [20,21,22,23]. The bundle adjustment process can highlight the shape of such deformation, but it is not able to accurately estimate its magnitude, usually tending towards an underestimation. Only a strong camera network geometry can mitigate this unwanted and uncontrollable effect and provide more reliable results. Camera calibration both in air and in water was a crucial step to reduce systematic errors. The images were acquired with a digital single lens reflex (DSLR) camera (Nikon D300) equipped with a 35 mm lens for the above-the-water survey and a 24 mm lens underwater. When used in water, the camera was sealed in a waterproof housing with a dome port. Details on the calibration in air and in water can be found in [10].
The targeting operation was performed before the photogrammetric survey: 500 white magnetic circular targets (30 mm in diameter and 4 mm thick) and five 3 m long rods were evenly distributed over the surveyed area. Three rods had two plates below the waterline and one above (OD-E, OD-H, OD-D, where OD stands for orientation device, Figure 6c–e). The remaining two rods were fixed the opposite way, with one plate below and two above the sea level (OD-F, OD-G, Figure 6a,b,e).
Two scale bars were attached underwater, one horizontally (SB-C) and one vertically (SB-A), while another scale bar was fixed above the waterline (SB-B) (Figure 6e).
To strengthen the connection between the surveys above and below the sea surface, the tidal effect was considered (Figure 7): a 25 cm wide strip of the hull was submerged when the tide reached its maximum and emerged at the minimum tidal value. The in-air surveys were executed at different times of the day (morning and evening) and repeated on different days so as to capture the common part of the hull in both underwater and dry conditions.
The photogrammetric survey of the emerged part of the ship was carried out from different boats (a pilot boat and a rubber boat) having different heights above the sea level and at different distances from the ship (from 6 up to almost 12 m), measured with a laser distance meter. The camera network comprising all the different acquisitions for the above-the-water part is shown in Figure 8.
In total, 200 images were acquired with a ground sample distance (GSD, i.e., the image resolution on the object) ranging from 0.9 mm to 1.9 mm (Figure 8). The underwater survey was carried out in the late afternoon, when the sun was low enough on the horizon to limit reflections on the sea and ship surfaces; adequate lighting underwater was assured by using a strobe. The underwater shots were taken according to a photogrammetric aerial-like strip scheme with additional stations with convergent and rolled images:
  • Four strips were realized at different planned depths (namely −1.5 m, −2 m, −3 m, and −4 m) on different days. In Figure 9 the separate strips at different depths are displayed in different colors;
  • The shots were taken to assure a forward overlap of circa 80% (50 cm distance along the strip) and a sidelap of circa 40% between two adjacent strips;
  • A mean distance of circa 3 m was maintained from the hull in order to assure the necessary GSD and a sufficient contrast on the hull surface, according to the average visibility ascertained after a preliminary underwater reconnaissance.
About 800 images were acquired with a mean GSD of 0.7 mm. To improve the accuracy of the photogrammetric network, convergent and rolled photographs were included every five images [20].
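As a rough consistency check on the figures above, the GSD can be predicted from the simple pinhole relation GSD ≈ pixel pitch × object distance / focal length; the Nikon D300 pixel pitch of about 5.5 µm used below is an assumed value (not stated in the paper), and refraction through the dome port is neglected:

```python
# Approximate ground sample distance (GSD) for the underwater strips.
pixel_pitch = 5.5e-6    # m, assumed pixel size of the Nikon D300 sensor
focal_length = 0.024    # m, 24 mm lens used underwater
distance = 3.0          # m, mean camera-to-hull distance
gsd_mm = pixel_pitch * distance / focal_length * 1000
print(f"GSD ~ {gsd_mm:.2f} mm/pixel")   # ~0.69 mm, in line with the reported 0.7 mm
```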
The two surveys were first processed separately through a fully automatic procedure comprising the following steps:
  • Automatic image orientation and triangulation using structure from motion (SfM) tools.
  • Filtering and regularization of automatically extracted tie points: a filtering and regularization procedure for tie points from SfM algorithms was developed, tested in several applications, and presented in previous papers [20,22]. The leading principle of the procedure is to regularize the distribution of tie points in object space, while preserving connectivity and high multiplicity between observations. A regular volumetric grid (voxelization) is generated, and the side length of each voxel is set equal to a fixed percentage of the image footprint, chosen so as to guarantee a redundant number of observations in each image. The 3D tie points inside each voxel are collected in a subset. A score is assigned to each point on the basis of the following properties, listed in ascending order of importance: (i) the point's proximity to the barycenter of the considered voxel, (ii) the point's visibility in more than two images, (iii) the intersection angle, (iv) the point's visibility in images belonging to different blocks or strips. The 3D tie points with the highest score in each cell are kept (a simplified sketch of this voxel filtering is given after this list). The results of the filtering are shown in Figure 10 and summarized in Table 1 and Table 2.
  • Photogrammetric bundle adjustment using the filtered tie points as image observations: different versions of self-calibrating bundle adjustment were run, separately for the emerged and submerged models. All the strips were processed following both (a) a free network adjustment with a-posteriori scaling (using both the scale bars and the visible parts of the rods) and (b) an adjustment using the same reference distances as constraints. For the above-the-water model, a further test was considered, using only the closest strips carried out in the evening (hence with very similar lighting conditions and almost simultaneous with the underwater surveys) and the known distances as constraints. The combination providing the best independent model adjustment was selected, i.e., the constrained adjustment with only the closest strips for the above-the-water model and the constrained adjustment with all the strips for the underwater model. The results are summarized in Table 3.
  • Dense image matching: the exterior, interior orientation and additional parameters obtained from the photogrammetric bundle adjustment of the two surveys were used for the generation of separate high dense point clouds (Figure 11).
  • Mesh generation and editing: the two dense point clouds were finally triangulated using the Poisson algorithm [34] implemented in CloudCompare [33]. The optimization procedure described in [35] was used to find the optimal compromise between resolution and usability of the generated meshes for successive analyses, such as the highlighting of cracks, the calculation of the water flooding through the leak, etc. [10].
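The following Python sketch illustrates the voxel-based tie-point filtering mentioned in the second step of the list above. The scoring is simplified (number of observing images, with ties broken by proximity to the voxel center) with respect to the multi-criteria score used by the authors, and all names are illustrative:

```python
import numpy as np

def filter_tie_points(points, n_views, voxel_size):
    """Keep one representative 3D tie point per voxel.

    points: (N, 3) tie-point coordinates; n_views: (N,) number of images in which
    each point is observed; voxel_size: voxel side length in object-space units.
    Returns the filtered points and the indices of the kept points.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    best = {}
    for i, key in enumerate(map(tuple, keys)):
        center = (np.asarray(key) + 0.5) * voxel_size              # voxel center
        score = (n_views[i], -np.linalg.norm(points[i] - center))  # simplified score
        if key not in best or score > best[key][0]:
            best[key] = (score, i)
    keep = sorted(i for _, i in best.values())
    return points[keep], np.asarray(keep)
```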
At the end of the third step, i.e., after the photogrammetric bundle adjustment, the two separate photogrammetric models (above and underwater) were aligned through the procedure described in Section 2.2. The computed transformation was then used to merge the dense point clouds and meshes derived from steps 4 and 5:
  • Coarse alignment of the two photogrammetric models: the first step of the coarse rigid transformation was carried out by roto-translating each single rod separately into the above- and underwater photogrammetric models (Figure 12). Each transformation from the rod to both the above and underwater independent models was computed without a scale factor, in order to preserve the higher-accuracy scale of the rods. Two of the four targets of the OD-H plate above the water were soiled by the fuel floating around the ship, making it impossible to compute the transformation to bring OD-H into the above-the-water photogrammetric model. After this step, the 3D coordinates of all the OD targets, except OD-H, were known in both photogrammetric models and were used to compute the similarity transformation for their assembly. Figure 13a shows the graphical results of the transformation, computed with 48 common points and without a scale factor. Both the residual vectors, from the underwater reference system to the above-the-water reference system, and the residual distribution are reported. The statistics are summarized in Table 4, which shows that the mean difference (in terms of root mean square error, RMSE) between the two models is 0.016 m, with a maximum of 0.027 m.
  • Refinement of the alignment through independent model adjustment: the coordinates and transformation parameters computed in the previous step were used as approximations for the independent model free-network adjustment, where the scale factors of the rods were used as constraints. The results are shown in Figure 13b and Table 4. The spatial residuals are reduced by a factor of about 10 with respect to the coarse rigid transformation.
The computed transformation was used to merge the two photogrammetric models (Figure 14 and Figure 15).

3.2. Case Study 2: Photogrammetric Survey of Semi-Submerged Environment Using a Synchronized Underwater Stereo-Camera Rig. The Case Study of Grotta Giusti

This application deals with the virtualization of Grotta Giusti, an underground semi-submerged cave system in central Italy. It is the biggest European cave filled with warm water and is currently part of a thermal resort. Unlike the previous application, where high accuracy was the driving requirement for surveying and modelling, in this case the main motivation was to create a virtual model of an environment accessible only partially and only to a few visitors (recreational divers, http://www.grottagiustidiving.com/). A more in-depth project description is given in [12]; here, we focus on the approach developed to survey the semi-submerged areas of the cave relying on a stereo camera system (method 2, Section 2.3).
The cave, which has the characteristics of a fault, alternates parts completely submerged in crystal clear thermal water with areas that are dry and parts that are partially filled with water.
The entrance chamber for the diving experience, called ‘Lago del Limbo’, is a semi-submerged environment of about 20 × 8 × 17 m, with a maximum water depth of about 10 m. The chamber was surveyed with a stereo rig and with a DSLR camera. The stereo rig featured two GoPro Hero4 Black Edition cameras in their underwater pressure housings (Figure 16a), two underwater lights (Figure 16b) on the handles, and two red laser pointers. The GoPro stereo rig was preliminarily calibrated (Figure 16b and Figure 17) using rods and calibration devices previously measured in the laboratory [10]. The GoPro cameras were set in video mode and an automatic software synchronization algorithm was developed. The synchronization is based on the automatic localization, in the recorded streams, of common events, i.e., flashing lights visible in the videos and sounds audible in the audio channels [36,37]. The fisheye lens model was used as the mathematical basis for the image projection when the GoPro was used above the water, while, due to the refraction at the housing port [38], a pinhole model was adopted when the camera was used below the water.
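The event-based synchronization of the two GoPro video streams can be illustrated with a simple audio cross-correlation: the lag that maximizes the correlation between the two audio envelopes aligns the clips. This is a generic sketch, not the authors' algorithm, and it assumes the two audio tracks have already been extracted at a common sample rate:

```python
import numpy as np

def audio_offset_seconds(audio_left, audio_right, sample_rate):
    """Estimate the time offset between two audio tracks by cross-correlating
    their (mean-removed) absolute envelopes. A positive value means that a
    common event appears later in the left track, i.e., the left recording
    started earlier than the right one."""
    env_l = np.abs(audio_left - np.mean(audio_left))
    env_r = np.abs(audio_right - np.mean(audio_right))
    corr = np.correlate(env_l, env_r, mode="full")
    lag = np.argmax(corr) - (len(env_r) - 1)   # lag in samples
    return lag / sample_rate
```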
The calibrated baseline from the above-water calibration was 336.07 ± 0.33 mm, while the underwater calibration yielded 335.9 ± 0.54 mm. The difference is likely due to thermal influences, while the higher standard deviation is most likely caused by the uncompensated refractive effects of the water [38].
The laser pointers were collimated to intersect at the calibrated working distance of about 50 cm. Their positions and orientations with respect to the two cameras were also computed during the calibration stage, providing additional scaling information that can be used as a cross-check. The calibrated stereo rig was used under and above the water to provide 3D scaled photogrammetric measurements of the two separate environments of the cave. Two strips above and five strips under the water, with about 60–80% overlap, were acquired. For the strip acquired across the water surface, the images had to be masked in the area where the water surface was visible, as reflections and refractive effects caused false tie point matches. The masking was accomplished using a simple rectangular mask automatically built on the basis of simple geometrical assumptions. Once the single strips underwater and above the water were oriented and scaled using the a priori knowledge of the relative orientation, the procedure described in Section 2.3 was adopted to join the submerged and emerged parts of the cave, relying on the use of the stereo system as the link across the water level. The camera positions above the water were computed by applying the relative orientation to the positions computed underwater.
Before the survey, rods (Figure 18) were installed across the water level to assess the accuracy of the method. A similarity transformation was performed to rigidly align each rod to the corresponding one in the photogrammetric reference system, as explained in Section 2.2.
The root mean square error (RMSE) of the residuals of all five rods reached sub-centimeter accuracy in linking the above- and below-water 3D data (Figure 19). The higher residuals on the rod named OD_C-D were due to an instability in the fixing of the rod to the cave wall, which was noted during the survey.
Figure 20 shows the 3D visualization above and below water after the merging of the two photogrammetric models.
The rest of the chamber (parts far away from the water level, such as the ceiling) was photogrammetrically surveyed with a Nikon D750 in a NiMAR waterproof housing, coupled with a more powerful strobe unit. The final virtual model is accessible at this link: http://3dom.fbk.eu/repository/grottagiusti/.

4. Discussions and Conclusions

The paper has presented an approach, fully based on photogrammetry, to survey semi-submerged assets. The assets can also be floating; the only restriction is that they do not undergo any deformation (rigid body assumption). The proposed method entails a three-step procedure, i.e., the separate surveys of the (i) above-water and (ii) underwater parts and (iii) a process to link the two separate models, transforming them into a common reference system. Two different approaches have been developed to solve the last step: (1) an independent model adjustment computed using a number of pre-calibrated rigid rods, or (2) a calibrated relative orientation constraint adopted through the use of a synchronized stereo camera rig with one camera above and the other underwater.
The theoretical framework and mathematical details have been provided and the feasibility of the two methods proved through two different case studies. In particular, the accuracy potential of method 1 has been proved in a very challenging application, i.e., the 3D survey of the Costa Concordia leak. The effectiveness of method 2 has been demonstrated in the 3D modelling of a complex semi-submerged cave environment.
Method 1 proved to be a very accurate and reliable method, with redundant control provided by the different pre-calibrated linking devices. Although it has the disadvantage of requiring the fixation of linking targets on the object to be surveyed, for the Costa Concordia this was accomplished using neodymium magnets, which assured good stability over different days (in other experiments, suction cups were used for fiberglass boats). Nevertheless, one of the linking devices was probably moved by workers participating in other simultaneous operations or by the force of the sea waves. The presence of natural points or targets around the target plates allowed the motion to be discovered and the processing to be properly separated into different days. For this reason, it is advisable to perform the two surveys, underwater and above the water, within a short delay of each other.
Method 2 proved to be a very flexible method that does not require any target on the object. The stereo camera rig mounting GoPro action cameras proved not only lightweight and easy to use, but also represented a very low-cost solution. This method requires the stereo camera rig to be calibrated, and the final accuracy is therefore expected to depend on the mechanical stability of the rig. For more accurate scaling, redundancy, and control over the final solution, linking devices across the water or scale bars above and below the water are suggested.
While the two methods can be considered interchangeable and can be employed separately, an integrated approach would be advisable. The accuracy of method 2 is highly correlated to the baseline between the two cameras: a longer baseline reduces the measurement uncertainty, but it might negatively affect the portability of the system and risk compromising its stability. Another crucial aspect is the stereo-rig calibration, which has to be repeated each time the system is dismantled, for example to download the images, charge the batteries, etc. On the other hand, as reported in both case studies, the linking rods can move during the survey. Consequently, high redundancy is essential but, at the same time, it can critically increase the time needed for the targeting operation. The combined use of the two approaches would represent a more robust solution for applications where it is often very hard, if not impossible, to obtain an independent accuracy check. This aspect will need further investigation, with a specifically designed experiment to analyze the simultaneous adjustment using relative orientation constraints from the stereo camera rig in combination with an independent model adjustment of the linking devices.
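The trade-off between baseline length and measurement uncertainty mentioned above can be quantified with the standard normal-case stereo relation σ_Z ≈ Z² σ_px / (c · B), where Z is the object distance, c the focal length in pixels, B the baseline, and σ_px the image measurement precision; the numbers below are purely illustrative and are not taken from the case studies:

```python
def depth_std(Z, baseline, focal_px, parallax_std_px=0.5):
    """Approximate normal-case stereo depth uncertainty (same units as Z)."""
    return (Z ** 2) * parallax_std_px / (focal_px * baseline)

# Illustrative numbers: object at 2 m, camera with ~1200 px focal length,
# image measurement precision of 0.5 px, three candidate baselines.
for B in (0.10, 0.336, 1.00):
    print(f"B = {B:.3f} m -> sigma_Z ~ {depth_std(2.0, B, 1200.0) * 1000:.1f} mm")
```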

Author Contributions

The authors equally contributed to the research activity and paper writing. All authors have read and agreed to the published version of the manuscript.

Funding

The survey of the leak of the “Costa Concordia” shipwreck was partially funded by the Department of Applied Sciences of the Parthenope University of Naples and the Court of Grosseto, and partially supported by internal funding from FBK. The virtualization project of Grotta Giusti received no external funding, but was internally supported by FBK and the Grotta Giusti Diving center.

Acknowledgments

The authors express their gratitude to the many people who supported them: S. Troisi, ‘Parthenope’ University of Naples Italy, and F. Remondino, FBK Trento Italy, for their scientific support; B. Felleca, M. Rovito, and F. Sposito, ‘Parthenope’ University of Naples Italy for their support during the survey operations and the Italian Coast Guard and Army units for their constant supervision during diving operations; L. Tanini and the team of the Grotta Giusti diving center for their assistance in the measurement campaign of Grotta Giusti; E. Farella and S. Rigon, FBK Trento Italy, for their help in setting up the VR of Grotta Giusti.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In geodetic and, in general, survey adjustment, there are two basic methods of employing least squares (LS) [39,40], corresponding essentially to the two basic forms of the functional model: (i) the observation-equation, parametric estimation, or adjustment of indirect observations method [41] and (ii) the condition-equation approach. In a parametric estimation, observations are expressed in terms of unknowns that were never observed. In a conditional adjustment, conditions or geometric relations are formulated among the observations. The choice of a technique is mostly a matter of convenience and/or computational economy [42].
The case treated here, determining the transformation parameters between two coordinate systems, is regarded as a parametric adjustment, where each condition equation (most commonly called observation equation) contains only one observation [41]. Therefore, the number of condition equations (c) is the same as the number of observations (n):
$$c = n \qquad (A1)$$
and, at the same time, is greater than the number of unknowns to be computed (u):
$$n > u \qquad (A2)$$
$$r = n - u \qquad (A3)$$
where r is the redundancy equal to the (statistical) degrees of freedom.
Here, the functional model is represented by Equation (1): each known full 3D point provides three observation equations.
The observations, i.e., the measured coordinates of the common points, are regarded as random variables with inherent stochastic properties, meaning that they are (i) affected only by stochastic errors following the Gaussian normal distribution and (ii) independent (uncorrelated) from one another. The stochastic properties of the observations, which in turn define the stochastic model of the adjustment, are described by the covariance matrix $C_l$:
$$C_l = \mathrm{diag}\left[\sigma^2_{X^a_{P_1}}\ \ \sigma^2_{Y^a_{P_1}}\ \ \sigma^2_{Z^a_{P_1}}\ \ \cdots\ \ \sigma^2_{Z^a_{P_m}}\right] \qquad (A4)$$
where $\sigma_i$ is the standard deviation of the $i$-th observation; the covariance matrix of the observations $C_l$ is diagonal under the hypothesis of independent observations (the correlation coefficients are zero).
Each measurement can be characterized by a different quality level, which is expressed statistically in terms of a weight $w$, defined as the ratio between the variance factor or reference variance $\sigma_0^2$ and its own variance $\sigma_i^2$:
$$w_i = \sigma_0^2 / \sigma_i^2 \qquad (A5)$$
From equality (A5), the weight matrix W of the observations and the cofactor matrix Q l of the observations are derived:
$$W = \sigma_0^2\, C_l^{-1} \qquad (A6)$$
$$Q_l = W^{-1} = \sigma_0^{-2}\, C_l \qquad (A7)$$
The stochastic, random nature of the observations also implies that the functional model describes the relation between the true observation values $\tilde{L}$ and the true values of the unknown parameters $\tilde{X}$ (Luhmann et al., 2006):
$$\tilde{L} = f(\tilde{X}) \qquad (A8)$$
where:
  • $\tilde{X} = [\lambda\ \ \omega\ \ \varphi\ \ \kappa\ \ X_{0_b}^a\ \ Y_{0_b}^a\ \ Z_{0_b}^a]^T$ is the vector of the seven unknown transformation parameters;
  • $f(\tilde{X}) = [f_1(\tilde{X})\ \ f_2(\tilde{X})\ \ \cdots\ \ f_n(\tilde{X})]^T$ are the $n$ functions of the unknowns, corresponding to Equation (1) written for each full 3D point.
As the true values of the observations are not knowable, the vector $\tilde{L}$ is replaced by the measured observations $L$ plus the associated corrections or residuals $v$:
$$\hat{L} = L + v = A\hat{X} \qquad (A9)$$
$\hat{L}$ and $\hat{X}$ are the best estimates of the observations (also called adjusted observations) and of the unknowns, respectively.
In the presence of redundant observations (known points), the LS adjustment yields the most probable values for the adjusted observations, which in turn provide the most probable values for the parameters (Gauss–Markov adjustment model). This is an alternative way to express the basic principle of LS which, in the general case of measurements or observations having different weights, states: the most probable value for a quantity obtained from repeated observations having various weights is the value that renders the sum of the weights times their respective squared residuals a minimum:
$$\phi = v^T W v = \sum_{i=1}^{n} w_i v_i^2 = w_{X^a_{P_1}} v^2_{X^a_{P_1}} + w_{Y^a_{P_1}} v^2_{Y^a_{P_1}} + w_{Z^a_{P_1}} v^2_{Z^a_{P_1}} + \ldots + w_n v_n^2 \rightarrow \min \qquad (A10)$$
The functional model (A9) generally consists of non-linear equations, and this is also the case for the transformation between two coordinate systems in Equation (1), where non-linearities are present in the rotation matrix $R_b^a$ (3):
$$\begin{aligned} X^a_{P_i} + v_{X^a_{P_i}} &= f_1(\hat{X}) = X_{0_b}^a + \lambda\left[x^b_{P_i}(c\varphi c\kappa) - y^b_{P_i}(c\varphi s\kappa) + z^b_{P_i}(s\varphi)\right] \\ Y^a_{P_i} + v_{Y^a_{P_i}} &= f_2(\hat{X}) = Y_{0_b}^a + \lambda\left[x^b_{P_i}(c\omega s\kappa + s\omega s\varphi c\kappa) + y^b_{P_i}(c\omega c\kappa - s\omega s\varphi s\kappa) - z^b_{P_i}(s\omega c\varphi)\right] \\ Z^a_{P_i} + v_{Z^a_{P_i}} &= f_3(\hat{X}) = Z_{0_b}^a + \lambda\left[x^b_{P_i}(s\omega s\kappa - c\omega s\varphi c\kappa) + y^b_{P_i}(s\omega c\kappa + c\omega s\varphi s\kappa) + z^b_{P_i}(c\omega c\varphi)\right] \end{aligned} \qquad (A11)$$
Consequently, the original nonlinear conditions are linearized using Taylor’s series and LS is then applied to the linearized form:
$$f(X) \cong f(X^0) + \left(\frac{\partial f(X)}{\partial X}\right)_0 \left(\hat{X} - X^0\right) = L + v \qquad (A12)$$
$X^0$ is a vector containing approximate values for the unknowns, and the difference:
$$\left(\hat{X} - X^0\right) = \hat{x} = \left[d\lambda\ \ d\omega\ \ d\varphi\ \ d\kappa\ \ dX_{0_b}^a\ \ dY_{0_b}^a\ \ dZ_{0_b}^a\right]^T \qquad (A13)$$
can be seen as a vector of corrections to the initial approximations.
The first-order partial derivatives $\left(\partial f(X)/\partial X\right)_0$ are organised in the Jacobian matrix $A$ in (A9), also known as the design, model, or coefficient matrix:
$$A = \left(\frac{\partial f(X)}{\partial X}\right)_0 = \begin{bmatrix} \left(\frac{\partial f_1(X)}{\partial X_1}\right)_0 & \left(\frac{\partial f_1(X)}{\partial X_2}\right)_0 & \cdots & \left(\frac{\partial f_1(X)}{\partial X_u}\right)_0 \\ \left(\frac{\partial f_2(X)}{\partial X_1}\right)_0 & \left(\frac{\partial f_2(X)}{\partial X_2}\right)_0 & \cdots & \left(\frac{\partial f_2(X)}{\partial X_u}\right)_0 \\ \vdots & \vdots & \ddots & \vdots \\ \left(\frac{\partial f_n(X)}{\partial X_1}\right)_0 & \left(\frac{\partial f_n(X)}{\partial X_2}\right)_0 & \cdots & \left(\frac{\partial f_n(X)}{\partial X_u}\right)_0 \end{bmatrix} = \begin{bmatrix} \left(\frac{\partial X^a_{P_1}}{\partial \lambda}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial \omega}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial \varphi}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial \kappa}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial X^a_{P_1}}{\partial Z_{0_b}^a}\right)_0 \\ \left(\frac{\partial Y^a_{P_1}}{\partial \lambda}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial \omega}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial \varphi}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial \kappa}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial Y^a_{P_1}}{\partial Z_{0_b}^a}\right)_0 \\ \left(\frac{\partial Z^a_{P_1}}{\partial \lambda}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial \omega}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial \varphi}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial \kappa}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial Z^a_{P_1}}{\partial Z_{0_b}^a}\right)_0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \left(\frac{\partial Z^a_{P_n}}{\partial \lambda}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial \omega}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial \varphi}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial \kappa}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial Z^a_{P_n}}{\partial Z_{0_b}^a}\right)_0 \end{bmatrix} \qquad (A14)$$
where:
$$\begin{aligned}
\frac{\partial X^a_{P_i}}{\partial \lambda} &= x^b_{P_i}(c\varphi c\kappa) - y^b_{P_i}(c\varphi s\kappa) + z^b_{P_i}(s\varphi) \\
\frac{\partial Y^a_{P_i}}{\partial \lambda} &= x^b_{P_i}(c\omega s\kappa + s\omega s\varphi c\kappa) + y^b_{P_i}(c\omega c\kappa - s\omega s\varphi s\kappa) - z^b_{P_i}(s\omega c\varphi) \\
\frac{\partial Z^a_{P_i}}{\partial \lambda} &= x^b_{P_i}(s\omega s\kappa - c\omega s\varphi c\kappa) + y^b_{P_i}(s\omega c\kappa + c\omega s\varphi s\kappa) + z^b_{P_i}(c\omega c\varphi) \\
\frac{\partial X^a_{P_i}}{\partial \omega} &= 0 \\
\frac{\partial Y^a_{P_i}}{\partial \omega} &= \lambda\left[-x^b_{P_i}(s\omega s\kappa - c\omega s\varphi c\kappa) - y^b_{P_i}(s\omega c\kappa + c\omega s\varphi s\kappa) - z^b_{P_i}(c\omega c\varphi)\right] \\
\frac{\partial Z^a_{P_i}}{\partial \omega} &= \lambda\left[x^b_{P_i}(c\omega s\kappa + s\omega s\varphi c\kappa) + y^b_{P_i}(c\omega c\kappa - s\omega s\varphi s\kappa) - z^b_{P_i}(s\omega c\varphi)\right] \\
\frac{\partial X^a_{P_i}}{\partial \varphi} &= \lambda\left[-x^b_{P_i}(s\varphi c\kappa) + y^b_{P_i}(s\varphi s\kappa) + z^b_{P_i}(c\varphi)\right] \\
\frac{\partial Y^a_{P_i}}{\partial \varphi} &= \lambda\left[x^b_{P_i}(s\omega c\varphi c\kappa) - y^b_{P_i}(s\omega c\varphi s\kappa) + z^b_{P_i}(s\omega s\varphi)\right] \\
\frac{\partial Z^a_{P_i}}{\partial \varphi} &= \lambda\left[-x^b_{P_i}(c\omega c\varphi c\kappa) + y^b_{P_i}(c\omega c\varphi s\kappa) - z^b_{P_i}(c\omega s\varphi)\right] \\
\frac{\partial X^a_{P_i}}{\partial \kappa} &= \lambda\left[-x^b_{P_i}(c\varphi s\kappa) - y^b_{P_i}(c\varphi c\kappa)\right] \\
\frac{\partial Y^a_{P_i}}{\partial \kappa} &= \lambda\left[x^b_{P_i}(c\omega c\kappa - s\omega s\varphi s\kappa) - y^b_{P_i}(c\omega s\kappa + s\omega s\varphi c\kappa)\right] \\
\frac{\partial Z^a_{P_i}}{\partial \kappa} &= \lambda\left[x^b_{P_i}(s\omega c\kappa + c\omega s\varphi s\kappa) - y^b_{P_i}(s\omega s\kappa - c\omega s\varphi c\kappa)\right] \\
\frac{\partial X^a_{P_i}}{\partial X_{0_b}^a} &= \frac{\partial Y^a_{P_i}}{\partial Y_{0_b}^a} = \frac{\partial Z^a_{P_i}}{\partial Z_{0_b}^a} = 1 \\
\frac{\partial Y^a_{P_i}}{\partial X_{0_b}^a} &= \frac{\partial Z^a_{P_i}}{\partial X_{0_b}^a} = \frac{\partial X^a_{P_i}}{\partial Y_{0_b}^a} = \frac{\partial Z^a_{P_i}}{\partial Y_{0_b}^a} = \frac{\partial X^a_{P_i}}{\partial Z_{0_b}^a} = \frac{\partial Y^a_{P_i}}{\partial Z_{0_b}^a} = 0
\end{aligned}$$
In the case of linearized LS, the adjustment procedure needs to be iterated: each iteration provides a solution for the vector $\hat{x}$, which yields an improved approximation of the vector $X_0$. At the end of the adjustment process, the final estimate of the parameters $\hat{X}$ is the sum of the original approximations and all the correction vectors $\hat{x}$.
To derive a simple matrix expression for the LS solution of such a non-linear system, it is convenient to introduce the following notation:
$$
f(X_0) = L_0
$$
$$
l = L - L_0 = \begin{bmatrix} X_{P_i}^a - f_1(X_0)\\ Y_{P_i}^a - f_2(X_0)\\ Z_{P_i}^a - f_3(X_0) \end{bmatrix}
$$
where:
  • $L_0$ is the vector of approximate observations obtained from the functional model computed with the approximate parameters $X_0$.
  • $l$ is the difference between measured and approximate observations, i.e., the vector of reduced observations.
Equation (A9) can then be rewritten as:
$$
l + v = A\hat{x}
$$
Thus far, the functional and stochastic models have been kept separate; by inserting the weight matrix (A6) in (A17), the following unique mathematical model is derived:
$$
W A \hat{x} = W l
$$
By multiplying by the transpose of the design matrix $A$, the system of normal equations is obtained:
$$
A^T W A \hat{x} - A^T W l = 0
$$
with
$$
A^T W A = N
$$
as the matrix of normal equations.
The corrections for the solution vector result in:
$$
\hat{x} = N^{-1} A^T W l
$$
and the corresponding LS residuals are given by:
$$
\hat{v} = A\hat{x} - l = \left(A N^{-1} A^T W - I\right) l
$$
As for the observations, cofactor and covariance matrices can also be defined for the unknowns and the residuals:
$$
Q_{\hat{x}} = N^{-1} = \left(A^T W A\right)^{-1}
$$
$$
C_{\hat{x}} = \hat{\sigma}_0^2\, Q_{\hat{x}} = \hat{\sigma}_0^2 \left(A^T W A\right)^{-1}
$$
$$
Q_{\hat{v}} = W^{-1} - A N^{-1} A^T
$$
$$
C_{\hat{v}} = \hat{\sigma}_0^2\, Q_{\hat{v}} = \hat{\sigma}_0^2 \left(W^{-1} - A N^{-1} A^T\right)
$$
where $\hat{\sigma}_0^2$ is the reference variance (also called unit variance, variance factor, or variance of a measurement of unit weight):
$$
\hat{\sigma}_0^2 = \frac{\hat{v}^T W \hat{v}}{r}
$$
with $r$ the redundancy of the system (number of observations minus number of estimated parameters).
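For illustration purposes only, a minimal Python/NumPy sketch of the adjustment loop described above is reported below. It is not the implementation used in the presented work (which was developed in [32]); the function names, the unit handling, and the use of a finite-difference Jacobian instead of the analytical derivatives are assumptions made here for compactness. The sketch iterates the linearized solution $\hat{x} = N^{-1}A^T W l$ for the seven-parameter similarity transformation and returns the adjusted parameters, the reference variance, and the covariance of the unknowns.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Rotation matrix R_b^a built from the three angles (same convention as the equations above)."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    return np.array([
        [cp * ck,                 -cp * sk,                  sp],
        [cw * sk + sw * sp * ck,   cw * ck - sw * sp * sk,  -sw * cp],
        [sw * sk - cw * sp * ck,   sw * ck + cw * sp * sk,   cw * cp]])

def predict(params, pts_b):
    """Functional model: local coordinates (CS_b) mapped into the ground system (CS_a)."""
    lam, omega, phi, kappa, x0, y0, z0 = params
    return (np.array([x0, y0, z0]) + lam * (rotation(omega, phi, kappa) @ pts_b.T).T).ravel()

def adjust(params0, pts_b, obs_a, weights, n_iter=10):
    """Iterated linearized LS (Gauss-Newton): solves x_hat = N^(-1) A^T W l at each iteration."""
    x = np.asarray(params0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        l = obs_a - predict(x, pts_b)               # reduced observations l = L - L0
        A = np.zeros((l.size, x.size))              # design (Jacobian) matrix, here by finite differences
        for j in range(x.size):
            d = np.zeros_like(x); d[j] = 1e-7
            A[:, j] = (predict(x + d, pts_b) - predict(x - d, pts_b)) / 2e-7
        N = A.T @ W @ A                             # normal-equation matrix
        x_hat = np.linalg.solve(N, A.T @ W @ l)     # corrections to the current approximations
        x += x_hat
    v = A @ x_hat - l                               # LS residuals of the last iteration
    sigma0_sq = (v @ W @ v) / (l.size - x.size)     # reference variance
    C_x = sigma0_sq * np.linalg.inv(N)              # covariance matrix of the unknowns
    return x, sigma0_sq, C_x
```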
The method described in Section 2.2.1 is a non-linear problem; thus, the linearization procedure described above is followed, and approximate values of the transformation parameters (for each rod with respect to each photogrammetric model) must be provided. This is a very well-known issue, inherently present when solving non-linear problems through linearization algorithms. To overcome it, a preliminary transformation is computed through a Generalized Procrustes (GP) analysis (see, for example, [43]), which directly provides estimates of the similarity transformations. Briefly, the GP method finds the maximum “agreement” between two sets of corresponding points by minimizing the distance between the centroids of the two systems and the angles between their principal axes (found, for example, through a principal component analysis, PCA).
The transformation parameters derived from the GP analysis are taken as approximate values for the solution vector (A13), and the LS adjustment is run to bring each rod into the two distinct photogrammetric models.
The described procedure combines the benefits of a direct, linear approach, such as GP, with the statistical completeness of classical LS.
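To illustrate how such a direct estimate can be obtained, the sketch below computes a closed-form similarity transformation between two sets of corresponding points through centroid alignment and an SVD of the cross-covariance matrix. It is an illustrative example only, not the exact GP formulation of [43]; the function name and the SVD-based estimation are assumptions made here.

```python
import numpy as np

def similarity_from_points(pts_b, pts_a):
    """Closed-form similarity transformation (scale, rotation, translation) mapping pts_b onto pts_a.

    Centroid alignment plus SVD of the cross-covariance matrix; the result can be used
    as the vector of approximate parameters X_0 for the subsequent LS adjustment.
    """
    cb, ca = pts_b.mean(axis=0), pts_a.mean(axis=0)          # centroids of the two point sets
    qb, qa = pts_b - cb, pts_a - ca                          # centred coordinates
    U, S, Vt = np.linalg.svd(qa.T @ qb)                      # decomposition of the cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt                                           # rotation matrix R_b^a
    lam = np.trace(np.diag(S) @ D) / np.sum(qb ** 2)         # isotropic scale factor
    t = ca - lam * R @ cb                                    # translation (origin of CS_b in CS_a)
    return lam, R, t
```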

Appendix B

As anticipated in Section 2.2, for the merging of the two photogrammetric models (above and underwater), an independent model adjustment is applied. The previous coarse alignment, detailed in Appendix A, represents the initial approximation for the chain of spatial similarity transformations aimed at simultaneously adjusting both photogrammetric models (above and underwater) and the coordinate systems associated with the rods.
Together with the standard bundle block adjustment, block adjustment by independent models was the common technique employed for the absolute orientation of blocks of photographs [13] when the computational power of computers was not as high as it is today. The technique was also known as semi-analytical aerial triangulation, since originally the process was performed in two subsequent steps: (i) in the first step, the analogue part of the procedure, the relative orientation of each individual or independent stereomodel was performed with a precision or “first-order” plotter [44]; then, (ii) in the analytical phase, a simultaneous block adjustment of all the independent stereomodels was carried out on the computer. Hence, the aim of aerial triangulation with independent models was to link several separate photogrammetric models (each measured independently in a local or relative coordinate system) and to absolutely orient them all together in the global coordinate system defined by ground control points (GCPs). The connection of the individual models was performed through tie points and projection centers common to adjacent models, and the transformation of the entire block into the higher-order coordinate system was achieved thanks to the GCPs visible in the images.
To simplify the implementation of the independent model adjustment, it is more convenient to adopt as functional model the inverse transformation (5) between the ground coordinate system $CS_a$ and the local system $CS_b$, recalled here:
$$
\begin{bmatrix} x_P\\ y_P\\ z_P \end{bmatrix}^b = \Lambda^{-1} R_a^b \begin{bmatrix} X_P - X_{0_b}\\ Y_P - Y_{0_b}\\ Z_P - Z_{0_b} \end{bmatrix}^a = \Lambda^{-1} \left(R_b^a\right)^T \begin{bmatrix} X_P - X_{0_b}\\ Y_P - Y_{0_b}\\ Z_P - Z_{0_b} \end{bmatrix}^a
$$
from which the functional model is derived:
$$
\begin{aligned}
x_{P_i}^b + v_{x_{P_i}^b} &= {g_1}_b^a(\hat{X}_b^a) = \frac{1}{\lambda_b^a}\Big[(X_{P_i}^a - X_{0_b}^a)\,c\varphi_b^a c\kappa_b^a + (Y_{P_i}^a - Y_{0_b}^a)(c\omega_b^a s\kappa_b^a + s\omega_b^a s\varphi_b^a c\kappa_b^a) + (Z_{P_i}^a - Z_{0_b}^a)(s\omega_b^a s\kappa_b^a - c\omega_b^a s\varphi_b^a c\kappa_b^a)\Big]\\
y_{P_i}^b + v_{y_{P_i}^b} &= {g_2}_b^a(\hat{X}_b^a) = \frac{1}{\lambda_b^a}\Big[-(X_{P_i}^a - X_{0_b}^a)\,c\varphi_b^a s\kappa_b^a + (Y_{P_i}^a - Y_{0_b}^a)(c\omega_b^a c\kappa_b^a - s\omega_b^a s\varphi_b^a s\kappa_b^a) + (Z_{P_i}^a - Z_{0_b}^a)(s\omega_b^a c\kappa_b^a + c\omega_b^a s\varphi_b^a s\kappa_b^a)\Big]\\
z_{P_i}^b + v_{z_{P_i}^b} &= {g_3}_b^a(\hat{X}_b^a) = \frac{1}{\lambda_b^a}\Big[(X_{P_i}^a - X_{0_b}^a)\,s\varphi_b^a - (Y_{P_i}^a - Y_{0_b}^a)\,s\omega_b^a c\varphi_b^a + (Z_{P_i}^a - Z_{0_b}^a)\,c\omega_b^a c\varphi_b^a\Big]
\end{aligned}
$$
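For reference, the inverse mapping itself is straightforward to evaluate once the rotation matrix is available; a minimal sketch is shown below (illustrative only, the function name is an assumption; the rotation convention is the same as in the sketch given in Appendix A).

```python
import numpy as np

def world_to_local(P_a, lam, R_ba, t_a):
    """Inverse similarity transformation: ground coordinates (CS_a) to local coordinates (CS_b).

    R_ba is the rotation matrix from CS_b to CS_a, t_a the origin of CS_b expressed in CS_a.
    """
    return (1.0 / lam) * R_ba.T @ (np.asarray(P_a) - np.asarray(t_a))
```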
Linearizing the functional model (A29) and substituting the Jacobian matrix in (A28), the least squares system becomes:
$$
\begin{bmatrix}
\left(\frac{\partial x_{P_i}^b}{\partial \lambda_b^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial Z_{0_b}^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial \omega_b^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial \varphi_b^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial \kappa_b^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial X_{P_i}^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial Y_{P_i}^a}\right)_0 & \left(\frac{\partial x_{P_i}^b}{\partial Z_{P_i}^a}\right)_0\\[2pt]
\left(\frac{\partial y_{P_i}^b}{\partial \lambda_b^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial Z_{0_b}^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial \omega_b^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial \varphi_b^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial \kappa_b^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial X_{P_i}^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial Y_{P_i}^a}\right)_0 & \left(\frac{\partial y_{P_i}^b}{\partial Z_{P_i}^a}\right)_0\\[2pt]
\left(\frac{\partial z_{P_i}^b}{\partial \lambda_b^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial X_{0_b}^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial Y_{0_b}^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial Z_{0_b}^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial \omega_b^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial \varphi_b^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial \kappa_b^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial X_{P_i}^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial Y_{P_i}^a}\right)_0 & \left(\frac{\partial z_{P_i}^b}{\partial Z_{P_i}^a}\right)_0
\end{bmatrix}
\begin{bmatrix}
\mathrm{d}\lambda\\ \mathrm{d}X_{0_b}^a\\ \mathrm{d}Y_{0_b}^a\\ \mathrm{d}Z_{0_b}^a\\ \mathrm{d}\omega\\ \mathrm{d}\varphi\\ \mathrm{d}\kappa\\ \mathrm{d}X_{P_i}^a\\ \mathrm{d}Y_{P_i}^a\\ \mathrm{d}Z_{P_i}^a
\end{bmatrix}
=
\begin{bmatrix}
x_{P_i}^b - {g_1}_b^a(X_0)\\
y_{P_i}^b - {g_2}_b^a(X_0)\\
z_{P_i}^b - {g_3}_b^a(X_0)
\end{bmatrix}
$$
where the partial derivatives are (for compactness, the subscripts and superscripts $_b^a$ on the transformation parameters $\lambda$, $\omega$, $\varphi$, $\kappa$ are omitted):
$$
\begin{aligned}
\frac{\partial x_{P_i}^b}{\partial \lambda} &= -\frac{1}{\lambda^2}\Big[(X_{P_i}^a - X_{0_b}^a)\,c\varphi c\kappa + (Y_{P_i}^a - Y_{0_b}^a)(c\omega s\kappa + s\omega s\varphi c\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(s\omega s\kappa - c\omega s\varphi c\kappa)\Big]\\
\frac{\partial y_{P_i}^b}{\partial \lambda} &= -\frac{1}{\lambda^2}\Big[-(X_{P_i}^a - X_{0_b}^a)\,c\varphi s\kappa + (Y_{P_i}^a - Y_{0_b}^a)(c\omega c\kappa - s\omega s\varphi s\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(s\omega c\kappa + c\omega s\varphi s\kappa)\Big]\\
\frac{\partial z_{P_i}^b}{\partial \lambda} &= -\frac{1}{\lambda^2}\Big[(X_{P_i}^a - X_{0_b}^a)\,s\varphi - (Y_{P_i}^a - Y_{0_b}^a)\,s\omega c\varphi + (Z_{P_i}^a - Z_{0_b}^a)\,c\omega c\varphi\Big]\\
\frac{\partial x_{P_i}^b}{\partial X_{0_b}^a} &= -\frac{1}{\lambda}\,c\varphi c\kappa \qquad
\frac{\partial y_{P_i}^b}{\partial X_{0_b}^a} = \frac{1}{\lambda}\,c\varphi s\kappa \qquad
\frac{\partial z_{P_i}^b}{\partial X_{0_b}^a} = -\frac{1}{\lambda}\,s\varphi\\
\frac{\partial x_{P_i}^b}{\partial Y_{0_b}^a} &= -\frac{1}{\lambda}\,(c\omega s\kappa + s\omega s\varphi c\kappa) \qquad
\frac{\partial y_{P_i}^b}{\partial Y_{0_b}^a} = -\frac{1}{\lambda}\,(c\omega c\kappa - s\omega s\varphi s\kappa) \qquad
\frac{\partial z_{P_i}^b}{\partial Y_{0_b}^a} = \frac{1}{\lambda}\,s\omega c\varphi\\
\frac{\partial x_{P_i}^b}{\partial Z_{0_b}^a} &= -\frac{1}{\lambda}\,(s\omega s\kappa - c\omega s\varphi c\kappa) \qquad
\frac{\partial y_{P_i}^b}{\partial Z_{0_b}^a} = -\frac{1}{\lambda}\,(s\omega c\kappa + c\omega s\varphi s\kappa) \qquad
\frac{\partial z_{P_i}^b}{\partial Z_{0_b}^a} = -\frac{1}{\lambda}\,c\omega c\varphi\\
\frac{\partial x_{P_i}^b}{\partial \omega} &= \frac{1}{\lambda}\Big[(Y_{P_i}^a - Y_{0_b}^a)(-s\omega s\kappa + c\omega s\varphi c\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(c\omega s\kappa + s\omega s\varphi c\kappa)\Big]\\
\frac{\partial y_{P_i}^b}{\partial \omega} &= \frac{1}{\lambda}\Big[-(Y_{P_i}^a - Y_{0_b}^a)(s\omega c\kappa + c\omega s\varphi s\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(c\omega c\kappa - s\omega s\varphi s\kappa)\Big]\\
\frac{\partial z_{P_i}^b}{\partial \omega} &= \frac{1}{\lambda}\Big[-(Y_{P_i}^a - Y_{0_b}^a)\,c\omega c\varphi - (Z_{P_i}^a - Z_{0_b}^a)\,s\omega c\varphi\Big]\\
\frac{\partial x_{P_i}^b}{\partial \varphi} &= \frac{1}{\lambda}\Big[-(X_{P_i}^a - X_{0_b}^a)\,s\varphi c\kappa + (Y_{P_i}^a - Y_{0_b}^a)\,s\omega c\varphi c\kappa - (Z_{P_i}^a - Z_{0_b}^a)\,c\omega c\varphi c\kappa\Big]\\
\frac{\partial y_{P_i}^b}{\partial \varphi} &= \frac{1}{\lambda}\Big[(X_{P_i}^a - X_{0_b}^a)\,s\varphi s\kappa - (Y_{P_i}^a - Y_{0_b}^a)\,s\omega c\varphi s\kappa + (Z_{P_i}^a - Z_{0_b}^a)\,c\omega c\varphi s\kappa\Big]\\
\frac{\partial z_{P_i}^b}{\partial \varphi} &= \frac{1}{\lambda}\Big[(X_{P_i}^a - X_{0_b}^a)\,c\varphi + (Y_{P_i}^a - Y_{0_b}^a)\,s\omega s\varphi - (Z_{P_i}^a - Z_{0_b}^a)\,c\omega s\varphi\Big]\\
\frac{\partial x_{P_i}^b}{\partial \kappa} &= \frac{1}{\lambda}\Big[-(X_{P_i}^a - X_{0_b}^a)\,c\varphi s\kappa + (Y_{P_i}^a - Y_{0_b}^a)(c\omega c\kappa - s\omega s\varphi s\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(s\omega c\kappa + c\omega s\varphi s\kappa)\Big]\\
\frac{\partial y_{P_i}^b}{\partial \kappa} &= \frac{1}{\lambda}\Big[-(X_{P_i}^a - X_{0_b}^a)\,c\varphi c\kappa - (Y_{P_i}^a - Y_{0_b}^a)(c\omega s\kappa + s\omega s\varphi c\kappa) + (Z_{P_i}^a - Z_{0_b}^a)(-s\omega s\kappa + c\omega s\varphi c\kappa)\Big]\\
\frac{\partial z_{P_i}^b}{\partial \kappa} &= 0\\
\frac{\partial x_{P_i}^b}{\partial X_{P_i}^a} &= \frac{1}{\lambda}\,c\varphi c\kappa \qquad
\frac{\partial y_{P_i}^b}{\partial X_{P_i}^a} = -\frac{1}{\lambda}\,c\varphi s\kappa \qquad
\frac{\partial z_{P_i}^b}{\partial X_{P_i}^a} = \frac{1}{\lambda}\,s\varphi\\
\frac{\partial x_{P_i}^b}{\partial Y_{P_i}^a} &= \frac{1}{\lambda}\,(c\omega s\kappa + s\omega s\varphi c\kappa) \qquad
\frac{\partial y_{P_i}^b}{\partial Y_{P_i}^a} = \frac{1}{\lambda}\,(c\omega c\kappa - s\omega s\varphi s\kappa) \qquad
\frac{\partial z_{P_i}^b}{\partial Y_{P_i}^a} = -\frac{1}{\lambda}\,s\omega c\varphi\\
\frac{\partial x_{P_i}^b}{\partial Z_{P_i}^a} &= \frac{1}{\lambda}\,(s\omega s\kappa - c\omega s\varphi c\kappa) \qquad
\frac{\partial y_{P_i}^b}{\partial Z_{P_i}^a} = \frac{1}{\lambda}\,(s\omega c\kappa + c\omega s\varphi s\kappa) \qquad
\frac{\partial z_{P_i}^b}{\partial Z_{P_i}^a} = \frac{1}{\lambda}\,c\omega c\varphi
\end{aligned}
$$
It is worth noting that:
(i) Equations (A28) and (A29) are to be written for each observation (target coordinates on the rods) measured in each local coordinate system/independent model, and
(ii) each target, being visible in two models (one rod and either the above-water or the underwater model), provides six observations.
Consequently, the final system (A30) will contain the partial derivatives of the functional model, written for all the separate models, with respect to the (i) unknown transformation parameters from the local model/coordinate system to the ground system and (ii) target coordinates expressed in the ground system.
In a standard aerial block adjustment, the coordinate system defined by the GCPs is considered the reference system and, normally, a constrained solution would be implemented. In the case under consideration, it would be an arbitrary choice to select one model as reference rather than the other. A soft-constrained solution could mitigate this. However, even if the observations coming from the photogrammetric model selected as ground system are weighted according to their variances, the choice of the datum influences the final result in terms of variance of the unknowns, casting all of the error into the uncertainty of the other points and transformation parameters. To overcome this, a free network or inner-constraints solution is implemented. Free network adjustment eliminates the need to fix the reference system and makes each observation contribute to the final result in the most adequate way, i.e., weighted according to its own original uncertainty. It corresponds to finding an arbitrary Cartesian coordinate system, out of the derived photogrammetric models of the above and underwater parts and of the rods, that provides the inner accuracy, or favorable figures of accuracy [13], for the point coordinates. Free network adjustment solves the so-called datum problem or Zero-Order Design (ZOD) [45] that arises in surveying and, in particular, in non-topographic close-range photogrammetry when, in the absence of GCPs, the seven parameters of the ground coordinate system are not estimable from the measurements [46]. In such a case, the normal-equation matrix N (A20) has a rank deficiency of seven, exactly equal to the number of datum elements that must be defined. A comprehensive synopsis of the methods proposed in the literature specifically for the datum problem in close-range photogrammetry can be found in [47].
The datum problem involves the choice of an optimal reference system, here indicated as $CS_O$, for the object space coordinates [48], given the geometry of the points and the precision of the observations. In other words, once the design matrix $A$ and the weight matrix $W$ are fixed, the selection of an appropriate datum turns into finding an optimum form of the cofactor matrix $Q_{\hat{x}}$, that is, into minimizing the mean variance of the unknowns. In photogrammetry, the free network solution is usually restricted to the meaningful unknowns, i.e., the object point coordinates, the exterior orientation parameters being regarded as “nuisance parameters” [47], so that the ZOD problem can be summarized as [48]:
$$
\bar{\sigma}_{\hat{x}_P}^2 = \frac{\sigma_0^2}{3n}\,\mathrm{tr}\,Q_{\hat{x}_P} \;\rightarrow\; \mathrm{minimum}
$$
where:
  • $\bar{\sigma}_{\hat{x}_P}^2$ is the mean variance of the object point coordinates.
  • $n$ is the number of points.
  • $Q_{\hat{x}_P}$ is the cofactor matrix of the object point coordinates.
To eliminate the rank deficiency of the normal-equation matrix, seven additional equations are added, according to the procedure outlined in [13]. Starting from the functional model above, the LS system in matrix notation is derived:
$$
\begin{bmatrix}
A_T^{aO} & 0 & 0 & 0 & A_P^{aO}\\
0 & A_T^{bO} & 0 & 0 & A_P^{bO}\\
0 & 0 & A_T^{cO} & 0 & A_P^{cO}\\
0 & 0 & 0 & A_T^{dO} & A_P^{dO}\\
0 & 0 & 0 & 0 & A_{FN}
\end{bmatrix}
\begin{bmatrix}
\hat{x}_T^{aO}\\ \hat{x}_T^{bO}\\ \hat{x}_T^{cO}\\ \hat{x}_T^{dO}\\ \hat{x}_P
\end{bmatrix}
=
\begin{bmatrix}
l^{aO}\\ l^{bO}\\ l^{cO}\\ l^{dO}\\ 0
\end{bmatrix}
$$
where:
  • $\left[A_T^{aO}\;\; A_T^{bO}\;\; A_T^{cO}\;\; A_T^{dO}\right]^T$ are sub-blocks of the design matrix $A$ containing the partial derivatives of the observation equations with respect to the transformation parameters from the local coordinate systems $CS_a$, $CS_b$, $CS_c$ and $CS_d$ to the final free-network datum $CS_O$. Each sub-block features a number of rows equal to three times the number of targets visible in the corresponding model (three coordinate observations per target) and seven columns, i.e., the number of unknown transformation parameters.
  • $\left[A_P^{aO}\;\; A_P^{bO}\;\; A_P^{cO}\;\; A_P^{dO}\right]^T$ are sub-blocks of the design matrix $A$ containing the partial derivatives of the observation equations with respect to the coordinates of the points in the final coordinate system $CS_O$. Evidently, the sub-block related to one model will display zero elements in correspondence with the points that are not visible in it, and its number of rows will be equal to three times the number of targets measured in that model.
The last rows of the design matrix feature the sub-block $A_{FN}$, which contains the coefficients of the additional constraint equations for the elimination of the datum deficiency:
$$
\begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 & \cdots\\
0 & 1 & 0 & 0 & 1 & 0 & \cdots\\
0 & 0 & 1 & 0 & 0 & 1 & \cdots\\
0 & -Z_{0_1}^O & Y_{0_1}^O & 0 & -Z_{0_2}^O & Y_{0_2}^O & \cdots\\
Z_{0_1}^O & 0 & -X_{0_1}^O & Z_{0_2}^O & 0 & -X_{0_2}^O & \cdots\\
-Y_{0_1}^O & X_{0_1}^O & 0 & -Y_{0_2}^O & X_{0_2}^O & 0 & \cdots\\
X_{0_1}^O & Y_{0_1}^O & Z_{0_1}^O & X_{0_2}^O & Y_{0_2}^O & Z_{0_2}^O & \cdots
\end{bmatrix}
\begin{bmatrix}
\mathrm{d}X_1^O\\ \mathrm{d}Y_1^O\\ \mathrm{d}Z_1^O\\ \mathrm{d}X_2^O\\ \mathrm{d}Y_2^O\\ \mathrm{d}Z_2^O\\ \vdots
\end{bmatrix}
=
\begin{bmatrix}
0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0
\end{bmatrix}
$$
The first three equations mean that the sum of the corrections to the approximate coordinates is zero, i.e., the centroid of the approximate coordinates is the same as the centroid of the adjusted coordinates. The further equations, three for the rotations and one for the scale factor, originate from the general relation for a spatial similarity transformation as in Equation (1).
The inner constraint solution can be subject to a geometric interpretation: when advancing from one iteration to the next, there will be no shift, rotation, or scale change between the approximate and the refined coordinate positions [42].
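A compact sketch of how the inner-constraint sub-block can be assembled from the approximate point coordinates is given below. It is illustrative only; the function name and the stacking order of the point corrections ($[\mathrm{d}X_1, \mathrm{d}Y_1, \mathrm{d}Z_1, \mathrm{d}X_2, \ldots]^T$) are assumptions made here.

```python
import numpy as np

def inner_constraints(approx_pts):
    """Assemble the 7 x 3n inner-constraint sub-block A_FN from the approximate point coordinates.

    Row order: three translations, three rotations, one scale, matching the constraint
    equations above; point corrections are assumed stacked as [dX1, dY1, dZ1, dX2, ...].
    """
    n = approx_pts.shape[0]
    A_fn = np.zeros((7, 3 * n))
    for i, (x, y, z) in enumerate(approx_pts):
        c = 3 * i
        A_fn[0, c] = A_fn[1, c + 1] = A_fn[2, c + 2] = 1.0   # no net shift of the centroid
        A_fn[3, c + 1], A_fn[3, c + 2] = -z, y               # no net rotation about X
        A_fn[4, c], A_fn[4, c + 2] = z, -x                   # no net rotation about Y
        A_fn[5, c], A_fn[5, c + 1] = -y, x                   # no net rotation about Z
        A_fn[6, c:c + 3] = (x, y, z)                         # no net change of scale
    return A_fn
```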
The dimensions of the LS system (A33) are the following:
(i) The total number of rows of the design matrix is equal to the sum of all the observations, i.e., three coordinate observations for each target visible in each of the photogrammetric models, plus the seven constraint equations for the free network solution. It corresponds to the length of the reduced observation vector plus the zero elements of the constraint equations:
$$
\#\mathrm{rows}\,A = \#\mathrm{elements}\,[l, 0]^T = \left(3 \times \#\mathrm{targets}_{CS_a} + 3 \times \#\mathrm{targets}_{CS_b} + 3 \times \#\mathrm{targets}_{CS_c} + 3 \times \#\mathrm{targets}_{CS_d}\right) + 7
$$
(ii) The number of columns of the design matrix is equal to the number of elements of the correction vector of the unknowns, i.e., the seven transformation parameters of all the systems plus the target coordinates in the free-network datum ground system $CS_O$:
$$
\#\mathrm{columns}\,A = \#\mathrm{elements}\,\hat{X} = 7 \times \#\mathrm{models} + 3 \times \#\mathrm{targets}
$$
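As a purely illustrative example (the figures are hypothetical and do not refer to the case studies): with the four models $CS_a, \ldots, CS_d$, twelve distinct targets, and each target visible in exactly two models, the per-model target sightings amount to 24, so the design matrix has $3 \times 24 + 7 = 79$ rows and $7 \times 4 + 3 \times 12 = 64$ columns.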

References

  1. Floating Wind Turbine. Available online: https://en.wikipedia.org/wiki/Floating_wind_turbine (accessed on 15 January 2020).
  2. Berkas: Bridge over Øresund. Available online: https://id.wikipedia.org/wiki/Berkas:Bridge_over_%C3%98resund.jpg (accessed on 4 February 2020).
  3. Oil Platform in the North Sea. Available online: https://commons.wikimedia.org/wiki/File:Oil_platform_in_the_North_Sea.jpg (accessed on 4 February 2020).
  4. MV Princess of the Stars. Available online: https://en.wikipedia.org/wiki/MV_Princess_of_the_Stars (accessed on 15 January 2020).
  5. Switzerland-03436 Chapel Bridge. Available online: https://www.flickr.com/photos/archer10/23215035294/in/photostream/ (accessed on 15 January 2020).
  6. Grotte di Nettuno (1). Available online: https://www.flickr.com/photos/archer10/23215035294/in/photostream/ (accessed on 15 January 2020).
  7. Drap, P.; Merad, D.; Boï, J.M.; Boubguira, W.; Mahiddine, A.; Chemisky, B.; Seguin, E.; Alcala, F.; Bianchimani, O. ROV-3D, 3D Underwater Survey Combining Optical and Acoustic Sensor; VAST: Stoke-on-Trent, UK, 2011; pp. 177–184.
  8. Moisan, E.; Charbonnier, P.; Foucher, P.; Grussenmeyer, P.; Guillemin, S.; Koehl, M. Adjustment of Sonar and Laser Acquisition Data for Building the 3D Reference Model of a Canal Tunnel. Sensors 2015, 15, 31180–31204.
  9. Van Der Lucht, J.; Bleier, M.; Leutert, F.; Schilling, K.; Nüchter, A. Structured-Light Based 3D Laser Scanning of Semi-Submerged Structures. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 287–294.
  10. Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F. A photogrammetric approach to survey floating and semi-submerged objects. SPIE Opt. Metrol. 2013, 8791, 87910.
  11. Menna, F.; Nocerino, E.; Remondino, F. Photogrammetric modelling of submerged structures: Influence of underwater environment and lens ports on three-dimensional (3D) measurements. In Latest Developments in Reality-Based 3D Surveying and Modelling; MDPI: Basel, Switzerland, 2018; pp. 279–303.
  12. Nocerino, E.; Menna, F.; Farella, E.; Remondino, F. 3D Virtualization of an Underground Semi-Submerged Cave System. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 857–864.
  13. Kraus, K.; Waldhäusl, P. Photogrammetry. Fundamentals and Standard Processes; Dümmler Verlag: Bonn, Germany, 1993; Volume 1.
  14. Faig, W. Aerotriangulation; Lecture Notes No. 40, Geodesy and Geomatics Engineering, UNB, 1976. Available online: http://www2.unb.ca/gge/Pubs/LN40.pdf (accessed on 11 February 2020).
  15. Shortis, M.; Harvey, E.; Abdo, D. A Review of Underwater Stereo-Image Measurement for Marine Biology and Ecology Applications. Oceanogr. Mar. Biol. 2009, 2725, 257–292.
  16. Menna, F.; Nocerino, E.; Remondino, F.; Shortis, M. Investigation of a consumer-grade digital stereo camera. In Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection; Int. Soc. Opt. Photonics 2013, 8791, 879104.
  17. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Seguí, A.E. Camera Calibration with Baseline Distance Constraints. Photogramm. Rec. 2010, 25, 140–158.
  18. Collision of Costa Concordia DSC4191. Available online: https://commons.wikimedia.org/wiki/File:Collision_of_Costa_Concordia_DSC4191.jpg (accessed on 4 February 2020).
  19. Costa Barrier. Available online: https://commons.wikimedia.org/wiki/File:Costa-barrier.svg (accessed on 4 February 2020).
  20. Nocerino, E.; Menna, F.; Remondino, F. Accuracy of typical photogrammetric networks in cultural heritage 3D modeling projects. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 465–472.
  21. James, M.R.; Robson, S. Mitigating systematic error in topographic models derived from UAV and ground-based image networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420.
  22. Nocerino, E.; Menna, F.; Remondino, F.; Saleri, R. Accuracy and block deformation analysis in automatic UAV and terrestrial photogrammetry—Lesson learnt. In Proceedings of the 24th International CIPA Symposium, Strasbourg, France, 2–6 September 2013; pp. 203–208.
  23. Rupnik, E.; Nex, F.; Toschi, I.; Remondino, F. Aerial multi-camera systems: Accuracy and block triangulation issues. ISPRS J. Photogramm. Remote Sens. 2015, 101, 233–246.
  24. Agisoft. Available online: https://www.agisoft.com/ (accessed on 4 February 2020).
  25. Pix4D. Available online: https://www.pix4d.com/ (accessed on 4 February 2020).
  26. 3DF ZEPHYR. Available online: https://www.3dflow.net/3df-zephyr-pro-3d-models-from-photos/ (accessed on 4 February 2020).
  27. MicMac. Available online: https://micmac.ensg.eu/index.php/Accueil (accessed on 4 February 2020).
  28. EOS PhotoModeler. Available online: https://www.photomodeler.com/ (accessed on 4 February 2020).
  29. Photometrix Australis. Available online: https://www.photometrix.com.au/ (accessed on 4 February 2020).
  30. Forlani, G. Sperimentazione del nuovo programma CALGE dell'ITM. Boll. Soc. Ital. Topogr. Fotogramm. (SIFET) 1986, 2, 63–72.
  31. Orient. Available online: https://photo.geo.tuwien.ac.at/photo/software/orient-orpheus/introduction/orpheus/ (accessed on 4 February 2020).
  32. MATLAB. Available online: www.mathworks.com/products/matlab.html (accessed on 4 February 2020).
  33. CloudCompare. Available online: http://www.cloudcompare.org/ (accessed on 15 January 2020).
  34. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the 4th Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006.
  35. Rodríguez-Gonzálvez, P.; Nocerino, E.; Menna, F.; Minto, S.; Remondino, F. 3D Surveying & Modeling of Underground Passages in WWI Fortifications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 17–24.
  36. Nocerino, E.; Menna, F.; Troisi, S. High accuracy low-cost videogrammetric system: An application to 6 DOF estimation of ship models. SPIE Opt. Metrol. 2013, 8791, 87910.
  37. Nocerino, E.; Nawaf, M.M.; Saccone, M.; Ellefi, M.B.; Pasquet, J.; Royer, J.-P.; Drap, P. Multi-camera system calibration of a low-cost remotely operated vehicle for underwater cave exploration. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 329–337.
  38. Menna, F.; Nocerino, E.; Fassi, F.; Remondino, F. Geometric and Optic Characterization of a Hemispherical Dome Port for Underwater Photogrammetry. Sensors 2016, 16, 48.
  39. Brinker, R.C.; Minnick, R. (Eds.) The Surveying Handbook; Springer: Berlin/Heidelberg, Germany, 1995.
  40. Ghilani, C.D. Adjustment Computations: Spatial Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2010.
  41. Mikhail, E.M.; Ackermann, F.E. Observations and Least Squares; IEP: New York, NY, USA, 1976.
  42. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; John Wiley & Sons: Hoboken, NJ, USA, 2001.
  43. Beinat, A.; Crosilla, F. Generalized Procrustes analysis for size and shape 3-D object reconstructions. In Optical 3-D Measurement Techniques V; Gruen, A., Kahmen, H., Eds.; Wichmann Verlag: Vienna, Austria, 2001; pp. 345–353.
  44. Faig, W. Aerial Triangulation and Digital Mapping: Lecture Notes for Workshops Given in 1984–85; School of Surveying, University of New South Wales: Sydney, Australia, 1986.
  45. Grafarend, E.W. Optimization of Geodetic Networks. Can. Surv. 1974, 28, 716–723.
  46. Cooper, M.A.R.; Cross, P.A. Statistical concepts and their application in photogrammetry and surveying. Photogramm. Rec. 1988, 12, 637–663.
  47. Dermanis, A. The photogrammetric inner constraints. ISPRS J. Photogramm. Remote Sens. 1994, 49, 25–39.
  48. Fraser, C.S. Network design considerations for non-topographic photogrammetry. Photogramm. Eng. Remote Sens. 1984, 50, 1115–1126.
1. Also known as a seven-parameter rigid transformation and corresponding to a 3D Helmert transformation.
2. The seven transformation parameters are three translations, which represent the coordinates of the origin of one coordinate system with respect to the other, three rotation angles, and one scale parameter, if an isotropic scale factor can be assumed for the three coordinate axes (rigid body assumption). In this case, the transformation is also known as conformal or isogonal.
3. This rotation matrix is the transpose of (3).
4. Several SfM tools [24,25,26,27] for step 1 and photogrammetric software packages [28,29,30,31] for step 3 were tested. The results obtained from the different software packages were not significantly different; thus, Refs. [24] and [28] were selected for their flexibility and easier accessibility and integration in an own-developed procedure written in [32]. Step 2 and the alignment procedure were implemented in [32] by the authors. Step 4 was performed in [24] and the meshing in [33].
5. The stereo-rig relative orientation procedure is implemented in [32], while the images acquired in the cave were oriented in [24], where the stereo-rig relative constraint was used to link the above and underwater parts. Dense matching, meshing, and texturing were also performed in [24].
Figure 1. Examples of partially submerged manmade and natural objects that might need accurate three-dimensional (3D) measurements for documentation, inspection, and monitoring purposes. (a) Floating wind turbine [1]; (b) bridge over the Great Belt in Denmark [2]; (c) oil platform in the North Sea [3]; (d) the merchant vessel (MV) Princess of the Stars, capsized during typhoon Fengshen in 2008 [4]; (e) Chapel Bridge in Switzerland [5]; (f) Nettuno Caves in Sardinia, Italy [6].
Figure 2. Example of a damaged boat that needs accurate 3D reverse modelling of the damaged part located underwater. The boat, a semi-submerged object in floating condition, can be surveyed with either of the two proposed methods: method one (a) uses special linking targets fixed on the hull (e.g., with magnets or suction cups); method two (b) uses a synchronized stereo rig with one camera below and one camera above the water.
Figure 3. Example of application of method 1 to the two photogrammetric surveys carried out above the water (a) and below the water (b), with the final merged 3D model (c).
Figure 4. Example of a stereo camera rig made of two GoPro cameras (a), used in method 2 to simultaneously survey objects below and above the water (b).
Figure 5. The Costa Concordia ran aground close to Giglio Porto, Italy, after the collision with a reef on 12 January 2012 (a) [18]. The shipwreck leaning with the starboard side against the seafloor, with a final inclination of about 70 degrees (b) [19].
Figure 6. Rods and scale bars on the Costa Concordia hull. As explained in the text, five linking targets (named orientation devices, OD, (e)) were positioned on the shipwreck, with a minimum of one plate and a maximum of two plates both above the water (a,b) and underwater (c,d). Scale bars (SB) were also attached both above the water and underwater (e).
Figure 7. Effect of the tide: a part of the hull is visible underwater when the tide is high and above the water line when the tide is low.
Figure 8. Camera network of the complete above-the-water survey, made up of four different acquisitions.
Figure 9. Camera network of the complete underwater survey, made up of four different strips.
Figure 10. Sparse tie point clouds of the above and underwater photogrammetric models before (a,c) and after (b,d) the filtering procedure.
Figure 11. Details of the dense point clouds from the dense image matching process for the above (a) and underwater (b) models.
Figure 12. Alignment of all the ODs on the above-the-water (a) and underwater (d) photogrammetric models. (b,c) show details of the alignment of OD-F.
Figure 13. Spatial residuals in meters in the XY plane for the rigid transformation from the underwater to the above-the-water coordinate system without scale factor (a) and with the alignment through independent model adjustment (b). Vector length exaggeration factor: 1000.
Figure 14. Alignment of all the ODs on the above-the-water (a) and underwater (d) photogrammetric models. (b,c) show details of the alignment of OD-F.
Figure 15. Merged mesh model, with the sea level shown in blue.
Figure 16. Example of the stereo camera rig mounting two GoPro Hero4 Black Edition cameras in their pressure housings (a) and during a pre-calibration in the laboratory in the same setup used for Grotta Giusti (underwater lights on the two handles) (b).
Figure 17. The calibration area (testfield) set up in Grotta Giusti (a); camera network, with the right and left camera positions in red and blue, respectively (b).
Figure 18. Example of synchronized images taken with the stereo rig across the water surface, i.e., one camera looking below the water surface and the other above the water. The images show two calibrated rods installed for accuracy assessment evaluations.
Figure 19. Results of the accuracy analysis on the pre-calibrated rod.
Figure 20. 3D visualization of the surveyed semi-submerged chamber, with the above-water part in orange and the underwater part in cyan. The camera network is also shown, with the cameras below and above the water in red and blue, respectively.
Table 1. Filtering of the sparse tie point clouds of the above and underwater photogrammetric models.

                                              Original Tie Points    Filtered Tie Points    Percentage Reduction
Above the water
Number of three-dimensional (3D) points       151,824                5689                   96%
Number of image observations                  940,636                56,473                 94%
Average number of tie points per image                               282
Underwater
Number of 3D points                           420,328                11,313                 97%
Number of image observations                  1,070,626              53,263                 95%
Average number of tie points per image                               67
Table 2. Characteristics of the filtered sparse tie point clouds of the above and underwater photogrammetric models.

                        Number of Intersecting Optical Rays    Number of Strips    Intersection Angle
Above the water
Mean                    10                                      2                   41°
Standard deviation      7                                       1                   19°
Max                     32                                      4                   90°
Underwater
Mean                    5                                       2                   28°
Standard deviation      4                                       1                   15°
Max                     38                                      4                   89°
Table 3. Statistics of the selected bundle adjustment processes for the above-the-water and underwater surveys.

                                                             Above-the-Water    Underwater
Object space: XYZ precision vector length [m]
  Mean                                                       0.0026             0.0027
  Stdv                                                       0.0008             0.0010
  Max                                                        0.0051             0.0060
Image space: root mean square reprojection error [pixel]
  Mean                                                       0.336              0.449
  Stdv                                                       0.137              0.134
  Max                                                        0.826              1.129
Image space: max residual error [pixel]
  Mean                                                       0.523              0.741
  Stdv                                                       0.226              0.234
  Max                                                        1.287              1.683
Table 4. Results of the transformations between the underwater and the above-the-water coordinate systems.

                                  RMSE X [m]    RMSE Y [m]    RMSE Z [m]    RMSE Length [m]    RMSE Mean (Magnitude) [m]    Max Residual [m]
Coarse alignment                  0.0051        0.003         0.0164        0.0174             0.0159                       0.0269
Independent model adjustment      0.003         0.0008        0.0011        0.0014             0.0011                       0.0037
