Article

A Combined Physical and Mathematical Calibration Method for Low-Cost Cameras in the Air and Underwater Environment

1 College of Information Technology, Shanghai Ocean University, Shanghai 201306, China
2 School of Electronic and Information Engineering, Shanghai Dianji University, Shanghai 201306, China
3 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
4 School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 2041; https://doi.org/10.3390/s23042041
Submission received: 8 January 2023 / Revised: 1 February 2023 / Accepted: 8 February 2023 / Published: 11 February 2023

Abstract:
Low-cost camera calibration is vital in air and underwater photogrammetric applications. However, the various lens distortions and the influence of the underwater environment are difficult to cover with a universal distortion compensation model, and residual distortions may remain after conventional calibration. In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras that can adapt to both in-air and underwater environments. The commonly used physical distortion models are integrated to describe the image distortions. The combination is a high-order polynomial whose terms can be treated as basis functions that successively approximate the image deformation from the point of view of mathematical approximation. The calibration process is repeated until certain criteria are met and the distortions are reduced to a minimum. In the end, several sets of distortion parameters and stable camera interior orientation (IO) parameters act as the final camera calibration results. The Canon and GoPro in-air calibration experiments show that the GoPro exhibits distortions seven times larger than the Canon. Most Canon distortions are described by the Australis model, while most decentering distortions of the GoPro remain. Using the proposed method, all the Canon and GoPro distortions decrease to nearly 0 after four calibrations, and stable camera IO parameters are obtained. The GoPro Hero 5 Black underwater calibration yields four sets of distortion parameters and stable camera IO parameters after four calibrations. The calibration results differ between the underwater environment and air owing to refractive and asymmetric environment effects. In summary, the proposed method improves the accuracy compared with the conventional method and offers a flexible way to calibrate low-cost cameras for highly accurate in-air and underwater measurement and 3D modeling applications.

1. Introduction

Due to their flexibility and affordability, low-cost cameras have been widely used in close-range photogrammetric applications in air and underwater environments. Typical applications include three-dimensional reconstruction of important cultural and religious sites [1], deformation monitoring of industrial components [2], topographic mapping of shallow water areas [3], and accurate underwater measurement of fish [4]. Camera calibration is essential in the image reconstruction process: even small distortion errors prevent the mathematical relationship between image space and object space from being established rigorously, which degrades the accuracy of the spatial coordinates. Accordingly, effective distortion compensation models are always desired to achieve the highest calibration precision.
Camera calibration using the bundle adjustment method was developed in the early 1970s [5], was employed in close-range photogrammetry, and reached maturity in the 1980s [6,7,8]. To obtain high-precision calibration results, additional parameters are introduced to compensate for the systematic lens distortion errors [9,10,11]. Some researchers have applied distortion models with clear physical meaning. For example, Brown [5] considered the radial and decentering distortion of the commercial Schneider Symmar lens, and his model became an accepted standard for analytical camera calibration; Fraser [12] found that four principal sources (radial distortion, decentering distortion, image plane unflatness, and in-plane image distortion) should be considered in the calibration process, and introduced the corresponding mathematical models in detail; Remondino and Fraser [13] explained how chromatic aberration affects the radial distortion model of consumer-grade digital color cameras and noted that camera calibration remains a concern as camera types multiply; Luber [14] presented a statistical procedure to choose the most appropriate model automatically, suitable for wide-angle fisheye, spherical, and perspective lenses with a calibration precision of 1 pixel. Other studies suggested that camera calibration can be treated as a mathematical approximation problem [15]. For example, Tang et al. [16,17] verified that orthogonal Legendre and Fourier polynomials are rigorous and efficient for calibrating airborne cameras.
Compared with in-air calibration, underwater camera calibration must consider the refraction of light traveling through the water, the waterproof housing, and the air, rather than the lens distortions only [18]. Research shows two approaches to underwater camera calibration. The first is based on refraction geometry, in which light propagation through multiple media is modeled and an imaging geometric model incorporating refraction is established [19]. Lavest [20] considered the effect of refraction on light ray tracing and proposed a correction to the focal length and a rescaling of the distortion parameters as a function of the refractive index. Underwater refractive geometry can be regarded as a perspective camera looking through parallel flat refractive media, characterized by the underlying axis, the number of layers, their distances from the camera, and the refractive indices [21,22,23]. The strict refraction imaging model is complicated, and assumptions about the underwater environment are always required [24]. The second approach is based on compensating for the refraction effects: the standard pinhole-camera model describes the object-image relationship, and the systematic refraction errors are additionally compensated in the calibration process [24]. Harvey and Shortis [25,26] performed calibration to compensate for the refraction effect on the light path. However, some systematic errors are never incorporated into the model.
As mentioned above, both the physical and the mathematical approximation models can achieve successful systematic error compensation and accurate calibration for most cameras used in photogrammetry today [27,28,29]. However, a wide variety of low-cost digital cameras, owing to their utility and flexibility, are nowadays employed across air and underwater photogrammetric applications, so the influence of light refraction must be considered during underwater camera calibration. The various lens distortions and the influence of the underwater environment are difficult to cover with a universal mathematical model, and residual distortions may remain after calibration with conventional methods. Yet some precise engineering applications require high accuracies (0.1 pixel or better in image space) [30].
In this paper, we propose a combined physical and mathematical camera calibration method for low-cost cameras that can adapt to both in-air and underwater environments. Photogrammetric camera calibration can, to a very large extent, be considered a function approximation problem in mathematics: the distortion can be described by a linear combination of specific basis functions [16,17]. First, the commonly used radial, decentering, and in-plane physical distortion models are applied to describe the image distortions [12]. These models are expressed as high-order polynomials, and their terms can serve as polynomial basis functions. During the in-air or underwater imaging process, the lens distortions, light refraction through multiple media, and other environmental factors jointly influence the image deformation, which these polynomials successively approximate from the point of view of mathematical approximation. Then, the distortion model parameters are estimated with the aid of a calibration fixture. Subsequently, the calibration fixture images are geometrically corrected using the estimated distortion parameters. Next, the calibration process is performed again: the geometrically corrected calibration fixture images are treated as distorted images, and the distortion models are applied again, so that a new set of camera calibration parameters is estimated. Specifically, when the previous camera calibration results (several sets of calibration parameters) are applied to the geometric correction of the calibration fixture images, a strategy is proposed to perform resampling just once from the original calibration fixture images, in order to avoid the image quality degradation of multiple resampling. This process is repeated until certain criteria are met.
In the end, several sets of distortion parameters jointly describe all the camera distortions and, together with the obtained camera interior orientation (IO) parameters, act as the final camera calibration results.
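The repeated-calibration control flow described above can be sketched as a short loop. This is a high-level sketch only; the two callables stand in for the calibration and correction steps, and their interfaces are our own assumption, not part of the paper:

```python
def repeated_calibration(original_images, calibrate, correct_once,
                         max_rounds=10, tol=0.1):
    """Sketch of the proposed repeated calibration loop.

    `calibrate(images)` returns (params, max_residual_distortion);
    `correct_once(original_images, parameter_sets)` resamples just once
    from the ORIGINAL images using all parameter sets estimated so far.
    Both helpers are hypothetical stand-ins for the steps in the text.
    """
    parameter_sets = []
    images = original_images
    for _ in range(max_rounds):
        params, residual = calibrate(images)
        parameter_sets.append(params)
        if residual < tol:  # stopping criterion met
            break
        images = correct_once(original_images, parameter_sets)
    return parameter_sets
```

The loop accumulates one parameter set per round, matching the paper's final result of "several sets of distortion parameters".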
In the following sections, the proposed camera calibration method is introduced in detail, and several experiments of low-cost camera calibration in the air and underwater environments are conducted. The results are analyzed and compared to the conventional camera calibration method (using the Australis model once; hereafter the conventional method).

2. The Combined Physical and Mathematical Camera Calibration Method

The key work in this paper is to develop a combined physical and mathematical camera calibration technique that can calibrate cameras in both air and underwater environments. All the theory involved in the proposed method is introduced below, including the image distortion description, camera parameter estimation, geometric correction of the calibration fixture images, and accuracy assessment (Figure 1).
(1) Image distortion description
The commonly used physical radial, decentering, and in-plane distortion models are integrated to describe the image distortions [12]. Mathematically, they can be regarded as high-order polynomials of $\bar{x}$ and $\bar{y}$ that successively approximate the image distortions arising from the lens distortions, light refraction through multiple media, and other environmental factors, as shown in Equation (1):
$$
\begin{aligned}
\Delta x_i ={}& \bar{x}_i\left[k_{1i}(\bar{x}_i^2+\bar{y}_i^2) + k_{2i}(\bar{x}_i^2+\bar{y}_i^2)^2 + k_{3i}(\bar{x}_i^2+\bar{y}_i^2)^3\right] \\
&+ p_{1i}\left[(\bar{x}_i^2+\bar{y}_i^2) + 2\bar{x}_i^2\right] + 2p_{2i}\bar{x}_i\bar{y}_i + b_{1i}\bar{x}_i + b_{2i}\bar{y}_i \\
\Delta y_i ={}& \bar{y}_i\left[k_{1i}(\bar{x}_i^2+\bar{y}_i^2) + k_{2i}(\bar{x}_i^2+\bar{y}_i^2)^2 + k_{3i}(\bar{x}_i^2+\bar{y}_i^2)^3\right] \\
&+ 2p_{1i}\bar{x}_i\bar{y}_i + p_{2i}\left[(\bar{x}_i^2+\bar{y}_i^2) + 2\bar{y}_i^2\right]
\end{aligned}
$$
where $i = 1, 2, \ldots, N$ indexes the camera calibrations; $\Delta x_i$ and $\Delta y_i$ represent the image distortions in the line and sample directions at the $i$th camera calibration; $k_{1i}$, $k_{2i}$, and $k_{3i}$ are the radial distortion parameters at the $i$th camera calibration; $p_{1i}$ and $p_{2i}$ are the decentering distortion parameters at the $i$th camera calibration; $b_{1i}$ and $b_{2i}$ are the in-plane distortion parameters at the $i$th camera calibration; $\bar{x}_i = x_{i-1} - x_{0i}$ and $\bar{y}_i = y_{i-1} - y_{0i}$, where $x_{0i}$ and $y_{0i}$ are the principal point offset at the $i$th camera calibration and $(x_{i-1}, y_{i-1})$ are the image coordinates on the $(i-1)$th corrected calibration fixture images.
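Equation (1) is straightforward to evaluate. The routine below is a minimal sketch; the parameter names follow the notation above, but the function interface is our own:

```python
def distortion(x_bar, y_bar, k1, k2, k3, p1, p2, b1, b2):
    """Evaluate the combined radial / decentering / in-plane distortion
    model of Equation (1) at principal-point-reduced coordinates."""
    r2 = x_bar ** 2 + y_bar ** 2                 # squared radial distance
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    dx = (x_bar * radial
          + p1 * (r2 + 2 * x_bar ** 2) + 2 * p2 * x_bar * y_bar
          + b1 * x_bar + b2 * y_bar)
    dy = (y_bar * radial
          + 2 * p1 * x_bar * y_bar + p2 * (r2 + 2 * y_bar ** 2))
    return dx, dy
```

For a purely radial lens (only `k1` nonzero), the decentering and in-plane terms vanish and the distortion grows with the cube of the radial distance, as the profiles in Section 3 illustrate.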
(2) Camera calibration parameter estimation
The imaging geometry of digital cameras in photogrammetry can be modeled according to the principle of a pinhole camera, and the well-known collinearity equation [5] is used to depict the perspective transformation between the image space and object space, as shown in Equation (2):
$$
x_{i-1} - x_{0i} + \Delta x_i = -f_i\,\frac{\bar{Z}_{1i}}{\bar{Z}_{3i}}, \qquad
y_{i-1} - y_{0i} + \Delta y_i = -f_i\,\frac{\bar{Z}_{2i}}{\bar{Z}_{3i}}
$$
where $(\bar{Z}_{1i}\ \bar{Z}_{2i}\ \bar{Z}_{3i})^T = R_i\,(X - X_{0i}\ \ Y - Y_{0i}\ \ Z - Z_{0i})^T$; $(X, Y, Z)$ are the object coordinates of the Control Points (CPs) in the geographic projection coordinate system; $f_i$ is the principal distance at the $i$th camera calibration; $R_i$ is the rotation matrix at the $i$th camera calibration, constituted from the attitude angles $(\varphi_i, \omega_i, \kappa_i)$ of the image; $(X_{0i}, Y_{0i}, Z_{0i})$ is the perspective center position at the $i$th camera calibration.
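The collinearity relation of Equation (2) can be sketched directly. Note that the angle convention for building the rotation matrix is an assumption on our part (the paper does not state one), and the distortion terms are omitted here:

```python
import math

def rotation_matrix(phi, omega, kappa):
    """Rotation matrix from attitude angles (phi, omega, kappa), using
    one common photogrammetric convention R = R_phi R_omega R_kappa.
    The exact convention is our assumption."""
    sp, cp = math.sin(phi), math.cos(phi)
    so, co = math.sin(omega), math.cos(omega)
    sk, ck = math.sin(kappa), math.cos(kappa)
    r_phi = [[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]]
    r_omega = [[1, 0, 0], [0, co, -so], [0, so, co]]
    r_kappa = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(r_phi, r_omega), r_kappa)

def project(point, center, R, f):
    """Collinearity Equation (2): project object point (X, Y, Z) to
    principal-point-reduced image coordinates, distortion omitted."""
    d = [p - c for p, c in zip(point, center)]
    z_bar = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    return -f * z_bar[0] / z_bar[2], -f * z_bar[1] / z_bar[2]
```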
Afterwards, the calibration parameters can be solved using the least squares estimation method with evenly distributed CPs, as is widely done in photogrammetry. The observation equations are obtained when Equation (2) is linearized using the Taylor series expansion and the second-order and higher-order terms are ignored, as shown in Equation (3):
$$
\begin{aligned}
v_{x_{i-1}} ={}& \frac{\partial x_{i-1}}{\partial k_{1i}}dk_{1i} + \frac{\partial x_{i-1}}{\partial k_{2i}}dk_{2i} + \frac{\partial x_{i-1}}{\partial k_{3i}}dk_{3i} + \frac{\partial x_{i-1}}{\partial p_{1i}}dp_{1i} + \frac{\partial x_{i-1}}{\partial p_{2i}}dp_{2i} + \frac{\partial x_{i-1}}{\partial b_{1i}}db_{1i} + \frac{\partial x_{i-1}}{\partial b_{2i}}db_{2i} \\
&+ \frac{\partial x_{i-1}}{\partial X_{0i}}dX_{0i} + \frac{\partial x_{i-1}}{\partial Y_{0i}}dY_{0i} + \frac{\partial x_{i-1}}{\partial Z_{0i}}dZ_{0i} + \frac{\partial x_{i-1}}{\partial \varphi_i}d\varphi_i + \frac{\partial x_{i-1}}{\partial \omega_i}d\omega_i + \frac{\partial x_{i-1}}{\partial \kappa_i}d\kappa_i \\
&+ \frac{\partial x_{i-1}}{\partial x_{0i}}dx_{0i} + \frac{\partial x_{i-1}}{\partial y_{0i}}dy_{0i} + \frac{\partial x_{i-1}}{\partial f_i}df_i - \left(x_{i-1}^0 - x_{i-1}\right) \\
v_{y_{i-1}} ={}& \frac{\partial y_{i-1}}{\partial k_{1i}}dk_{1i} + \frac{\partial y_{i-1}}{\partial k_{2i}}dk_{2i} + \frac{\partial y_{i-1}}{\partial k_{3i}}dk_{3i} + \frac{\partial y_{i-1}}{\partial p_{1i}}dp_{1i} + \frac{\partial y_{i-1}}{\partial p_{2i}}dp_{2i} + \frac{\partial y_{i-1}}{\partial b_{1i}}db_{1i} + \frac{\partial y_{i-1}}{\partial b_{2i}}db_{2i} \\
&+ \frac{\partial y_{i-1}}{\partial X_{0i}}dX_{0i} + \frac{\partial y_{i-1}}{\partial Y_{0i}}dY_{0i} + \frac{\partial y_{i-1}}{\partial Z_{0i}}dZ_{0i} + \frac{\partial y_{i-1}}{\partial \varphi_i}d\varphi_i + \frac{\partial y_{i-1}}{\partial \omega_i}d\omega_i + \frac{\partial y_{i-1}}{\partial \kappa_i}d\kappa_i \\
&+ \frac{\partial y_{i-1}}{\partial x_{0i}}dx_{0i} + \frac{\partial y_{i-1}}{\partial y_{0i}}dy_{0i} + \frac{\partial y_{i-1}}{\partial f_i}df_i - \left(y_{i-1}^0 - y_{i-1}\right)
\end{aligned}
$$
where $v_{x_{i-1}}$ and $v_{y_{i-1}}$ are the observational residuals in the line and sample directions; $dk_{1i}$, $dk_{2i}$, $dk_{3i}$, $dp_{1i}$, $dp_{2i}$, $db_{1i}$ and $db_{2i}$ are the corrections to the distortion parameters; $dx_{0i}$, $dy_{0i}$ and $df_i$ are the corrections to the camera IO parameters; $dX_{0i}$, $dY_{0i}$, $dZ_{0i}$, $d\varphi_i$, $d\omega_i$, $d\kappa_i$ are the corrections to the exterior orientation elements; $x_{i-1}^0$ and $y_{i-1}^0$ are the nominal line and sample coordinates of the CPs computed by Equation (2).
A matrix version of the observation Equation (3) is shown in Equation (4):
$$
V_{i-1} = A_i t_i - L_{i-1}, \qquad P = I
$$
where $V_{i-1}$ is the observational residual vector; $A_i$ contains the partial derivatives with respect to the distortion parameters, camera IO parameters, and exterior orientation elements; $t_i$ is the vector of corrections to the calibration parameters, including the distortion parameters, camera IO parameters, and exterior orientation elements; $L_{i-1}$ is the observation vector of the points' image coordinates; $P$ holds the weights of the CPs on the images. Since the CPs are measured manually or by image matching with the same accuracy, the weight matrix is set to the unit matrix $I$.
The corrections of the calibration parameters will be obtained by least squares, as shown in Equation (5):
$$
t_i = \left(A_i^T P A_i\right)^{-1} A_i^T P L_{i-1}
$$
In the linearization procedure, only the first-order terms of the distortion parameters' corrections are retained, and the initial values of these unknowns are set to 0; therefore, an iterative process is necessary to determine the distortion parameters. The solution for the distortion parameters is shown in Equation (6):
$$
T_i = T_{i-1} + q_i
$$
where $T_i = [k_{1i}, k_{2i}, k_{3i}, p_{1i}, p_{2i}, b_{1i}, b_{2i}]$ represents the distortion parameters obtained at the $i$th calibration; $T_{i-1}$ contains the distortion parameters from the $(i-1)$th camera calibration; $q_i$ is the vector of corrections to the distortion parameters obtained at the $i$th calibration.
To avoid correlations among the calibration parameters, the calibration fixtures are captured at various scales and from different stations, forming a robust network of short-baseline image sequences that increases the redundant observations and produces strong intersections [30].
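One least-squares iteration of Equations (5) and (6), with unit weights ($P = I$), can be sketched in pure Python. The design matrix and observation vector are assumed to have already been assembled from the linearized collinearity equations; the helper names are ours:

```python
def solve_corrections(A, L):
    """Solve the normal equations of Equation (5) with P = I:
    t = (A^T A)^{-1} A^T L, via Gaussian elimination with pivoting."""
    n = len(A[0])
    # Form N = A^T A and b = A^T L.
    N = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(n)]
         for i in range(n)]
    b = [sum(A[r][i] * L[r] for r in range(len(A))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(N[r][col]))
        N[col], N[piv] = N[piv], N[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            factor = N[r][col] / N[col][col]
            for c in range(col, n):
                N[r][c] -= factor * N[col][c]
            b[r] -= factor * b[col]
    # Back substitution.
    t = [0.0] * n
    for i in range(n - 1, -1, -1):
        t[i] = (b[i] - sum(N[i][j] * t[j] for j in range(i + 1, n))) / N[i][i]
    return t

def update_distortion(T_prev, q):
    """Equation (6): add the corrections to the previous parameters."""
    return [a + c for a, c in zip(T_prev, q)]
```

In practice the design matrix is rebuilt at the updated estimates and the two steps are repeated until the corrections become negligible, as the iterative process in the text requires.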
(3) Geometric correction of the calibration fixture images
The calibration fixture images are geometrically corrected using the previous camera calibration results. To avoid the image quality degradation caused by multiple resampling, a strategy is proposed to perform resampling just once from the original calibration fixture images (Figure 2). The pixels on the corrected calibration fixture images (Figure 2c) are mapped back to the corresponding pixels on the original calibration fixture images (Figure 2a) using the previous $m$ sets of distortions, as shown in Figure 2 and Equation (7). Then, resampling is performed on the original calibration fixture images using the cubic convolution method to obtain the grey values of these pixels.
$$
x = x' - x_0 + \Delta x_1 + \Delta x_2 + \cdots + \Delta x_m, \qquad
y = y' - y_0 + \Delta y_1 + \Delta y_2 + \cdots + \Delta y_m \qquad (m \ge 1)
$$
where $(x', y')$ are the corrected image coordinates; $x_0$ and $y_0$ are the principal point offset; $(x, y)$ are the image coordinates on the original calibration fixture images. Subsequently, the corrected calibration fixture images are taken as the distorted images and the calibration process is repeated, and so on, until certain criteria are met.
In the end, several sets of distortion parameters ($\Delta x_1, \Delta x_2, \ldots, \Delta x_n$ and $\Delta y_1, \Delta y_2, \ldots, \Delta y_n$) jointly describe all the camera distortions and, together with the obtained camera IO parameters, act as the final camera calibration results.
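The once-only resampling strategy of Equation (7) amounts to accumulating all previously estimated distortion sets when mapping a corrected pixel back to the original image. A minimal sketch, assuming each distortion set is exposed as a callable (an interface of our own choosing):

```python
def reverse_map(x_c, y_c, x0, y0, distortion_sets):
    """Map a pixel on the corrected image back to the original image by
    accumulating all m previously estimated distortion sets, Equation (7),
    so that resampling is performed only once on the original image.

    `distortion_sets` is a list of callables (x, y) -> (dx, dy), one per
    completed calibration (hypothetical interface)."""
    dx_total, dy_total = 0.0, 0.0
    for dist in distortion_sets:
        dx, dy = dist(x_c, y_c)
        dx_total += dx
        dy_total += dy
    # Original-image coordinates; the grey value is then obtained there
    # by a single cubic-convolution resampling.
    return x_c - x0 + dx_total, y_c - y0 + dy_total
```

Because only the coordinates are chained, the grey values are interpolated exactly once, from the original image, regardless of how many calibrations have been performed.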
(4) Accuracy assessment
CPs acting as check points are used for external accuracy assessment in the object space. The nominal object coordinates of these check points are calculated using Equation (2) with the direct intersection method. The Root Mean Square Error (RMSE) in the object space is computed from the discrepancies between the true and nominal coordinates of the check points, as shown in Equation (8):
$$
\mu_X = \sqrt{\frac{\sum (X_t - X_c)^2}{r}}, \quad
\mu_Y = \sqrt{\frac{\sum (Y_t - Y_c)^2}{r}}, \quad
\mu_Z = \sqrt{\frac{\sum (Z_t - Z_c)^2}{r}}, \quad
\mu_P = \sqrt{\mu_X^2 + \mu_Y^2 + \mu_Z^2}
$$
where $\mu_X$, $\mu_Y$ and $\mu_Z$ represent the RMSE of the check points in the corresponding three directions; $r$ is the number of check points; $X_t$, $Y_t$, $Z_t$ are the true coordinates of the check points; $X_c$, $Y_c$, $Z_c$ are the nominal coordinates of the check points. $\mu_P$ is used to state the accuracy in the object space in the following discussions.
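Equation (8) can be computed directly; the sketch below assumes the check points are given as (X, Y, Z) tuples:

```python
import math

def rmse_object_space(truth, computed):
    """Equation (8): per-axis RMSE of the check points and the combined
    object-space accuracy mu_P.

    `truth` and `computed` are equal-length lists of (X, Y, Z) tuples."""
    r = len(truth)
    sums = [0.0, 0.0, 0.0]
    for t, c in zip(truth, computed):
        for k in range(3):
            sums[k] += (t[k] - c[k]) ** 2
    mu = [math.sqrt(s / r) for s in sums]        # mu_X, mu_Y, mu_Z
    mu_p = math.sqrt(sum(m * m for m in mu))     # combined RMSE
    return mu, mu_p
```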

3. Experimental Results and Discussion

3.1. Calibration Fixture

In this study, two calibration fixtures, a 3D test field and a 2D planar calibration board, are used to assist the camera calibration. The 3D test field is established on a shape-invariant brick wall of a building and suits in-air camera calibration. The size of each brick is 24 cm × 6 cm. A total of 188 evenly distributed corner points of the bricks are taken as the CPs and check points. The object coordinates of these points were measured by total station with an accuracy of 2 mm. The established 3D test field of the brick wall and the distributed CPs are shown in Figure 3, where the black dots represent the CPs. The 2D planar calibration board is applied for camera calibration in the underwater environment. The diffused reflective planar calibration board measures 400 mm × 400 mm and consists of 625 circular targets (dots), evenly positioned across 25 rows and 25 columns. Dots are chosen as targets owing to the isotropy of circles, and the CPs are located at their centers. Each circular dot has a diameter of 7.5 mm, and two neighboring dots are 15 mm apart. The measurement accuracy is 0.01 mm. The 2D diffused reflective planar calibration board and circular targets are shown in Figure 4.

3.2. Tests of Low-Cost Camera Calibration

In this study, two types of low-cost cameras are calibrated in the air and underwater environments using the proposed camera calibration method: a full-frame digital single-lens reflex camera (Canon EOS 5Dsr) with a TS-E 24 mm 1:3.5L II prime lens, and an action camera (GoPro Hero 5 Black) operated in photo mode with a narrow field of view. The Canon camera uses a Complementary Metal Oxide Semiconductor (CMOS) imaging sensor measuring 36 mm × 24 mm with a resolution of 8688 × 5792 pixels; the given focal length of the lens is 24 mm. The GoPro carries a 1/2.3-inch image sensor with a resolution of 12 megapixels and is equipped with a wide-angle lens, leading to large distortions in the captured images. The in-air calibration is performed with both cameras, and the underwater calibration is carried out with the GoPro camera.

3.2.1. Calibration Tests in the Air

The 3D test field was employed to calibrate the Canon and GoPro cameras in the air. Forty-five images were acquired via the Canon camera, which were captured at two scales, 1.80 m and 2.60 m away from the test field. Seventeen images were acquired by the GoPro camera at three distances of 1.10 m, 1.65 m, and 2.50 m. In order to avoid introducing other systematic errors, each image was filled with the calibration fixture, and no other object entered the picture.
A total of 98 CPs and 61 CPs evenly distributed in the test field are used to estimate the Canon and GoPro calibration parameters, respectively, and a total of 90 CPs and 127 CPs are used for the Canon and GoPro calibration accuracy assessment, respectively. On the original images, the CPs are measured manually. For the subsequent calibrations, the CPs on the corrected images are located automatically according to Equation (7) without extra work. The radial, decentering, and in-plane distortions of the two cameras at each calibration are profiled separately in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. The calibrated IO parameters of the two cameras and the calibration accuracy are shown in Table 1. The calibration results of the conventional method (using the Australis model once) are exactly the same as those of the proposed method at the first calibration.
From the Canon in-air calibration results (Table 1), the maximum radial, decentering, and in-plane distortions of the camera are 104 μm, 14 μm, and 20 μm, respectively (Figure 5, Figure 6 and Figure 7). These maxima decrease to 1.14 μm, 1.15 μm, and 0.4 μm after the second calibration, indicating that residual distortions remained after calibration with the conventional method (the first calibration). Compared with the conventional method, the proposed method achieves a slight accuracy improvement of 2.8 μm (0.24%) (Table 1) once the residual distortions are calibrated.
From the GoPro in-air calibration results (Table 1), the maximum radial, decentering, and in-plane distortions of the camera, as calibrated by the conventional method (the first calibration), are 0.72 mm, 0.01 mm, and 0.07 mm, respectively (Figure 8, Figure 9 and Figure 10). These maxima decrease to 0.77 μm, 9.33 μm, and 0.4 μm after the second calibration, indicating that residual distortions, especially most decentering distortions, remained after calibration with the conventional method. The proposed method achieves an accuracy improvement of 0.2 mm (3.88%) compared with the conventional method (Table 1) once the residual distortions are calibrated.
In summary, the Canon and GoPro in-air calibration results show that the distortions gradually grow larger as the radial distance from the image center increases, and the in-plane distortion is distributed unequally across the image. All the residual distortions of the Canon and GoPro cameras decrease as the number of calibrations increases and approach 0 after four calibrations, which shows that all the distortions have been fully calibrated by the proposed method. Meanwhile, stable camera IO parameters are obtained after the repeated calibration processes. Judging by the magnitudes, the lens distortion is dominated by the radial distortion for both cameras [18], and the GoPro lens has distortions seven times larger than the Canon lens. Most distortions of the Canon lens are detected by the conventional method (the first calibration), whereas a large number of decentering distortions of the GoPro lens still remain (Figure 9). This indicates that the Australis model cannot cover the distortions of all cameras: it describes the Canon lens better than the GoPro lens.

3.2.2. Calibration Tests in the Underwater Environment

Due to the smaller field of view and the close range caused by the attenuation of light in water, the 2D planar calibration board was employed to calibrate the GoPro camera in the underwater environment. Because of refraction, underwater images exhibit non-linear distortions related to the distance from the object to the camera [31]. Therefore, two underwater camera calibration tests were executed at two distances. The first test captured 33 images at a distance of 0.14 m from the calibration board; 33 CPs and 45 CPs are used to calculate the calibration parameters and assess the calibration accuracy, respectively. The second test acquired 10 images at a distance of 0.50 m from the calibration board; 24 CPs and 30 CPs are used to calculate the calibration parameters and assess the calibration accuracy, respectively. The calibration results are shown in Table 1 and the distortion profiles are shown in Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16.
(1) Underwater camera calibration at a distance of 0.14 m from the calibration board
From the GoPro underwater calibration results (Table 1), the calibrated maximum radial, decentering, and in-plane distortions are 0.11 mm, 0.002 mm, and 25 μm, respectively (Figure 11, Figure 12 and Figure 13). These maxima decrease to 5 μm, 1 μm, and 0.06 μm after the second calibration; these are the residual distortions that remained after calibration with the conventional method, and all of them can be detected by the proposed method. The RMSE improves from 3.8451 mm to 3.8215 mm, indicating that the proposed method improves the accuracy by 0.02 mm (0.61%) compared with the conventional method (Table 1). As with the in-air calibration, stable camera IO parameters are obtained after the repeated calibration processes.
(2) Underwater camera calibration at a distance of 0.50 m from the calibration board
From the second GoPro underwater calibration test (Table 1), the calibrated maximum radial, decentering, and in-plane distortions are 0.16 mm, 0.16 mm, and 0.5 μm, respectively (Figure 14, Figure 15 and Figure 16). These maxima decrease to 17 μm, 1.6 μm, and 0.3 μm after the second calibration. All the residual decentering and in-plane distortions are detected after the second calibration, and the residual radial distortions decrease to 0 after the third calibration. The RMSE improves from 3.4860 mm to 3.4650 mm, indicating that the proposed method improves the accuracy by 0.021 mm (0.60%) compared with the conventional method (Table 1). As with the in-air and first underwater calibration results, stable camera IO parameters are obtained after the repeated calibration processes.
In summary, the GoPro underwater calibration results vary with the distance from the camera to the calibration board. The distortions at a distance of 0.14 m are dominated by the radial distortions, and all the distortions can be calibrated within four repeated calibrations. The radial and decentering distortions are the major distortions at a distance of 0.50 m, and all the distortions can be detected within three repeated calibrations. Stable camera IO parameters are obtained after the calibration processes.
Compared to the in-air calibration, the final GoPro camera calibration results in the underwater environment (Table 1) show that the achieved principal distance differs from that obtained in the air, because the light rays refract when traveling from the water to the air, which changes the principal distance. The shifts of the principal point and the distortions in the underwater environment appear smaller than those in the air, which is caused by the refractive and asymmetric environment effects absorbed by the radial, decentering, and in-plane distortions [18].
In summary, the Canon EOS 5Dsr and GoPro Hero 5 Black were calibrated in the air and underwater environments. The calibration results illustrate that residual distortions remained with the conventional method. With the proposed method, the distortions decrease gradually throughout the repeated calibration processes and finally become negligible, while stable camera IO elements are achieved after several calibrations. These desirable results stem from the repetitive use of the physical distortion models during the calibration process. Photogrammetric calibration can be regarded as a function approximation problem from the perspective of mathematics [16], and any continuous univariate function can be approximated by a polynomial of sufficiently high degree according to the Weierstrass theorem [17]. The distortion model combining the radial, decentering, and in-plane physical distortion models in this paper is in fact a polynomial of multiple degrees, applied to successively approximate the distortions during the calibration processes. Compared to in-air calibration, underwater calibration is affected by the complicated underwater environment, whose factors are difficult to describe with explicit mathematical models owing to coupling effects and correlations. The results indicate that the proposed camera calibration method is applicable to underwater camera calibration as well.

3.3. Underwater Application Example

The calibrated camera can be used in three-dimensional reconstruction and measurement of the objects using the photogrammetric method. In this paper, the calibrated GoPro at a distance of 0.5 m in the underwater environment (Figure 14, Figure 15 and Figure 16, Table 1) was used to carry out the underwater object three-dimensional reconstruction and measurement.
Four images covering the bottom of the pool were acquired with the GoPro at a depth of 0.50 m, with about 80% overlap, as shown in Figure 17. The offset angle between the camera movement direction and the horizontal direction is less than 3°.
These four images are geometrically corrected using three sets of calibrated distortion parameters (Figure 14, Figure 15 and Figure 16), and the corrected images are shown in Figure 18. The proposed strategy is used to perform the image resampling (Figure 2). The pixels on the corrected underwater images are directly reversed to acquire the corresponding pixels on the original images using three sets of distortions. Then, the resampling is performed on the original images using the cubic convolution method to obtain the grey value of these pixels.
Three stereo pairs are formed from the four corrected images: the first pair consists of the corrected images in Figure 18a,b, the second of those in Figure 18b,c, and the third of those in Figure 18c,d. About 100 tie points are matched on each image pair using manual and semi-automatic matching. Relative orientation is carried out on the image pairs to make the corresponding rays intersect, with the corrected image in Figure 18b taken as the reference image. Meanwhile, the relationship between the corrected images is established by the calculated exterior orientation parameters. The disparity maps are then produced from the oriented image pairs through dense matching (Figure 19).
Three-dimensional point clouds of the scene are obtained from the disparity maps through the forward intersection method. Afterwards, the digital surface model (DSM) of the scene is acquired by point cloud interpolation (Figure 20a). The corrected images are then rectified with the DSM to generate an orthophoto (Figure 20b), and the three-dimensional model can be reconstructed from the DSM and orthophoto. Because there is no absolute reference at the bottom of the pool, only relative elevation and horizontal information can be obtained from the DSM and orthophoto.
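For a rectified stereo pair, the forward intersection step reduces to the standard relation Z = f·B/d. The following sketch (function name and parameters are illustrative, not from the paper) converts a disparity map into a point cloud under that assumption:

```python
import numpy as np

def disparity_to_points(disparity, f_px, baseline, cx, cy):
    # Forward intersection for a rectified pair: depth Z = f*B/d in the
    # reference camera frame, then back-projection of each pixel (u, v)
    # through the pinhole model; invalid (non-positive) disparities -> NaN.
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)
    z = np.where(disparity > 0,
                 f_px * baseline / np.maximum(disparity, 1e-9),
                 np.nan)
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return np.dstack([x, y, z])   # shape (h, w, 3)
```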
An underwater brick is used to carry out the photogrammetric measurement. The true size of the experimental brick is 200.00 mm (length) × 100.00 mm (width) × 50.00 mm (height). Ten points on the brick are chosen for the three-dimensional measurement, forming five lines: line 01–02, line 03–04, line 05–06, line 07–08, and line 09–10. Lines 07–08 and 09–10 represent the brick length; lines 01–02, 03–04, and 05–06 represent the brick width. The point and line distribution is shown in Figure 21. The three-dimensional coordinates of these points are calculated by forward intersection using the disparity maps and exterior orientation parameters, and the length of each line is the distance between its two endpoints. Because there is no absolute reference at the bottom of the pool, only relative geometric information can be obtained; the brick length and width are therefore derived from the relative results by scale conversion. The measurement accuracy is assessed by comparing the true and calculated values of the brick length and width.
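The length measurement and scale conversion amount to two small operations, sketched below (the reference-line mechanism is an assumption about how the relative results are scaled, since the paper only states that scale conversion is used):

```python
import numpy as np

def line_length(p, q):
    # Euclidean distance between two 3D points.
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def to_metric(relative_length, reference_relative, reference_true):
    # Convert a model-space (relative) length to metric units using one
    # line of known true length, since the pool floor has no absolute datum.
    return relative_length * reference_true / reference_relative
```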
The GoPro was calibrated at a distance of 0.5 m in the underwater environment using the conventional method and the proposed camera calibration method (Figure 14, Figure 15 and Figure 16, Table 1). The calibrated results were applied to the brick size measurement, and the measurement accuracy is shown in Table 2.
The measurement accuracy achieved by the proposed method is 1.24 mm, which is superior to that obtained by the conventional method (Table 2). The proposed camera calibration method can detect almost all the image distortions, so the systematic distortion error can be compensated for in the geometric correction process, improving the subsequent photogrammetric steps.

4. Conclusions

In this paper, a combined physical and mathematical calibration method for low-cost cameras in the air and underwater environment is proposed. In the proposed method, the physical radial, decentering, and in-plane distortion models are integrated to describe the camera distortions. The combination is a high-order polynomial whose terms can be considered as basis functions to successively approximate the distortions during the calibration processes, from the point of view of mathematical approximation. The proposed method then uses least squares estimation with evenly distributed CPs to estimate the camera calibration parameters, and the original calibration fixture images are geometrically corrected using the estimated distortion parameters. Unlike the conventional methods, the proposed method performs the above calibration in a repeated fashion: treating the geometrically corrected images as input distorted images, re-applying the distortion model, estimating a new set of camera parameters, geometrically correcting the already corrected images, and so on, until certain criteria are met. Specifically, when the previous camera calibration results (m sets of calibration parameters) are applied to the image geometric correction, a strategy is proposed to perform resampling just once from the original calibration fixture images, in order to avoid multiple resampling degrading the image quality. At the end, several sets of distortion parameters and stable camera IO parameters are obtained, which act as the final camera calibration results and describe all the image distortions.
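The estimate-correct-repeat loop can be illustrated on a deliberately tiny 1-D analogue. Here the "distortion model" is a single coefficient k in x_d = x·(1 + k·x²) estimated by least squares, and correction uses a first-order inverse; both are toy stand-ins for the paper's full bundle adjustment and geometric correction, but the stopping logic (iterate until the newly estimated distortion is negligible) mirrors the proposed method:

```python
import numpy as np

def estimate_k(current, ideal):
    # Least-squares estimate of the single coefficient k in the toy
    # model x_d = x * (1 + k * x**2), given observed and ideal coords.
    basis = ideal ** 3
    residual = current - ideal
    return float(basis @ residual / (basis @ basis))

def correct(current, k):
    # Approximate (first-order) inverse of the toy distortion model.
    return current / (1.0 + k * current ** 2)

def iterative_calibration(observed, ideal, tol=1e-8, max_iter=10):
    # Estimate, correct, and re-estimate on the corrected coordinates
    # until the newly estimated coefficient is negligible; the list of
    # estimates plays the role of the m sets of distortion parameters.
    ks, current = [], observed.copy()
    for _ in range(max_iter):
        k = estimate_k(current, ideal)
        ks.append(k)
        if abs(k) < tol:
            break
        current = correct(current, k)
    return ks
```

Because the approximate inverse leaves a small higher-order residual, each pass estimates a smaller coefficient than the last, which is the behavior the paper reports for its distortion parameters over successive calibrations.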
The in-air calibration results with the Canon EOS 5Dsr and GoPro Hero 5 Black demonstrate that the distortions of these cameras vary greatly in magnitude. The Australis model cannot cover all the distortions, and residual distortions remain with the conventional method. With the proposed method, all the final distortion parameters are close to 0 and negligible after several calibrations; meanwhile, stable elements of IO are achieved. The underwater calibration with the GoPro Hero 5 Black indicates that the calibration parameters in the underwater environment differ from those in the air, owing to the refractive and asymmetric environment effects, and that the proposed method is applicable to underwater camera calibration as well. In summary, the experimental results demonstrate that the proposed method is flexible to use and has practical potential for achieving high three-dimensional measurement accuracy in both in-air and underwater applications.
Further study will focus on applying the proposed method in situations where no control point is available, and on investigating how it can be used in triangulation when camera calibration and block adjustment are performed jointly. Our research team is currently establishing a physical refraction imaging model for the underwater environment, in which the object distance is taken as one of the parameters.

Author Contributions

Conceptualization, Z.M.; methodology, X.Z.; validation, Y.Z., Y.C. and J.W.; formal analysis, Z.M. and X.Z.; investigation, Z.M. and Y.C.; data curation, Y.C.; writing—original draft preparation, Z.M.; writing—review and editing, H.X. and Y.Z.; visualization, Z.M. and X.Z.; funding acquisition, Z.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (grant number 42101443).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

We thank all the participants involved in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chikatsu, H.; Kunii, Y. Performance Evaluation of Recent High Resolution Amateur Digital Cameras and Application to Modeling of Historical Structure. Int. Arch. Photogramm. Remote Sens. 2002, 5, 278–287.
2. Fraser, C.S.; Riedel, B. Monitoring the Thermal Deformation of Steel Beams via Vision Metrology. ISPRS J. Photogramm. Remote Sens. 2000, 55, 268–276.
3. Leon, J.X.; Roelfsema, C.M.; Saunders, M.I.; Phinn, S.R. Measuring Coral Reef Terrain Roughness Using ‘Structure-From-Motion’ Close-Range Photogrammetry. Geomorphology 2015, 242, 21–28.
4. Letessier, T.B.; Juhel, J.B.; Vigliola, L.; Meeuwig, J.J. Low-Cost Small Action Cameras in Stereo Generates Accurate Underwater Measurements of Fish. J. Exp. Mar. Biol. Ecol. 2015, 466, 120–126.
5. Brown, D.C. Close-Range Camera Calibration. Photogramm. Eng. 1971, 8, 855–866.
6. Fraser, C.S.; Brown, D.C. Industrial Photogrammetry: New Developments and Recent Applications. Photogramm. Rec. 1986, 68, 197–216.
7. Wester-Ebbinghaus, W. Trends in Non-Topographic Photogrammetry Systems. In Non-Topographic Photogrammetry, 2nd ed.; American Society for Photogrammetry and Remote Sensing: Baton Rouge, LA, USA, 1988; pp. 377–387.
8. El-Hakim, S. Real-Time Image Metrology with CCD Cameras. Photogramm. Eng. Remote Sens. 1986, 11, 1757–1766.
9. Tsai, R.Y. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE J. Robot. Autom. 1987, 4, 323–344.
10. Lenz, R.K. Lens Distortion Corrected CCD Camera Calibration with Co-Planar Calibration Points for Real-Time Measurement. In Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, 2–4 June 1987.
11. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 11, 1330–1334.
12. Fraser, C.S. Digital Camera Self-Calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159.
13. Remondino, F.; Fraser, C.S. Digital Camera Calibration Methods: Considerations and Comparisons. Int. Arch. Photogramm. Remote Sens. 2006, 5, 266–272.
14. Luber, A. Ein Generisches Abbildungsmodell für Stereokamerasysteme: Modellierung, Kalibrierung, Evaluierung und Anwendung. Ph.D. Thesis, Humboldt University, Berlin, Germany, 2015.
15. Mason, J.; Handscomb, D. Chebyshev Polynomials. In Manual of Photogrammetry, 5th ed.; McGlone, J., Ed.; American Society for Photogrammetry and Remote Sensing: Baton Rouge, LA, USA, 2004; p. 360. ISBN 978-1570830716.
16. Tang, R.; Fritsch, D.; Cramer, M.; Schneider, W. A Flexible Mathematical Method for Camera Calibration in Digital Aerial Photogrammetry. Photogramm. Eng. Remote Sens. 2012, 78, 1069–1077.
17. Tang, R.; Fritsch, D.; Cramer, M. New Rigorous and Flexible Fourier Self-Calibration Models for Airborne Camera Calibration. ISPRS J. Photogramm. Remote Sens. 2012, 71, 76–85.
18. Shortis, M. Calibration Techniques for Accurate Measurements by Underwater Camera Systems. Sensors 2015, 15, 30810–30827.
19. Li, R.; Li, H.; Zou, W.; Smith, R.G.; Curran, T.A. Quantitative Photogrammetric Analysis of Digital Underwater Video Imagery. IEEE J. Ocean. Eng. 1997, 22, 364–375.
20. Lavest, J.M.; Rives, G.; Laprestè, J.T. Dry Camera Calibration for Underwater Applications. Mach. Vis. Appl. 2003, 13, 245–253.
21. Treibitz, T.; Schechner, Y.; Kunz, C.; Singh, H. Flat Refractive Geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 51–65.
22. Menna, F.; Nocerino, E.; Fassi, F.; Remondino, F. Geometric and Optic Characterization of a Hemispherical Dome Port for Underwater Photogrammetry. Sensors 2016, 16, 48.
23. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: London, UK, 2004.
24. Telem, G.; Filin, S. Photogrammetric Modeling of Underwater Environments. ISPRS J. Photogramm. Remote Sens. 2010, 65, 433–444.
25. Harvey, E.S.; Cappo, M.; Shortis, M.R.; Robson, S.; Buchanan, J.; Speare, P. The Accuracy and Precision of Underwater Measurements of Length and Maximum Body Depth of Southern Bluefin Tuna (Thunnus maccoyii) with a Stereo-Video Camera System. Fish. Res. 2003, 63, 315–326.
26. Shortis, M.R.; Seager, J.W.; Williams, A.; Barker, B.A.; Sherlock, M. A Towed Body Stereo-Video System for Deep Water Benthic Habitat Surveys. In Proceedings of the 8th Conference on Optical 3-D Measurement Techniques, Zürich, Switzerland, 9–12 July 2007; pp. 150–157.
27. Fraser, C.S. Automatic Camera Calibration in Close-Range Photogrammetry. Photogramm. Eng. Remote Sens. 2013, 4, 381–388.
28. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging, 2nd ed.; Walter de Gruyter: Berlin, Germany, 2014; p. 684. ISBN 978-3-11-030269-1.
29. Fryer, J.G. Camera Calibration. In Close Range Photogrammetry and Machine Vision; Atkinson, K.B., Ed.; Whittles Publishing: Caithness, UK, 1996; pp. 156–179. ISBN 9781870325738.
30. Luhmann, T.; Fraser, C.S.; Maas, H.G. Sensor Modelling and Camera Calibration for Close-Range Photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46.
31. Elnashef, B.; Filin, S. Direct Linear and Refraction-Invariant Pose Estimation and Calibration Model for Underwater Imaging. ISPRS J. Photogramm. Remote Sens. 2019, 154, 259–271.
Figure 1. The workflow of the proposed calibration method for a low-cost camera.
Figure 2. The resampling method from the original calibration fixture image.
Figure 3. The established 3D test field of the brick wall and the evenly distributed CPs (black dots).
Figure 4. The 2D diffused reflective planar calibration board and circular targets.
Figure 5. Radial distortion profiles at different radial distances for Canon in the air calibrated using the conventional method and the proposed method at different calibrations.
Figure 6. Decentering distortion profiles at different radial distances for Canon in the air calibrated using the conventional method and the proposed method at different calibrations.
Figure 7. In-plane distortions for Canon in the air calibrated using the conventional method and the proposed method at different calibrations: (a) the conventional method (the first calibration); (b) the second calibration; (c) the third calibration; (d) the fourth calibration.
Figure 8. Radial distortion profiles at different radial distances for GoPro in the air calibrated using the conventional method and the proposed method at different calibrations.
Figure 9. Decentering distortion profiles at different radial distances for GoPro in the air calibrated using the conventional method and the proposed method at different calibrations.
Figure 10. In-plane distortions at different calibrations for GoPro in the air: (a) the first calibration; (b) the second calibration; (c) the third calibration; (d) the fourth calibration.
Figure 11. Radial distortion profiles at different radial distances for GoPro in the underwater environment (0.14 m away from the calibration board) calibrated using the conventional method and the proposed method at different calibrations.
Figure 12. Decentering distortion profiles at different radial distances for GoPro in the underwater environment (0.14 m away from the calibration board) calibrated using the conventional method and the proposed method at different calibrations.
Figure 13. In-plane distortions at different calibrations for GoPro in the underwater environment (0.14 m away from the calibration board): (a) the first calibration; (b) the second calibration; (c) the third calibration; (d) the fourth calibration.
Figure 14. Radial distortion profiles at different radial distances for GoPro in the underwater environment (0.5 m away from the calibration board) calibrated using the conventional method and the proposed method at different calibrations.
Figure 15. Decentering distortion profiles at different radial distances for GoPro in the underwater environment (0.5 m away from the calibration board) calibrated using the conventional method and the proposed method at different calibrations.
Figure 16. In-plane distortions at different calibrations for GoPro in the underwater environment (0.5 m away from the calibration board): (a) the first calibration; (b) the second calibration; (c) the third calibration; (d) the fourth calibration.
Figure 17. The captured four underwater images: (a) the scene covered by four overlapping images; (b) four original underwater images.
Figure 18. The four corrected underwater images: (a) the first image; (b) the second image; (c) the third image; (d) the fourth image.
Figure 19. The generated disparity maps: (1) the first disparity map; (2) the second disparity map; (3) the third disparity map.
Figure 20. Digital surface model (a) and orthophoto (b).
Figure 21. Measurement points and lines distribution.
Table 1. The calibrated IO parameters of the two cameras with the conventional method and the proposed method in the air and underwater environment.

| Camera | Parameter | Conventional Method | Proposed (1) | Proposed (2) | Proposed (3) | Proposed (4) |
|---|---|---|---|---|---|---|
| Canon in the air | f (mm) | 24.2755 | 24.2755 | 24.2772 | 24.2771 | 24.2771 |
| | x0 (mm) | −0.1238 | −0.1238 | −0.1237 | −0.1237 | −0.1237 |
| | y0 (mm) | 0.1527 | 0.1527 | 0.1535 | 0.1534 | 0.1534 |
| | RMSE (mm) | 1.1384 | 1.1384 | 1.1384 | 1.1356 | 1.1356 |
| GoPro in the air | f (mm) | 5.4432 | 5.4432 | 5.4526 | 5.4576 | 5.4576 |
| | x0 (mm) | 0.0472 | 0.0472 | 0.0501 | 0.0505 | 0.0505 |
| | y0 (mm) | 0.0452 | 0.0452 | 0.0524 | 0.0541 | 0.0541 |
| | RMSE (mm) | 5.7559 | 5.7559 | 5.5365 | 5.5319 | 5.5324 |
| GoPro underwater (0.14 m) | f (mm) | 7.2802 | 7.2802 | 7.2650 | 7.2652 | 7.2652 |
| | x0 (mm) | 0.0332 | 0.0332 | 0.0317 | 0.0317 | 0.0317 |
| | y0 (mm) | 0.0405 | 0.0405 | 0.0414 | 0.0413 | 0.0413 |
| | RMSE (mm) | 3.8451 | 3.8451 | 3.8215 | 3.8215 | 3.8215 |
| GoPro underwater (0.50 m) | f (mm) | 3.9281 | 3.9281 | 3.9284 | 3.9277 | 3.9277 |
| | x0 (mm) | 0.0378 | 0.0378 | 0.0375 | 0.0374 | 0.0374 |
| | y0 (mm) | 0.0431 | 0.0431 | 0.0398 | 0.0398 | 0.0398 |
| | RMSE (mm) | 3.4860 | 3.4860 | 3.4650 | 3.4650 | 3.4650 |
Table 2. Measurement accuracy comparison between the conventional camera calibration method and the proposed camera calibration method.

| | Conventional Method | Proposed Method |
|---|---|---|
| Measurement accuracy (mm) | 5.00 | 1.24 |