Article

Large-Scale Aircraft Pose Estimation System Based on Depth Cameras

1 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, No. 38 Zheda Road, Hangzhou 310027, China
2 Xi’an Institute of Modern Control Technology, Xi’an 710065, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3736; https://doi.org/10.3390/app13063736
Submission received: 28 January 2023 / Revised: 27 February 2023 / Accepted: 13 March 2023 / Published: 15 March 2023
(This article belongs to the Section Optics and Lasers)

Abstract

In the fields of wind tunnel measurement and aerospace, the real-time pose of an aircraft is an important index. In this paper, we propose a large-scale aircraft pose estimation system in which depth cameras scan the entire aircraft model from multiple directions. Principal component analysis (PCA) feature vectors are used to construct a target coordinate system, and the resulting coordinate transformation matrices calibrate the aircraft point clouds so that the partial scans can be merged into a complete aircraft model. Intrinsic shape signature (ISS) key point extraction and the signature of histograms of orientations (SHOT) feature description are combined to form feature descriptors, which reduce the scale of the point clouds. Coarse registration of the point clouds is performed by feature matching with random sample consensus (RANSAC) mismatch removal, which improves the robustness of the algorithm and provides an initial pose estimate for fine point cloud registration. The experimental results demonstrate that the proposed system achieves an angle measurement accuracy of 0.05°.

1. Introduction

The real-time pose information of objects is of great significance in various fields, such as defense, aerospace, and industrial automation. For example, wind tunnel experiments must analyze the aerodynamic performance of aircraft models, which requires measuring the pose of the models in real time [1,2,3]. In addition, real-time pose information is important for autonomous navigation, robotic arm tracking, and the automatic sorting of robots in the industrial and medical fields [4,5]. It is also important in the field of aerospace, particularly for connecting and docking, the capture of faulty spacecraft, and the clearance of space debris [6]. Since the 1960s, pose estimation and measurement methods for wind tunnel models have received widespread attention in industrial and academic circles. These methods can be divided into inertial and optical measurement methods, depending on the type of sensor. The inertial measurement method places inertial sensors inside the model, including electrolytic tilt sensors, accelerometers, and gyroscopes [7]. Electrolytic tilt sensors measure the attitude angle of the model from their output voltage, which reflects the imbalance between two sensing electrodes. In 1994, at the National Transonic Facility at the NASA Langley Research Center (LaRC), Wong achieved a model angle measurement accuracy of 0.1° using electrolytic tilt sensors with a shell temperature of 71 °C [8].
Accelerometers are widely used in many applications because of their high precision and repeatability. The latest-generation accelerometer, Q-Flex, consists of pendulous flexures made of fused quartz. In 2007, Crawford improved the angle measurement system at LaRC by using three quadratically mounted Q-Flex accelerometers to measure the pitch and roll angles of a craft simultaneously, and the model attitude error was reduced to less than 0.01° [9]. Microelectromechanical systems (MEMS) sensors can also be used, but their measurement capability is limited by the size restrictions of small models. At NASA LaRC, Kahng et al. performed laboratory testing of MEMS-based capacitive and servo accelerometers; the calibration error was greater than 0.04° at a 95% confidence level [10]. However, accelerometers suffer from vibration in a wind tunnel, which causes angle measurement errors.
Gyroscopes can be used to measure both vibration and acceleration, and are widely used on aircraft and ships. Their principle is similar to that of accelerometers, except that gyroscopes respond to angular rate rather than linear acceleration. A gyroscope therefore relies on an accelerometer for zeroing during angle measurement, which results in random walk and measurement drift. In 2008, Boeing developed a measurement method that used a pair of gyroscopes, an electrolytic sensor, and a servo accelerometer. The gyroscopes drifted 0.08° in 50 s, and the overall uncertainty of the system was between 0.05° and 0.1° [11,12]. Inertial sensors are less accurate than the system proposed here, in part because they change the load of the model, thereby increasing the uncertainty of the pose measurement. In addition, in a hypervelocity wind tunnel, the model is small, and there is no space for an inertial sensor.
Optical pose measurement typically uses non-contact methods to measure the model’s pose and can be divided into photogrammetry and laser-based methods. Photogrammetry is the most commonly used optical pose measurement technology: multiple cameras positioned at different perspectives track marked points on the model, and the displacement of each marked point is determined by triangulation. In 2014, Jia et al. proposed a visual pose measurement method for high-speed rolling targets in supersonic wind tunnels, using a new ultrathin marker layout based on a spatial encoding method. The measurement error of the model’s pitch angle was less than 0.132°, and that of the roll angle was less than 0.712° [13]. The Optotrak™ wind tunnel pose measurement device combines a linear CCD camera and an infrared camera; it exhibits good robustness and a 0.5° pose angle measurement accuracy, and has been successfully applied in wind tunnel measurement, motion tracking, and robotics. However, the marked points required by this method are large, and thus cannot be used on small models [14]. In the laser-based method, a laser estimates the change in distance by interferometric measurement. However, this method is complex, and owing to the limitations of the reflected light angle, it can only be applied to the particular measurement scenario of small wind tunnels [15].
For the non-contact measurement of model position and pose, measurement systems based on stereovision or structured light typically achieve high accuracy, usually better than 0.1°. However, the stereovision method requires pasting markers on the model, which affects the aerodynamic performance of the model itself and thus leads to inaccurate experimental results. As the working distance increases, the precision of structured-light distance measurement decreases sharply, so it is not suitable for the position and attitude measurement of large-sized aircraft. Three-dimensional (3D) point clouds have natural advantages in expressing dimensions and are increasingly used in fields such as reverse engineering, precision manufacturing, and virtual reality [16,17]. Compared with a two-dimensional image, a 3D point cloud can more accurately express the external shape and spatial extent of an object. At present, laser radar and depth sensors are used to obtain low-noise, high-resolution, and high-frame-rate point cloud data, so the use of 3D point cloud data for pose measurement has become a research hotspot. Additionally, depth cameras are low in cost and are developing rapidly. In the measurement scenario of a large-sized aircraft model, the main difficulty is to satisfy the attitude measurement requirements of long-range and high-speed motion. To ensure measurement accuracy and good integration, depth cameras usually operate at distances of less than 10 m; by combining multiple depth cameras and stitching their point clouds, the whole space covering the aircraft model can be measured. It is particularly important that the exposure time of the depth camera is less than 1 ms, so that the point cloud of the aircraft model is not heavily distorted by its high-speed movement; such distortion is an unavoidable problem for both traditional laser radar and structured-light technology. Motion-capture cameras depend on external markers; once the markers are shifted, damaged, or blocked, the accuracy of the attitude solution is greatly reduced. The depth camera, on the other hand, does not rely on any external markers. Depth cameras have further advantages, such as being less influenced by the surface texture of the model, being unaffected by ambient light, and having millimeter-level ranging accuracy [18]. Therefore, depth cameras are a suitable choice for the pose measurement of large-sized aircraft models.
Litvak et al. [19] proposed a pose measurement method based on a depth camera and machine learning for robot assembly. Neural networks were used to learn simulated depth images and then transferred to real robots, obtaining an average pose estimation error of 2.16 mm and 0.64°, leading to a 91% success rate for the robotic assembly of randomly distributed parts. Zhu et al. [20] designed an embedded system based on a ToF camera by using a laser as the driving light source. Gray images and depth images were collected, and real-time detection and matching were carried out on the cooperative aircraft to obtain attitude information. In the experiment, the attitude accuracy calculated by the ToF camera reached 0.13°. The feasibility of using the ToF camera for rendezvous and docking was verified.
In this paper, we propose a large-scale aircraft pose estimation system based on depth cameras. We use several depth cameras to conduct pose measurement experiments on an aircraft model mounted on a biaxial turntable. By comparing the results with the angle data of the turntable, the measurement accuracy and robustness of the system are evaluated. The experimental results indicate that the system has strong application potential in the fields of wind tunnel pose measurement and intelligent manufacturing. This paper makes the following two technical contributions:
  • An attitude measurement algorithm suitable for large-sized aircraft is studied. Based on the point cloud data obtained by the depth cameras, with the target surface features used as matching objects, ISS key point extraction and SHOT feature description are combined into feature descriptors that reduce the scale of the point cloud. Coarse point cloud registration is then carried out by feature matching with RANSAC mismatch removal, which improves the robustness of the algorithm and provides an initial pose estimate for fine point cloud registration.
  • An attitude simulation measurement system suitable for a large-sized aircraft model in a wind tunnel is built. By comparing the measured attitude with the reference attitude provided by the turntable, it is verified that the system achieves high precision (0.05°) and a high measurement frequency (15 Hz).

2. Measurement Principles

As illustrated in Figure 1, the proposed pose estimation method for large-scale aircraft can be divided into the following four steps:
  • Point cloud collection: The aircraft model is mounted on a turntable and scanned with depth cameras to obtain the original point cloud.
  • Point cloud preprocessing: The coordinate system normalization of the aircraft model point cloud is calibrated using principal component analysis (PCA). A calibration plate is used to fuse the aircraft model point clouds scanned by several depth cameras in different positions and poses.
  • Point cloud registration: Intrinsic shape signatures (ISS) are used to extract the key points of the aircraft point cloud. Coarse registration is performed using the signature of histograms of orientations (SHOT) feature description and the random sample consensus (RANSAC) feature-matching method, and the result of the coarse registration is used as the initial value for fine registration with the generalized iterative closest point (G-ICP) algorithm.
  • Model pose solution: The rotation of the registration result is expressed as a quaternion, and the Euler angles are solved and output (see the sketch below). The registration result is also used to initialize the registration of the next point cloud frame.
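As a minimal illustration of this last step, the following sketch converts a 4 × 4 registration result into a quaternion and Euler angles. The zyx (yaw–pitch–roll) convention, the variable names, and the use of SciPy are assumptions for illustration, not the authors' implementation.

```python
# Sketch: extract a quaternion and Euler angles from a registration result.
# The zyx (yaw-pitch-roll) convention and all names are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

T_reg = np.eye(4)                       # 4x4 homogeneous transform from registration
R_mat = T_reg[:3, :3]                   # rotation part
t_vec = T_reg[:3, 3]                    # translation part

rot = Rotation.from_matrix(R_mat)
quaternion = rot.as_quat()              # (x, y, z, w)
yaw, pitch, roll = rot.as_euler("zyx", degrees=True)
print(f"yaw={yaw:.3f}, pitch={pitch:.3f}, roll={roll:.3f} deg, t={t_vec}")
```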

2.1. Preprocessing

Point cloud filtering is the first step in point cloud data preprocessing. Noise points, holes, and outliers inevitably appear in the collected point cloud data, due to the measurement accuracy of the depth camera, multi-path interference, and the diffraction characteristics of electromagnetic waves. These outliers interfere with the accuracy of subsequent point cloud feature recognition and point cloud registration algorithms. Therefore, we use three filtering algorithms to improve the quality of the point cloud and reduce the amount of point cloud data. The three filtering algorithms used are pass-through filtering, radius outlier removal, and down-sampling. The point cloud filtering process is presented in Figure 2.
Pass-through filtering is used to remove irrelevant point clouds outside of the target. Given the spatial scope of the model, the maximum and minimum values of the point cloud on the x, y, and z axes are set, and points whose coordinates are not within this range are deleted, which rapidly eliminates irrelevant data points. Radius outlier removal defines a sphere neighborhood of a set radius around each point in the model point cloud: if the number of points in the sphere neighborhood is greater than or equal to a set number, the point is preserved; otherwise, it is removed. Radius outlier removal aims to remove noise, outliers, and sparse points in the model point cloud. The voxel grid filter is used to down-sample the model point cloud. This filter computes a bounding cube for the input point cloud and, at a set resolution, splits it into smaller cubes, also known as voxels; all points in each voxel are then approximated by the barycenter of that voxel. The voxel grid filter deletes redundant points while preserving the geometric features of the point cloud surface and keeps the point density approximately uniform. As preprocessing before point cloud registration, the voxel grid filter compresses the point cloud data and improves the computing speed of point cloud registration. The original point cloud collected by the depth camera is presented in Figure 3a; Figure 3b–d shows the aircraft model point cloud after pass-through filtering, radius outlier removal, and down-sampling, respectively. The parameters of each filtering method and the resulting point cloud sizes are given in Table 1.
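A minimal sketch of this three-stage filter chain is given below, written with the open-source Open3D library rather than the authors' code; the axis limits, radius, neighbor threshold, and voxel size follow Table 1, while the input file name and the exact mapping of the neighbor threshold onto Open3D's parameter are assumptions.

```python
# Sketch of the pass-through -> radius outlier removal -> voxel grid chain using
# Open3D (not the authors' code). Limits and sizes follow Table 1; the file name
# and the nb_points mapping are assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("aircraft_scan.ply")        # raw depth-camera cloud

# Pass-through filtering: keep only points inside the model's spatial range (mm).
roi = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-800, -400, 1700),
                                          max_bound=(300, 500, 2400))
pcd = pcd.crop(roi)

# Radius outlier removal: drop points with too few neighbors inside the radius.
pcd, _ = pcd.remove_radius_outlier(nb_points=2, radius=3.0)

# Voxel grid down-sampling: one barycenter point per 10 mm voxel.
pcd = pcd.voxel_down_sample(voxel_size=10.0)
print(pcd)
```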
Misalignment between the coordinate systems of the model and the depth camera couples the rotation angles obtained by the solution, and point cloud registration error also contributes to rotation angle error. Therefore, PCA [21] is used to establish the target coordinate system and normalize the model point cloud. The principle of PCA is to find a new basis for the input dataset, namely the principal components, along which the data have the maximum variance, leading to an improved representation. The steps for normalizing the model with PCA are described below.
The model point cloud is set as $P = \{\, p_i \mid p_i \in \mathbb{R}^3,\ i = 1, 2, \ldots, n \,\}$, and the centroid of the point cloud is calculated. Thereafter, the centroid removal operation is performed:

$$c_P = \frac{1}{n}\sum_{i=1}^{n} p_i,$$

$$p_i' = p_i - c_P.$$

The covariance matrix Cov of the centroid-removed point cloud is calculated:

$$\mathrm{Cov} = \frac{1}{n}\sum_{i=1}^{n} p_i' {p_i'}^{T}.$$
After eigenvalue decomposition of Cov, three eigenvalues, arranged from large to small, and their corresponding eigenvectors $v_1$, $v_2$, and $v_3$ are obtained. The model point cloud takes the camera coordinate system as its basis, and the target coordinate system is constructed from the eigenvectors according to the following formulas:

$$e_x = \frac{v_1}{\| v_1 \|},$$

$$e_y = \frac{v_2}{\| v_2 \|},$$

$$e_z = e_x \times e_y.$$

Using the basis vectors of the model coordinate system ($v_{1p}$, $v_{2p}$, $v_{3p}$) and of the depth camera coordinate system ($v_{1q}$, $v_{2q}$, $v_{3q}$), the rotation and translation between the two frames are obtained:

$$R = \begin{bmatrix} v_{1p}^{T} v_{1q} & v_{1p}^{T} v_{2q} & v_{1p}^{T} v_{3q} \\ v_{2p}^{T} v_{1q} & v_{2p}^{T} v_{2q} & v_{2p}^{T} v_{3q} \\ v_{3p}^{T} v_{1q} & v_{3p}^{T} v_{2q} & v_{3p}^{T} v_{3q} \end{bmatrix},$$

$$t = c_p - R\, c_q.$$
The transformation matrix between the target coordinate system and the camera coordinate system is thus calculated. The CAD model of an aircraft is used to illustrate the PCA principal direction correction in Figure 4: the XYZ axis directions are shown, the red point cloud represents the original point cloud, and the blue point cloud represents the point cloud after PCA normalization.
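The sketch below is an illustrative numpy transcription of the formulas above; the function names, the handling of eigenvector sign ambiguity, and the frame conventions are assumptions rather than the authors' code.

```python
# Illustrative numpy version of the PCA normalization described above.
# Names, sign handling, and frame conventions are assumptions, not the authors' code.
import numpy as np

def pca_frame(points):
    """Return the centroid and an orthonormal basis (columns) sorted by variance."""
    c = points.mean(axis=0)
    centered = points - c
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)               # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                    # largest variance first
    v1, v2 = eigvecs[:, order[0]], eigvecs[:, order[1]]
    e_x = v1 / np.linalg.norm(v1)
    e_y = v2 / np.linalg.norm(v2)
    e_z = np.cross(e_x, e_y)                             # right-handed third axis
    return c, np.column_stack([e_x, e_y, e_z])

def pca_align(points_p, points_q):
    """Rotation and translation between the PCA frames of two clouds (as in the text)."""
    c_p, B_p = pca_frame(points_p)
    c_q, B_q = pca_frame(points_q)
    R = B_p.T @ B_q                                      # entries R_ij = v_ip . v_jq
    t = c_p - R @ c_q
    return R, t
```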
Due to the limitations of the shape and scanning angle of the aircraft model, the point cloud of the aircraft model scanned by a single depth camera is incomplete and morphologically inconsistent, which affects the accuracy of subsequent point cloud registration. Therefore, we use several depth cameras to scan the aircraft model from different positions and fuse the collected point clouds. As illustrated in Figure 5, 16 depth cameras scan the calibration plate in four directions.
The ICP algorithm is used to register the point cloud of the calibration plate, and the coordinate transformation matrix between the two depth cameras is obtained. The calibration plate is rotated, and the process is repeated until the exact position and pose of all ToF cameras are obtained. In Figure 6, the point clouds of the calibration plate scanned by the depth camera in different directions are indicated by different colors.
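A pairwise version of this calibration step might look like the following Open3D sketch, in which the calibration-plate cloud seen by one camera is registered to that of a reference camera; the file names and the correspondence distance are placeholders, not values from the paper.

```python
# Sketch: pairwise extrinsic calibration between two depth cameras by running ICP
# on the shared calibration-plate cloud (Open3D; not the authors' implementation).
import numpy as np
import open3d as o3d

plate_cam_a = o3d.io.read_point_cloud("plate_cam_a.ply")   # reference camera (placeholder)
plate_cam_b = o3d.io.read_point_cloud("plate_cam_b.ply")   # camera to be calibrated

result = o3d.pipelines.registration.registration_icp(
    plate_cam_b, plate_cam_a,
    max_correspondence_distance=20.0,                       # mm, assumed search radius
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

T_b_to_a = result.transformation    # maps camera-b points into the camera-a frame
print(T_b_to_a)
```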

2.2. Registration

Feature points are stable and differentiated point sets obtained by defining detection criteria. Since feature points are only a subset selected from the dataset, the number of key points is much smaller than that in the original point cloud. The key points are combined with feature descriptors to form feature point descriptors, which can form a compact representation of the original data, thus increasing the processing speed of subsequent algorithms. The ISS algorithm [22] must establish a local coordinate system in order to detect feature points. For any given point in the model point cloud, its neighborhood point set is taken, and the weight is calculated according to the distance between each nearest neighbor point and the given point:
$$w_{ij} = \frac{1}{\| \rho_i - \rho_j \|}.$$
The weighted covariance matrix is constructed according to the points to be detected and their nearest neighbors:
$$\mathrm{Cov} = \frac{\sum_{\| \rho_i - \rho_j \| < r} w_{ij} (\rho_i - \rho_j)(\rho_i - \rho_j)^{T}}{\sum_{\| \rho_i - \rho_j \| < r} w_{ij}}.$$
By eigenvalue decomposition, three eigenvalues of the covariance matrix are obtained, and the eigenvalues are arranged from large to small, as follows: λ1 > λ2 > λ3. Parameters Th12 and Th23 are set, and points satisfying the following formula are taken as candidate feature points:
$$\frac{\lambda_1}{\lambda_2} < Th_{12}, \qquad \frac{\lambda_2}{\lambda_3} < Th_{23}.$$
Subsequently, a candidate key point is randomly selected for a k-nearest-neighbor search, the k other candidate key points in its neighborhood are obtained, and a point group Q is formed. The eigenvalue λ3 of all points in Q is compared, and the point with the largest λ3 is the final key point in Q. This operation is repeated for the remaining candidate key points to complete ISS key point extraction. This key point extraction method, which uses the relationships between eigenvalues to describe how salient each point is, can greatly reduce the size of the point cloud and improve the efficiency of subsequent point cloud processing. As displayed in Figure 7, the red point cloud represents the model point cloud, while the blue points represent the feature points extracted by ISS.
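Open3D ships a built-in ISS detector, so a sketch of this step can be as short as the following; the radii and eigenvalue-ratio thresholds are placeholder values rather than the paper's settings.

```python
# ISS key point extraction with Open3D's built-in detector.
# The radii and eigenvalue-ratio thresholds are placeholder values, not the paper's.
import open3d as o3d

model = o3d.io.read_point_cloud("aircraft_model.ply")      # placeholder file name
keypoints = o3d.geometry.keypoint.compute_iss_keypoints(
    model,
    salient_radius=15.0,     # neighborhood radius r (mm), assumed
    non_max_radius=10.0,     # non-maximum-suppression radius, assumed
    gamma_21=0.975,          # eigenvalue-ratio threshold (analogue of Th12)
    gamma_32=0.975,          # eigenvalue-ratio threshold (analogue of Th23)
    min_neighbors=5)
print(len(keypoints.points), "key points from", len(model.points), "points")
```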
High-quality feature descriptors can improve the accuracy of feature matching, reduce mismatches, and improve the subsequent point cloud registration results. The construction process of a feature histogram is generally divided into two parts: feature signature and histogram statistics. The feature signature is more descriptive, while the feature histogram is more robust. SHOT [23] combines the two. The feature point p of the point cloud is taken as the center of the circle, and a neighborhood with a radius of R is obtained. Thereafter, the covariance matrix M is calculated with the distance from the neighborhood point to point p as the weight:
$$M = \frac{1}{\sum_{i: d_i \le R} (R - d_i)} \sum_{i: d_i \le R} (R - d_i)(\rho_i - \rho)(\rho_i - \rho)^{T}.$$
Eigenvalue decomposition of M is performed, and the resulting eigenvalues and eigenvectors determine the XYZ axes of the local reference frame. Subsequently, the spherical neighborhood is divided into meshes along the radial, azimuth, and elevation directions, with 2 radial, 8 azimuth, and 2 elevation divisions, yielding 32 spatial cells. Taking as the metric the cosine of the angle between each neighborhood point's normal and the local Z axis, the cosine value is divided into b intervals, and the number of points falling into each interval is counted to obtain a local histogram for each cell. Thereafter, all local histograms are concatenated to form a SHOT descriptor with a dimension of b × 32.
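The toy sketch below illustrates only the binning idea behind SHOT for a single feature point, with the spatial grid reduced to radial shells; it is not a full SHOT implementation, and all names and bin counts are assumptions.

```python
# Toy illustration of the SHOT binning idea for one feature point: the cosine
# between each neighbor normal and the local Z axis is histogrammed per spatial
# cell (spatial grid reduced to radial shells here). Not a full SHOT implementation.
import numpy as np

def toy_shot_histogram(p, neighbors, normals, z_axis, radius, n_shells=2, n_bins=11):
    d = np.linalg.norm(neighbors - p, axis=1)                  # distance to feature point
    cos_theta = normals @ z_axis                                # cosine metric per neighbor
    shell = np.minimum((d / radius * n_shells).astype(int), n_shells - 1)
    bins = np.minimum(((cos_theta + 1.0) / 2.0 * n_bins).astype(int), n_bins - 1)
    hist = np.zeros((n_shells, n_bins))
    for s, b in zip(shell, bins):
        hist[s, b] += 1.0
    hist /= max(hist.sum(), 1.0)                                # normalize the descriptor
    return hist.ravel()
```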
The main goal of point cloud registration is to find the optimal transformation matrix between two different point clouds. Point cloud registration can be divided into coarse registration and fine registration. The function of coarse registration is to roughly align two frames of point clouds that are far apart, so as to provide a good initial estimate for subsequent fine registration. Since noise in the model point cloud and other factors may cause mismatches in feature matching, the RANSAC algorithm [24] is required to remove these mismatches. The core principle of RANSAC is to iteratively find valid inliers in a dataset containing outliers. When applied to mismatch elimination, inliers are correctly matched feature point pairs, whereas outliers are mismatched pairs. More inliers can be obtained by iteratively updating the transformation matrix. The steps are as follows (a code sketch follows the list):
  • Randomly select n ≥ 3 groups of corresponding points from the feature-matching sets of two point clouds P and Q.
  • Calculate the transformation matrix according to the selected corresponding points (p, q):

$$\begin{bmatrix} x_q \\ y_q \\ z_q \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix} = H \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}.$$
  • Use H to rotate and translate point cloud P to obtain P′.
  • The variance of the Euclidean distance between the point clouds is used to construct the evaluation function, and evaluation scores are calculated from P′ and Q. The feature point pairs whose scores are smaller than the threshold value are counted as inliers, while the rest are excluded as outliers.
  • Set the number of iterations, perform the above steps repeatedly, and select the iteration result with the largest number of inliers as the set of correctly matched feature point pairs with mismatches eliminated. The initial registration result is thereby obtained, and the coarse point cloud registration is completed.
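A sketch of this coarse-registration stage with Open3D is shown below. Open3D ships FPFH rather than SHOT, so FPFH stands in for the paper's SHOT descriptor here, and the voxel size, distance thresholds, and RANSAC settings are placeholder values; this is not the authors' implementation.

```python
# Coarse registration by feature matching plus RANSAC outlier rejection (Open3D).
# FPFH stands in for the paper's SHOT descriptor; thresholds are placeholders.
import open3d as o3d

def coarse_register(source, target, voxel=10.0):
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        feat = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, feat

    src_down, src_feat = preprocess(source)
    tgt_down, tgt_feat = preprocess(target)
    dist = voxel * 1.5                                   # inlier distance threshold
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_feat, tgt_feat, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,                                               # sample n >= 3 correspondences
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation                         # initial estimate for fine ICP
```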
The ICP algorithm [25] is the main algorithm for fine point cloud registration and performs optimal matching in the least squares sense. Two frames of aircraft point clouds P and Q are taken as the source point set and target point set for registration. For each point Pi in the source point set, the corresponding point with the shortest Euclidean distance in the target point set Qi is determined. The rotation matrix R and translation vector t are solved, and the process is iterated to minimize the sum of the squared Euclidean distances between corresponding points until convergence or a preset threshold condition is reached. The error function is as follows:
$$E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| Q_i - (R P_i + t) \right\|^{2}.$$
The centroid coordinates Cp and Cq of the aircraft point clouds Pi and Qi are calculated. The covariance matrix M is obtained as follows:
$$M = \frac{1}{n} \sum_{i=1}^{n} (P_i - C_P)(Q_i - C_Q)^{T}.$$
Singular value decomposition is performed on the covariance matrix M, and the rotation matrix R and translation vector t are then calculated. The transformation relationship between point clouds P and Q can be described using the following formula:
$$Q = R P + t.$$
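The centroid/covariance/SVD steps above have the standard closed-form solution sketched below in numpy; this is an illustrative transcription of those equations, not the authors' code.

```python
# Closed-form least-squares rigid transform between corresponding point sets,
# matching the centroid/covariance/SVD steps above (illustrative, not the authors' code).
import numpy as np

def best_fit_transform(P, Q):
    """Return R, t minimizing sum ||Q_i - (R P_i + t)||^2 for paired points."""
    c_p, c_q = P.mean(axis=0), Q.mean(axis=0)
    M = (P - c_p).T @ (Q - c_q) / len(P)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(M)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_q - R @ c_p
    return R, t
```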
The G-ICP algorithm [26] optimizes the traditional ICP algorithm: a Gaussian probability model is attached to each point and its neighborhood in point clouds P and Q, so that the cost function accounts for the local distributions of both clouds. Its cost function can be described as follows:
$$d_i^{(T)} = Q_i - T P_i,$$

$$T = \arg\min_{T} \sum_{i} {d_i^{(T)}}^{T} \left( C_i^{Q} + T\, C_i^{P}\, T^{T} \right)^{-1} d_i^{(T)},$$
where $C_i^{P}$ and $C_i^{Q}$ represent the covariance matrices of the local neighborhoods of point clouds P and Q. The G-ICP algorithm adopts the corresponding model closest to the distribution, which leads to a higher registration accuracy than that of the traditional ICP algorithm. In addition, NVIDIA CUDA is used to accelerate the G-ICP algorithm and ensure a high running speed. As illustrated in Figure 8a, the yaw angles of the point clouds of the two aircraft models differ by 30°; Figure 8b displays the aircraft point clouds after registration by the G-ICP algorithm.
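Recent Open3D releases expose a generalized (plane-to-plane) ICP variant, so the fine-registration step might be sketched as below, seeded by the coarse result; the distance threshold and iteration count are assumptions, and the CUDA acceleration used by the authors is not shown.

```python
# Fine registration with a generalized ICP variant, seeded by the coarse result
# (Open3D; an illustrative sketch, not the authors' CUDA-accelerated implementation).
import open3d as o3d

def fine_register(source, target, init_transform, max_dist=15.0):
    # Point neighborhood covariances are estimated internally when absent
    # (an assumption about the Open3D version in use).
    result = o3d.pipelines.registration.registration_generalized_icp(
        source, target, max_dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationForGeneralizedICP(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    return result.transformation
```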
For a traditional registration algorithm such as ICP, the measurement accuracy of the single-axis angle is higher, but because the rotation center of the biaxial turntable and the coordinate system of the depth camera are not consistent, the angles of the other two axes are strongly coupled. Moreover, when the angles of the model point clouds differ too much (>30°), the algorithm may converge to a local optimum, resulting in a large calculation deviation. In addition, the horizontal alignment between the depth camera's plane and the biaxial turntable also affects the measurement results. To eliminate the errors caused by these issues, PCA was used to correct the model point cloud, and the accuracy and stability of the calculation results were improved by combining the feature descriptors with the G-ICP registration method.

3. Experiments and Results

As illustrated in Figure 9, the depth camera scans the point cloud data of the model and transmits these data to the computing server. The computing server is responsible for point cloud preprocessing, point cloud registration, and point cloud preservation, and outputs the calculated results to the host computer. The host computer has a user-operated interface for the depth camera and computing server.
All algorithms in the computing server were run in Visual Studio 2022 on Ubuntu 20.04. Table 2 presents the detailed hardware configuration of the computing server.
We used a Helios2 depth camera from Lucid Vision Labs. The depth camera operates on the principle of continuous wave modulation. As illustrated in Figure 10, the vertical-cavity surface-emitting laser emits modulated light, and the distance is calculated according to the phase shift between the emitted light and reflected light. Table 3 presents the detailed technical parameters of the Helios2 depth camera.
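As background (not stated explicitly in the paper), a continuous-wave ToF camera of this kind typically recovers distance from the measured phase shift via the standard relation

$$d = \frac{c}{2} \cdot \frac{\Delta\varphi}{2\pi f_{\mathrm{mod}}},$$

where $c$ is the speed of light, $\Delta\varphi$ is the phase shift between the emitted and reflected light, and $f_{\mathrm{mod}}$ is the modulation frequency.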
The measurement scene is presented in Figure 11. The aircraft model was set up on a biaxial turntable, and the accuracy of the triaxial angular position of the biaxial turntable was less than 0.002°. The biaxial turntable controlled the precise rotation of the aircraft model and obtained accurate attitude data.
To verify the validity of our choice of coarse registration method, RANSAC coarse registration was performed with each combination of three key point algorithms and three feature description algorithms. The data in Table 4 are the average values obtained from point cloud tests on three different aircraft models, where the score represents the degree of overlap between two clouds (the closer the value is to zero, the higher the registration accuracy). The ISS key point algorithm has an obvious advantage, and the coarse registration performs best when ISS is combined with the SHOT feature description.
In the experiment, we used several depth cameras to scan the aircraft model. A hardware-triggered mode alternates the exposures of the depth cameras to avoid mutual interference. We used the biaxial turntable to rotate the yaw angle of the aircraft from −40° to 40°. Because the angle accuracy of the biaxial turntable was higher than that of our measurement system, we took the angle data of the biaxial turntable as the ground truth in the experiment. The triaxial angles obtained by the measuring system were subtracted from the corresponding angles of the biaxial turntable to obtain the triaxial angle errors of the measuring system, which are provided in Figure 12a–c.
As can be seen in Figure 12, when the rotation angle of the model was large, our registration algorithm did not fall into local optimal solutions. Moreover, the angles of the different axes did not couple with one another. The standard deviation is an important measure of data dispersion and can be calculated via the following formula:
$$\sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( \theta_i - \bar{\theta} \right)^{2}},$$

where $\theta_i$ represents the angle measurement error and $\bar{\theta}$ represents the average value of the angle measurement errors.
The standard deviations of the measurement errors of the yaw angle, pitch angle, and roll angle of the model were 0.0507°, 0.0583°, and 0.0628°, respectively. The experiment thus verifies the feasibility of our proposed pose measurement system.
For the three aircraft model point clouds, we compared several different registration methods with our proposed method. The registration accuracy results were grouped by method, averaged, and listed in Table 5.
The ISS key point extraction method exhibits good repeatability with regard to the point cloud data of an aircraft model after fusion, and can suitably cooperate with other feature descriptors. The SHOT feature descriptor encodes the geometric attributes of feature points and their neighborhood points via a local reference frame (LRF), which can adequately describe situations in which the wing or other normal directions change greatly in point cloud models of large-sized aircraft. A RANSAC coarse registration algorithm was used to remove the mismatched points and obtain the correctly matched feature points. Good initial pose estimation improves the accuracy and speed of the point cloud precision registration algorithm.
The accuracy of the ICP registration result depends on the accuracy of the coarse registration result; if the coarse registration error is large, ICP cannot correct it on its own. Therefore, for point cloud registration with large pose changes, accurate coarse registration results are important. As seen in Table 5, our method has advantages in both speed and accuracy and maintains a fast calculation speed and high registration accuracy for different models and point cloud densities. Although deep learning algorithms such as RPM-Net [27], LM3D [28], and DeepVCP [29] can achieve better registration accuracy, a single run takes a long time; therefore, these methods are not suitable for application scenarios requiring high-speed measurement.

4. Conclusions

In this paper, we propose a contactless attitude estimation system for large aircraft. In our experiment, we used several depth cameras to measure the attitude of an aircraft model on a biaxial turntable, achieving an attitude measurement accuracy of approximately 0.05°. The experimental results reveal that the proposed system has good application prospects in high-speed and high-precision model pose measurement scenarios. Obtaining accurate attitude information for the model is a prerequisite for spacecraft connection and docking, the capture of faulty spacecraft, and the removal of space debris. In aircraft design, the accurate attitude measurement of aircraft models also provides important basic information for aerodynamic analysis. The limitation of this system is that the measurement speed is restricted by the computing power of the hardware. In addition, the model attitude measurement method proposed in this paper is suitable primarily for large-scale models; for small models, the limited ranging accuracy of ToF cameras reduces the attitude-solving accuracy to a certain extent. Under the hardware configuration used in this paper, the realized system measurement frequency is about 10 Hz. The number of threads in the CPU and the number of CUDA cores in the GPU are the main factors affecting the measurement frequency of the system. In follow-up work, the algorithm flow should be further optimized without affecting the accuracy, so as to increase the frequency of the measurement system. Considering the obvious angular and convex features on the surface of the aircraft model, local feature descriptors are used in this paper. Global feature descriptors and feature descriptors based on deep learning also exist; in future work, the construction of such descriptors should be attempted in order to describe the target features in a more detailed and comprehensive way and to further improve the accuracy of point cloud registration.

Author Contributions

Conceptualization, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, J.H. and S.S.; project administration, K.L. and T.H.; funding acquisition, T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2017YFF0204901, and the Zhejiang Provincial Natural Science Foundation of China, grant number Y19F050039.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, X.; Ma, S.; Lin, Q. Hybrid pose/tension control based on stiffness optimization of cable-driven parallel mechanism in wind tunnel test. In Proceedings of the 2016 2nd International Conference on Control, Automation and Robotics (ICCAR), Hong Kong, China, 28–30 April 2016.
  2. Engler, R.H.; Hartmann, K.; Schulze, B. Aerodynamic assessment of an optical pressure measurement system (OPMS) by comparison with conventional pressure measurements in a high speed wind tunnel. In Proceedings of the ICIASF’91 Record, International Congress on Instrumentation in Aerospace Simulation Facilities, Rockville, MD, USA, 27–31 October 1991.
  3. Liu, S.; Feng, Y.; Shen, K.; Wang, Y.; Chen, S. An RGB-D-Based Cross-Field of View Pose Estimation System for a Free Flight Target in a Wind Tunnel. Complexity 2018, 2018, 7358491.
  4. Han, S.; Liu, X.; Han, X.; Wang, G.; Wu, S. Visual sorting of express parcels based on multi-task deep learning. Sensors 2020, 20, 6785.
  5. Wu, Q.; Li, M.; Qi, X.; Hu, Y.; Li, B.; Zhang, J. Coordinated control of a dual-arm robot for surgical instrument sorting tasks. Robot. Auton. Syst. 2019, 112, 1–12.
  6. Bosse, A.B.; Barnds, W.J.; Brown, M.A.; Creamer, N.G.; Feerst, A.; Henshaw, C.G.; Hope, A.S.; Kelm, B.E.; Klein, P.A.; Pipitone, F.; et al. SUMO: Spacecraft for the universal modification of orbits. In Proceedings of the SPIE Conference + Exhibitions Spacecraft Platforms and Infrastructure, Orlando, FL, USA, 12–16 April 2004; Volume 5419.
  7. Toro, K.G. Technology Review of Wind-Tunnel Angle Measurement. In Proceedings of the International Symposium on Strain-Gauge Balances, Cologne, Germany, 14–17 May 2018. No. NF1676L-29113.
  8. Wong, D.T. Evaluation of the Prototype Dual-Axis Wall Attitude Measurement Sensor; No. NAS 1.15: 109056; National Aeronautics and Space Administration, Langley Research Center: Hampton, VA, USA, 1994.
  9. Crawford, B. Angle measurement system (AMS) for establishing model pitch and roll zero, and performing single axis angle comparisons. In Proceedings of the 45th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 8–11 January 2007.
  10. Lawrence, R.; Kahng, S.; Adcock, E.; Soto, H.; Culliton, W.; Jordan, T.; Hernandez, C.; Gorton, S. MEMS sensor system development at NASA Langley Research Center for wind tunnel applications. In Proceedings of the 40th AIAA Aerospace Sciences Meeting & Exhibit, Reno, NV, USA, 14–17 January 2002.
  11. Rueger, M. The use of an inertial-gyro system for model attitude measurement in a blow-down wind tunnel. In Proceedings of the 2005 US Air Force T&E Days, Nashville, TN, USA, 6–8 December 2005; p. 7643.
  12. Rueger, M.; Lafferty, J. Demonstration of a gyro-based model attitude measurement system at the AEDC tunnel 9 test facility. In Proceedings of the 38th Fluid Dynamics Conference and Exhibit, Seattle, WA, USA, 23–26 June 2008.
  13. Jia, Z.; Ma, X.; Liu, W.; Lu, W.; Li, X.; Chen, L.; Wang, Z.; Cui, X. Pose measurement method and experiments for high-speed rolling targets in a wind tunnel. Sensors 2014, 14, 23933–23953.
  14. Watzlavick, R.; Crowder, J.; Wright, F. Comparison of model attitude systems—Active target photogrammetry, precision accelerometer, and laser interferometer. In Proceedings of the Advanced Measurement and Ground Testing Conference, New Orleans, LA, USA, 17–20 June 1996.
  15. Bomar, B.W.; Goethert, W.H.; Belz, R.A.; Bentley, H.T., III. The Development of a Displacement Interferometer for Model Deflection Measurements; Arnold Engineering Development Center: Arnold AFB, Tullahoma, TN, USA, 1977.
  16. Patil, A.K.; Balasubramanyam, A.; Ryu, J.Y.; BN, P.K.; Chakravarthi, B.; Chai, Y.H. Fusion of multiple lidars and inertial sensors for the real-time pose tracking of human motion. Sensors 2020, 20, 5342.
  17. Hong, B.; Jia, A.; Hong, Y.; Li, X.; Gao, J.; Qu, Y. Online extraction of pose information of 3D zigzag-line welding seams for welding seam tracking. Sensors 2021, 21, 375.
  18. Yang, X.; Huang, Y.; Zhang, Q. Automatic stockpile extraction and measurement using 3D point cloud and multi-scale directional curvature. Remote Sens. 2020, 12, 960.
  19. Litvak, Y.; Biess, A.; Bar-Hillel, A. Learning pose estimation for high-precision robotic assembly using simulated depth images. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
  20. Zhu, W.; Mu, J.; Shao, C.; Hu, J.; Wang, B.; Wen, Z.; Han, F.; Li, S. System design for pose determination of spacecraft using time-of-flight sensors. Space Sci. Technol. 2022, 2022, 763198.
  21. Holland, S.M. Principal Components Analysis (PCA); Department of Geology, University of Georgia: Athens, GA, USA, 2008.
  22. Zhong, Y. Intrinsic shape signatures: A shape descriptor for 3D object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 29 September–2 October 2009.
  23. Salti, S.; Tombari, F.; Di Stefano, L. SHOT: Unique signatures of histograms for surface and texture description. Comput. Vis. Image Underst. 2014, 125, 251–264.
  24. Quan, S.; Yang, J. Compatibility-guided sampling consensus for 3-D point cloud registration. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7380–7392.
  25. Besl, P.J.; McKay, N.D. Method for Registration of 3-D Shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; SPIE: Bellingham, WA, USA, 1992; Volume 1611.
  26. Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. Robot. Sci. Syst. 2009, 2, 435.
  27. Yew, Z.J.; Lee, G.H. RPM-Net: Robust point matching using learned features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
  28. Gojcic, Z.; Zhou, C.; Wegner, J.D.; Guibas, L.J.; Birdal, T. Learning multiview 3D point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
  29. Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. DeepVCP: An end-to-end deep neural network for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
Figure 1. Overview of the proposed large-scale aircraft pose estimation system.
Figure 2. Process of point cloud filtering.
Figure 3. (a) Original point cloud captured by the depth camera; (b) model point cloud after pass-through filtering; (c) model point cloud after radius outlier removal; (d) model point cloud after down-sampling.
Figure 4. Original point cloud and point cloud after PCA principal direction correction.
Figure 5. Depth camera calibration scenario.
Figure 6. Calibration plate point clouds captured by different cameras.
Figure 7. Model point cloud using the ISS key point extraction algorithm.
Figure 8. (a) Aircraft point cloud before registration; (b) aircraft point cloud after registration.
Figure 9. Hardware diagram of the measurement system.
Figure 10. Working principle diagram of ToF camera based on continuous wave modulation.
Figure 11. Aircraft pose measurement scenario.
Figure 12. (a) Yaw angle measurement error; (b) pitch angle measurement error; (c) roll angle measurement error.
Table 1. Filtering parameters and point cloud scale of each filtering stage.

Filtering Method          Point Cloud Size   Filtering Parameter
Original point cloud      256,796            /
Pass-through filtering    16,899             X limits (−800, 300); Y limits (−400, 500); Z limits (1700, 2400)
Radius outlier removal    15,764             Take the close points of the average = 3.0; threshold for the number of adjacent points = 1
Down-sampling             4,321              Grid size = 10 mm × 10 mm × 10 mm
Table 2. Computing server hardware configuration.

System Hardware    Specification
CPU                Intel Xeon Gold 5320, 2.2 GHz
RAM size           256 GB
RAM type           RDIMM, 3200 MT/s
Hard disk speed    2500 MB/s
GPU                NVIDIA Ampere A30
Table 3. Helios2 depth camera specifications.

Specification         Indicator
Resolution            640 × 480 pixels
Frame rate            30 fps
Working distance      0.3–8.33 m
Field of view         69° × 51°
Output information    XYZ coordinates and intensity
Output format         .ply
Table 4. RANSAC coarse registration results with different feature descriptors and key point extraction methods (average point cloud size: 4321).

Method           Time (ms)   Score
ISS + SHOT       28.712      1.066
ISS + PFH        41.312      1.895
ISS + FPFH       37.386      2.379
NARF + SHOT      257.903     17.570
NARF + PFH       298.447     24.796
NARF + FPFH      285.421     28.633
HARRIS + SHOT    145.142     7.296
HARRIS + PFH     136.381     7.731
HARRIS + FPFH    177.272     5.427
Table 5. Comparison of registration methods (average point cloud size: 3851).

Method     Time (ms)   Mean Angle Error (°)
OURS       67.591      0.068
ICP        139.640     0.863
VG-ICP     116.673     0.173
GO-ICP     362.842     0.223
NDT        311.973     2.380
RPM-Net    1183.240    0.018
FGR        849.924     0.127
LM3D       2671.333    0.012
DEEPVCP    7893.679    0.025

Share and Cite

MDPI and ACS Style

Yang, Y.; Sun, S.; Huang, J.; Huang, T.; Liu, K. Large-Scale Aircraft Pose Estimation System Based on Depth Cameras. Appl. Sci. 2023, 13, 3736. https://doi.org/10.3390/app13063736


