Article

Three-Dimensional Measurement of Full Profile of Steel Rail Cross-Section Based on Line-Structured Light

College of Automation, Chengdu University of Information Technology, Chengdu 610103, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3194; https://doi.org/10.3390/electronics12143194
Submission received: 27 June 2023 / Revised: 20 July 2023 / Accepted: 20 July 2023 / Published: 24 July 2023
(This article belongs to the Special Issue Applications of Computer Vision, Volume II)

Abstract

The wear condition of steel rails directly affects the safety of railway operations. Line-structured-light visual measurement technology is used for online measurement of rail wear due to its ability to achieve high-precision dynamic measurements. However, in dynamic measurements, the random deviation of the measurement plane caused by the vibration of the railcar results in changes in the actual measured rail profile relative to its cross-sectional profile, ultimately leading to measurement deviations. To address these issues, this paper proposes a method for three-dimensional measurement of steel rail cross-sectional profiles based on binocular line-structured light. Firstly, calibrated dual cameras are used to simultaneously capture the profiles of both sides of the steel rail in the same world coordinate system, forming the complete rail profile. Then, considering that the wear at the rail waist is zero in actual operation, the coordinates of the circle centers on both sides of the rail waist are connected to form feature vectors. The measured steel rail profile is aligned with the corresponding feature vectors of the standard steel rail model to achieve initial registration; next, the rail profile that has completed the preliminary matching is accurately matched with the target model based on the iterative closest point (ICP) algorithm. Finally, by comparing the projected complete rail profile onto the rail cross-sectional plane with the standard 3D rail model, the amount of wear on the railhead can be obtained. The experimental results indicate that the proposed line-structured-light measurement method for the complete rail profile, when compared to the measurements obtained from the rail wear gauge, exhibits smaller mean absolute deviation (MAD) and root mean square error (RMSE) for both the vertical and lateral dimensions.
The MAD values for the vertical and lateral measurements are 0.009 mm and 0.039 mm, respectively, while the RMSE values are 0.011 mm and 0.048 mm. The MAD and RMSE values for the vertical and lateral wear measurements are lower than those obtained using the standard two-dimensional rail profile measurement method. Furthermore, it effectively eliminates the impact of vibrations during the dynamic measurement process, showcasing its practical engineering application value.

1. Introduction

The surface wear condition of a rail directly affects the stability and safety of train operation. With the continuous development of railway transportation towards heavier loads and higher speeds, the surface wear of rails exhibits characteristics of shorter cycles and more severe wear. Therefore, higher requirements are placed on the accuracy and efficiency of online rail profile measurement. Currently, railway maintenance departments primarily rely on measuring the cross-sectional profile of rails to assess their wear condition. Rail wear measurement methods are mainly divided into two categories: contact-based and non-contact-based methods [1]. Contact-based measurement methods include mechanical gauges such as the P110B and SKM by Vogel and Plötscher from Germany, as well as the Miniprof profilometer by Greenwood Engineering from Denmark [2]. This type of equipment is technologically mature but has the disadvantages of low measurement efficiency and difficult maintenance. Compared to contact-based measurement methods, non-contact measurement methods have the advantages of fast measurement and high accuracy [3,4]. They mainly utilize laser displacement sensors and structured-light vision measurement. The laser displacement sensor is based on the principle of laser triangulation: the position of the rail profile can be determined from the geometric relationship between the laser and the camera and the imaging position on the linear CCD array; examples include the laser displacement sensors developed by Optimess in Switzerland and portable laser track inspection instruments. This type of equipment offers simple operation and rapid measurement. However, the obtained rail cross-sectional profiles may have sparse sampling points, and the measurement data can be affected by ambient light interference. Additionally, these devices can be expensive [5].
Structured-light photogrammetry extracts rail cross-sectional profiles from captured images of line-structured light on the rail surface. This method involves system calibration, image extraction, and coordinate transformation [6,7]. Examples of such measurement systems include KLD Labs' ORIAN™ (optical rail inspection and analysis) system and the rail full-profile onboard measurement system by MERMEC. These measurement methods provide more detailed rail cross-sectional profile data and higher flexibility in system setup, and they are relatively cost-effective [8].
To achieve dynamic measurement of the full cross-sectional profile of a rail using line-structured light, the use of binocular line-structured-light technology is a prevalent approach. In order to dynamically obtain rail cross-sectional profile data, the line-structured-light measurement system is typically installed on the underside of a rail inspection vehicle or grinding vehicle, positioned close to the inner side of the rail. This allows for continuous acquisition of the cross-sectional profile of one side of the rail [9,10]. However, the railhead deforms under long-term train loading, and detecting the profile of only one side cannot accurately assess the rail's condition. Moreover, the vibrations during the vehicle's motion cause the plane of the line-structured-light measurement to deviate from the vertical cross-section of the rail. To address the vibration issue, researchers have proposed various methods, including dynamic vibration correction based on multiple line-structured lights [11,12,13], estimation of vibration deviation by measuring the variation in the rail profile features compared to the standard rail profile [14,15,16], and combining multiple rail cross-sections for three-dimensional global registration to measure wear [17]. The multi-line-structured-light measurement method proposed by Wang Chao and Sun Junhua [12,13] involves extracting feature points from the measured cross-section to calculate an auxiliary projection plane. The distorted rail section is then projected onto the auxiliary plane to correct the deviation; however, inaccurate feature point extraction may occur due to the presence of outliers in the measured cross-section and inherent geometric distortions in the rail profile. Zhan Dong [16] proposed a vehicle multiple degrees-of-freedom vibration decoupling and compensation method based on orthogonal decomposition.
It corrects the deviations caused by vehicle vibrations by considering the changes in the rail waist profile; however, the rail profile after the deviation correction is not the cross-sectional profile that is perpendicular to the radial direction of the rail, which can still lead to measurement errors. Yang Yue [17] proposed merging multiple rail sections into a three-dimensional surface of the rail and aligning the measured three-dimensional profile with a standard rail model using global registration methods to calculate rail wear; however, merging the sections into a three-dimensional profile can introduce deviations in the radial direction of the rail.
To address the issue of low measurement accuracy in rail section profiles caused by the deviation of the line-structured-light measurement plane due to vehicle vibrations, this paper proposes a three-dimensional measurement of the full profile of the rail cross-section based on line-structured light. The proposed method begins by establishing a measurement model based on dual-camera vision with line-structured light and calibrating the line-structured-light measurement plane. Next, a two-step railhead profile measurement method is introduced, which starts with a coarse measurement and gradually refines the measurement to obtain the complete rail profile. The measured rail profile is then compared with a standard rail 3D point cloud model in three-dimensional space to quantify the rail wear. Finally, the line-structured-light dual-camera rail full-profile vision measurement system is tested and validated at a railway infrastructure maintenance base to assess the measurement accuracy of the proposed method.

2. Rail Cross-Section Full-Profile Measurement System Based on Binocular Line-Structured Light

2.1. Binocular Measurement System and Model

The structure of the binocular-structured light-based steel rail cross-section full-profile measurement system is shown in Figure 1. The system consists of a measurement unit comprising two sets of line lasers and CCD cameras, a data switch, an odometer, and a data processing computer. The structured-light vision acquisition front-end uses a Basler acA1600-120uc CCD camera with an image resolution of 1600 × 1200. The lens is matched to a 1/1.8-inch sensor format and a 5-megapixel resolution, which guarantees image accuracy. During the inspection, the line lasers are projected vertically onto the surface of the steel rail, forming a curve of full-profile light stripes on the rail section. The odometer triggers the cameras at a fixed frequency to capture the images of the rail profile light stripes. The computer, using an established binocular vision imaging model based on structured light, reconstructs the full profile of the rail section from the captured light stripe images.
The binocular-structured-light vision system model is shown in Figure 2. $O_wX_wY_wZ_w$ is the world coordinate system; $O_cX_cY_cZ_c$ and $O_c'X_c'Y_c'Z_c'$ are the two camera coordinate systems; $o_cxy$ and $o_c'x'y'$ are the imaging-plane coordinate systems of the cameras; $o_1uv$ and $o_1'u'v'$ are the pixel coordinate systems of the imaging planes; $[R\ t]$ and $[R'\ t']$ are the transformation matrices between each camera coordinate system and the world coordinate system. Suppose point $P_M = (x_w, y_w, z_w)^T$ is an intersection point $M$ between the light plane and the surface of the steel rail in the world coordinate system; then $P_c = (x_c, y_c, z_c)^T$ and $P_c' = (x_c', y_c', z_c')^T$ are the coordinates of point $M$ in the two camera coordinate systems, and $p_m = (u_m, v_m)^T$ and $p_m' = (u_m', v_m')^T$ are the pixel coordinates of the point as imaged by each camera. Taking the $O_cX_cY_cZ_c$ camera as an example and modeling it as the usual pinhole camera, the relationship between a 3D point $M$ and its image projection $m$ is given by
$s\,\tilde{p}_m = A P_c = A[R\ t]\tilde{P}_M$
where $s$ is the scale factor, $A$ is the matrix of the camera's intrinsic parameters, $R$ and $t$ are the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and $\tilde{p}_m$ and $\tilde{P}_M$ are the homogeneous coordinates of $p_m$ and $P_M$. In addition, $P_M$ satisfies
$a x_c + b y_c + c z_c + d = 0$
where $a$, $b$, $c$, and $d$ are the parameters of the light plane in the camera coordinate system.
The light plane in the world coordinate system satisfies
$Z_W = 0$
Referring to (1) to (3), the binocular-structured-light vision model can be represented as
$\begin{cases} s\,\tilde{p}_m = A[R\ t]\tilde{P}_M \\ a x_c + b y_c + c z_c + d = 0 \\ s'\,\tilde{p}_m' = A'[R'\ t']\tilde{P}_M \\ a' x_c' + b' y_c' + c' z_c' + d' = 0 \\ Z_W = 0 \end{cases}$
where the cameras' intrinsic parameters $A$ and $A'$ can be obtained using the chessboard calibration method [18]. If the external parameters $[R\ t]$ and $[R'\ t']$ of the transformation from the line-structured-light plane $Z_W = 0$ in the world coordinate system to the $O_cX_cY_cZ_c$ and $O_c'X_c'Y_c'Z_c'$ camera coordinate systems can be obtained, the unique coordinates of the complete rail profile $P_M$ on the line-structured-light plane in the world coordinate system can be determined.
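As a concrete illustration of this model, once the intrinsic matrix and the light-plane parameters are calibrated, a stripe pixel can be back-projected onto the light plane in closed form. The following Python sketch is illustrative only; the function name and the numbers in the example are mine, not from the paper:

```python
import numpy as np

def backproject_to_light_plane(p_m, A, plane):
    """Intersect the camera ray through pixel p_m = (u, v) with the light
    plane a*x + b*y + c*z + d = 0, both expressed in camera coordinates.
    Returns the 3D point P_c on the light plane."""
    a, b, c, d = plane
    # Points on the viewing ray satisfy P_c = s * A^-1 * [u, v, 1]^T
    ray = np.linalg.inv(np.asarray(A, float)) @ np.array([p_m[0], p_m[1], 1.0])
    # Substitute the ray into the plane equation and solve for the scale s
    s = -d / (np.array([a, b, c]) @ ray)
    return s * ray
```

This is the per-pixel computation implied by combining Equations (1) and (2); the full system additionally converts the result into the common world frame via $[R\ t]$.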

2.2. Determination of the Measurement Plane for Line-Structured Light

To achieve simultaneous calibration of the cameras on both sides of the rail, the checkerboard calibration board is adjusted so that it is within the common field of view of the two cameras. Then, the line projectors on both sides are adjusted to align their light planes, as shown in Figure 3. The cameras and projectors are symmetrically distributed on both sides, and the angle between the camera optical axis and the light plane is approximately $\beta = 60^\circ$. The vertical distance $d$ between the projectors and the cameras is
$d = \dfrac{fH}{h}\sin\beta$
where $f$ is the focal length of the camera, $H$ is the height of the target, and $h$ is the optical size of the camera sensor.
The target plane is fixed in the common viewing area of the cameras on both sides. An image of the target plane is captured, denoted as image $F_i$, and an image of the intersection between the target plane and the line-structured-light plane is captured, denoted as image $F_i'$. Image $F_i$ is used to extract corner points, while image $F_i'$ is used to extract the feature points formed by the intersection of the line-structured-light plane and the calibration board; the images captured by the cameras on both sides are shown in Figure 4. The target plane is then moved $m$ ($m \geq 3$) times, and the above steps are repeated to obtain $m$ pairs of images.
The chessboard calibration method [18] is used to solve the external parameters $[R_i, t_i]$ and $[R_i', t_i']$ that transform each captured target plane from the world coordinate system $Z_W = 0$ to the coordinate systems of the cameras on both sides; the origin of the world coordinate system is set to the top-left corner point of the target plane. The feature points formed by the intersection of the light stripes and the chessboard target plane are shown in Figure 5 as points $P_1^i$ and $P_2^i$.
In the figure, points $P_1$, $P_2$, $P_3$, and $P_4$ represent the four corner points of the target plane; in the image, the corresponding corner points are denoted as $P_{m1}$, $P_{m2}$, $P_{m3}$, and $P_{m4}$. By moving the target plane $m$ times, the line-structured-light plane generates a total of $2m$ feature points in the coordinate systems of the cameras on both sides. The coordinates of feature points $P_1^i$, $P_2^i$ in the camera coordinate systems are as follows:
$P_{c1}^i = [R_i\ t_i]P_1^i, \quad P_{c2}^i = [R_i\ t_i]P_2^i, \quad P_{c1}'^i = [R_i'\ t_i']P_1^i, \quad P_{c2}'^i = [R_i'\ t_i']P_2^i, \qquad i = 1, 2, \ldots, m$
In the equation, points $P_1^i$ and $P_2^i$ are unknown and can be obtained by processing images $F_i$ and $F_i'$. In image $F_i'$, the pixel coordinates $(u_j, v_j)$, $j = 1, 2, \ldots, n$ that correspond to the light stripe satisfy the line equation
$v = mu + c$
The pixel points $(u_j, v_j)$, $j = 1, 2, \ldots, n$ occupied by the light stripe can be extracted from images $F_i$ and $F_i'$ using the differential method. The line parameters of the light stripe in the image are then solved by least squares:
$m = \dfrac{n\sum u_j v_j - \sum u_j \sum v_j}{n\sum u_j^2 - \left(\sum u_j\right)^2}, \qquad c = \dfrac{\sum u_j^2 \sum v_j - \sum u_j \sum u_j v_j}{n\sum u_j^2 - \left(\sum u_j\right)^2}$
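Equation (8) is the closed-form normal-equation solution for the stripe line; a direct transcription in Python (the function name is mine):

```python
import numpy as np

def fit_stripe_line(u, v):
    """Closed-form least-squares fit of v = m*u + c to stripe pixels,
    using the normal-equation sums of Equation (8)."""
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    n = len(u)
    denom = n * np.sum(u ** 2) - np.sum(u) ** 2
    m = (n * np.sum(u * v) - np.sum(u) * np.sum(v)) / denom
    c = (np.sum(u ** 2) * np.sum(v) - np.sum(u) * np.sum(u * v)) / denom
    return m, c
```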
The subpixel coordinates of the corner points $P_{m1}$, $P_{m2}$, $P_{m3}$, and $P_{m4}$ can be obtained from image $F_i$ [19]. Using line intersection calculations, the intersection points $P_{m1}^i$ and $P_{m2}^i$ can be determined between the lines through corner points $P_{m1}P_{m4}$ and $P_{m2}P_{m3}$ in the image and the fitted line of the light stripe. The chessboard target plane has a square size of 15 mm; according to the projective transformation and the principle of invariant ratios [20], the points $P_{m1}^i$ and $P_{m2}^i$ in the image and their corresponding feature points $P_1^i$ and $P_2^i$ in the world coordinate system satisfy the following relationship:
$\dfrac{P_1^i P_1}{P_1^i P_4} = \dfrac{P_{m1}^i P_{m1}}{P_{m1}^i P_{m4}}, \qquad \dfrac{P_2^i P_2}{P_2^i P_3} = \dfrac{P_{m2}^i P_{m2}}{P_{m2}^i P_{m3}}$
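Under the stated ratio invariance, a feature point can be located on the known world edge from its measured image ratio. A Python sketch follows (illustrative; the function name is mine, and the world point is parameterized as $P_{\text{start}} + t\,(P_{\text{end}} - P_{\text{start}})$, with a measured ratio $r$ giving $t = r/(1+r)$):

```python
import numpy as np

def stripe_point_on_edge(pm_i, pm_start, pm_end, P_start, P_end):
    """Recover the world position of a stripe/edge intersection from its
    image-line ratio, per Equation (9): the distance ratio measured in the
    image is taken to equal the ratio along the world edge."""
    pm_i, pm_start, pm_end = (np.asarray(p, float) for p in (pm_i, pm_start, pm_end))
    P_start, P_end = np.asarray(P_start, float), np.asarray(P_end, float)
    r = np.linalg.norm(pm_i - pm_start) / np.linalg.norm(pm_i - pm_end)
    t = r / (1.0 + r)   # position along the edge: P = P_start + t*(P_end - P_start)
    return P_start + t * (P_end - P_start)
```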
From (6) to (9), by applying the singular value decomposition (SVD) method to the feature points $P_{c1}^i, P_{c2}^i$ and $P_{c1}'^i, P_{c2}'^i$, $i = 1, 2, \ldots, m$, we can individually fit the equations of the line-structured-light plane in the coordinate systems of the two cameras:
$a x_c + b y_c + c z_c + d = 0, \qquad a' x_c' + b' y_c' + c' z_c' + d' = 0$
Based on Equations (3) and (10), and using the Rodrigues transformation [21], we can calculate the external parameter matrices $[R\ t]$ and $[R'\ t']$, which represent the transformation from the line-structured-light plane to the coordinate systems of the two cameras.
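The plane fit and the Rodrigues step can be sketched in Python as follows (illustrative only; function names are mine). The fit takes the right singular vector with the smallest singular value of the centered feature points as the plane normal, and the rotation aligns the normal of the $Z_W = 0$ plane with that fitted normal:

```python
import numpy as np

def fit_plane_svd(points):
    """Fit a*x + b*y + c*z + d = 0 to an N x 3 point set: the plane normal
    is the right singular vector with the smallest singular value of the
    centered data."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]                     # (a, b, c), unit length
    return n, -n @ centroid        # normal and offset d

def plane_to_camera_rotation(n):
    """Rodrigues rotation taking e_z (the normal of the Z_W = 0 plane)
    onto the fitted plane normal n."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    ez = np.array([0.0, 0.0, 1.0])
    k = np.cross(ez, n)
    s, c = np.linalg.norm(k), ez @ n
    if s < 1e-12:                  # normals already (anti-)parallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = k / s
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    theta = np.arctan2(s, c)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
```

The translation component of $[R\ t]$ would additionally place the world origin on the fitted plane; only the rotation is sketched here.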

3. Full-Profile Wear Measurement of Rail Section Based on Binocular Line-Structured Light

3.1. Extraction of Full Profile of Steel Rail Section

Rail geometry parameters such as the rail height, rail waist height, rail waist width, and rail waist inclination can be obtained by means of distance-measuring equipment, while the curve shape information is obtained using the laser scanner. First, the contour images of the inner and outer sides of the steel rail are captured. Then, the subpixel coordinates of the center points of the steel rail contour light stripes are extracted [22,23]. Based on the established dual-camera vision model and the calibrated external parameters of the line-structured-light plane, the steel rail contour points on the line-structured-light plane are calculated using Equation (11):
$\tilde{P}_M = s[R\ t]^{-1}A^{-1}\tilde{p}_m, \qquad \tilde{P}_M = s'[R'\ t']^{-1}A'^{-1}\tilde{p}_m'$
These equations convert the subpixel coordinates of the center points of the rail profile light stripes in the image into the coordinates of the actual rail profile points on the measured light plane in the world coordinate system. The profiles captured on the two sides of the rail are labeled "right" and "left", respectively. Each image then undergoes median filtering and subpixel processing to obtain the contour of one side of the rail; on this basis, the complete contour of the rail on both sides is merged, as shown in Figure 6.

3.2. Measurement of Railhead Contour Based on Two-Step Method

According to the "Maintenance Rules for Ballastless Track of High-speed Railway" [24], the vertical wear of the rail is measured at a width of one-third of the rail's top surface, and the side wear of the rail is measured 16 mm below the rail's running surface. As shown in Figure 7, $W_V$ represents the vertical wear of the rail, and $W_H$ represents the side wear of the rail. In general, by comparing the measured rail profile with the standard 2D profile, we can obtain $W_V$ and $W_H$. However, the visual measurement system is installed on a moving train, and the vibrations of the train body cause random deflection of the line-structured-light measurement plane.
As shown in Figure 8, a yaw angle deviation around the Y-axis, denoted as α , results in the measured rail profile being horizontally stretched compared to the standard rail profile. Similarly, a pitch angle deviation around the X-axis, denoted as β , leads to the measured rail profile being vertically stretched compared to the standard profile. Directly comparing the measured rail profile obtained under the train’s vibration with the standard 2D rail profile will introduce measurement deviation. Therefore, the two-step method for measuring the railhead profile is adopted, which involves comparing the measured full rail profile with a standard 3D steel rail point cloud model in three-dimensional space. This approach helps to eliminate the influence of vibrations during the measurement process. Please refer to Section 3.2.1 and Section 3.2.2 for detailed procedures.

3.2.1. Step 1: Initial Alignment of Rail Waist Contour Based on Rail Waist Feature Vectors

Due to the random deviation of the line-structured-light plane, the initial position of the measured steel rail profile in three-dimensional space differs significantly from the standard rail model. This misalignment prevents accurate alignment, requiring adjustment of the initial position of the measured rail profile to achieve initial registration with the standard rail model. By analysis, the complete rail profile can be divided into the railhead and rail waist sections, with the rail waist section experiencing no wear during actual operation. Therefore, the rail waist profile can be used as the reference. Two different feature vectors are formed by connecting the centers of the small circular arcs on both sides of the rail waist profile of the measured rail profile and the standard rail model, respectively, and the initial registration of the measured rail profile and the standard rail model is realized by aligning the two feature vectors. The specific process is as follows:
First, to minimize interference during the alignment between the measured rail profile and the standard rail model, the railhead bottom surface and rail bottom surface data of the standard steel rail model are removed before matching with the measured rail profile. Then, the standard rail model is transformed into a point cloud model. Specifically, for each triangle in the standard steel rail model with three vertices $A$, $B$, and $C$, a random uniformly distributed point cloud $Q_k$, $k = 1, 2, \ldots$ is generated on the surface of the triangle:
$Q_k = A + s\,\overrightarrow{AB} + t\,\overrightarrow{AC}$
where $s$ and $t$ are random numbers in $[0, 1]$; if $s + t > 1$, then $s = 1 - s$ and $t = 1 - t$ [25]. A Kd-tree index is established for the measured rail profile points to enable fast searching based on neighborhood relationships. Specifically, the measured rail profile points are denoted as $Q$, with the railhead contour points represented by $Q_H$ and the rail waist contour points by $Q_W$. Due to the significant distance between $Q_H$ and $Q_W$, Euclidean clustering can be used to segment and extract them.
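The triangle-sampling rule of Equation (12), including the fold when $s + t > 1$, can be sketched in a few lines of Python (illustrative; the function name and seed handling are mine):

```python
import numpy as np

def sample_triangle(A, B, C, k, seed=0):
    """Generate k uniformly distributed points on triangle ABC using
    Q = A + s*AB + t*AC, folding (s, t) back into the triangle whenever
    s + t > 1 (Equation (12))."""
    A, B, C = (np.asarray(v, float) for v in (A, B, C))
    rng = np.random.default_rng(seed)
    s, t = rng.random(k), rng.random(k)
    fold = s + t > 1.0
    s[fold], t[fold] = 1.0 - s[fold], 1.0 - t[fold]
    return A + s[:, None] * (B - A) + t[:, None] * (C - A)
```

The fold maps the parallelogram half outside the triangle back onto it, which is what keeps the distribution uniform.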
Then, as shown in Figure 9, the approximate center coordinates of the small arc contours on both sides of the rail waist can be extracted using techniques such as Hough circle detection. The radius of the small arc contour is $r = 20$ mm. The center coordinates $o_r$ and $o_l$ are obtained, and the position of each center can be determined from the accumulated values in the image. Specifically, in the rail waist curve, where each pixel represents 1 mm, a circle of radius $r = 20$ mm is drawn centered at each rail waist contour point, and the value of every pixel the arc passes through is incremented by 1. The coordinates of the maximum-value pixels correspond to the center coordinates $o_r$ and $o_l$ of the small arc contours; the resulting rail waist feature vector is denoted as $\overrightarrow{o_l o_r}$. Similarly, for the standard rail model, the corresponding center coordinates on both sides of the rail waist contour are denoted as $O_R$ and $O_L$, and the corresponding rail waist feature vector is $\overrightarrow{O_R O_L}$. The points of the measured rail contour after initial registration are denoted as $Q'$, and the rail waist contour points become $Q_W'$; the rotation matrix and translation vector for the initial registration are represented as $R_1$ and $t_1$, and the detailed solution is given in Equations (13)-(15).
$Q' = R_1 Q + t_1, \qquad Q_W' = R_1 Q_W + t_1$
$R_1 = I + \sin(\theta)K + (1 - \cos(\theta))K^2, \qquad t_1 = O_L - o_l$
$k = \overrightarrow{o_l o_r} \times \overrightarrow{O_R O_L}, \qquad \theta = \arccos\dfrac{\overrightarrow{o_l o_r} \cdot \overrightarrow{O_R O_L}}{\left\|\overrightarrow{o_l o_r}\right\|\left\|\overrightarrow{O_R O_L}\right\|}$
In this formula, k represents the rotation axis vector for the rotation transformation, θ represents the rotation angle, K represents the cross-product matrix of k, and I represents the identity matrix.
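Equations (13)-(15) amount to an axis-angle (Rodrigues) rotation followed by a translation. A Python sketch is given below. It is illustrative only: the function name is mine, I align $\overrightarrow{o_l o_r}$ with $\overrightarrow{O_L O_R}$ so that the rotation maps one waist vector onto the other, and I apply the rotation to $o_l$ before translating, which is one reading of Equation (14) that leaves $o_l$ exactly on $O_L$:

```python
import numpy as np

def initial_alignment(o_l, o_r, O_L, O_R):
    """Rigid transform (R1, t1): rotate the measured waist vector
    o_l -> o_r onto the model vector O_L -> O_R via the Rodrigues
    formula, then translate o_l onto O_L."""
    o_l, o_r, O_L, O_R = (np.asarray(p, float) for p in (o_l, o_r, O_L, O_R))
    v1, v2 = o_r - o_l, O_R - O_L
    axis = np.cross(v1, v2)
    cosang = np.clip(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
    theta = np.arccos(cosang)
    if np.linalg.norm(axis) < 1e-12:       # vectors already parallel
        R1 = np.eye(3)
    else:
        k = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
        R1 = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
    t1 = O_L - R1 @ o_l
    return R1, t1
```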

3.2.2. Step 2: Accurate Measurement of Railhead Profile Based on ICP Algorithm and Model Registration

Using the initially aligned measured contour as a reference, the ICP (iterative closest point) algorithm is applied to achieve precise registration between the measured contour and the standard rail model, which ultimately enables measurement of the railhead profile. In the ICP precise registration, the total number of iterations [26], the total deviation, and the threshold for the difference between consecutive deviations are set to limit the iteration count. The algorithm uses singular value decomposition (SVD) to estimate the rigid transformation. The rotation matrix and translation vector for precise registration are denoted as $R_2$ and $t_2$, respectively, achieving the following transformation:
$f(R_2, t_2) = \sum\left\|Q_k - (R_2 Q' + t_2)\right\|^2 = \min$
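A minimal point-to-point ICP of this kind can be sketched in Python as follows. This is illustrative only: a brute-force nearest-neighbour search stands in for the Kd-tree, the stopping rule uses the change in mean squared error, and all names are mine:

```python
import numpy as np

def icp(source, target, iters=50, tol=1e-10):
    """Point-to-point ICP: nearest-neighbour correspondences plus an
    SVD (Kabsch) rigid-transform update, iterated until the error stalls."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev = np.inf
    for _ in range(iters):
        # Correspondences: nearest target point for each source point
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        err = d2[np.arange(len(src)), idx].mean()
        matched = tgt[idx]
        # Kabsch: centroids, cross-covariance, SVD, reflection guard
        cs, cm = src.mean(0), matched.mean(0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = cm - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        if abs(prev - err) < tol:
            break
        prev = err
    return R_total, t_total, src
```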
Finally, the transformation of the measured full rail contour points $Q$ is given by:
$Q'' = R_2(R_1 Q + t_1) + t_2$
Projecting the full rail contour points $Q''$ onto a plane perpendicular to the longitudinal direction of the rail and establishing a Kd-tree index, the nearest-neighbor search is used to find the distances between the measured rail contour and the standard rail contour. This allows the rail's overall wear condition to be calculated from the contour.
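The projection-and-search step can be sketched as follows (illustrative; the function name is mine, which coordinate plays the longitudinal axis is an assumption passed as a parameter, and brute-force search again stands in for the Kd-tree):

```python
import numpy as np

def profile_wear(measured, model, axis=1):
    """Project both point sets onto the plane perpendicular to the rail's
    longitudinal axis (i.e. drop that coordinate), then take each measured
    point's distance to the nearest model point as the local deviation."""
    keep = [i for i in range(3) if i != axis]
    m2 = np.asarray(measured, float)[:, keep]
    s2 = np.asarray(model, float)[:, keep]
    d2 = ((m2[:, None, :] - s2[None, :, :]) ** 2).sum(-1)   # brute-force NN
    return np.sqrt(d2.min(axis=1))
```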
In summary, the alignment and comparison process between the measured rail contour points and the standard rail point cloud is shown in Figure 10. The process involves two steps: initial alignment and precise alignment. (a) The measured steel rail’s full contour points are imported into the initial state of the standard steel rail point cloud in 3D space; (b) based on the feature vector formed by connecting the centers of the small circular arcs on both sides of the steel rail contour, the initial alignment between the measured steel rail contour and the standard steel rail model is performed; (c) the precise alignment is achieved by aligning the measured steel rail’s full contour points with the standard steel rail point cloud model. The measured steel rail’s waist section fits perfectly with the standard steel rail point cloud model, enabling accurate measurement and detection of railhead contour wear.

4. Experiment and Analysis

4.1. Test Experiment Platform

The measurement system was tested at the railway maintenance base of the railway bureau, as shown in Figure 11. The vision sensors were installed about 70 mm above the upper side of the railhead, with each vision sensor and line laser fixed at a certain relative angle. The resolution of the vision sensor was 1600 × 1200, and the system was capable of capturing and processing rail contour images, performing system calibration, and measuring full-profile rail wear. The experimental process included the following steps: first, verifying the accuracy of the system's stereo calibration method [27]; then, comparing the vertical and side wear of the same rail as measured by the proposed binocular vision measurement system and by the wear gauge, to validate the measurement accuracy.

4.2. Calibration of Measurement System and Accuracy Analysis

The line-structured-light binocular camera system employs a 9 × 12 chessboard calibration board with a square size of 15 × 15 mm. Thirteen sets of images of light stripe targets are captured using the two cameras. The method described in Section 2.2 is used to extract feature points at the intersections of the light stripes and the chessboard grid. The fitted equations of the light planes in the coordinate systems of the left and right cameras are as follows:
$0.476 x_c - 0.543 y_c + 0.692 z_c - 196.001 = 0, \qquad 0.391 x_c' - 0.528 y_c' + 0.754 z_c' - 202.922 = 0$
The calculated internal parameters $A$ and $A'$ of the two cameras, as well as the external parameters $[R, t]$ and $[R', t']$ of the line-structured-light plane, are shown in Table 1.
In order to evaluate the calibration accuracy of the line-structured-light stereo camera system, and to verify the feasibility of the system, this paper proposes an analysis of the calibration errors of the cameras’ internal parameters and the fitting degree of the line-structured-light plane. By using the calibration of the planar target, the cameras’ internal and external parameters are obtained. The corner coordinates of the planar target in the world coordinate system are projected onto the images and compared with the corresponding corner coordinates in the images. The deviations of each image captured by the left and right cameras are shown in Figure 12. The overall average deviations are 0.0149 px and 0.0118 px, respectively.
In the coordinate systems of the left and right cameras, the distance deviations between the feature points and the line-structured-light plane fitted by SVD are shown in Figure 13; only the edge points within the plane have relatively larger errors, owing to the quality of the projected line-structured light. The evaluation parameters for the fitting of the line-structured-light plane are shown in Table 2, where a determination coefficient close to 1 indicates a good fit of the line-structured-light plane.

4.3. Analysis of Rail Wear Measurement Accuracy

One section of the 60# steel rail was measured at 20 different positions using the rail wear gauge, the standard rail-based 2D contour measurement, and the proposed method. The cameras used in this experiment have a resolution of 1600 × 1200; the field of view of each camera is approximately 167 × 122 mm, and the pixel accuracy is 0.1 mm/pixel. The measurement accuracy for rail wear using the gauge is 0.01 mm, which is one order of magnitude higher than the image accuracy. The vertical wear and side wear obtained from the rail wear gauge measurement are shown in Table 3.
There was no obvious fluctuation in either group of data. Therefore, the rail wear gauge measurement data are taken as the reference standard, and the results obtained from the standard rail-based 2D contour measurement and from the proposed method are compared against it. The measurement results are shown in Table 3.
Table 3 shows that compared to the measurement results of the rail wear gauge, the average absolute errors of vertical wear for the two methods are 0.038 mm and 0.009 mm, and the average absolute errors of the side wear measurements are 0.086 mm and 0.039 mm, respectively. The root mean square errors of the vertical wear measurements are 0.046 mm and 0.011 mm, and the root mean square errors of the side wear measurements are 0.097 mm and 0.048 mm, respectively. The proposed method in this paper exhibits smaller average absolute deviations and root mean square errors for both vertical and side wear measurements compared to the results obtained from the standard rail-based 2D contour measurement.
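The MAD and RMSE statistics used in this comparison follow directly from their definitions and can be computed as below (a minimal sketch; the function name is mine):

```python
import numpy as np

def mad_rmse(measured, reference):
    """Mean absolute deviation and root-mean-square error of wear
    measurements against the gauge reference values."""
    e = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.mean(np.abs(e))), float(np.sqrt(np.mean(e ** 2)))
```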
Furthermore, the measurements using the proposed method are closer to the standard measurements obtained from the rail wear gauge, as shown in Figure 14 and Figure 15.
The method proposed in this paper for measuring the railhead contour has the ability to correct measurement errors caused by random deviations in the measurement plane of the line-structured light. By bringing the measured steel rail's complete contour into 3D space and comparing it with the standard 3D point cloud model of the steel rail, the wear can be accurately measured. This approach eliminates the influence of vibrations during the measurement process and yields more precise wear measurements.

5. Conclusions

In this paper, a line-structured-light method for measuring the full profile of the rail cross-section was established, together with a coarse-alignment and precise-registration approach for measuring the railhead contour. The method compares the measured rail contour with a three-dimensional point cloud model of a standard steel rail to obtain wear values while mitigating the light-plane deviations caused by vibration during measurement. The experiments support the following conclusions: (1) The binocular line-structured-light vision system model was established with a camera internal-parameter calibration deviation of approximately 0.01 px and a line-structured-light plane fitting degree of 0.9999, validating the feasibility of the binocular calibration method. (2) The proposed full-profile measurement method based on line-structured light and stereo vision achieved accurate measurement of the railhead contour in a two-step process. On-site comparative tests showed a mean absolute error of 0.009 mm for vertical wear and 0.039 mm for side wear. The method effectively corrects measurement-plane deviations and meets the accuracy requirements for rail wear measurement.
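The "fitting degree" cited in conclusion (1) is the coefficient of determination of the least-squares light-plane fit evaluated in Table 2. A minimal sketch of such a fit and its SSE, R-square, and RMSE metrics, using synthetic data on an assumed plane z = 0.5x − 0.2y + 3:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array.

    Returns the coefficients plus the SSE / R-square / RMSE metrics
    used to evaluate the fitted plane.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coeffs
    sse = float(resid @ resid)                            # sum of squared errors
    r2 = 1.0 - sse / float(((z - z.mean()) ** 2).sum())   # coefficient of determination
    rmse = float(np.sqrt(sse / len(z)))                   # standard deviation of residuals
    return coeffs, sse, r2, rmse

# Synthetic check: noisy points near the plane z = 0.5x - 0.2y + 3
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(500, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.1, 500)
coeffs, sse, r2, rmse = fit_plane(np.column_stack([xy, z]))
```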

Author Contributions

Writing—original draft preparation, J.L.; writing—review and editing, J.Z.; methodology and formal analysis, Z.M.; resources and data curation, H.Z.; data curation, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the International Cooperation Project of the Science and Technology Bureau of Chengdu (No. 2019-GH02-00051-HZ); the Sichuan Unmanned System and Intelligent Perception Engineering Laboratory Open Fund and the Research Fund of Chengdu University of Information Engineering (Nos. WRXT2020-001, WRXT2020-002, WRXT2021-002, and KYTZ202142); the Sichuan Science and Technology Program, China (No. 2022YFS0565); the Key R&D Project of the Science and Technology Department of Sichuan Province (Nos. 2023YFG0196 and 2023YFN0077); the Science and Technology Achievements Transformation Project of the Science and Technology Department of Sichuan Province (No. 2023JDZH0023); and the Sichuan Provincial Science and Technology Department Youth Fund Project (No. 2023NSFSC1429).

Data Availability Statement

Data are not available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Steel rail full-profile measuring system based on binocular-structured light.
Figure 2. Binocular-structured-light vision model.
Figure 3. Line-structured-light plane calibration model.
Figure 4. Calibration images.
Figure 5. Feature points extraction on the light plane.
Figure 6. Full-profile collection of railway tracks.
Figure 7. Rail vertical wear and horizontal wear.
Figure 8. Measuring the plane pitch and heading offset.
Figure 9. Rail waist feature point extraction.
Figure 10. The process of steel rail full-profile measurement (a–d).
Figure 11. The experimental platform built on the rail inspection vehicle.
Figure 12. Camera calibration deviation.
Figure 13. Analysis of fitting plane error.
Figure 14. The measured values of side wear obtained from different methods.
Figure 15. The measured values of vertical wear obtained from different methods.
Table 1. Results of camera calibration parameters of the left and right cameras.

Internal parameters (intrinsic matrices, row by row):
A (left) = [1899.792, 0, 785.809; 0, 1898.065, 621.232; 0, 0, 1]
A (right) = [1904.880, 0, 804.499; 0, 1904.684, 613.186; 0, 0, 1]

External parameters ([R, t], row by row):
[R, t] (left) = [0.877, 0.236, 0.418, 52.677; 0.064, 0.806, 0.588, 8.064; 0.476, 0.543, 0.692, 253.306]
[R, t] (right) = [0.920, 0.240, 0.309, 95.225; 0.018, 0.815, 0.579, 24.717; 0.391, 0.528, 0.754, 301.256]
Table 2. Fitting plane evaluation parameters.

| Parameter | Left camera coordinate system | Right camera coordinate system |
|---|---|---|
| Sum of squared errors (SSE) | 2.358 | 1.761 |
| Coefficient of determination (R-square) | 0.9999 | 0.9999 |
| Standard deviation (RMSE) | 0.3202 | 0.2767 |
Table 3. Results of wear measurement for 60 kg/m rail (all values in mm; deviations are relative to the rail wear gauge).

| No. | Gauge vertical | Gauge side | 2D profile vertical | Dev. | 2D profile side | Dev. | Proposed vertical | Dev. | Proposed side | Dev. |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.020 | −0.390 | 0.011 | −0.009 | −0.601 | −0.211 | 0.021 | 0.001 | −0.455 | −0.065 |
| 2 | 0.020 | −0.410 | 0.013 | −0.007 | −0.511 | −0.101 | 0.017 | −0.003 | −0.402 | 0.008 |
| 3 | 0.060 | −0.350 | 0.024 | −0.036 | −0.461 | −0.111 | 0.065 | 0.005 | −0.353 | −0.003 |
| 4 | 0.070 | −0.320 | 0.044 | −0.026 | −0.428 | −0.108 | 0.062 | −0.008 | −0.305 | 0.015 |
| 5 | 0.110 | −0.340 | 0.037 | −0.073 | −0.408 | −0.068 | 0.090 | −0.020 | −0.350 | −0.010 |
| 6 | 0.070 | −0.300 | 0.033 | −0.037 | −0.428 | −0.128 | 0.065 | −0.005 | −0.301 | −0.001 |
| 7 | 0.060 | −0.310 | 0.028 | −0.032 | −0.399 | −0.089 | 0.045 | −0.015 | −0.286 | 0.024 |
| 8 | 0.080 | −0.230 | 0.004 | −0.076 | −0.261 | −0.031 | 0.076 | −0.004 | −0.173 | 0.057 |
| 9 | 0.110 | −0.350 | 0.035 | −0.075 | −0.441 | −0.091 | 0.104 | −0.006 | −0.340 | 0.010 |
| 10 | 0.060 | −0.390 | 0.004 | −0.056 | −0.490 | −0.100 | 0.048 | −0.012 | −0.305 | 0.085 |
| 11 | 0.140 | −0.220 | 0.074 | −0.066 | −0.292 | −0.072 | 0.108 | −0.032 | −0.244 | −0.024 |
| 12 | 0.030 | −0.260 | 0.004 | −0.026 | −0.365 | −0.105 | 0.036 | 0.006 | −0.264 | −0.004 |
| 13 | 0.080 | −0.240 | 0.039 | −0.041 | −0.296 | −0.056 | 0.070 | −0.010 | −0.190 | 0.050 |
| 14 | 0.010 | −0.430 | 0.001 | −0.009 | −0.474 | −0.044 | 0.006 | −0.004 | −0.383 | 0.047 |
| 15 | 0.020 | −0.440 | 0.001 | −0.019 | −0.469 | −0.029 | 0.028 | 0.008 | −0.374 | 0.066 |
| 16 | 0.050 | −0.380 | 0.001 | −0.049 | −0.375 | 0.005 | 0.037 | −0.013 | −0.292 | 0.088 |
| 17 | 0.040 | −0.210 | 0.004 | −0.036 | −0.248 | −0.038 | 0.050 | 0.010 | −0.177 | 0.033 |
| 18 | 0.030 | −0.050 | 0.012 | −0.018 | −0.128 | −0.078 | 0.020 | −0.010 | −0.002 | 0.048 |
| 19 | 0.050 | −0.180 | 0.002 | −0.048 | −0.316 | −0.136 | 0.055 | 0.005 | −0.267 | −0.087 |
| 20 | 0.060 | −0.310 | 0.033 | −0.027 | −0.428 | −0.118 | 0.056 | −0.004 | −0.361 | −0.051 |
| MAD | | | | 0.038 | | 0.086 | | 0.009 | | 0.039 |
| RMSE | | | | 0.046 | | 0.097 | | 0.011 | | 0.048 |