Article

A Roadheader Positioning Method Based on Multi-Sensor Fusion

1 College of Safety and Emergency Management Engineering, Taiyuan University of Technology, Taiyuan 030024, China
2 Shanxi Engineering Research Center for Coal Mine Intelligent Equipment, Taiyuan University of Technology, Taiyuan 030024, China
3 College of Mechanical and Vehicle Engineering, Taiyuan University of Technology, Taiyuan 030024, China
4 Postdoctoral Workstation, Shanxi Coking Coal Group Co., Ltd., Taiyuan 030024, China
5 College of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(22), 4556; https://doi.org/10.3390/electronics12224556
Submission received: 7 September 2023 / Revised: 2 November 2023 / Accepted: 3 November 2023 / Published: 7 November 2023

Abstract

In coal mines, accurate positioning is vital for roadheader equipment. However, most roadheaders use a standalone strapdown inertial navigation system (SINS) which faces challenges like error accumulation, drift, initial alignment needs, temperature sensitivity, and the demand for high-quality sensors. In this paper, a roadheader Visual–Inertial Odometry (VIO) system is proposed, combining SINS and stereo visual odometry to adjust to coal mine environments. Given the inherently dimly lit conditions of coal mines, our system includes an image-enhancement module to preprocess images, aiding in feature matching for stereo visual odometry. Additionally, a Kalman filter merges the positional data from SINS and stereo visual odometry. When tested against three other methods on the KITTI and EuRoC datasets, our approach showed notable precision on the EBZ160M-2 Roadheader, with attitude errors less than 0.2751° and position discrepancies within 0.0328 m, proving its advantages over SINS.

1. Introduction

In China, coal remains the predominant energy source, with rapid coal mine roadway excavation being essential for maintaining consistent and optimal production levels. As depicted in Figure 1, the roadheader is the principal machinery used for tunneling in coal mines, with its operation primarily being manual [1]. Such tunneling involves challenging work conditions that pose significant risks to the well-being and safety of frontline workers. Drawing from extensive experience, it is evident that the modernization of traditional excavation techniques necessitates the creation of advanced, efficient, reliable, and intelligent excavation equipment [2,3,4,5,6].
The strapdown inertial navigation system (SINS) is a self-contained navigation mechanism that functions autonomously, without any external aids [7]. Within the SINS framework, the gyroscope measures angular velocity, while the accelerometer measures the carrier’s linear acceleration. By integrating these measurements, one can determine the carrier’s attitude, velocity, and position. Typically, the SINS is applied to fast, short-duration, wide-range aircraft positioning and navigation, whereas roadheaders operate at slow speeds, over prolonged durations, and within limited ranges [8]. At the same time, the SINS is prone to inherent challenges. Over time, its continuous integration of measurements leads to accumulating errors, significantly affecting position accuracy. Gyroscope drift can further distort orientation readings. The system’s initial alignment is crucial; any misalignment introduces notable errors. Moreover, the SINS is sensitive to temperature changes, demanding either controlled operating environments or compensation mechanisms. Sustained accuracy requires high-end, often expensive, sensors such as accelerometers and gyroscopes.
To address the limitations of the SINS, many research efforts have introduced Visual–Inertial Odometry (VIO). VIO combines the strengths of visual systems (cameras) and inertial systems (such as the SINS) to estimate a robot’s or vehicle’s path or change in position over time. The visual component offers absolute position updates, mitigating the accumulated drift inherent in purely inertial systems: cameras track environmental visual features, aiding in motion determination. While inertial data provide rapid position updates, the visual system, despite its typically lower frequency, validates them, ensuring enhanced navigation accuracy. Unlike the SINS’s heavy reliance on initial alignment, VIO uses visual cues for better initialization, reducing susceptibility to alignment errors. Furthermore, incorporating cameras, which are generally more economical than high-end inertial sensors, makes VIO a cost-effective solution that does not compromise on accuracy, capitalizing on the combined strengths of both systems [9].
Stereo camera–inertial VIO harnesses the depth perception of stereo cameras, paired with rapid position updates from inertial systems. The stereo cameras capture 3D environmental features, enhancing motion determination through depth perception. These visual inputs not only correct accumulated drift from the inertial system but also provide robust initialization, reducing alignment dependency. The inertial system, in turn, offers high-frequency position updates. Together, they yield a richer, more accurate, and cost-effective navigation solution, leveraging the depth data from stereo vision and the dynamic feedback from inertial sensors [10,11]. However, in coal mines, low-quality images significantly affect SLAM (Simultaneous Localization and Mapping) feature matching. Dark segments in these images obscure details essential for accurate feature detection, affecting tasks such as pose estimation. Coal dust exacerbates the issue by reducing image contrast and the signal-to-noise ratio, which are crucial for computer vision. This results in fewer detectable feature points, hindering effective feature extraction in SLAM. Consequently, the accuracy and reliability of SLAM-based systems can be compromised without clear, high-quality images.
In this study, we present a roadheader Visual–Inertial Odometry (VIO) system tailored for coal mine environments, integrating the strapdown inertial navigation system (SINS) and stereo sensors. Given the low-light conditions inherent in coal mines, our approach incorporates an image-enhancement module designed to preprocess images, facilitating feature matching for stereo visual odometry under dim lighting. Furthermore, to synergize the position data from the SINS with the stereo visual odometry, we employ a Kalman filter as the fusion strategy for these two modalities of positional information. Tests on the KITTI and EuRoC datasets compared our method to three alternatives. Experiments on the EBZ160M-2 Roadheader showed attitude errors below 0.2751° and position errors within 0.0328 m, demonstrating our system’s superior accuracy over the SINS.

2. Related Work

The positioning method of a roadheader can be divided into three categories: photoelectric navigation, inertial navigation, and multi-sensor fusion navigation [8]. Fu [12,13] designed an ultra-wideband (UWB) position and attitude perception system which realizes the position and attitude perception of the roadheader in narrow environments. Roman [14] designed an attitude perception system based on an active illumination infrared sensor which realizes the attitude perception of a roadheader under light-free conditions. Du [15] applied machine vision to the attitude perception of a roadheader, and built an attitude perception system for a roadheader. Based on the principle of indoor global positioning system (iGPS) positioning, Tao [16] designed a single-station, multi-point, and time-sharing attitude perception system which calculates the deflection angle, offset distance, and position of the roadheader by measuring the position of the receivers fixed at different positions on the roadheader’s body.
Tian [17] proposed an inertial navigation positioning method for a roadheader based on zero velocity correction, which can correct the measurement error. Shen [18] established a roadheader dynamics model based on cutting loads, and then proposed an error compensation method based on the vibration characteristics of the roadheader to reduce the cone and paddle error of the inertial navigation system.
The multi-sensor fusion positioning method can make full use of the advantages of various sensors and make up for the shortcomings of individual sensors. In order to solve the problem that the error of the SINS accumulates over time, and given that the single odometer limits position perception ability, Wang [19] introduced a differential odometer as an auxiliary positioning system on the basis of the SINS, which effectively improves the position and attitude perception ability of the inertial navigation system. Meanwhile, the development of machine vision and visual odometry provides new ideas for the positioning of a roadheader. Yang [20] proposed a roadway environment modeling and roadheader-positioning method based on self-coupling HectorSLAM, which realized the modeling of the roadway and the positioning of the roadheader. Based on the principle of monocular vision measurement, Wang [21] used the least-squares method to align SINS data with monocular camera data and fused both datasets using an extended Kalman filter to achieve a highly accurate perception of the roadheader’s position. The SLAM method based on an RGB-D camera, as adopted by Zhang [22], not only realizes the position perception of a roadheader, but also realizes the construction of the roadway map.
The above studies cover the main methods of position and attitude perception for a roadheader. Among them, positioning solutions based on machine vision are widely used for roadheaders owing to the advantages of non-contact measurement and rich, intuitive information. The SINS is also well suited to roadheader positioning because of its adaptability, autonomy, and passivity.
Therefore, this paper proposes a positioning method for a roadheader based on stereo visual odometry and the SINS. The method improves the front end of the stereo visual odometry system by adding an image-enhancement module that suppresses complex light sources, accounts for coal dust, and enhances image details, improving image quality and system robustness. In the back end, a Kalman filter with a 15-dimensional state vector is designed to fuse the position and attitude data of the SINS and the stereo visual odometry.

3. Framework of Proposed Method

Figure 2 illustrates the structure of the proposed methodology. The system comprises three primary components: a position and attitude detection system based on the SINS, a second position and attitude detection system based on stereo visual odometry, and a data fusion mechanism based on the Kalman filter. In the schematic, the SINS represents the inertial segment and the stereo visual odometry system represents the visual element. The SINS component is drawn with interconnected nodes representing sensors such as gyroscopes and accelerometers, while the stereo visual odometry system is shown with twin camera icons, indicating its dual optical capture capability. Both subsystems converge on a central module, the data fusion mechanism.
The inertial component is driven by the SINS. This provides rapid, high-frequency positional updates. In contrast, the visual segment, underpinned by the stereo visual odometry system, captures 3D spatial information from the surroundings using its twin cameras. This captures the depth and relative motion within the environment, offering absolute position updates that can counteract the drift inherent in standalone inertial systems. When combined, these systems offer a robust navigation solution. The SINS gives a continuous stream of movement data, while the stereo visual odometry periodically refines and corrects this data using captured visual cues. Central to this integration is a data fusion mechanism, typically based on algorithms such as the Kalman filter, which synergistically processes data from both sources to deliver accurate and real-time positional and orientation information.
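To make this data flow concrete, the following is a minimal structural sketch of the loop in Python. The object names (sins, stereo_vo, kf) and their methods are hypothetical interfaces standing in for the components detailed in Sections 4 and 5; the paper does not publish code.

```python
import numpy as np

def fusion_loop(sins, stereo_vo, kf, imu_stream, frame_stream):
    """Hypothetical wiring of the three components in Figure 2."""
    for t, imu_sample in imu_stream:
        # High-rate inertial mechanization (Section 5) updates
        # attitude, velocity, and position.
        sins.propagate(imu_sample)
        # Low-rate visual correction whenever a stereo frame is available.
        if frame_stream.has_frame_at(t):
            left, right = frame_stream.get(t)
            vo_position, vo_attitude = stereo_vo.estimate(left, right)  # Section 4
            # Observation: SINS pose minus visual-odometry pose (Section 5.4).
            z = np.concatenate([sins.position() - vo_position,
                                sins.attitude() - vo_attitude])
            dx = kf.step(z)          # 15-dimensional error-state estimate
            sins.correct(dx)         # feed the correction back into the SINS
    return sins.pose()
```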
In this paper, the coordinate systems used to describe the relative relationship between a roadheader and an integrated positioning system are as follows.
  • Geocentric Coordinate System: This system has its origin at the Earth’s center. The x-axis is directed towards the vernal equinox, while the y-axis aligns with the Earth’s rotational axis, pointing to the North Pole. The z-axis, in conjunction with the other axes, creates a right-handed coordinate system. Notably, the outputs from both the gyroscope and accelerometer rely on this system.
  • Earth Coordinate System: Centered at the Earth’s core, the x and y axes of this system lie on the Earth’s equatorial plane. Specifically, the x-axis indicates the juncture of the prime meridian and the Equator, whereas the y-axis extends towards the North Pole in alignment with the Earth’s rotational motion.
  • Navigation Coordinate System: With the roadheader’s center as its origin, the x-axis of this system is directed towards the geographic east. The y-axis aligns with the geographic north. Complementing the other two axes, the z-axis helps to constitute a right-handed system. This system serves as the referential framework for showcasing the integrated positioning system’s final results.
  • Carrier Coordinate System: This system also uses the roadheader’s center as its starting point. Here, the x-axis looks to the right side, and the y-axis faces the roadheader’s front. Meanwhile, the z-axis rises upwards, and together, these axes form a right-handed coordinate system.
  • Camera Coordinate System: Rooted at the left camera’s optical center, the x-axis of this system runs rightward along the camera’s baseline. Conversely, the y-axis descends, while the z-axis, combined with the other two, establishes a right-handed coordinate system. This orientation follows the camera’s optical path.
As shown in Figure 3, in three-dimensional space, the attitude of the roadheader can be described by Euler angles (yaw, roll, and pitch), which are determined by the angular relationship between the carrier coordinate system and the navigation coordinate system.

4. Position and Attitude Perception System Based on Improved Stereo Visual Odometry

4.1. Improved Stereo Visual Odometry Front End: Image Pre-Processing with Image Enhancement Module

Within coal mine tunnels, intricate lighting conditions, combined with dust accumulation, predominantly contribute to the degradation in image quality. Two salient characteristics of such low-quality images are the prevalence of numerous shadowed regions juxtaposed with sporadic overexposed areas. These darkened segments often blur image contours, resulting in the loss of intricate details and subpar visual appeal. The presence of coal dust exacerbates the situation by diminishing both image contrast and the signal-to-noise ratio, consequently reducing the number of image feature points, and thereby hampering effective feature extraction [23,24,25]. To combat this challenge, it becomes imperative to deploy specific image-enhancement techniques aimed at ameliorating the overall quality of these mine-based visuals.
Implementing the divide-and-conquer strategy [26,27,28], an image-enhancement module was devised to increase image quality and thus bolster the accuracy of stereo visual odometry. After assessing recent progress in image-enhancement methods [29,30], a consistent 424 × 240 pixel resolution was adopted; side-by-side tests confirmed the effectiveness of this resizing, as depicted in Figure 4 [31,32,33]. Our enhancement approach, grounded in a weighted fusion strategy, comprises two stages: extraction of the reflection component and image dehazing. Separate models enhance each sub-image, and the enhanced sub-images are then combined through a straightforward yet effective weighted fusion mechanism. This consolidated approach restores previously degraded images.

4.1.1. Extraction of the Reflection Component of the Mine Image

As shown in Figure 5, according to retinex theory [34], the image information S observed by the observer is determined by the illumination component L and the reflection component R, which carries the details of the object; their relationship can be expressed as follows:
$S = R \cdot L.$ (1)
In order to extract the reflection component, the above formula is usually converted to the logarithmic domain to make it more consistent with the form of the human eye’s perception [35]:
$\log R = \log S - \log L.$ (2)
Then the illumination component can be estimated [36], and the illumination is subsequently removed from the image signal S to isolate the reflection component R:
$L = S * G, \qquad \log R = \log S - \log(S * G)$ (3)
where G is the Gaussian surround function and $*$ denotes convolution.
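As a concrete illustration, the following is a minimal single-scale retinex sketch of Equations (1)–(3) in Python; the Gaussian scale sigma is an illustrative choice, not a value from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_reflection(s, sigma=80.0, eps=1e-6):
    """s: grayscale image as a float array in (0, 1]. Returns the
    reflection component R (log domain, rescaled for display/fusion)."""
    s = s.astype(np.float64) + eps                  # avoid log(0)
    illumination = gaussian_filter(s, sigma)        # L = S * G (Gaussian surround)
    log_r = np.log(s) - np.log(illumination + eps)  # Equation (3)
    # Rescale to [0, 1] so R can be fused with the dehazed image later.
    return (log_r - log_r.min()) / (log_r.max() - log_r.min() + eps)
```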

4.1.2. Mine-Image Dehaze

When there is coal dust in the roadway, the image information S can be regarded as the superposition of the reflection component P and a scattering component Q [37]:
$S = P\,\omega + Q\,(1 - \omega)$ (4)
where Q represents the scattering component and $\omega$ is the weight coefficient. P represents the reflection component of the scene under sufficient and uniform illumination, carrying the texture information of the object.
Suppose that the mean pixel grayscale value of the image in a fixed small window centered at pixel $S(x, y)$ is $\mu(x, y)$ and the standard deviation is $\sigma(x, y)$. Within that window, the scattering component can be assumed constant, so the variance $\sigma^2(x, y)$ satisfies:
$\sigma^2(x, y) = \omega^2(x, y)\,\sigma_P^2(x, y)$ (5)
where $\sigma_P^2(x, y)$ is the variance of the reflection component within that window. Assuming that $\omega(x, y)$ is 1 in the window where the maximum of $\sigma^2(x, y)$ is located, the weight coefficient $\omega(x, y)$ can be obtained as follows:
$\omega(x, y) = \dfrac{\sigma(x, y)\cdot\max(\mu(x, y))}{\mu(x, y)\cdot\max(\sigma(x, y))}.$ (6)
According to the dark channel theory [38], the gray level at the minimum of the reflection component in each window is close to 0, that is:
$P(x, y)_{\min} = 0, \qquad S(x, y)_{\min} = (1 - \omega(x, y))\,Q(x, y).$ (7)
Finally, the reflection component can be derived based on Formulas (4)–(7):
$P(x, y) = \big(S(x, y) - P(x, y)_{\min}\big)\,\dfrac{\max(\sigma(x, y))}{\sigma(x, y)}.$ (8)
By performing the above operation for each pixel of the image, the coal-dust-free image can be restored. The final image is a linear weighted fusion of the two enhanced images, R and P:
$S_{out} = W_1 R + W_2 P, \qquad W_1 = \dfrac{\mu_1}{\mu_1 + \mu_2}, \qquad W_1 + W_2 = 1$ (9)
where $\mu_1$ is the average gray value of component R and $\mu_2$ is the average gray value of component P.
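The following sketch implements the dehazing of Equations (7) and (8) and the fusion of Equation (9) under our reading of the formulas: the local minimum of S approximates the scattering floor of Equation (7), and the window size is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter, minimum_filter

def dehaze(s, win=15, eps=1e-6):
    """s: grayscale image in [0, 1]. Returns the estimated reflection P."""
    s = s.astype(np.float64)
    mu = uniform_filter(s, win)                                   # local mean
    sigma = np.sqrt(np.maximum(uniform_filter(s**2, win) - mu**2, 0.0))
    # Equation (6): local weight map (computed for reference; Equation (8)
    # as printed uses only the sigma ratio).
    omega = (sigma * mu.max()) / ((mu + eps) * (sigma.max() + eps))
    # Equation (7): the local minimum approximates the scattering floor.
    s_min = minimum_filter(s, win)
    # Equation (8): recover the reflection component.
    p = (s - s_min) * sigma.max() / (sigma + eps)
    return np.clip(p, 0.0, 1.0)

def fuse(r, p, eps=1e-6):
    """Equation (9): fuse R and P with weights from their mean gray values."""
    mu1, mu2 = r.mean(), p.mean()
    w1 = mu1 / (mu1 + mu2 + eps)
    return w1 * r + (1.0 - w1) * p
```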

4.1.3. Image Enhancement Experiment

The images were resized to 424 × 240 pixels and the enhancement results of each method were compared. We selected four public datasets (DICM [39], LIME [40], NPEA [41], and VV [42]) to verify the effectiveness of the proposed module. In addition, we built a mine scene dataset containing four scenarios: an underground work environment with personnel (Scenario 1), a tunnel scene with multiple light sources (Scenario 2), a scene with reflective stripes (Scenario 3), and a coal mining equipment scene (Scenario 4).
Analyzing the data in Table 1, we used two metrics: the natural image quality evaluator (NIQE) [43] and the no-reference image quality metric for contrast distortion (NIQMC) [44]. On the NIQE metric, ‘KinD’ [45] recorded the highest value, 4.77, on the LIME dataset, while ‘Enlight-GAN’ [46] obtained the lowest NIQE score, 3.57, on DICM. For the NIQMC metric, ‘Mbllen’ [47] scored highest, with 4.18 on the VV dataset. Our proposed method showed consistent performance, with scores between 3.47 and 3.73 across all datasets. While not always leading, our method’s steady results across various datasets highlight its adaptability and reliability, making it a competitive choice against other image-enhancement methods in diverse scenarios.
In Table 2, using the NIQMC metric, our method notably outperforms most of the competing approaches across the datasets. Specifically, it achieves the highest scores on DICM (5.47), LIME (5.41), and VV (5.49), indicating superior image restoration quality in these scenarios. On the NPEA dataset, the ‘KinD’ method (5.01) remains competitive with ours (5.13), but our method consistently retained better overall naturalness in the images across the majority of the tested datasets.
As is shown in Figure 6, the ‘Retinex-Net’ [48] introduces noise and artifacts in dark areas, leading to noticeable color distortions and loss of contrast. On the other hand, the ‘KinD’ significantly improves the quality of the enhancement, but it tends to oversaturate the bright areas in the enhanced image. ‘Mbllen’ exhibits poorer contrast-adjustment results in both dark and high-intensity regions. ‘Enlight-GAN’ is capable of restoring low-quality images to high-quality ones but fails to remove noise.
As is shown in Figure 7, the proposed module can maintain higher quality, minimizing distortion and retaining intricate details. The naturalness of the image is well maintained after enhancement, and, meanwhile, the effects of complex light sources and coal dust are eliminated. In addition, the module demonstrates the ability to balance the relationship between light sources and dark environments, reducing the impact of dust on image-enhancement results. It also achieves a balance between contrast and saturation, resulting in more realistic colors, compared to other methods.
In visual odometry, feature-matching quality critically influences pose-estimation accuracy. We therefore probed the impact of image enhancement on feature-point extraction and matching efficacy. Using the Scale-Invariant Feature Transform (SIFT) for feature-point extraction and matching, our data were derived from a video sequence of 1000 frames shot with a stereo camera in a mining roadway. Figure 8 displays histograms contrasting the number of correctly matched feature points for raw versus enhanced images; here, ‘correctly matched’ refers to points retained after the RANSAC algorithm eliminates initial matching errors. Raw images average close to 150 correctly matched points, while enhanced ones rise significantly, to about 180. This improvement underscores the pivotal role of image enhancement in optimizing feature-point extraction and matching.
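A sketch of this evaluation protocol follows, using OpenCV’s SIFT and fundamental-matrix RANSAC; the ratio-test threshold and RANSAC parameters are illustrative choices, not values reported by the paper.

```python
import cv2
import numpy as np

def count_correct_matches(img1, img2, ratio=0.75):
    """Count RANSAC-surviving SIFT matches between two grayscale frames."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Initial matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 8:
        return 0
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    # RANSAC (via the fundamental matrix) eliminates initial matching errors;
    # the surviving inliers are the 'correctly matched' points of Figure 8.
    _, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
    return 0 if mask is None else int(mask.sum())
```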
Figure 9 shows a frame extracted from the referenced video sequence for a more fine-grained analysis. In the untreated image, the extracted feature points are markedly denser and concentrated within a confined depth range, indicating that, without image enhancement, motion estimation predominantly relies on feature points at similar depths. After enhancement, feature points are detected across a wider depth range. It should also be noted that a higher local density of feature points increases the susceptibility to errors during the matching phase. The image-enhancement module therefore substantially elevates the precision and robustness of feature matching, particularly in the challenging visual environments of underground coal mines.

4.2. Stereo Visual Odometry Back End: Position and Attitude Perception for a Roadheader Based on Motion Estimation

Considering the inherent challenges in simultaneously estimating position and attitude using a monocular camera, and the superior measurement accuracy and broader measurement range of a stereo camera, this method employs a stereo camera to estimate the position and attitude of the roadheader.
As is shown in Figure 10, in the initialization phase, the feature points of the image captured by the left camera are first extracted and then matched with those of the image captured by the right camera. After calibration, the stereo camera provides its baseline b and focal length f. According to the similar-triangle principle, the depth of a matched point can be solved by the following equation:
$z_c = \dfrac{f\,b}{d}, \qquad d = x_l - x_r$ (10)
where $x_l$ and $x_r$ are the coordinates of the feature point p in the left and right camera coordinate systems, respectively. With this scale information, the position $(x_c, y_c, z_c)$ of the feature point in the camera coordinate system can be derived from its pixel coordinates $(u, v)$ in the left camera:
$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \dfrac{1}{z_c}\,K \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}$ (11)
where K is the internal parameter (intrinsic) matrix of the camera. Since the camera is fixed on the roadheader’s body, the conversion between the navigation coordinate system and the camera coordinate system must be established in order to solve for the position and attitude of the roadheader. At a given time, the relationship between the coordinates of a feature point in the navigation and camera coordinate systems can be expressed as follows:
$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = T_k \begin{bmatrix} x_n \\ y_n \\ z_n \\ 1 \end{bmatrix}$ (12)
where $T_k = \begin{bmatrix} R_k & t_k \\ 0 & 1 \end{bmatrix}$ is the external parameter (extrinsic) matrix of the camera. The position and attitude of the roadheader can be determined from the extrinsic matrix $T_k$:
$\varphi_C^n = \begin{bmatrix} \gamma \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \arctan\left(R_{k,13}/R_{k,33}\right) \\ \arcsin\left(R_{k,23}\right) \\ \arctan\left(R_{k,21}/R_{k,22}\right) \end{bmatrix}$ (13)
where $R_{k,ij}$ is the element of the i-th row and j-th column of the matrix $R_k$. The position of the roadheader in the navigation coordinate system can be expressed as follows:
$P_c^n = -R_k^{-1} t_k.$ (14)
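Gathering Equations (10)–(14), a compact Python sketch of depth recovery and pose extraction might look as follows; the index conventions and the sign in Equation (14) follow our reading of the reconstructed formulas.

```python
import numpy as np

def depth_from_disparity(x_l, x_r, f, b):
    """Equation (10); assumes a positive disparity d = x_l - x_r."""
    return f * b / (x_l - x_r)

def backproject(u, v, z_c, K):
    """Equation (11): (x_c, y_c, z_c) = z_c * K^-1 [u, v, 1]^T."""
    return z_c * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

def pose_from_extrinsics(T_k):
    """Equations (13)-(14): attitude and position from the extrinsics."""
    R, t = T_k[:3, :3], T_k[:3, 3]
    roll = np.arctan2(R[0, 2], R[2, 2])   # gamma from R13 / R33
    pitch = np.arcsin(R[1, 2])            # theta from R23
    yaw = np.arctan2(R[1, 0], R[1, 1])    # psi from R21 / R22
    position = -np.linalg.inv(R) @ t      # Equation (14)
    return np.array([roll, pitch, yaw]), position
```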
Unlike the internal parameter matrix, which is fixed after calibration, the external parameter matrix changes as the camera moves; the essence of position and attitude perception is therefore the update of the external parameter matrix $T_k$.
After the stereo camera captures the image of the most recent frame, the feature points in the previous frame are tracked using the feature-point matching method to establish correspondences between the feature points.
Binary descriptors [49] allow feature points to be compared through the Hamming distance, which is smaller for correctly matched feature points than for mismatches. When an object is obstructed, a large number of mismatched points appear in the occluded area of the image, but correctly matched points still exist in the unobstructed area. Therefore, by setting a threshold, feature points whose Hamming distance exceeds the threshold are judged to be mismatches and eliminated, while the remaining feature points can still be used for pose estimation.
For matched point sets $P = \{p_1, \ldots, p_n\}$ and $P' = \{p_1', \ldots, p_n'\}$, the following least-squares problem is constructed to find the $R_{k+1,k}$ and $t_{k+1,k}$ that minimize J:
$J = \min_{R,t} \dfrac{1}{2} \sum_{i=1}^{n} \left\| p_i - \left(R_{k+1,k}\, p_i' + t_{k+1,k}\right) \right\|_2^2.$ (15)
After obtaining $R_{k+1,k}$ and $t_{k+1,k}$, the external parameter matrix is updated by the following formula, and the position and attitude of the roadheader at the current moment can then be solved by Formulas (13) and (14):
$T_{k+1} = T_{k+1,k}\, T_k.$ (16)
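The paper does not name its least-squares solver; a standard closed-form choice is the SVD-based (Kabsch/Umeyama) solution, sketched below together with the update of Equation (16).

```python
import numpy as np

def estimate_rt(p, p_prime):
    """p, p_prime: (n, 3) matched point sets. Returns R, t minimizing J."""
    c, c_prime = p.mean(axis=0), p_prime.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (p_prime - c_prime).T @ (p - c)
    U, _, Vt = np.linalg.svd(H)
    # Correct the determinant to avoid reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c - R @ c_prime
    return R, t

def update_extrinsics(T_k, R, t):
    """Equation (16): T_{k+1} = T_{k+1,k} T_k."""
    T_step = np.eye(4)
    T_step[:3, :3], T_step[:3, 3] = R, t
    return T_step @ T_k
```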

5. Position and Attitude Perception for a Roadheader Based on the SINS

5.1. Update of Roadheader Attitude

Given the continual movement of the roadheader, its position and orientation constantly vary. Consequently, the integrated positioning system is responsible for ascertaining the roadheader’s precise location and orientation within the navigation coordinate system. It is therefore necessary to project the SINS data from the carrier coordinate system into the navigation coordinate system through the attitude transformation matrix $C_b^n$.
The spatial angular relationship between two coordinate systems can be viewed as the rotation of a rigid body around a fixed point, and this rotation can be expressed by the normalized quaternion $Q = q_0 + q_1 i + q_2 j + q_3 k$ ($q_0, q_1, q_2, q_3$ are real numbers; $i, j, k$ are imaginary units):
$C_b^n = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$ (17)
where $C_{ij}$ is the element of the i-th row and j-th column of the transformation matrix $C_b^n$. The crux of updating the attitude transformation matrix lies in the quaternion update, accomplished through the equivalent rotation vector method [50]:
$Q_{k+1} = Q_k \otimes Q_{k,k+1}$ (18)
where $Q_k$ is the attitude quaternion at time k, $\otimes$ denotes quaternion multiplication, and $Q_{k,k+1}$ is the attitude change quaternion from time k to k + 1, which can be calculated from the rotational angular velocity of the roadheader, as measured by the gyroscope.
Once the attitude transformation matrix is acquired, the orientation angle of the roadheader in the navigation coordinate system can be determined using the following formula:
$\varphi_I^n = \begin{bmatrix} \gamma \\ \theta \\ \psi \end{bmatrix} = \begin{bmatrix} \arctan\left(C_{31}/C_{33}\right) \\ \arcsin\left(C_{32}\right) \\ \arctan\left(C_{12}/C_{22}\right) \end{bmatrix}.$ (19)
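A minimal sketch of the attitude update of Equations (17)–(19) follows; the quaternion convention (Hamilton product, scalar first) is our assumption.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product for Equation (18); q, r = [q0, q1, q2, q3]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

def quat_to_cbn(q):
    """Equation (17): attitude transformation matrix from the quaternion."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)   # keep the quaternion normalized
    return np.array([
        [q0**2+q1**2-q2**2-q3**2, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0**2-q1**2+q2**2-q3**2, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0**2-q1**2-q2**2+q3**2]])

def euler_from_cbn(C):
    """Equation (19): roll, pitch, yaw from C_b^n."""
    roll = np.arctan2(C[2, 0], C[2, 2])
    pitch = np.arcsin(C[2, 1])
    yaw = np.arctan2(C[0, 1], C[1, 1])
    return np.array([roll, pitch, yaw])
```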

5.2. Update of Roadheader Velocity and Position

The differential equation for the velocity of the SINS in the navigation coordinate system can be expressed as follows [51]:
$\dot{V}_I^n = C_b^n f^b - \left(2\omega_{ie}^n + \omega_{en}^n\right) \times V_I^n + g^n$ (20)
where $V_I^n$ is the velocity of the roadheader in the navigation coordinate system, as obtained by the SINS; $f^b$ is the specific force measured by the accelerometer in the carrier coordinate system; $\omega_{ie}^n$ is the rotational angular velocity of the earth coordinate system relative to the geocentric coordinate system; $\omega_{en}^n$ is the rotational angular velocity of the navigation coordinate system relative to the earth coordinate system; and $g^n$ is the gravitational acceleration.
The differential equation for the position of the roadheader can be expressed as follows [52]:
$\dot{P}_I^n = \begin{bmatrix} \dot{L}_I \\ \dot{\lambda}_I \\ \dot{h}_I \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{R_M + h_I} & 0 \\ \dfrac{\sec L_I}{R_N + h_I} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} V_I^n$ (21)
where $P_I^n$ is the geographic coordinate of the roadheader obtained by the SINS, and $L_I$, $\lambda_I$, $h_I$ are latitude, longitude, and height, respectively. $R_M$ and $R_N$ are the radii of curvature of the meridian circle and the prime vertical circle, respectively.
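For illustration, a single integration step of the mechanization Equations (20) and (21) might look as follows; the caller must supply the angular-rate and gravity terms, and the simple Euler integrator is our choice, not the paper’s.

```python
import numpy as np

def sins_step(C_bn, f_b, v_n, pos, omega_ie_n, omega_en_n, g_n, R_M, R_N, dt):
    """One mechanization step. pos = (latitude L, longitude lam, height h);
    v_n = (V_E, V_N, V_U) in the east-north-up navigation frame."""
    # Equation (20): velocity differential equation with Coriolis and gravity.
    v_dot = C_bn @ f_b - np.cross(2 * omega_ie_n + omega_en_n, v_n) + g_n
    v_n = v_n + v_dot * dt
    L, lam, h = pos
    V_E, V_N, V_U = v_n
    # Equation (21): geographic position differential equation.
    L_dot = V_N / (R_M + h)
    lam_dot = V_E / ((R_N + h) * np.cos(L))
    h_dot = V_U
    return v_n, (L + L_dot * dt, lam + lam_dot * dt, h + h_dot * dt)
```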

5.3. Error Equation of the SINS

The velocity-error equation of the SINS is expressed as follows [53]:
$\Delta\dot{V}_I^n = \phi_I^n \times \left(C_b^n f^b\right) + C_b^n \left(\Delta K_A + \Delta\alpha_A\right) f^b + \Delta V_I^n \times \left(2\omega_{ie}^n + \omega_{en}^n\right) + V_I^n \times \left(2\Delta\omega_{ie}^n + \Delta\omega_{en}^n\right) + \nabla^n$ (22)
where $\Delta V_I^n$ is the velocity error; $\phi_I^n$ is the attitude error; $\Delta K_A$ and $\Delta\alpha_A$ are, respectively, the scale factor error and installation angle error of the accelerometer; and $\nabla^n$ is the bias error of the accelerometer. The position-error equation of the SINS can be expressed as follows:
$\Delta\dot{P}_I^n = \begin{bmatrix} \Delta\dot{L}_I \\ \Delta\dot{\lambda}_I \\ \Delta\dot{h}_I \end{bmatrix} = \begin{bmatrix} \dfrac{\Delta V_N}{R_M + h_I} - \dfrac{\Delta h_I\, V_N}{\left(R_M + h_I\right)^2} \\ \dfrac{\Delta V_E \sec L_I}{R_N + h_I} + \dfrac{\Delta L_I\, V_E \tan L_I \sec L_I}{R_N + h_I} - \dfrac{\Delta h_I\, V_E \sec L_I}{\left(R_N + h_I\right)^2} \\ \Delta V_U \end{bmatrix}$ (23)
where $\Delta L_I$, $\Delta\lambda_I$, and $\Delta h_I$ are the latitude, longitude, and height errors, respectively, and $\Delta V_N$, $\Delta V_E$, and $\Delta V_U$ are the north, east, and vertical (up) velocity errors, respectively. The attitude-error equation can be written as follows:
$\dot{\phi}_I^n = \phi_I^n \times \omega_{in}^n + \Delta\omega_{in}^n - C_b^n \left(\Delta K_G + \Delta\alpha_G\right) \omega_{ib}^b - \varepsilon^n$ (24)
where $\omega_{in}^n$ is the rotational angular velocity of the navigation coordinate system relative to the geocentric coordinate system; $\Delta K_G$ and $\Delta\alpha_G$ are, respectively, the calibration coefficient error and installation angle error of the gyroscope; $\omega_{ib}^b$ is the rotational angular velocity of the roadheader relative to the geocentric coordinate system, expressed in the carrier coordinate system; and $\varepsilon^n$ is the gyroscope bias error.

5.4. Design of Kalman Filter

The Kalman filter accurately updates the mean square error of the system, allowing for an optimal estimation of the system’s state. Based on the Kalman filter principle, the disparity between the position and attitude data of the SINS and those of the stereo visual odometry is employed as the observed value of the Kalman filter, while the Kalman filter’s output value is used to rectify the error of the SINS.
On the basis of the analysis of the SINS error equations, a 15-dimensional vector x related to position and attitude is selected as the state quantity:
$x = \begin{bmatrix} (\phi_I^n)^T & (\Delta V_I^n)^T & (\Delta P_I^n)^T & (\varepsilon^n)^T & (\nabla^n)^T \end{bmatrix}^T$ (25)
The difference between the SINS data and the stereo visual odometry data is used as the observation of the Kalman filter:
$Z = \begin{bmatrix} P_I^n - P_C^n \\ \varphi_I^n - \varphi_C^n \end{bmatrix}$ (26)
The state equation and the observation equation of the Kalman filter are as follows:
$x_k = A x_{k-1} + w_{k-1}, \qquad Z_k = H x_k + D_k$ (27)
where A is the state transition matrix; $w_{k-1}$ is the process noise of the system; $Z_k$ is the observed value; H is the measurement matrix; and $D_k$ is the measurement white noise.
The Kalman filter updates the state quantity through the following steps:
  • Predict the state quantity $\hat{x}_k^-$:
    $\hat{x}_k^- = A \hat{x}_{k-1}$ (28)
    where $\hat{x}_{k-1}$ is the a posteriori estimate of x at time k − 1.
  • Predict the state covariance matrix:
    $P_k^- = A P_{k-1} A^T + Q_{k-1}$ (29)
    where $P_{k-1}$ is the state covariance matrix at time k − 1 and $Q_{k-1}$ is the system noise covariance matrix at time k − 1.
  • Update the Kalman filter gain:
    $K_k = P_k^- H^T \left(H P_k^- H^T + R_k\right)^{-1}$ (30)
    where $R_k$ is the covariance matrix of the measurement noise at time k.
  • Update the state quantity $\hat{x}_k$:
    $\hat{x}_k = \hat{x}_k^- + K_k \left(Z_k - H \hat{x}_k^-\right).$ (31)
  • Update the state covariance matrix:
    $P_k = (I - K_k H)\, P_k^-.$ (32)
The a posteriori estimate $\hat{x}_k$ is used as the optimal estimate of the state quantity, on the basis of which the SINS performs error correction and then updates the position and attitude of the roadheader. A minimal sketch of these five steps follows.
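The sketch assumes the matrices A, H, Q, and R have already been assembled from the error equations of Sections 5.3 and 5.4; it illustrates the recursion, not the paper’s implementation.

```python
import numpy as np

class ErrorStateKF:
    """15-dimensional error-state Kalman filter: attitude (3), velocity (3),
    position (3), gyroscope bias (3), accelerometer bias (3)."""

    def __init__(self, A, H, Q, R, n=15):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x = np.zeros(n)   # error state x
        self.P = np.eye(n)     # state covariance P

    def step(self, z):
        # (1) predict the state quantity (Equation (28))
        x_prior = self.A @ self.x
        # (2) predict the state covariance (Equation (29))
        P_prior = self.A @ self.P @ self.A.T + self.Q
        # (3) update the Kalman gain (Equation (30))
        S = self.H @ P_prior @ self.H.T + self.R
        K = P_prior @ self.H.T @ np.linalg.inv(S)
        # (4) update the state with the observation z, i.e., the SINS pose
        #     minus the visual-odometry pose (Equation (31))
        self.x = x_prior + K @ (z - self.H @ x_prior)
        # (5) update the state covariance (Equation (32))
        self.P = (np.eye(self.P.shape[0]) - K @ self.H) @ P_prior
        return self.x
```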

6. Experiment and Discussion

6.1. Dataset

We tested the proposed method on the KITTI dataset [54] and the EuRoC dataset [55] and compared it with three other methods. The absolute trajectory error (ATE) was used to compute the average translation error (m) and rotation error (°).
The ATE values on KITTI (sequences 00 to 03) and EuRoC (MH01–MH03) are shown in Table 3 and Table 4, respectively, with Figure 11 and Figure 12 showing the trajectories of KITTI seq. 00 and EuRoC MH_03 in the x–z plane.
From these results, we can summarize:
  • Vision-only odometry struggles to estimate attitude accurately, but it is relatively consistent in overall translation.
  • With the IMU alone, the position estimate obtained by integrating the accelerometer and gyroscope measurements drifts rapidly over time. The IMU results nevertheless have lower orientation error than vision alone, implying that the majority of the IMU drift comes from integrating the noisier accelerometer measurements, which leads to increasingly inaccurate velocity estimates.
  • Compared with ORB-SLAM2 [56], our method obtains lower error on all sequences, which can be attributed to the image-enhancement module and to the Kalman filter’s ability to fuse the IMU and vision data.
Overall, the proposed method has lower error than the other methods on all sequences; it corrects the accumulated errors in translation and rotation, significantly improving the position estimates.

6.2. Integrated Positioning Experiment

The experimental platform for the integrated positioning of the roadheader mainly includes an EBZ160M-2 Roadheader (SANY Heavy Industry Co., Ltd., Beijing, China), an FSON II nine-axis strapdown inertial navigation system (China Aerospace Science & Industry Corp., Beijing, China), a TS60 total station (Leica Geosystems, St. Gallen, Switzerland), a CH110 low-precision six-axis strapdown inertial navigation system (Beijing Hipnuc Electronic Technology Co., Ltd., Beijing, China), and a RealSense D455 camera (Intel Corporation, Santa Clara, CA, USA). The camera captures images at a resolution of 640 × 480 pixels, and all images are resized to 424 × 240 pixels before processing; the system processes images at 30 frames per second. The experiments were conducted in an extendable simulated roadway measuring 25 m in length, 4 m in width, and 5.5 m in height. The roadheader has a height of 2.5 m, a length of 10.4 m, and a track spacing of 1.1 m.
The CH110 is a six-axis strapdown inertial navigation system with moderate precision, equipped with a three-axis gyroscope and a three-axis accelerometer for measuring angular velocity and linear acceleration. Due to the absence of a built-in magnetometer, the CH110 is unable to conduct self-alignment to acquire the initial attitude. FSON II is a high-precision nine-axis strapdown inertial navigation system which can perform fine alignment and use the attitude after alignment as the initial attitude of the integrated positioning system. During the positioning experiment, the position data from the total station and the attitude data from FSON II will serve as the actual position and attitude data of the roadheader, validating the precision of the integrated positioning method. The parameters of CH110 and FSON II are shown in Table 5.
The equipment is installed as shown in Figure 13. The CH110 is installed on top of the roadheader, near the center of gravity, and the D455 and the prism are installed next to the CH110 to reduce the lever-arm error. If the installation error angle of the CH110 is not small, the CH110 must be calibrated against FSON II to eliminate the error and output the CH110’s initial attitude transformation matrix before the integrated positioning system starts working.
The operator controls the roadheader’s movement along the target path through a remote-control platform. The strapdown inertial navigation system (SINS) initiates the updating of the machine’s attitude, velocity, and position, while the stereo visual odometry performs its own pose updates. The navigation computer uses a Kalman filter to fuse the data from both systems, correcting and compensating errors in the SINS, and records the real-time position and attitude of the machine. The experiment was repeated multiple times under the same road conditions, and one set of experimental data was selected for analysis.
In the positioning experiment, the position and attitude of the roadheader were estimated by the CH110 and by the integrated positioning method. As shown in Figure 14, three curves represent the results of the different positioning methods. The displacements of the roadheader along the x, y, and z axes are shown in Figure 14a–c: Figure 14a shows the displacement along the x-axis in the navigation coordinate system, Figure 14b the displacement along the y-axis, and Figure 14c the displacement along the z-axis. The attitude of the roadheader, as solved by the different positioning methods, is shown in Figure 14d–f, which respectively depict the variations of the roadheader’s roll, yaw, and pitch angles over time. As can be seen from the figure, the position and attitude data calculated by the CH110 fluctuate widely, indicating a large amount of noise in the gyroscope and accelerometer data.
The results of the quantitative analysis are shown in Table 6; the maximum errors of roll, yaw, and pitch are 0.1129°, 1.3589°, and 0.9759°, respectively, and the maximum displacement errors along the x, y, and z axes are 0.0360 m, 0.1172 m, and 0.0150 m, respectively. Additionally, the average errors and standard deviations of the integrated positioning method are all smaller than those of the SINS, meaning that the integrated positioning system has higher accuracy and better reliability.

6.3. Discussions

As can be seen from Figure 14, the error of the SINS accumulates and tends to diverge over time, while the integrated positioning method proposed in this paper effectively eliminates this accumulated error. The deviation of the CH110’s position and attitude curves gradually increases over time, indicating that gyroscope and accelerometer errors accumulate and sharply degrade attitude-perception accuracy, whereas the integrated positioning method effectively suppresses the effect of accumulated errors. Comparing the displacement curves in different directions, the average error and standard deviation along the y-axis are larger than along the other two axes because the roadheader travels farther in the y direction; similarly, since the yaw angle varies more than the roll and pitch angles, its mean error and standard deviation are larger. The turning maneuvers of the roadheader at 75 s and 180 s make the SINS error increase significantly, while the integrated positioning method effectively limits the error fluctuation.
By comparing the position and attitude data calculated by the integrated positioning method and the SINS with real position and attitude data, it can be concluded that the integrated positioning method can effectively eliminate the influence of the accumulated errors of the SINS and realize the highly accurate positioning of the roadheader.
The quantitative analysis of the experimental data shows that the maximum errors of position and attitude of the integrated positioning method are reduced relative to the SINS error. The integrated positioning method proposed in this paper can realize the highly accurate positioning of the roadheader.
In this paper, we propose an integrated positioning method for a roadheader. By leveraging multi-sensor fusion, the accuracy of the positioning system can be enhanced without relying on costly sensors. Furthermore, this method demonstrates high applicability in harsh environments such as coal mines.
This method is not limited to roadheaders; it can also be used for other mine equipment operating in similar environments, such as bolt-drilling trucks and trackless rubber-tired vehicles. Inevitably, however, the positioning accuracy of the various sensors is degraded by vibration. In particular, the vibration caused by cutting not only reduces positioning accuracy but can also damage the sensors, so future research must reduce the impact of vibration through algorithmic and engineering optimization [57]. Furthermore, this method does not provide the roadheader’s position relative to the surrounding roadway; integrating the positioning technology with digital roadway technology therefore becomes imperative, as it can facilitate the advancement of roadheader automation.

6.4. Limitation

In stereo visual odometry, computing attitude and position is not instantaneous; it takes measurable time. On current hardware, our system processes images at 30 frames per second, which is matched to the roadheader’s travel speed. However, this speed relies on capable hardware components; we have not yet developed a method that reduces the computational overhead itself, which is a discernible limitation of this research.
Coal-mine roadways also present inherently unpredictable rock distributions. The subterranean geological landscape within coal mines is an intricate and dynamic environment. Often termed ‘mixed and transitioning terrain’, these roadways predominantly traverse sedimentary formations, including mudstone, siltstone, sandstone, and occasionally coal seams [58]. Notably, the chromatic properties of coal and the associated rock strata are markedly similar, often making them indistinguishable given their minimal localized variations [59].
In our future endeavors, we aim to further refine the Visual–Inertial Odometry (VIO) tailored for coal mining environments. Key areas of emphasis include optimizing computational overhead to enhance real-time responsiveness and advancing our research in image quality enhancement techniques specifically tailored for mining scenarios, such as dehazing and denoising. Through these improvements, we hope to develop a VIO system that is robust and efficient under the challenging conditions inherent in coal mines.

7. Conclusions

(1)
With the goal of achieving precise positioning and attitude perception of the roadheader, we propose an integrated positioning method based on the fusion of the strapdown inertial navigation system (SINS) and stereo visual odometry. The fundamental concept is to use the SINS as the reference system and stereo visual odometry as the supplementary system, with the data from both systems combined using the Kalman filter.
(2)
To eliminate the influence of low-quality images on the accuracy of stereo visual odometry, we designed an image-enhancement module to preprocess the images. We tested our image-enhancement module on both public and self-built datasets and conducted comparative experiments with other image-enhancement methods. The results show that our image-enhancement module can effectively improve image quality, increase the number of features extracted from images, and improve the accuracy of feature matching.
(3)
We tested the proposed integrated positioning method on the KITTI and EuRoC datasets and compared it with three other methods. In addition, we designed and conducted integrated positioning experiments on a simulated roadway, with the roadheader as the experimental object. The experimental results showed that the maximum errors for roll, yaw, and pitch were 0.1129°, 1.3589°, and 0.9759°, respectively, and the maximum displacement errors along the x, y, and z axes were 0.0360 m, 0.1172 m, and 0.0150 m, respectively.

Author Contributions

Software, Z.L., F.Z. and Y.W.; Formal analysis, H.W. (Hongwei Wang); Investigation, H.W. (Haoran Wang); Data curation, H.W. (Haoran Wang) and W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China (Grant number 2020YFB1314000), Bidding Project of Shanxi Province (Grant number 20201101008), and the Applied Basic Research Program of Shanxi Province (Grant number 202203021222105).

Conflicts of Interest

The authors declare no conflicts of interest relevant to the content of this article.

References

  1. Deshmukh, S.; Raina, A.K.; Murthy, V.M.S.R.; Trivedi, R.; Vajre, R. Roadheader–A comprehensive review. Tunn. Undergr. Space Technol. 2020, 95, 103148. [Google Scholar] [CrossRef]
  2. Corke, P.; Roberts, J.; Cunningham, J.; Hainsworth, D. Mining robotics. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1127–1150. [Google Scholar] [CrossRef]
  3. Jiang, X.X.; Li, C.X. Statistical analysis on coal mine accidents in China from 2013 to 2017 and discussion on the countermeasures. Coal Eng. 2019, 51, 101–105. [Google Scholar] [CrossRef]
  4. Liu, Y.; Li, Y.Q. Research on the automatic laser navigation system of the tunnel boring machine. Seventh Int. Symp. Precis. Eng. Meas. Instrum. 2011, 8321, 484–489. [Google Scholar] [CrossRef]
  5. Cui, Y.; Liu, S.; Liu, Q. Navigation and positioning technology in underground coal mines and tunnels: A review. J. South. Afr. Inst. Min. Metall. 2021, 121, 295–303. [Google Scholar] [CrossRef]
  6. Xie, H.P.; Wu, L.X.; Zheng, D.Z. Prediction on the energy consumption and coal demand of china in 2025. J. China Coal Soc. 2019, 44, 1949–1960. [Google Scholar] [CrossRef]
  7. Liu, M.; Gao, Y.; Li, G.; Guang, X.; Li, S. An improved alignment method for the strapdown inertial navigation system (SINS). Sensors 2016, 16, 621. [Google Scholar] [CrossRef]
  8. Tian, W.Q.; Tian, Y.; Jia, Q.; Zhang, K. Research status and development trend of cantilever Roadheader navigation technology. Coal Sci. Technol. 2022, 50, 0253–2336. [Google Scholar] [CrossRef]
  9. Qin, T.; Li, P.L.; Shen, S.J. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
  10. Mur, A.; Raul, J.M.M.M.; Juan, D.T. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  11. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
  12. Fu, S.; Li, Y.; Zhang, M.; Zong, K.; Cheng, L.; Wu, M. Ultra-wideband pose detection system for boom-type Roadheader based on Caffery transform and Taylor series expansion. Meas. Sci. Technol. 2017, 29, 015101. [Google Scholar] [CrossRef]
  13. Fu, S.; Li, Y.; Zong, K.; Liu, C.; Liu, D.; Wu, M. Ultra-wideband pose detection method based on TDOA positioning model for boom-type Roadheader. AEU Int. J. Electron. Commun. 2019, 99, 70–80. [Google Scholar] [CrossRef]
  14. Roman, M.; Breido, J.; Drijd, N. Development of Position System of a Roadheader on a Base of Active IR-sensor. Procedia Eng. 2015, 100, 617–621. [Google Scholar] [CrossRef]
  15. Du, Y.; Tong, M.; Liu, T.; Dong, H. Visual measurement system for Roadheaders pose detection in mines. Opt. Eng. 2016, 55, 104107. [Google Scholar] [CrossRef]
  16. Tao, Y.; Huang, Y.; Chen, J.; Li, P.; Wu, M. Study on measurement error of angle of deviation and offset distance of Roadheader by single-station, multipoint and time-shared measurement system based on iGPS. In Proceedings of the CSAA/IET International Conference on Aircraft Utility Systems (AUS 2018), Guiyang, China, 19–22 June 2018. [Google Scholar] [CrossRef]
  17. Tian, Y. Inertial navigation positioning method of Roadheader based on zero-velocity update. Ind. Mine Autom. 2019, 45, 70–73. [Google Scholar] [CrossRef]
  18. Shen, Y.; Wang, P.; Zheng, W.; Ji, X.; Jiang, H.; Wu, M. Error compensation of strapdown inertial navigation system for the boom-type Roadheader under complex vibration. Axioms 2021, 10, 224. [Google Scholar] [CrossRef]
  19. Wang, H.R. Roadheader combined positioning method based on strapdown inertial navigation and differential odometer. Ind. Mine Autom. 2022, 48, 148–156. [Google Scholar] [CrossRef]
  20. Yang, J.; Wang, C.; Zhang, Q.; Chang, B.; Wang, F.; Wang, X.; Wu, M. Modeling of Laneway Environment and Locating Method of Roadheader Based on Self-Coupling and Hector SLAM. In Proceedings of the 5th International Conference on Electromechanical Control Technology and Transportation, Nanchang, China, 15 May 2020. [Google Scholar] [CrossRef]
  21. Wang, L.X. Pose Measurement Technology of Roadheader Body based on Fusion of Visual and SINS. J. Phys. Conf. Ser. 2022, 2363, 012014. [Google Scholar] [CrossRef]
  22. Zhang, W.; Zhai, G.; Yue, Z.; Pan, T.; Cheng, R. Research on visual positioning of a Roadheader and construction of an environment map. Appl. Sci. 2021, 11, 4968. [Google Scholar] [CrossRef]
  23. Li, C.; Liu, J.; Zhu, J.; Zhang, W.; Bi, L. Mine image enhancement using adaptive bilateral gamma adjustment and double plateaus histogram equalization. Multimed. Tools Appl. 2022, 81, 12643–12660. [Google Scholar] [CrossRef]
  24. Zhang, W. Research on image enhancement algorithm for the monitoring system in coal mine hoist. Meas. Control 2023, 00202940231173767. [Google Scholar] [CrossRef]
  25. Nan, Z.; Yun, G. An image enhancement method in coal mine underground based on deep retinex network and fusion strategy. In Proceedings of the 6th International Conference on Image, Qingdao, China, 21–23 April 2021. [Google Scholar] [CrossRef]
  26. Zhuang, P.; Ding, X. Divide-and-conquer framework for image restoration and enhancement. Eng. Appl. Artif. Intell. 2019, 85, 830–844. [Google Scholar] [CrossRef]
27. Hosseini, S.A.; Abbaszadeh Shahri, A.; Asheghi, R. Prediction of bedload transport rate using a block combined network structure. Hydrol. Sci. J. 2022, 67, 117–128.
28. Zhuang, P. Image enhancement using divide-and-conquer strategy. J. Vis. Commun. Image Represent. 2017, 45, 137–146.
29. Ghosh, S.K.; Biswas, B.; Ghosh, A. A novel approach of retinal image enhancement using PSO system and measure of fuzziness. Procedia Comput. Sci. 2020, 167, 1300–1311.
30. Shahri, A.A.; Spross, J.; Johansson, F.; Larsson, S. Landslide susceptibility hazard map in southwest Sweden using artificial neural network. Catena 2019, 183, 104225.
31. Ibrahim, H.; Kong, N.S.P. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758.
32. Li, P.; Huang, Y.; Yao, K. Multi-algorithm fusion of RGB and HSV color spaces for image enhancement. In Proceedings of the 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018.
33. Petro, A.B.; Sbert, C.; Morel, J.-M. Multiscale Retinex. Image Process. Line 2014, 4, 71–88.
34. Land, E.H.; McCann, J.J. Lightness and retinex theory. J. Opt. Soc. Am. 1971, 61, 1–11.
35. Al-Hashim, M.A.; Al-Ameen, Z. Retinex-Based Multiphase Algorithm for Low-Light Image Enhancement. Trait. Du Signal 2020, 37, 733–743.
36. Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
37. Cho, Y.; Kim, A. Visibility enhancement for underwater visual SLAM based on underwater light scattering model. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017.
38. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
39. Lee, C.; Lee, C.; Kim, C.-S. Contrast enhancement based on layered difference representation. In Proceedings of the 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012.
40. Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
41. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
42. Vonikakis, V.; Bouzos, O.; Andreadis, I. Multi-exposure image fusion based on illumination estimation. Proc. IASTED SIPA 2011, 135–142.
43. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
44. Gu, K.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. No-reference quality metric of contrast-distorted images based on information maximization. IEEE Trans. Cybern. 2016, 47, 4559–4565.
45. Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
46. Gong, Y.; Liao, P.; Zhang, X.; Zhang, L.; Chen, G.; Zhu, K.; Tan, X.; Lv, Z. Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images. Remote Sens. 2021, 13, 1104.
47. Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. BMVC 2018, 220, 4. Available online: https://api.semanticscholar.org/CorpusID:52285038 (accessed on 6 September 2023).
48. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
49. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the 11th European Conference on Computer Vision (ECCV), Heraklion, Greece, 5–11 September 2010.
50. Wang, Z.; Chen, X.; Zeng, Q. Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion. Chin. J. Aeronaut. 2013, 26, 442–448.
51. Tian, M.; Liang, Z.; Liao, Z.; Yu, R.; Guo, H.; Wang, L. A Polar Robust Kalman Filter Algorithm for DVL-Aided SINSs Based on the Ellipsoidal Earth Model. Sensors 2022, 22, 7879.
52. Cui, Y.; Liu, S.; Yao, J.; Gu, C. Integrated positioning system of unmanned automatic vehicle in coal mines. IEEE Trans. Instrum. Meas. 2021, 70, 1–13.
53. Huang, Y.; Zhang, Y. A new process uncertainty robust Student's t-based Kalman filter for SINS/GPS integration. IEEE Access 2017, 5, 14391–14404.
54. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012.
55. Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
56. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
57. Liu, Y.; Wang, H.; Lei, T. TBM-MSE: A Multi-engine State Estimation Based on Inertial Enhancement for Tunnel Boring Machines in Perceptually Degraded Roadways. IEEE Access 2023, 11, 55978–55989.
58. Huang, X.; Liu, Q.; Shi, K.; Pan, Y.; Liu, J. Application and prospect of hard rock TBM for deep roadway construction in coal mines. Tunn. Undergr. Space Technol. 2018, 73, 105–126.
59. Yan, Z.; Wang, H.; Geng, Y. Coal-rock interface image recognition method based on improved DeeplabV3+ and transfer learning. Coal Sci. Technol. 2023, 41, 328–338.
Figure 1. Cantilever-type roadheader. The operator must manually control the traveling and cutting movements of the machine, and the operating environment is harsh and dangerous.
Figure 2. Framework of the integrated positioning system. It consists of three parts: a position and attitude perception system based on the SINS, a position and attitude perception system based on stereo visual odometry, and a data fusion module based on the Kalman filter.
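To make the data-fusion module in Figure 2 concrete, the following is a minimal sketch of a linear Kalman update in which a SINS-propagated position is corrected by a stereo-visual-odometry fix. The state vector, the observation model H, and the noise covariances Q and R_MEAS are illustrative assumptions, not the filter actually designed and tuned in the paper.

```python
import numpy as np

# Minimal linear Kalman filter sketch: the state is the 3-D position,
# predicted by SINS dead reckoning and corrected by stereo-VO fixes.
# Q and R_MEAS are illustrative placeholders, not the paper's values.
Q = np.eye(3) * 1e-4        # process noise (SINS drift)
R_MEAS = np.eye(3) * 1e-2   # measurement noise (stereo VO)
H = np.eye(3)               # VO is assumed to observe position directly

def kalman_step(x, P, sins_delta, vo_position):
    # Predict: propagate the SINS position increment.
    x_pred = x + sins_delta
    P_pred = P + Q
    # Update: correct with the stereo-VO position measurement.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R_MEAS)
    x_new = x_pred + K @ (vo_position - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```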
Figure 3. Euler angles. The Euler angles describe the attitude of the roadheader.
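Since Figure 3 parameterizes the roadheader's attitude with Euler angles, a short conversion to a rotation matrix may help readers reproduce the attitude bookkeeping. The Z-Y-X (yaw-pitch-roll) rotation order below is an assumed convention for illustration; the paper's body-frame definition may differ.

```python
import numpy as np

def euler_to_rotation(roll, pitch, yaw):
    """Rotation matrix from Euler angles (radians), Z-Y-X convention.

    The convention is assumed for illustration only.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx  # attitude of the roadheader body frame
```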
Figure 4. Image enhancement module. The module extracts a reflection component and performs a dehazing operation on the input image, obtaining components R and P, which are then fused into a new image.
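A rough sketch of the two-branch idea in Figure 4 is given below: a Retinex-style reflectance component R and a second, globally brightened component standing in for the dehazed component P are blended into one image. The Gaussian scale sigma, the stand-in operator for the dehazing branch (histogram equalization), and the fusion weight w_r are all placeholder assumptions, not the paper's actual operators.

```python
import cv2
import numpy as np

def enhance(gray, sigma=30, w_r=0.5):
    """Two-branch enhancement sketch for a uint8 grayscale image.

    Branch 1 extracts a Retinex-style reflectance R; branch 2 uses
    histogram equalization as a stand-in for the dehazed component P.
    sigma and w_r are illustrative placeholders.
    """
    img = gray.astype(np.float32) + 1.0
    # Branch 1: reflectance via single-scale Retinex (log-domain).
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    R = np.log(img) - np.log(illumination + 1.0)
    R = cv2.normalize(R, None, 0, 255, cv2.NORM_MINMAX)
    # Branch 2: stand-in "dehazed" component P.
    P = cv2.equalizeHist(gray).astype(np.float32)
    # Fuse the two components into the enhanced image.
    fused = w_r * R + (1.0 - w_r) * P
    return np.clip(fused, 0, 255).astype(np.uint8)
```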
Figure 5. Principle of Retinex theory. The image S is the product of the illumination component L and the reflection component R; the illumination component L adversely affects the extraction of feature points.
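In symbols, the decomposition sketched in Figure 5 and its usual log-domain solution (following the center/surround retinex of [36]) can be written as

```latex
S(x, y) = L(x, y)\, R(x, y)
\quad\Longrightarrow\quad
\log R(x, y) \approx \log S(x, y) - \log\!\big[(G_{\sigma} * S)(x, y)\big],
```

where the Gaussian surround G_sigma * S serves as an estimate of the illumination L, so that the reflectance R can be recovered and used for feature extraction.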
Figure 6. Different algorithms compared using four different datasets.
Figure 7. Different algorithms compared using self-built datasets.
Figure 8. Distribution histograms of the number of correctly matched feature points for the raw and enhanced images.
Figure 9. Results of feature-point matching. The lines connecting the images represent correspondences between matched feature points.
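Match counts of the kind summarized in Figures 8 and 9 can be approximated with standard tooling. The snippet below counts ratio-test matches between two grayscale frames; ORB (which uses a BRIEF-style binary descriptor [49]), the brute-force Hamming matcher, and the 0.75 ratio threshold are illustrative choices, not necessarily the paper's configuration.

```python
import cv2

def count_matches(img_a, img_b, ratio=0.75):
    """Count putative feature correspondences between two uint8
    grayscale images, e.g. to compare raw vs. enhanced frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe-style ratio test to keep only distinctive matches.
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```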
Figure 10. Depth-perception principle of stereo visual odometry. A feature point projects to different positions in the two cameras, and this disparity allows the stereo visual odometry to recover the depth z.
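The geometry in Figure 10 reduces to the familiar relation z = f·b/d between depth z, focal length f, baseline b, and disparity d. A small sketch follows; the camera numbers in the example are made up for illustration.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth z from stereo geometry: z = f * b / d, where f is the
    focal length in pixels, b the baseline in metres, and d the
    disparity in pixels. Non-positive disparities map to infinity."""
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0,
                    focal_px * baseline_m / np.maximum(d, 1e-6),
                    np.inf)

# e.g. a 20 px disparity with f = 700 px and b = 0.12 m gives z = 4.2 m
print(depth_from_disparity(20.0, 700.0, 0.12))
```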
Figure 11. KITTI seq. 00 trajectory.
Figure 12. EuRoC MH_03 trajectory.
Figure 13. Installation diagram of the experimental equipment. Positioning experiments were conducted in a simulated roadway with poor lighting conditions.
Figure 14. Position and attitude perception results of the different methods. (a–c) show the displacements of the roadheader along the x, y, and z axes; (d–f) show the roll, yaw, and pitch of the roadheader, respectively.
Table 1. Comparison of average NIQE scores on the public datasets. Lower scores indicate better image quality. For each case, the best result is in red and the second-best is in blue.
Methods        DICM   LIME   NPE    VV
LIESD          3.82   4.38   3.54   3.09
KinD           3.33   4.77   3.51   3.37
Mbllen         3.72   4.51   3.94   4.18
Enlight-GAN    3.57   3.72   4.11   2.58
Ours           3.47   3.64   3.73   3.28
Table 2. Comparison of average NIQMC scores on the public datasets. Higher scores indicate better image quality. For each case, the best result is in red and the second-best is in blue.
Methods        DICM   LIME   NPE    VV
Retinex_net    5.20   4.83   5.20   5.30
KinD           5.15   4.95   5.01   5.44
Mbllen         5.31   5.13   5.19   5.38
Enlight-GAN    5.13   4.88   5.15   5.45
Ours           5.47   5.41   5.13   5.49
Table 3. Comparisons of accuracy using the KITTI dataset.
Seq.   IMU            Visio          ORB-SLAM2      Proposed
       t      r       t      r       t      r       t      r
00     5.82   1.45    5.43   1.75    3.49   1.47    2.75   1.32
01     9.73   3.29    8.22   3.17    7.52   2.92    6.17   2.84
02     10.72  3.26    7.02   5.11    6.69   4.73    5.52   2.57
03     5.18   1.25    4.19   1.63    3.07   1.27    2.51   1.08
Table 4. Comparisons of accuracy using the EuRoC dataset.
Seq.    IMU            Visio          ORB-SLAM2      Proposed
        t      r       t      r       t      r       t      r
MH_01   0.45   1.53    0.49   1.65    0.34   1.62    0.26   1.37
MH_02   0.63   1.59    0.61   1.70    0.35   1.71    0.30   1.09
MH_03   0.59   1.54    0.22   1.65    0.24   1.01    0.17   0.97
MH_04   0.51   1.38    0.72   1.53    0.38   1.27    0.21   1.12
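As a rough way to reproduce accuracy numbers of the kind reported in Tables 3 and 4, an estimated trajectory can be scored against ground truth. The sketch below computes a simple translational RMSE over aligned positions; note that the full KITTI protocol instead averages errors over fixed-length sub-sequences, so this is a simplification.

```python
import numpy as np

def translation_rmse(est_xyz, gt_xyz):
    """RMSE of the translational error between an estimated trajectory
    and ground truth, both given as N x 3 arrays of aligned positions."""
    err = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```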
Table 5. Parameters of CH110 and FSON II.
Parameters                      CH110    FSON II
Gyroscope zero bias/(°·h⁻¹)     ≤3.5     ≤0.005
Alignment accuracy/(°)          0.1      0.01
Heading accuracy/(°)            ≤0.4     ≤0.02
Table 6. Results of the quantitative analysis of the positioning data. It contains three evaluation indicators: maximum error, average error, and standard deviation.
Positioning Method   Assessment Indicators   x/m       y/m       z/m      Roll/°    Yaw/°     Pitch/°
Proposed method      Maximum error           0.0360    0.1172    0.0150   0.1129    1.0616    0.9759
                     Average error           −0.0012   −0.0615   0.0073   −0.0343   −0.477    −0.5367
                     Standard deviation      0.0150    0.0328    0.0035   0.0358    0.2751    0.2413
SINS                 Maximum error           0.1148    0.3942    0.0581   0.2203    1.3589    1.0110
                     Average error           −0.0036   −0.2061   0.0314   −0.0631   −0.1227   −0.5782
                     Standard deviation      0.0492    0.1143    0.0133   0.0928    0.6188    0.2401
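The three indicators in Table 6 are direct statistics of the logged error sequences. For completeness, a small helper that reproduces them for one axis is sketched below; taking the maximum as the signed error of largest magnitude is an interpretation of how the table's maxima were computed.

```python
import numpy as np

def table6_indicators(errors):
    """Compute the three Table 6 indicators for one axis: maximum
    (largest-magnitude) error, average error, and standard deviation."""
    e = np.asarray(errors, dtype=np.float64)
    return {
        "maximum_error": float(e[np.argmax(np.abs(e))]),
        "average_error": float(e.mean()),
        "standard_deviation": float(e.std(ddof=0)),
    }
```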