Article

Robust-Extended Kalman Filter and Long Short-Term Memory Combination to Enhance the Quality of Single Point Positioning

1 Lab-STICC CNRS UMR 6285, ENSTA Bretagne, 2 Rue François Verny, 29806 Brest CEDEX 9, France
2 Safran Electronics & Defense, 21 Avenue du Gros Chêne, 95610 Eragny sur Oise, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(12), 4335; https://doi.org/10.3390/app10124335
Submission received: 22 April 2020 / Revised: 8 June 2020 / Accepted: 16 June 2020 / Published: 24 June 2020

Abstract:
In recent years, multi-constellation and multi-frequency measurements have improved the positioning precision in GNSS applications and significantly expanded the range of applications to new areas and services. However, the use of multiple signals presents disadvantages as well as advantages, since poor-quality signals may negatively impact the position precision. The objective of this study is to improve the Single Point Positioning (SPP) accuracy using multi-GNSS data fusion. We propose a robust Extended Kalman Filter (referred to hereafter as robust-EKF) to eliminate outliers. The robust-EKF used in the present work combines the Extended Kalman Filter with Iterative ReWeighted Least Squares (IRWLS) and Receiver Autonomous Integrity Monitoring (RAIM). The weight matrix in the IRWLS is defined by the MM-Estimation method, a robust statistics approach providing efficient statistical data analysis with a high breakdown point. The RAIM algorithm is used to check the accuracy of the protection zone of the user. We apply the robust-EKF method to the combination of GPS, Galileo and GLONASS data from the ABMF base station, a GNSS reception station managed by Météo-France in Guadeloupe; this significantly improves the position accuracy, by about 84% compared to the non-robust data combination. Although robust-EKF improves the position accuracy, its outputs may still contain errors that are difficult to estimate, so an algorithm that can predetermine the error produced by robust-EKF is needed. For this purpose, the Long Short-Term Memory (LSTM) method is proposed as an adapted deep-learning-based approach. In this paper, LSTM is used as a de-noising filter, and the new method is proposed as a hybrid combination of robust-EKF and LSTM, denoted rEKF-LSTM.
The position precision greatly improves, by about 95% compared to the non-robust combination of data from the ABMF base station. In order to assess the rEKF-LSTM method, data from other base stations are tested. The position precision is enhanced by about 87%, 77% and 93% using the rEKF-LSTM compared to the non-robust combination of data from three other base stations, AJAC, GRAC and LMMF in France, respectively.

1. Introduction

Nowadays, positioning services can greatly improve life quality by covering a wide range of applications such as automatic driving, intelligent transportation, agriculture and so on. In this context, the Global Navigation Satellite System (GNSS) is an optimal infrastructure to perform accurate positioning. The GNSS-based absolute positioning technologies can be called single point positioning (SPP) technologies and offer multifold advantages, such as the absence of restriction by the inter-station distance, low cost and simple data processing.
GNSS brings more signals to enhance the accuracy of the user position. However, a main challenge for multi-constellation GNSS positioning is to determine reliable satellite measurements among all visible satellites for inferring the true position. The estimation precision depends on the ability of the receiver to compute the position in the presence of outlying observations. Multipath and NLoS (Non-Line-Of-Sight) signals can cause errors in the satellite observations for a receiver moving in harsh environments such as urban canyons, under dense tree cover, or inside tunnels. It is thus challenging to achieve positioning accuracy and reliability in such environments, and robust estimation techniques are required to prevent wrong or outlier satellite observations from degrading the user position. For this purpose, several algorithms and methods have been proposed to improve the GNSS receiver performance in terms of positioning accuracy. Receiver Autonomous Integrity Monitoring (RAIM) is a commonly used approach that rejects outlier observations using pseudo-range measurement redundancy, maintaining the positioning accuracy by isolating the contaminated measurements from the good ones [1,2,3]. However, for user position estimation in urban environments, the visible and available satellites change rapidly and it is difficult to use RAIM for satellite selection. Besides satellite selection, Weighted Least-Squares Estimation (WLSE) is often used for calculating and assigning weights to the GNSS observations [4,5,6,7,8,9]. The weight model is defined based on the elevation angle and the signal-to-noise ratio. However, it suffers from the same shortcomings as RAIM, since the satellite elevation angle and the signal-to-noise ratio can be impacted by multipath and radio interference, especially in urban canyons.
This has been highlighted in our previous works [8,9], where we used the WLSE based on the elevation angle and signal-to-noise ratio: the positioning accuracy can improve, but the estimations remain unstable. In consequence, the robust-Extended Kalman Filter (robust-EKF) [10,11,12,13,14,15,16,17,18] is proposed in this paper to address this issue. Robust-EKF combines the Extended Kalman Filter with the Iterative Reweighted Least Squares algorithm, with a weight matrix defined by MM-Estimation, and uses the RAIM algorithm to check the protection zone of the user.
The robust-EKF is based on robust statistical estimation, which is employed to reduce the negative impact of inaccurate pseudo-range measurements. In the statistical framework, robust estimation algorithms are assessed on three factors: efficiency, stability and breakdown point. Efficiency measures the performance of an estimator; stability measures how small deviations from the model assumptions affect the performance; and the breakdown point is the maximal fraction of outlier data that the estimator can handle without producing an incorrect final estimate. The role of a robust algorithm is to balance these three factors. Multiple robust statistical estimation techniques have been discussed, including M-Estimation, S-Estimation, MM-Estimation, etc. [19]. In this paper, the MM-Estimation proposed by Yohai [20] is used to define the weight matrix, for its efficient statistical data analysis with a high breakdown point. Iterative Reweighted Least Squares (IRWLS) [21,22] is used to optimize the results by an iterative method in which each step involves solving a weighted least squares problem. Even though combining MM-Estimation and the IRWLS algorithm can eliminate outlier data, a method to check the user's protection zone is necessary. For this, the RAIM algorithm [23] is used to calculate the Horizontal Protection Level (HPL).
Furthermore, although robust-EKF can significantly improve the position accuracy, if the initial state estimate is erroneous or if the process is incorrectly modelled, the final results are negatively impacted. Therefore, an algorithm that can predetermine the error behavior of the robust-EKF is needed. Recently, deep learning has emerged as a powerful approach in many domains such as automatic speech recognition, image recognition, medical image analysis, GNSS, etc. Long Short-Term Memory (LSTM) is an artificial recurrent neural network architecture used in the field of deep learning and known as an effective method to infer predictions from time series data [24,25,26,27,28]. This paper hence proposes to use LSTM as a de-noising filter. Figure 1 summarizes the global flowchart of the position determination using the proposed approach.
The remainder of this paper is organized as follows: Section 2 introduces the single point positioning technology; Section 3 presents the Robust-Extended Kalman Filter; Section 4 shows the experimental results of applying robust-EKF; Section 5 presents the de-noising filter method and experimental results; Section 6 presents the conclusions and perspectives.

2. Single Point Positioning Technology

The Single Point Positioning technology can provide meter-level positioning accuracy for emerging multi-GNSS integration and opens various new prospects. GNSS brings more signals to improve the accuracy of the user position and velocity. In this paper, the observation models of three satellite systems (GPS, Galileo and GLONASS) are combined. When combining these satellite data, attention must be paid to two aspects: the coordinate reference and the time reference.
With respect to the coordinate reference, the coordinate systems of GPS, Galileo and GLONASS satellites adopt the broadcast orbits of WGS-84, GTRF and PZ90.11, respectively. Even though their coordinates are different, the disparity is around several centimeters as stated in [29]. As a result, their coordinate systems are considered the same.
As for the time reference, the GPS, Galileo and GLONASS systems each define their own time scale. These time systems are presented in the RINEX report version 3.03 [30]. GPS time ($GPST$) runs parallel to UTC (Universal Time Coordinated), with a difference of about 1 microsecond, but it is a continuous time scale that does not insert leap seconds. Galileo runs on Galileo System Time ($GST$), which is nearly identical to GPS time (tens of nanoseconds difference). GLONASS basically runs on UTC or, more precisely, GLONASS time ($GLONASST$) linked to UTC(SU). It is not a continuous time scale, i.e., it introduces the same leap seconds as UTC. The reported GLONASS time has the same hours as UTC, not the UTC+3h of the original GLONASS System Time. Apart from the mild errors in the realization of the different time systems, the relation between them is given by:
$$GLONASST = UTC \quad \text{and} \quad GPST = GST = UTC + \Delta t_{LS} + \tau_{GPS}$$
where $\Delta t_{LS}$ is the time difference between $GPST$ and $UTC$ due to leap seconds ($\Delta t_{LS} = 18$ s on 1 January 2019) and $\tau_{GPS}$ is the fractional time offset between the two systems (GPS and GLONASS).
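As a minimal illustration of this relation (assuming $\Delta t_{LS} = 18$ s and neglecting the small fractional offset $\tau_{GPS}$), the conversions can be sketched in Python:

```python
# Sketch of the time-scale relation GPST = GST = UTC + dt_LS + tau_GPS and
# GLONASST = UTC. The fractional offset tau_GPS (tens of ns) is neglected here.
DELTA_T_LS = 18.0  # leap seconds between GPST and UTC as of 1 January 2019

def utc_to_gpst(t_utc: float) -> float:
    """Convert a UTC epoch (in seconds) to GPS time."""
    return t_utc + DELTA_T_LS

def glonasst_to_gpst(t_glonass: float) -> float:
    """GLONASS time runs on UTC, so the same leap-second offset applies."""
    return t_glonass + DELTA_T_LS
```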
After synchronization of the coordinate and time systems, the position and velocity estimation models are presented in the following.
Pseudo-range measurements are considered as the sum of the distance to the satellite, the receiver clock error, the ionospheric and tropospheric errors and multi-paths. They are expressed as follows:
$$\rho^{G,E,R} = \sqrt{(x^{G,E,R} - x_u)^2 + (y^{G,E,R} - y_u)^2 + (z^{G,E,R} - z_u)^2} + c\,(\delta t^{G,E,R} - \delta T_S^{G,E,R}) + Iono^{G,E,R} + Tropo^{G,E,R} + Mulpa^{G,E,R} + \varepsilon_\rho^{G,E,R}$$
where the superscripts/subscripts G, E and R refer to GPS, Galileo and GLONASS satellites, respectively; $\rho$ is the measured pseudo-range; $c$ is the speed of light in vacuum; $[x_u, y_u, z_u]$ is the user position; $[x^{G,E,R}, y^{G,E,R}, z^{G,E,R}]$ are the positions of the GPS, Galileo and GLONASS satellites, respectively; $\delta t$ is the user clock offset; $\delta T_S$ is the satellite clock offset; $Iono$ is the ionospheric delay; $Tropo$ is the tropospheric delay; $Mulpa$ is the multi-path error and $\varepsilon_\rho$ is the measurement noise.
Before the determination of the user position and velocity, it is required to correct the GNSS errors, namely the satellite clock biases, the ionospheric errors, the tropospheric errors and the multi-paths. The corrections of the satellite clock for GPS and Galileo are presented in [31] (pp. 88–89), whereas the correction of the satellite clock of GLONASS is discussed in [30] (p. 34). We use the Saastamoinen model to correct the tropospheric errors as in [32] (pp. 135–137), and we correct the ionospheric errors using the dual-frequency method in [23] (p. 286), while the multi-paths are estimated by the combination of the code pseudo-ranges and the phase pseudo-ranges in [23] (pp. 290–291). After compensating for the satellite clock bias, the ionospheric errors, the tropospheric errors and the multi-paths in Equation (2), the corrected pseudo-ranges can be written as:
$$\rho_c^{G,E,R} = r^{G,E,R} + c\,\delta t^{G,E,R} + \varepsilon_\rho^{G,E,R}$$
where $\rho_c$ is the corrected pseudo-range and $r$ is the true range from the satellite to the user:
$$r^{G,E,R} = \sqrt{(x^{G,E,R} - x_u)^2 + (y^{G,E,R} - y_u)^2 + (z^{G,E,R} - z_u)^2}$$
Given the Doppler effect, the pseudo-range rate $\dot{\rho}$ can be computed as [31]:
$$\dot{\rho} = \frac{D\,c}{f}$$
where $D$ is the Doppler shift; $c$ is the speed of light and $f$ is the satellite transmitted frequency.
The velocity can be estimated from the pseudo-range rate, starting by differentiating Equation (3) to obtain:
$$\dot{\rho}_c^{G,E,R} = \dot{r}^{G,E,R} + c\,\delta\dot{t}^{G,E,R} + \varepsilon_{\dot{\rho}}^{G,E,R}$$
where $\delta\dot{t}$ is the user's clock drift (s/s); $\varepsilon_{\dot{\rho}}$ is the observation error (m/s); and $\dot{r}$ is the true range rate expressed as:
$$\dot{r}^{G,E,R} = 1_x\,(v_x^{G,E,R} - v_{x,u}) + 1_y\,(v_y^{G,E,R} - v_{y,u}) + 1_z\,(v_z^{G,E,R} - v_{z,u})$$
where $[v_x^{G,E,R}, v_y^{G,E,R}, v_z^{G,E,R}]$ is the satellite's velocity; $[v_{x,u}, v_{y,u}, v_{z,u}]$ is the true user's velocity; and $[1_x, 1_y, 1_z]$ is the true line-of-sight unit vector from the user to the satellite:
$$[1_x, 1_y, 1_z] = \frac{[x^{G,E,R} - x_u,\; y^{G,E,R} - y_u,\; z^{G,E,R} - z_u]}{\sqrt{(x^{G,E,R} - x_u)^2 + (y^{G,E,R} - y_u)^2 + (z^{G,E,R} - z_u)^2}}$$
Thus, the combined equations for the corrected pseudo-ranges (Equation (3)) and corrected pseudo-range rates (Equation (6)) can be expressed as follows:
$$\rho_c^{G,E,R} = r^{G,E,R} + c\,\delta t^{G,E,R} + \varepsilon_\rho^{G,E,R} \quad \text{and} \quad \dot{\rho}_c^{G,E,R} = \dot{r}^{G,E,R} + c\,\delta\dot{t}^{G,E,R} + \varepsilon_{\dot{\rho}}^{G,E,R}$$
To simultaneously estimate the user position and velocity, the state vector is defined from Equations (7)–(9) as follows:
$$X = \begin{bmatrix} x_u & v_{x,u} & y_u & v_{y,u} & z_u & v_{z,u} & c\delta t^G & c\delta\dot{t}^G & c\delta t^E & c\delta\dot{t}^E & c\delta t^R & c\delta\dot{t}^R \end{bmatrix}^T$$
This vector will be used in the robust Extended Kalman Filter algorithm detailed in Section 3.
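As a numerical sketch of the range, line-of-sight and range-rate expressions above (Equations (4), (7) and (8)), with purely illustrative coordinates rather than real satellite data:

```python
import math

def true_range(sat, user):
    """Geometric range r between satellite and user positions (Equation (4))."""
    return math.sqrt(sum((s - u) ** 2 for s, u in zip(sat, user)))

def los_unit_vector(sat, user):
    """Line-of-sight unit vector from the user towards the satellite (Equation (8))."""
    r = true_range(sat, user)
    return tuple((s - u) / r for s, u in zip(sat, user))

def range_rate(sat, user, v_sat, v_user):
    """True range rate: relative velocity projected on the line of sight (Equation (7))."""
    los = los_unit_vector(sat, user)
    return sum(l * (vs - vu) for l, vs, vu in zip(los, v_sat, v_user))
```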

3. Robust Extended Kalman Filter

The Extended Kalman Filter (EKF) is an extension of the Kalman filter to nonlinear optimal filtering problems. It is used in a wide range of technological fields (radar, electronic vision, communication). However, in hybrid positioning, large outliers can negatively impact its application to position estimation. According to [17,18], Kalman filter outliers can be sorted into three types: observation, innovation and structural outliers. In this paper, the robust Extended Kalman Filter is used to handle observation and innovation outliers. Robust-EKF is presented as the combination of the Extended Kalman Filter with the Iterative Reweighted Least Squares algorithm (Section 3.3), with a weight matrix determined by MM-Estimation (Section 3.2). Moreover, the RAIM algorithm (Section 3.4) is used to check the accuracy of the user protection zone.

3.1. Robust Extended Kalman Filter Model

The state space representation for the extended Kalman filter model is defined by two equations as follows:
State Equation:
$$X_k = f(X_{k-1}) + \epsilon_k$$
Observation Equation:
$$Z_k = h(X_k) + e_k$$
where $X_k$ is the state vector and $Z_k$ is the measurement vector at time $t_k$; $\epsilon_k$ is the vector that conveys the system error sources; $e_k$ is the vector that represents the measurement error sources; $f(\cdot)$ is the state transition function and $h(\cdot)$ is the measurement function.
IRWLS works on linear regression to handle the problem of outliers in the data. Therefore, converting the non-linear model into a linear one is necessary to apply the robust-EKF, which consists of three steps: linearizing the equations, reforming the filter and updating.
Step 1: Linearizing equations
The linearized state in Equation (11) is described as:
$$A_k = \left. \frac{\partial f}{\partial X} \right|_{X = X_{k-1}}$$
where $A_k$ is the Jacobian of the process model.
The predicted state is determined by:
$$X_k^- = A_k X_{k-1}$$
The relation between the true state $X_k$ and its prediction $X_k^-$ can be written as:
$$X_k = X_k^- - \delta_k$$
where $\delta_k$ is the error between the true state and its prediction.
Linearizing Equation (12) using the first-order Taylor series expansion around the predicted state vector $X_k^-$:
$$Z_k = h(X_k^-) + \left. \frac{\partial h}{\partial X} \right|_{X = X_k^-} (X_k - X_k^-) + e_k$$
By noting $H_k = \left. \frac{\partial h}{\partial X} \right|_{X = X_k^-}$, Equation (16) can be written as:
$$Z_k = h(X_k^-) + H_k (X_k - X_k^-) + e_k$$
or
$$Z_k - h(X_k^-) + H_k X_k^- = H_k X_k + e_k$$
Step 2: Reforming the filter
This step reforms the filter into a regression equation. To perform this, combining Equations (15) and (18) in matrix form yields the following:
$$\begin{bmatrix} X_k^- \\ Z_k - h(X_k^-) + H_k X_k^- \end{bmatrix} = \begin{bmatrix} I \\ H_k \end{bmatrix} X_k + \begin{bmatrix} \delta_k \\ e_k \end{bmatrix}$$
where I is the identity matrix.
Equation (19) is expressed in a compact form as:
$$\breve{Z}_k = \breve{H}_k X_k + \breve{e}_k$$
The covariance matrix of $\breve{e}_k$ is given by:
$$\breve{R}_k = \begin{bmatrix} P_k^- & 0 \\ 0 & R_k \end{bmatrix} = L_k L_k^T$$
where $R_k = E[e_k e_k^T]$ is the measurement covariance matrix and $P_k^- = E[\delta_k \delta_k^T]$ is the covariance matrix of the predicted error. $P_k^-$ is determined by:
$$P_k^- = A_k P_{k-1} A_k^T + Q_k$$
where Q k is the covariance matrix of the process.
$L_k$ in Equation (21) is calculated using the Cholesky decomposition. To perform pre-whitening, Equation (20) is multiplied by $L_k^{-1}$, and by defining $y_k = L_k^{-1} \breve{Z}_k$, $G_k = L_k^{-1} \breve{H}_k$ and $\zeta_k = L_k^{-1} \breve{e}_k$, Equation (20) becomes the regression equation written as:
$$y_k = G_k X_k + \zeta_k$$
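The pre-whitening step can be sketched with NumPy as follows (a minimal sketch with hypothetical small matrices; `numpy.linalg.cholesky` returns the lower-triangular factor $L_k$):

```python
import numpy as np

def prewhiten(Z_breve, H_breve, R_breve):
    """Pre-whiten the stacked regression model of Equation (20) so that the
    transformed noise zeta_k = L^{-1} e_breve has identity covariance."""
    L = np.linalg.cholesky(R_breve)   # R_breve = L @ L.T
    L_inv = np.linalg.inv(L)
    y = L_inv @ Z_breve               # y_k = L^{-1} Z_breve
    G = L_inv @ H_breve               # G_k = L^{-1} H_breve
    return y, G
```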
Step 3: Updating
This step consists in updating the estimated state and checking the position accuracy using RAIM algorithm presented in section “RAIM algorithm” (Section 3.4). The estimated state X ^ k in Equation (23) is optimized by the IRWLS algorithm presented in section “Iterative Reweighted Least Squares Algorithm” (Section 3.3). The final state estimate X ^ k is given by:
$$\hat{X}_k = \left( G_k^T W_k G_k \right)^{-1} G_k^T W_k y_k$$
where W k is the diagonal weight matrix at time t k . This matrix is defined in section “MM Estimation theory” (Section 3.2) and is calculated in section “Iterative Reweighted Least Squares Algorithm” (Section 3.3).
The error covariance matrix is updated as follows:
$$P_k = (I - K H_k) P_k^-$$
where $K$ is the gain of the filter, given by:
$$K = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}$$

3.2. MM Estimation Theory

In regression analysis, the least squares method is not enough to handle outliers or extreme observations. We therefore resort to MM-Estimation, which is known as a robust statistics method providing a more efficient analysis of statistical data with a high breakdown point, as demonstrated in [20,22].
The linear equation in this paper is written as follows:
$$y = G X + e$$
where y is the new observation vector after combining system and observation data; G is the combined system and observation matrices corresponding to y; X is the unknown state vector and e is the noise vector corresponding to y.
The robust method is used to bound the influence of outliers on the X state estimates. The goal of the robust estimator is to seek the best fitting criterion to the real observations in order to reduce the effect of abnormal data. Generally, a robust estimator minimizes the cost function J ( r ) given by:
$$J(r) = \sum_{i=1}^{n} \mu(r_i) = \sum_{i=1}^{n} \mu(y_i - G_i X)$$
where $y_i$ is the $i^{th}$ observation, $G_i$ is the transformation vector for the $i^{th}$ observation, and $r_i = y_i - G_i X$ is the residual error of the $i^{th}$ observation. $\mu(r)$ is the objective function that gives the contribution of each residual to the cost function $J(r)$.
A scaled estimation is used so that the minimization dynamically down-weights the outlying observations:
$$\min_X \sum_{i=1}^{n} \mu\left( \frac{r_i}{s} \right) = \min_X \sum_{i=1}^{n} \mu\left( \frac{y_i - G_i X}{s} \right)$$
where s is the scale factor.
Let $\varphi$ be the derivative of the objective function with respect to the unknown $X$, $\varphi = \partial\mu / \partial X$. Differentiating the objective function in Equation (29) and setting the derivative to zero:
$$\frac{1}{s} \sum_{i=1}^{n} \varphi\left( \frac{y_i - G_i X}{s} \right) G_i = 0$$
which is equivalent to:
$$\sum_{i=1}^{n} \varphi\left( \frac{y_i - G_i X}{s} \right) G_i = 0$$
By introducing a weight $w(\breve{r}_{s,i})$, with $w(r) = \varphi(r)/r$, to account for the effect of each scaled residual $\breve{r}_{s,i} = (y_i - G_i X)/s$, Equation (31) can be modified as:
$$\sum_{i=1}^{n} w(\breve{r}_{s,i})\, \breve{r}_{s,i}\, G_i = 0$$
The solution of Equation (32) is a weighted least squares estimation. Nevertheless, the weights are determined by the scaled residuals, which rely upon the estimated coefficients, which themselves rely upon the weights. For this reason, the Iterative Re-Weighted Least Squares method is used and presented in section “Iterative Reweighted Least Squares Algorithm” (Section 3.3) to optimize the solution. The weight function plays an important role and some variants are presented in Table 1.
In Table 1, $r_i$ is the $i^{th}$ observation residual and $\alpha$ is a tuning constant that balances a high breakdown point against the efficiency of the estimator [22].
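As an illustration of such a weight function, here is a sketch of the Tukey bisquare (bi-Tukey) weight used below in the IRWLS steps; the tuning constant α = 4.685 is a common default and an assumption here, not a value stated in this paper:

```python
def tukey_bisquare_weight(r, alpha=4.685):
    """Tukey bisquare weight: w = (1 - (r/alpha)^2)^2 inside the cut-off,
    and 0 for residuals beyond alpha (hard rejection of large outliers)."""
    u = r / alpha
    return (1.0 - u * u) ** 2 if abs(u) <= 1.0 else 0.0
```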

3.3. Iterative Reweighted Least Squares Algorithm

An Iterative Reweighted Least Squares (IRWLS) algorithm is performed to eliminate the outliers. The steps of the robust estimation procedure are defined as follows:
  • Find an initial estimate $\hat{X}_0$:
    $$\hat{X}_0 = (G_0^T G_0)^{-1} G_0^T y$$
  • Estimate the initial residual vector of the observations, $r_0$:
    $$r_0 = y - G_0 \hat{X}_0$$
  • Define the initial scale value $s_0$ [33]:
    $$s_0 = 1.4826 \cdot \mathrm{median}(|r_0|)$$
    where "median" stands for the median function computed on the residual vector $r_0$.
  • Estimate the initial diagonal weight matrix by MM-Estimation:
    $$W_0(i,i) = \begin{cases} \left( 1 - \left( \frac{r_0(i)}{s_0\,\alpha} \right)^2 \right)^2 & \text{if } \left| \frac{r_0(i)}{s_0} \right| \le \alpha \\ 0 & \text{if } \left| \frac{r_0(i)}{s_0} \right| > \alpha \end{cases}$$
    where $i = 1, \ldots, n$, $n$ being the number of observations.
  • While ($j$ is the $j^{th}$ iteration):
    (a) Update the value of the matrix $G_j$ at the $j^{th}$ iteration with $\hat{X}_{j-1}$
    (b) Solve the estimated state $\hat{X}_j$ using the weighted least-squares method:
    $$\hat{X}_j = (G_j^T W_{j-1} G_j)^{-1} G_j^T W_{j-1} y$$
    (c) Calculate the HPL (Section 3.4)
    (d) If $\| \hat{X}_j - \hat{X}_{j-1} \| < 0.001$, break
    (e) If $\| \hat{X}_j - \hat{X}_{j-1} \| \ge 0.001$, continue
    (f) Update the estimated residuals:
    $$r_j = y - G_j \hat{X}_j$$
    (g) Calculate the scale value:
    $$s_j = 1.4826 \cdot \mathrm{median}(|r_j|)$$
    (h) Recalculate the diagonal weight matrix using MM-Estimation:
    $$W_j(i,i) = \begin{cases} \left( 1 - \left( \frac{r_j(i)}{s_j\,\alpha} \right)^2 \right)^2 & \text{if } \left| \frac{r_j(i)}{s_j} \right| \le \alpha \\ 0 & \text{if } \left| \frac{r_j(i)}{s_j} \right| > \alpha \end{cases}$$
    (i) Go to step (a)
  • End
  • If $HPL \le HAL$ (HAL: Horizontal Alert Limit), the estimated positions are accepted; otherwise, they are rejected.
In this paper, we use the bi-Tukey weight function in Equations (36) and (40). The HPL is calculated using the RAIM algorithm presented in Section 3.4.
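The steps above can be sketched as follows (a simplified NumPy sketch for a fixed linear model G: the HPL check of step (c) and the relinearization of step (a) are omitted):

```python
import numpy as np

def irwls(G, y, alpha=4.685, tol=1e-3, max_iter=50):
    """Iterative Reweighted Least Squares with a Tukey-bisquare weight on
    MAD-scaled residuals (sketch; RAIM/HPL checks are omitted)."""
    X = np.linalg.lstsq(G, y, rcond=None)[0]       # initial estimate X_0
    for _ in range(max_iter):
        r = y - G @ X                               # residuals r_j
        s = 1.4826 * np.median(np.abs(r))           # robust scale s_j
        if s == 0.0:                                # degenerate perfect fit
            break
        u = r / (s * alpha)
        w = np.where(np.abs(r / s) <= alpha, (1.0 - u**2) ** 2, 0.0)
        W = np.diag(w)
        X_new = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)
        if np.linalg.norm(X_new - X) < tol:         # convergence test
            X = X_new
            break
        X = X_new
    return X
```

For example, fitting a line contaminated by one gross outlier recovers the true coefficients, whereas plain least squares would be pulled towards the outlier.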

3.4. RAIM Algorithm

In this paper, the RAIM algorithm is used to detect positioning errors exceeding the alert limit. One of RAIM's outputs is the HPL, which is defined as the radius of a circle in the horizontal plane centered at the true position. The Horizontal Alert Limit (HAL) is the maximum allowed HPL, which means that the estimated positions are accepted when $HPL \le HAL$. Given the linear Equation (27), the estimated state vector $\hat{X}$ is determined by IRWLS as explained in Section 3.3:
$$\hat{X} = (G^T W G)^{-1} G^T W y$$
where $W$ is the last weight matrix obtained in Equation (40).
The state error is calculated as the difference between the estimated state vector X ^ and the true state vector X:
$$\delta X = \hat{X} - X = (G^T W G)^{-1} G^T W y - X = (G^T W G)^{-1} G^T W (G X + e) - X = (G^T W G)^{-1} G^T W e$$
The estimated residual $\hat{e}$ is determined by computing the difference between the measured observation $y$ and the estimated observation $G\hat{X}$:
$$\hat{e} = y - G\hat{X} = y - G (G^T W G)^{-1} G^T W y = \left( I - G (G^T W G)^{-1} G^T W \right) y = S y$$
Substituting $y = G X + e$:
$$\hat{e} = S (G X + e) = G X + e - G X - G (G^T W G)^{-1} G^T W e = \left( I - G (G^T W G)^{-1} G^T W \right) e = S e$$
where:
$$S = I - G (G^T W G)^{-1} G^T W$$
Note that $S$ is a projector matrix and thus idempotent: $S^2 = S$. The a priori standard deviation of the pseudo-range is calculated as follows:
$$\sigma_0 = \sqrt{\frac{\hat{e}^T W \hat{e}}{n - p}}$$
where n is the number of observations and p is the number of parameters in the unknown state X.
The position error in the state error $\delta X$ of Equation (42) is defined in the (x, y, z) coordinate system. In practice, a local (e, n, u: east, north, up) coordinate system is more suitable. To transform from (x, y, z) coordinates to (e, n, u) coordinates, the orthogonal transformation matrix $F^T$ is required [23]:
$$F^T = \begin{bmatrix} -\sin\lambda & \cos\lambda & 0 \\ -\sin\varphi \cos\lambda & -\sin\varphi \sin\lambda & \cos\varphi \\ \cos\varphi \cos\lambda & \cos\varphi \sin\lambda & \sin\varphi \end{bmatrix}$$
where $\lambda$ is the longitude and $\varphi$ the latitude.
Here, only the coordinate parameters are considered. The state vector $X$ is defined following Equation (10), with the first, third and fifth components corresponding to the three (x, y, z) coordinates. Therefore, the reduced matrix $\tilde{G}_0$ is obtained as $\tilde{G}_0 = G(:, 1:2:5)$ (columns 1, 3 and 5) and the position errors as $\delta\tilde{X}_0 = \delta X(1:2:5)$. The matrix $M$ relating the errors in easting ($dE$), northing ($dN$) and up ($dU$) is calculated as follows:
$$\delta X_{ENU} = \begin{bmatrix} dE \\ dN \\ dU \end{bmatrix} = F^T \delta\tilde{X}_0 = F^T (\tilde{G}_0^T \tilde{G}_0)^{-1} \tilde{G}_0^T e = M e$$
Since the horizontal error related to easting and northing is considered, a matrix $\tilde{M}_0$ can be defined as $\tilde{M}_0 = M(1:2, :)$ (the first two rows of $M$). In reality, none of the elements of $e$ is equal to zero. However, when a serious observation error occurs, some elements of $e$ may be much larger than the others. Therefore, we only consider the $i^{th}$ observation with a failure of magnitude $\beta$, while the other elements are supposed equal to zero. The error vector can thus be re-written as:
$$e = [0 \; \cdots \; \beta \; \cdots \; 0]^T$$
The norm squared of δ X E N (errors in easting and northing) assuming this specific choice of e is computed as:
$$\|\delta X_{EN}\|^2 = e^T \tilde{M}_0^T \tilde{M}_0 e = (m_{1i}^2 + m_{2i}^2)\, \beta^2$$
Recall from Equation (43) that $\hat{e} = S e$. Since $S^T S = S$, and denoting by $s_{ii}$ the $(i,i)$ diagonal entry of $S$, we have:
$$\|\hat{e}\|^2 = \hat{e}^T \hat{e} = e^T S^T S e = s_{ii}\, \beta^2$$
Combining Equations (49) and (50) gives:
$$\|\delta X_{EN}\|^2 = \frac{m_{1i}^2 + m_{2i}^2}{s_{ii}}\, \|\hat{e}\|^2$$
or
$$\|\delta X_{EN}\| = \sqrt{\frac{m_{1i}^2 + m_{2i}^2}{s_{ii}}}\, \|\hat{e}\| = \alpha_i\, \|\hat{e}\|$$
The obtained equation is linear, a straight line through the origin with slope $\alpha_i$. The slope $\alpha_i$ of the failure-mode axis related to observation $i$ is computed for all $i = 1, \ldots, n$ as:
$$\alpha_i = \sqrt{\frac{m_{1i}^2 + m_{2i}^2}{s_{ii}}}$$
The mode axis with the largest $\alpha_i$ is denoted $\alpha_{max}$, and the HPL is defined as:
$$HPL = \alpha_{max}\, \sigma_0$$
where σ 0 is calculated following Equation (45).
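Given the slope equations above (Equations (51)–(54)), the HPL computation can be sketched as follows (a minimal sketch; `M2` denotes the 2 × n matrix $\tilde{M}_0$ and `S` the residual projector, both assumed precomputed):

```python
import numpy as np

def horizontal_protection_level(M2, S, sigma0):
    """HPL from the failure-mode slopes:
    alpha_i = sqrt((m_1i^2 + m_2i^2) / s_ii), HPL = max_i alpha_i * sigma0."""
    s_diag = np.diag(S)                              # diagonal entries s_ii
    slopes = np.sqrt((M2[0] ** 2 + M2[1] ** 2) / s_diag)
    return float(np.max(slopes) * sigma0)
```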

4. Experimental Results of Applying Robust-EKF

Numerous investigations on robust-EKF for GNSS data exist in the literature [13,14,15]. In [13], robust-EKF is used for GEO/IGSO/GPS Raw-PPP/INS data fusion. In [14], it is applied for GPS/Galileo/GLONASS data combination. In [15], it is used for GPS/GLONASS data combination. All these studies assert the significant improvement of the position precision brought by the application of robust-EKF. In this paper, we combine the robust-EKF with the RAIM algorithm to ensure the position precision. The details of this robust-EKF and the experimental results for GNSS data are presented in the sequel in Section 4.1 and Section 4.2 respectively.

4.1. Applying Robust-EKF

From Equation (10), we can write the compact user state of the three systems GPS, Galileo and GLONASS as follows:
$$X = \begin{bmatrix} x_u & v_x & y_u & v_y & z_u & v_z & b_G & d_G & b_E & d_E & b_R & d_R \end{bmatrix}^T$$
where $[x_u, y_u, z_u]$ is the user position; $[v_x, v_y, v_z]$ is the user velocity; $b_G$, $b_E$ and $b_R$ are the errors in range due to the user's clock bias with respect to GPS, Galileo and GLONASS time, respectively; $d_G$, $d_E$ and $d_R$ are the user's clock drifts with respect to GPS, Galileo and GLONASS time, respectively.
We then proceed to the steps of the robust Extended Kalman Filter:
Step 1: Linearizing equations
By linearizing the function $f(X_{k-1})$, the state transition matrix $A_k$ is obtained as a $12 \times 12$ block-diagonal matrix composed of six $2 \times 2$ blocks:
$$A_k = \left. \frac{\partial f}{\partial X} \right|_{X = X_{k-1}} = \mathrm{diag}(A_x, A_y, A_z, A_{b,G}, A_{b,E}, A_{b,R})$$
where $A_x = A_y = A_z = A_{b,G} = A_{b,E} = A_{b,R} = \begin{bmatrix} 1 & \Delta T \\ 0 & 1 \end{bmatrix}$, and $\Delta T$ is the time interval between two epochs.
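A sketch of this block-diagonal transition matrix with NumPy (`numpy.kron` replicates the 2 × 2 constant-velocity block six times):

```python
import numpy as np

def transition_matrix(dT):
    """12x12 state-transition matrix A_k: six identical 2x2
    constant-velocity blocks [[1, dT], [0, 1]] on the diagonal."""
    block = np.array([[1.0, dT], [0.0, 1.0]])
    return np.kron(np.eye(6), block)
```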
The predicted state is then determined by:
$$X_k^- = A_k X_{k-1}$$
The relation between the true state $X_k$ and its prediction $X_k^-$ can be written as:
$$X_k = X_k^- - \delta_k$$
The observation equation is defined as $Z_k = h(X_k) + e_k$, where:
$$Z_k = \begin{bmatrix} Z_{\rho,G} & Z_{\dot{\rho},G} & Z_{\rho,E} & Z_{\dot{\rho},E} & Z_{\rho,R} & Z_{\dot{\rho},R} \end{bmatrix}^T$$
where $Z_{\rho,G}$ and $Z_{\dot{\rho},G}$ are the pseudo-range ($\rho_G$) and pseudo-range rate ($\dot{\rho}_G$) vectors of the $N_{gps}$ tracked GPS satellites (dimension $N_{gps} \times 1$); $Z_{\rho,E}$ and $Z_{\dot{\rho},E}$ are the pseudo-range ($\rho_E$) and pseudo-range rate ($\dot{\rho}_E$) vectors of the $N_{gal}$ tracked Galileo satellites (dimension $N_{gal} \times 1$); $Z_{\rho,R}$ and $Z_{\dot{\rho},R}$ are the pseudo-range ($\rho_R$) and pseudo-range rate ($\dot{\rho}_R$) vectors of the $N_{glo}$ tracked GLONASS satellites (dimension $N_{glo} \times 1$). The $\rho_G$, $\dot{\rho}_G$, $\rho_E$, $\dot{\rho}_E$, $\rho_R$ and $\dot{\rho}_R$ are calculated in Equation (9).
Following step 1 in Section 3.1, the linearized “Observation equation” using the first order Taylor series expansion with the predicted state vector X k is written as:
$$Z_k - h(X_k^-) + H_k X_k^- = H_k X_k + e_k$$
where the matrix $H_k$ is determined as the derivative of the function $h(X_k)$, $H_k = \left. \frac{\partial h}{\partial X} \right|_{X = X_k^-}$:
$$H_k = \begin{bmatrix}
H_{\rho,G} & \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
H_{\dot{\rho},G} & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
H_{\rho,E} & \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\
H_{\dot{\rho},E} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{0} & \mathbf{0} \\
H_{\rho,R} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1} & \mathbf{0} \\
H_{\dot{\rho},R} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{1}
\end{bmatrix}$$
where $\mathbf{1}$ and $\mathbf{0}$ denote column vectors of ones and zeros of the appropriate dimensions. $H_{\rho,G}$, $H_{\rho,E}$ and $H_{\rho,R}$ are the Jacobians of the pseudo-range measurement model for the GPS, Galileo and GLONASS data, with dimensions $N_{gps} \times 6$, $N_{gal} \times 6$ and $N_{glo} \times 6$, respectively; $H_{\dot{\rho},G}$, $H_{\dot{\rho},E}$ and $H_{\dot{\rho},R}$ are the Jacobians of the pseudo-range rate measurement model, with the same respective dimensions.
$H_{\rho,G}$ is computed as:
$$H_{\rho,G} = \begin{bmatrix}
\frac{\partial \rho_{c,G}^1}{\partial x} & 0 & \frac{\partial \rho_{c,G}^1}{\partial y} & 0 & \frac{\partial \rho_{c,G}^1}{\partial z} & 0 \\
\frac{\partial \rho_{c,G}^2}{\partial x} & 0 & \frac{\partial \rho_{c,G}^2}{\partial y} & 0 & \frac{\partial \rho_{c,G}^2}{\partial z} & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\frac{\partial \rho_{c,G}^{N_{gps}}}{\partial x} & 0 & \frac{\partial \rho_{c,G}^{N_{gps}}}{\partial y} & 0 & \frac{\partial \rho_{c,G}^{N_{gps}}}{\partial z} & 0
\end{bmatrix}$$
and $H_{\dot{\rho},G}$ is computed as:
$$H_{\dot{\rho},G} = \begin{bmatrix}
0 & \frac{\partial \rho_{c,G}^1}{\partial x} & 0 & \frac{\partial \rho_{c,G}^1}{\partial y} & 0 & \frac{\partial \rho_{c,G}^1}{\partial z} \\
0 & \frac{\partial \rho_{c,G}^2}{\partial x} & 0 & \frac{\partial \rho_{c,G}^2}{\partial y} & 0 & \frac{\partial \rho_{c,G}^2}{\partial z} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & \frac{\partial \rho_{c,G}^{N_{gps}}}{\partial x} & 0 & \frac{\partial \rho_{c,G}^{N_{gps}}}{\partial y} & 0 & \frac{\partial \rho_{c,G}^{N_{gps}}}{\partial z}
\end{bmatrix}$$
where $\frac{\partial \rho_{c,G}^i}{\partial x} = -\frac{x_G^i - x_u}{\sqrt{(x_G^i - x_u)^2 + (y_G^i - y_u)^2 + (z_G^i - z_u)^2}}$, $\frac{\partial \rho_{c,G}^i}{\partial y} = -\frac{y_G^i - y_u}{\sqrt{(x_G^i - x_u)^2 + (y_G^i - y_u)^2 + (z_G^i - z_u)^2}}$ and $\frac{\partial \rho_{c,G}^i}{\partial z} = -\frac{z_G^i - z_u}{\sqrt{(x_G^i - x_u)^2 + (y_G^i - y_u)^2 + (z_G^i - z_u)^2}}$, with $i = 1, \ldots, N_{gps}$.
$H_{\rho,E}$, $H_{\rho,R}$ and $H_{\dot{\rho},E}$, $H_{\dot{\rho},R}$ are computed similarly to $H_{\rho,G}$ and $H_{\dot{\rho},G}$, respectively.
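The interleaved structure of these Jacobian rows can be sketched as follows (a minimal sketch for one satellite; the sign follows from differentiating the range with respect to the user position):

```python
import numpy as np

def jacobian_rows(sat, user):
    """One row of H_rho (pseudo-range) and one row of H_rho_dot (pseudo-range
    rate) for a single satellite, interleaved as [x, vx, y, vy, z, vz].
    d(rho)/d(user position) = -(sat - user)/range (negative line of sight)."""
    d = np.asarray(sat, float) - np.asarray(user, float)
    r = np.linalg.norm(d)
    g = -d / r                        # partial derivatives w.r.t. x_u, y_u, z_u
    h_rho = np.array([g[0], 0.0, g[1], 0.0, g[2], 0.0])
    h_rho_dot = np.array([0.0, g[0], 0.0, g[1], 0.0, g[2]])
    return h_rho, h_rho_dot
```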
Step 2: Reforming filter
Following steps 1 and 2 in Section 3.1, Equations (58) and (60) yield an equation of the form of Equation (19), which is compacted as in Equation (20) into the regression equation:
$$\breve{Z}_k = \breve{H}_k X_k + \breve{e}_k$$
The covariance matrix of $\breve{e}_k$ is $\breve{R}_k = \begin{bmatrix} P_k^- & 0 \\ 0 & R_k \end{bmatrix} = L_k L_k^T$, where $R_k = E[e_k e_k^T]$ is the measurement covariance matrix and $P_k^- = E[\delta_k \delta_k^T]$ is the covariance matrix of the predicted error; both matrices are needed to determine $L_k$.
The determination of $R_k$ is not straightforward and in this work the Exponential model is used to estimate it. The Exponential model formulation [23] depends on the elevation angle and is given by:
$$S_l = x_0 + x_1\, e^{-El(l)/x_2}$$
where $l$ refers to the $l^{th}$ satellite; $S_l$ is the elevation uncertainty (m); $x_0$ and $x_1$ are constants; $El(l)$ is the elevation angle of the $l^{th}$ satellite (in degrees) and $x_2$ is a scaled value of the elevation error (in degrees). $x_0$, $x_1$ and $x_2$ must be estimated empirically. After performing many tests, we chose the values $x_0 = 0.8$, $x_1 = 0.4$ and $x_2 = 12$. The normalized elevation uncertainty $\sigma_l^2$ is given by [7]:
$$\sigma_l^2 = S_l \Big/ \left( \frac{1}{n_s} \sum_{l=1}^{n_s} S_l \right)$$
where $n_s$ is the number of tracked satellites.
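A minimal sketch of this elevation-dependent weighting, using the constants chosen above ($x_0 = 0.8$, $x_1 = 0.4$, $x_2 = 12$); the decaying exponent $-El(l)/x_2$ and the function names are assumptions of this illustration:

```python
import numpy as np

# Empirically chosen constants from the text (x0 = 0.8, x1 = 0.4, x2 = 12).
X0, X1, X2 = 0.8, 0.4, 12.0

def elevation_uncertainty(el_deg):
    """Exponential model S_l (metres) for elevation angles in degrees."""
    return X0 + X1 * np.exp(-np.asarray(el_deg, dtype=float) / X2)

def normalized_variances(el_deg):
    """sigma_l^2: each S_l divided by the mean S_l over tracked satellites."""
    s = elevation_uncertainty(el_deg)
    return s / s.mean()
```

Low-elevation satellites thus receive a larger variance (smaller weight) than satellites near the zenith.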
The covariance of the measurement noise of the GPS satellites is determined as follows:
$$R_{k,G} = \begin{bmatrix} \sigma_1^2 & & \\ & \ddots & \\ & & \sigma_{N_{gps}}^2 \end{bmatrix}$$
where $N_{gps}$ is the number of tracked GPS satellites.
The covariance matrices of the measurement noise of the Galileo ($R_{k,E}$) and GLONASS ($R_{k,R}$) satellites are determined in the same way as for GPS ($R_{k,G}$). The covariance of the measurement noise of the combined GPS, Galileo and GLONASS measurements is therefore written as:
$$R_k = \begin{bmatrix} R_{k,G} & & & & & \\ & \gamma R_{k,G} & & & & \\ & & R_{k,E} & & & \\ & & & \gamma R_{k,E} & & \\ & & & & R_{k,R} & \\ & & & & & \gamma R_{k,R} \end{bmatrix}$$
where $\gamma R_{k,G}$, $\gamma R_{k,E}$ and $\gamma R_{k,R}$ are the covariance matrices of the pseudo-range rates of GPS, Galileo and GLONASS, respectively. $\gamma$ is the ratio of the pseudo-range-rate covariance to the pseudo-range covariance and is chosen empirically; after several tests, the value $\gamma = 0.01$ was retained in this paper.
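The block-diagonal structure of $R_k$ can be assembled as follows; the helper name and the argument layout (one list of pseudo-range variances per constellation) are illustrative:

```python
import numpy as np

GAMMA = 0.01  # pseudo-range-rate to pseudo-range covariance ratio (empirical)

def measurement_covariance(var_gps, var_gal, var_glo):
    """Block-diagonal R_k: for each constellation, a pseudo-range block
    diag(sigma_l^2) followed by its rate block scaled by gamma."""
    blocks = []
    for var in (var_gps, var_gal, var_glo):
        d = np.diag(np.asarray(var, dtype=float))
        blocks += [d, GAMMA * d]
    n = sum(b.shape[0] for b in blocks)
    R = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        R[i:i + m, i:i + m] = b  # place each block on the diagonal
        i += m
    return R
```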
The predicted covariance $P_k$ is calculated by:
$$P_k = A_k P_{k-1} A_k^T + Q_k$$
where $Q_k$ is the process covariance matrix. It plays an important role in the optimal recursive process and is written as:
$$Q_k = \int_{t_{k-1}}^{t_k} A(\tau)\, \Sigma\, A^T(\tau)\, d\tau$$
where $A(\tau) = \mathrm{diag}(A_{x,\tau}, A_{y,\tau}, A_{z,\tau}, A_{b,G,\tau}, A_{b,E,\tau}, A_{b,R,\tau})$, with $A_{x,\tau} = A_{y,\tau} = A_{z,\tau} = A_{b,G,\tau} = A_{b,E,\tau} = A_{b,R,\tau} = \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix}$. The matrix $\Sigma$ is the spectral density matrix given by:
$$\Sigma = \mathrm{diag}(S_x, S_{vx}, S_y, S_{vy}, S_z, S_{vz}, S_{b,G}, S_{d,G}, S_{b,E}, S_{d,E}, S_{b,R}, S_{d,R})$$
where $S_x, S_{vx}$; $S_y, S_{vy}$ and $S_z, S_{vz}$ are the power spectral densities of the position and velocity noise along the x-, y- and z-axes, respectively; $S_b$ is the power spectral density of the clock-bias noise and $S_d$ that of the frequency-drift noise. For simplicity, $\Sigma$ is assumed constant.
The covariance matrix $Q_k$ in Equation (70) is rewritten as:
$$Q_k = \mathrm{diag}(Q_x, Q_y, Q_z, Q_{b,G}, Q_{b,E}, Q_{b,R})$$
with
$$Q_x = \begin{bmatrix} S_x \Delta T + S_{vx} \Delta T^3/3 & S_{vx} \Delta T^2/2 \\ S_{vx} \Delta T^2/2 & S_{vx} \Delta T \end{bmatrix}$$
$Q_y$ and $Q_z$ are determined similarly to $Q_x$, and $Q_{b,G}$ is expressed as:
$$Q_{b,G} = \begin{bmatrix} S_{b,G} \Delta T + S_{d,G} \Delta T^3/3 & S_{d,G} \Delta T^2/2 \\ S_{d,G} \Delta T^2/2 & S_{d,G} \Delta T \end{bmatrix}$$
$Q_{b,E}$ and $Q_{b,R}$ are determined similarly to $Q_{b,G}$.
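The $2 \times 2$ blocks of $Q_k$ can be computed directly from these expressions; the function names are illustrative:

```python
import numpy as np

def q_position_block(S_pos, S_vel, dT):
    """2x2 process-noise block Q_x (same form for Q_y, Q_z):
    [[S_x*dT + S_vx*dT^3/3, S_vx*dT^2/2], [S_vx*dT^2/2, S_vx*dT]]."""
    return np.array([[S_pos * dT + S_vel * dT**3 / 3.0, S_vel * dT**2 / 2.0],
                     [S_vel * dT**2 / 2.0,              S_vel * dT]])

def q_clock_block(S_b, S_d, dT):
    """2x2 clock block Q_b (bias/drift), same structure with S_b, S_d."""
    return np.array([[S_b * dT + S_d * dT**3 / 3.0, S_d * dT**2 / 2.0],
                     [S_d * dT**2 / 2.0,            S_d * dT]])
```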
After computing the covariance matrix $R_k$ and the predicted covariance $P_k$, the matrix $L_k$ is easily determined via the Cholesky decomposition. Multiplying Equation (64) by $L_k^{-1}$ and defining $y_k = L_k^{-1} \breve{Z}_k$, $G_k = L_k^{-1} \breve{H}_k$, $\zeta_k = L_k^{-1} \breve{e}_k$, Equation (64) becomes the regression equation:
$$y_k = G_k X_k + \zeta_k$$
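The whitening step can be sketched as follows, stacking $P_k$ and $R_k$ into $\breve{R}_k$, taking its Cholesky factor $L_k$ and applying $L_k^{-1}$ to the stacked measurement vector and Jacobian (function and variable names are illustrative):

```python
import numpy as np

def whiten_regression(Z, H, P_pred, R):
    """Whiten the stacked regression Z = H X + e, whose error covariance is
    blockdiag(P_pred, R) = L L^T, giving y = G X + zeta with unit covariance."""
    n, m = P_pred.shape[0], R.shape[0]
    R_breve = np.zeros((n + m, n + m))
    R_breve[:n, :n] = P_pred
    R_breve[n:, n:] = R
    L = np.linalg.cholesky(R_breve)   # lower-triangular factor L_k
    Linv = np.linalg.inv(L)
    return Linv @ Z, Linv @ H          # y_k, G_k
```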
Step 3: Updating
The estimated state $\hat{X}_k$ in Equation (75) is optimized by the IRWLS algorithm detailed in Section 3.3. After testing all the weight functions in Section 3.2, we chose the bi-Tukey weight function to eliminate outlier data, with $\alpha$ set to 1.756 (empirically chosen after many tests) for the IRWLS algorithm. The final estimated state is calculated as in Equation (24): $\hat{X}_k = (G_k^T W_k G_k)^{-1} G_k^T W_k y_k$.
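A minimal IRWLS sketch with the 0/1 bi-Tukey weight of Table 1 and $\alpha = 1.756$; the MAD-based residual scale is an assumption of this illustration (the paper derives its robust scale from the MM-estimation step), as are the function names:

```python
import numpy as np

def bi_tukey_weight(r, alpha=1.756):
    """Hard 0/1 weight: keep residuals with |r| <= alpha, reject the rest."""
    return (np.abs(r) <= alpha).astype(float)

def irwls(G, y, alpha=1.756, n_iter=10):
    """Iterative ReWeighted Least Squares on the whitened regression
    y = G X + zeta: X = (G^T W G)^{-1} G^T W y, W rebuilt from residuals."""
    X = np.linalg.lstsq(G, y, rcond=None)[0]    # ordinary LS initialization
    for _ in range(n_iter):
        r = y - G @ X
        scale = 1.4826 * np.median(np.abs(r))   # MAD scale (assumption here)
        if scale == 0.0:
            scale = 1.0
        W = np.diag(bi_tukey_weight(r / scale, alpha))
        X = np.linalg.solve(G.T @ W @ G, G.T @ W @ y)
    return X
```

On exactly linear data with one injected outlier, the outlier's residual far exceeds the robust scale, so its weight drops to zero and the fit recovers the clean parameters.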
The error covariance matrix is updated as in Equation (25): $P_k = (I - K H_k) P_k^-$, where $P_k^-$ is the predicted covariance and $K$ is the filter gain determined in Equation (26): $K = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}$.
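The gain and covariance update can be sketched as:

```python
import numpy as np

def update_covariance(P_pred, H, R):
    """Filter gain and covariance update, Equations (25)-(26):
    K = P^- H^T (H P^- H^T + R)^{-1};  P = (I - K H) P^-."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return K, P
```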
After the state estimation, the RAIM algorithm of Section 3.4 is applied to check the position accuracy.

4.2. Experimental Results

To demonstrate the above algorithm, this study uses GPS, Galileo and GLONASS data from the ABMF station (RINEX version 2.11 format for GPS and GLONASS, version 3.02 for Galileo). This reference station is located in Guadeloupe. The RINEX data and broadcast ephemeris data used for the calculations were downloaded from the RGP network and CDDIS for 1 January 2019. In order to assess the algorithm performance, five scenarios are investigated in a MATLAB environment:
  • Scenario #1: Navigation solution based on GPS data,
  • Scenario #2: Navigation solution based on Galileo data,
  • Scenario #3: Navigation solution based on GLONASS data,
  • Scenario #4: Navigation solution based on GPS/Galileo/GLONASS data,
  • Scenario #5: Navigation solution based on robust-EKF GPS/Galileo/GLONASS data.
Figure 2 illustrates the number of visible satellites for GPS (red curve), Galileo (black curve), GLONASS (cyan curve), and the combined GPS/Galileo/GLONASS (blue curve) systems above an elevation mask of 10°. It shows that the number of Galileo and GLONASS satellites drops to zero during some epochs. On average, about 20 satellites are visible when GPS, Galileo, and GLONASS are combined for positioning, which is an advantage since more signals reach the receiver. However, it is also a drawback, since faulty signals may be included and penalize the position estimation accuracy. The results are given in Figure 3, which represents the position errors of the five scenarios with respect to time; dE, dN, dU are the errors in the East, North and Up components of the user's position estimates.
In Figure 3, the red curve pertains to scenario #1 with GPS data only. The black curve represents scenario #2 with Galileo data only. The cyan curve relates to scenario #3 with GLONASS data only. It can be seen that the position errors of GPS and Galileo data are small and stable, whereas those from GLONASS data are large and unstable. The blue curve represents scenario #4 with the combination of GPS, Galileo, and GLONASS data. The position accuracy of the combination improves compared to the use of GLONASS data only. However, the errors remain large and unsteady on the East axis. In particular, the positions are not determined around 22 h, 23 h, and 24 h because of the large HPL (Figure 4). Consequently, the robust-EKF method is applied to enhance the accuracy of the user position in scenario #5, and the obtained position errors are represented by the green curve; they are more stable and more accurate than in the other scenarios.
Figure 4 represents the HPLs of the combined GPS, Galileo and GLONASS data as a function of time. The HPL delimits an assured zone around the estimated position: when the HPL exceeds the HAL, faulty signals are detected that affect the position accuracy. After applying the robust-EKF method, the HPLs of the robust GPS, Galileo and GLONASS data are recalculated in Figure 5, and the large HPLs are reduced.
For more insights about the position accuracy obtained for the five scenarios, Table 2 presents a comparison of the corresponding position errors.
According to the RMS (Root Mean Square) error values listed in Table 2, the robust combination of GPS, Galileo, and GLONASS data improves the position accuracy by about 53%, 41%, and 95% compared to the use of GPS data only, Galileo data only, and GLONASS data only, respectively. It is also improved by about 84% compared to the non-robust combination of GPS, Galileo, and GLONASS data.
In the introduction, it was stated that the robust-EKF can perform better than the Weighted Least-Squares Estimation (WLSE) in improving the position accuracy. To give tangible evidence of this, we compared the two methods; the results are illustrated in Figure 6.
Figure 6 presents the position errors obtained with the two methods using the combined GPS, Galileo and GLONASS data. The WLSE method was developed in our previous work [8], where the weight matrix is defined using the "Exponential model". The position errors of WLSE waver; in particular, the positions around 23 h–24 h are unavailable because of high HPLs. In contrast, the position errors of the robust-EKF are small and stable.
Although the robust-EKF greatly improves the position accuracy, its outputs might contain errors that cannot be estimated. Therefore, an algorithm that can predetermine the errors of the robust-EKF is needed. For this purpose, a de-noising filter method is presented in Section 5.

5. De-Noising Filter Method and Experimental Results

The recent developments in the "Deep Learning" field have allowed the use of its algorithms in GNSS applications [24,25,27,34]. In [34], an intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a Kalman Filter (KF) was proposed to improve the accuracy of the position as well as orientation components in real time; this scheme is able to overcome the limitations of the KF. In [25], a deep learning algorithm was proposed to de-noise MEMS IMU output signals. The algorithm used is a Long Short-Term Memory (LSTM) network employed to filter the MEMS gyroscope outputs, treating the signals as time series. LSTM is an artificial recurrent neural network (RNN) architecture that is useful for classifying, processing and making predictions based on time-series data. Therefore, combining the works presented in [25,34], an LSTM is implemented in this paper to learn and compensate for the residual errors of the robust-EKF in order to improve the position accuracy. Hence, the rEKF-LSTM hybrid scheme is proposed as a method capable of learning how the state vector behaves based on the dynamics of the filter. The details of the rEKF-LSTM hybrid scheme and the experimental results for GNSS data are given in Section 5.1 and Section 5.2.

5.1. De-Noising Filter Model

Figure 7 presents the layered LSTM architecture. This design shows the flow of a time series X (inputs) with N features of length S through an LSTM layer. The outputs are a time series "h" with H hidden units of length S. In the design, $h_k$ and $c_k$ denote the output (hidden state) and the "cell state" at time step $t_k$, respectively. The first LSTM block uses the initial state of the network to calculate the first output ($h_1$) and the updated cell state ($c_1$). At time step $t_k$, the block uses the previous state ($h_{k-1}$, $c_{k-1}$) to compute the output ($h_k$) and the updated cell state ($c_k$). An LSTM unit is composed of a cell, an "input gate", an "output gate" and a "forget gate". The basic structure of an LSTM unit is shown in Figure 8, illustrating the flow of data at instant $t_k$. This diagram highlights the three parts: forget gate, input gate and output gate.
An LSTM layer has learnable weights and biases: the input weights $W_{lstm}$, the recurrent weights $R_{lstm}$ and the bias $b_{lstm}$. The matrices $W_{lstm}$, $R_{lstm}$ and $b_{lstm}$ concatenate the input weights, the recurrent weights, and the biases of each component, respectively. These matrices are grouped as follows:
$$W_{lstm} = \begin{bmatrix} W_f \\ W_i \\ W_c \\ W_o \end{bmatrix}; \quad R_{lstm} = \begin{bmatrix} R_f \\ R_i \\ R_c \\ R_o \end{bmatrix}; \quad b_{lstm} = \begin{bmatrix} b_f \\ b_i \\ b_c \\ b_o \end{bmatrix}$$
where f, i, c, and o denote the forget gate, input gate, cell state, and output gate, respectively.
In each LSTM block, the objective is the calculation of the output (hidden state) and the cell state through three parts: forget gate, input gate and output gate. As shown in Figure 8, the "forget gate" appears in the first part of the LSTM and is used to decide what information is kept in the cell state. The decision is made by a sigmoid layer called the "forget gate layer". $h_{k-1}$ and $x_k$ are the inputs to the sigmoid function, and the output is a value between 0 and 1 for each number in the cell state $C_{k-1}$: an output of "1" means the information is completely kept in the cell state, while "0" means it is completely cleared. The forget gate's activation vector $f_k$ is written as:
$$f_k = \sigma(W_f x_k + R_f h_{k-1} + b_f)$$
where $\sigma(\cdot)$ is the sigmoid function, $h_{k-1}$ is the hidden state at time $t_{k-1}$, and $x_k$ is the input vector at time $t_k$.
The second part is the "input gate", employed to decide which new information should enter the previous cell state. This gate is composed of two parts: (1) a "sigmoid" layer which decides what values will be updated, whose outputs $i_k$ range from 0 to 1, with "0" meaning "not important" and "1" meaning "important"; (2) a "tanh" layer which creates a vector of new candidate values $\tilde{C}_k$ between −1 and 1 to help regulate the network. The input gate's activation vector $i_k$ and the new candidate vector $\tilde{C}_k$ are calculated as follows:
$$i_k = \sigma(W_i x_k + R_i h_{k-1} + b_i)$$
$$\tilde{C}_k = \tanh(W_c x_k + R_c h_{k-1} + b_c)$$
where "tanh" is the hyperbolic tangent function.
The new cell state $C_k$ is updated from the old cell state $C_{k-1}$ as follows:
$$C_k = f_k \odot C_{k-1} + i_k \odot \tilde{C}_k$$
where $\odot$ denotes the element-wise product.
In the last part, the "output gate" decides the output. First, a sigmoid layer determines which parts of the cell state will be output. The cell state is then passed through a "tanh" function, mapping its values to the range −1 to 1. Finally, the result is multiplied by the output of the sigmoid gate, which selects the parts to output. The output gate's activation vector and the hidden state vector are given by:
$$o_k = \sigma(W_o x_k + R_o h_{k-1} + b_o)$$
$$h_k = o_k \odot \tanh(C_k)$$
Equations (77)–(82) above are the steps to compute the output and the cell state of an LSTM unit. An LSTM network can be applied to classification or regression tasks with sequence and time-series data; in this paper, we use it for regression. From Figure 7, we obtain outputs "h" with dimensions ($H \times S$), while we want predicted results "Y" with dimensions ($M \times S$); therefore, a weight matrix of dimension ($M \times H$) is needed to transform "h" into "Y". As a result, the full architecture of the LSTM network with one LSTM layer for regression is given in Figure 9.
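Equations (77)–(82) amount to the following forward step of one LSTM block; the function signature and the stacking of the four gate parameter sets (in the order f, i, c, o) are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, R, b):
    """One LSTM block, Equations (77)-(82): forget gate f, input gate i,
    candidate C~, cell state C, output gate o, hidden state h.
    W, R, b each list the four gate parameter sets in the order f, i, c, o."""
    Wf, Wi, Wc, Wo = W
    Rf, Ri, Rc, Ro = R
    bf, bi, bc, bo = b
    f = sigmoid(Wf @ x + Rf @ h_prev + bf)         # (77) forget gate
    i = sigmoid(Wi @ x + Ri @ h_prev + bi)         # (78) input gate
    c_tilde = np.tanh(Wc @ x + Rc @ h_prev + bc)   # (79) candidate values
    c = f * c_prev + i * c_tilde                   # (80) cell-state update
    o = sigmoid(Wo @ x + Ro @ h_prev + bo)         # (81) output gate
    h = o * np.tanh(c)                             # (82) hidden state
    return h, c
```

With all parameters zero, every gate evaluates to 0.5 and the candidate to 0, so the cell state is simply halved at each step.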
Applying this LSTM network, the input data (X) are the positions estimated by the robust-EKF of Section 3, and the output data (Y) are the errors on the estimated positions.
For training and testing our model, we divide our data into three distinct sets: training, validation and prediction. For the "training part", Figure 10 presents the rEKF-LSTM training architecture. We determine the estimated position by robust-EKF using the combination of GPS, Galileo and GLONASS data. $ENU_{rEKF}$ denotes the East, North and Up positions estimated by the robust-EKF. Before the estimated positions enter the LSTM model, we standardize them to have zero mean and unit variance; the standardized estimated positions are then the training inputs of the LSTM model. The target outputs are the errors of the positions estimated by the robust-EKF ($\delta ENU$), i.e., the differences between the true and the estimated (robust-EKF) positions. Similarly, we standardize the target outputs to have zero mean and unit variance, and the standardized target outputs become the training target outputs of the LSTM model. The model is trained on the training dataset using supervised learning with optimization methods such as stochastic gradient descent, Adam or RMSProp. At each step, the LSTM model is run with the training inputs and returns the training outputs, which are then compared with the targeted ones. Based on the results of the comparison, the parameters of the model (weights and biases) are adjusted. The training process runs until the training errors reach the error threshold, and the inferred LSTM model is referred to as the "fitting LSTM model".
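The standardization used for the inputs and targets, and the matching unstandardization applied later to the predictions, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def standardize(X):
    """Zero-mean, unit-variance scaling per feature; returns the scaled data
    and the (mu, sigma) pair needed to unstandardize predictions later."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / sigma, (mu, sigma)

def unstandardize(Xs, stats):
    """Inverse transform: map standardized values back to original units."""
    mu, sigma = stats
    return Xs * sigma + mu
```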
The subsequent part is the "validation part", for which the architecture is illustrated in Figure 11. First, we standardize the estimated East, North and Up positions. Then, we use the "fitting LSTM model" from the training part to predict the standardized ENU errors and unstandardize them to obtain the predicted ENU errors. Finally, we compare the predicted ENU errors with the true ENU errors. Based on this comparison, we can see how well the LSTM model generalizes from the training part. If the model is overfitting or underfitting, it is necessary to go back to the training part to tune the model's hyperparameters (the number of layers, the number of hidden units in each layer, etc.). We may return to the training part several times until we obtain the best LSTM model, to ensure confidence in our model performance.
After obtaining the best LSTM model, this model is used in the "prediction part". Figure 12 presents the prediction architecture of rEKF-LSTM. First, we standardize the estimated East, North and Up positions as in the validation part. Then, we use the "best LSTM model" previously validated in the validation part to predict the standardized ENU errors and unstandardize them to obtain the predicted ENU errors. Finally, we obtain the positions estimated by rEKF-LSTM, computed as the difference between the positions estimated by the robust-EKF and the predicted errors, following Equation (83).
The positions estimated by rEKF-LSTM are written as follows:
$$ENU_{rEKF\text{-}LSTM} = ENU_{rEKF} - \delta ENU$$

5.2. Experimental Results

In this section we aim at comparing the position estimation performance of the two methods: the robust-EKF and the rEKF-LSTM.
The LSTM was set up with H = 20 hidden units, N = 3 input features (positions), S = 1440 time steps and M = 3 output features (position errors). The data used are again from the ABMF station, recorded on 1 January 2019 (as in Section 4.2). Figure 13 represents the position errors with respect to time for the two methods: robust-EKF and rEKF-LSTM. The period shown runs from 12 h to 24 h, since the data from 0 h to 12 h are used to train the model.
In Figure 13, the green curve represents the position errors for the robust-EKF method, whereas the purple curve represents the errors produced by the rEKF-LSTM method. It can be seen that the position accuracy of the rEKF-LSTM method is largely improved and remains stable. For more insight, Table 3 summarizes the RMS position errors of the two methods.
As shown in Table 3, the position accuracy improves by about 74.0% using rEKF-LSTM compared to the robust-EKF. To give a more general view, the five scenarios of Section 4.2 and the two scenarios of this section are combined; six scenarios result, since the robust-EKF scenario is common to both. The corresponding results are summarized in Table 4.
As shown in Table 4, the position precision is improved by about 95% using the rEKF-LSTM compared to the non-robust data combination. This result proves that applying LSTM method can greatly improve the position accuracy. In order to assess more thoroughly the rEKF-LSTM method, other base station data are tested and the results are summarized in Table 5.
Table 5 gives the RMS errors in position for three other base stations: AJAC, GRAC and LMMF. The position precision is enhanced by about 87%, 77% and 93% using the rEKF-LSTM compared to the non-robust data combination for the three base stations AJAC, GRAC and LMMF in France on 1 January 2019, respectively. This corroborates the conclusion that the rEKF-LSTM method can significantly improve the position precision.

6. Conclusions and Perspectives

In this paper, the SPP technique is applied to determine the user position from GPS, Galileo and GLONASS data fusion. The fused data may contain outliers which negatively impact the final results. To address this issue, the MM-Estimation method is used to eliminate the outliers and the IRWLS method is used to optimize the estimated results. Although the MM-Estimation and the IRWLS are effective, a method to detect the positioning errors is needed; the RAIM algorithm is used for this purpose to detect positioning errors exceeding the alert limit. As a result, the proposed robust Extended Kalman Filter is built by combining the Extended Kalman Filter, the MM-Estimation, the IRWLS and the RAIM method to improve the position accuracy. Robust GPS, Galileo and GLONASS data fusion using the robust-EKF increases the position precision by about 84.0% compared to the non-robust estimation using data of the ABMF base station. Furthermore, the position precision is enhanced by about 95% using the rEKF-LSTM method compared to the non-robust data combination. To prove the power of rEKF-LSTM, the approach is applied to data from three other base stations: AJAC, GRAC and LMMF. The position accuracy is significantly improved, by about 87%, 77% and 93% compared to the non-robust data combination, respectively.
In this work, the emphasis is laid on improving the positioning accuracy for a slowly moving receiver. Only data from base stations are used, whose velocities are almost zero, which helps enhance the performance of the LSTM model. A perspective of our work could be to assess the performance and limitations of the method on data from a user in motion, adapting or potentially developing a new method taking additional velocity information into account. When considering data from moving users, sensor noise levels are not constant and the challenge is to estimate an adapted LSTM model. Moreover, an additional challenge concerns GNSS signal blockages; a suggested solution is to combine GNSS and INS (Inertial Navigation System) data. Along these lines, the perspective for this work is to use an adapted rEKF-LSTM model to combine GNSS and INS data from users in motion.

Author Contributions

T.-N.T. proposed the general idea of the method presented and realized its application to real data. A.K., F.C., P.F., J.-M.C. and O.R., suggested the problematic and the field of application, reviewed the idea applied and provided many suggestions. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been performed within the framework of e-PANEMA project. The authors would like to thank the funding organization ADEME.

Acknowledgments

Our warmest thanks go to all the other partners of the e-PANEMA project and also to the people involved in monitoring the project within the ADEME agency. Also our thanks are for anonymous reviewers who have contributed effectively in their comments and suggestions to obtain this version of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.; Huang, Z.; Li, R. RAIM analysis in the position domain. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium, Indian Wells, CA, USA, 4–6 May 2010; pp. 53–59. [Google Scholar] [CrossRef]
  2. Hewitson, S.; Lee, H.K.; Wang, J. Localizability analysis for GPS/Galileo receiver autonomous integrity monitoring. J. Navig. 2004, 57, 245–259. [Google Scholar] [CrossRef] [Green Version]
  3. Blanch, J.; Ene, A.; Walter, T.; Enge, P. An optimized multiple hypothesis RAIM algorithm for vertical guidance. In Proceedings of the 20th International Technical Meeting of the Satellite Division of the Institute of Navigation, Fort Worth, TX, USA, 25–28 September 2007; Volume 3, pp. 2924–2933. [Google Scholar]
  4. Tay, S.; Marais, J. Weighting models for GPS Pseudorange observations for land transportation in urban canyons. In Proceedings of the 6th European Workshop on GNSS Signals and Signal Processing, Munich, Germany, 5–6 December 2013. [Google Scholar]
  5. Rahemi, N.; Mosavi, M.R.; Abedi, A.A.; Mirzakuchaki, S. Accurate Solution of Navigation Equations in GPS Receivers for Very High Velocities Using Pseudorange Measurements. Adv. Aerosp. Eng. 2014, 2014, 435891. [Google Scholar] [CrossRef]
  6. Tabatabaei, A.; Mosavi, M.R.; Khavari, A.; Shahhoseini, H.S. Reliable Urban Canyon Navigation Solution in GPS and GLONASS Integrated Receiver Using Improved Fuzzy Weighted Least-Square Method. Wirel. Pers. Commun. 2017, 94, 3181–3196. [Google Scholar] [CrossRef]
  7. Li, J.; Wu, M. The improvement of positioning accuracy with weighted least square based on SNR. In Proceedings of the 5th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM), Beijing, China, 24–26 September 2009; pp. 9–12. [Google Scholar] [CrossRef]
  8. Tan, T.N.; Khenchaf, A.; Comblet, F. Estimating Ambiguity and Using Model Weight To Improve the Positioning Accuracy of a Stand-alone Receiver. In Proceedings of the 13th European Conference on Antennas Propagation (EuCAP 2019), Krakow, Poland, 31 March–5 April 2019; Volume 1. [Google Scholar]
  9. Tan, T.N.; Khenchaf, A.; Comblet, F.; Franck, P.; Champeyroux, J.M.; Reichert, O. GPS/GLONASS Data Fusion and Outlier Elimination to Improve the Position Accuracy of Receiver. In Proceedings of the 2019 IEEE Conference on Antenna Measurements & Applications (CAMA), Bali, Indonesia, 23–25 October 2019; Volume 805, pp. 191–194. [Google Scholar] [CrossRef]
  10. Wang, Y.; Jie, H.; Cheng, L. A Fusion Localization Method based on a Robust Extended Kalman Filter and Track-Quality for Wireless Sensor Networks. Sensors 2019, 19, 3638. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Fang, H.; Haile, M.A.; Wang, Y. Robust Extended Kalman Filtering for Systems with Measurement Outliers. arXiv, 2019; arXiv:1904.00335. [Google Scholar]
  12. Perälä, T.; Piché, R. Robust extended Kalman filtering in hybrid positioning applications. In Proceedings of the 4th Workshop on Positioning, Navigation and Communication, WPNC’07—Workshop Proceedings, Hannover, Germany, 22 March 2007; pp. 55–63. [Google Scholar] [CrossRef]
  13. Gao, Z.; Li, Y.; Zhuang, Y.; Yang, H.; Pan, Y.; Zhang, H. Robust kalman filter aided GEO/IGSO/GPS raw-PPP/INS tight integration. Sensors 2019, 19, 417. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Wang, J.; Xu, C.; Wang, J. Applications of Robust Kalman Filtering Schemes in GNSS Navigation. In Proceedings of the International Symposium on GPS/GNSS, Tokyo, Japan, 11–14 November 2008. [Google Scholar]
  15. Gaglione, S.; Angrisano, A.; Crocetto, N. Robust Kalman Filter applied to GNSS positioning in harsh environment. In Proceedings of the European Navigation Conference (ENC 2019), Warsaw, Poland, 9–12 April 2019; pp. 1–6. [Google Scholar] [CrossRef]
  16. Kumar, P. Robust Kalman Filter Using Robust Cost Function. Master's Thesis, Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, India, 2015. [Google Scholar]
  17. Gandhi, M.A.; Mili, L. Robust Kalman Filter based on a generalized maximum-likelihood-type estimator. IEEE Trans. Signal Process. 2010, 58, 2509–2520. [Google Scholar] [CrossRef]
  18. Zhao, J.; Netto, M.; Mili, L. A Robust Iterated Extended Kalman Filter for Power System Dynamic State Estimation. IEEE Trans. Power Syst. 2017, 32, 3205–3216. [Google Scholar] [CrossRef]
  19. Yuliana, S.; Hasih, P.; Sri Sulistijowati, H.; Twenty, L. M Estimation, S Estimation, and MM Estimation in Robust Regression. Int. J. Pure Appl. Math. 2014, 91, 349–360. [Google Scholar]
  20. Yohai, Y.J. High Breakdown-point and High Efficiency Robust Estimates for Regression. Ann. Stat. 1986, 14, 590–606. [Google Scholar] [CrossRef]
  21. Huber, P.J. Robust Statistics; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2009. [Google Scholar]
  22. Akram, M.A.; Liu, P.; Wang, Y.; Qian, J. GNSS positioning accuracy enhancement based on robust statistical MM estimation theory for ground vehicles in challenging environments. Appl. Sci. 2018, 8, 876. [Google Scholar] [CrossRef] [Green Version]
  23. Strang, G.; Borre, K. Algorithms for Global Positioning; Wellesley-Cambridge Press: Wellesley, MA, USA, 2012. [Google Scholar]
  24. Fang, W.; Jiang, J.; Lu, S.; Gong, Y.; Tao, Y.; Tang, Y.; Yan, P.; Luo, H.; Liu, J. A LSTM algorithm estimating pseudo measurements for aiding INS during GNSS signal outages. Remote Sens. 2020, 12, 256. [Google Scholar] [CrossRef] [Green Version]
  25. Jiang, C.; Chen, S.; Chen, Y.; Zhang, B.; Feng, Z.; Zhou, H.; Bo, Y. A MEMS IMU de-noising method using long short term memory recurrent neural networks (LSTM-RNN). Sensors 2018, 18, 3470. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Li, X.; Wu, X. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In Proceedings of the ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, 19–24 April 2015; pp. 4520–4524. [Google Scholar] [CrossRef] [Green Version]
  27. Kim, H.U.; Bae, T.S. Deep learning-based GNSS network-based real-time kinematic improvement for autonomous ground vehicle navigation. J. Sens. 2019, 2019, 3737265. [Google Scholar] [CrossRef]
  28. Kim, H.U.; Bae, T.S. Preliminary study of deep learning-based precipitation prediction. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2017, 35, 423–429. [Google Scholar] [CrossRef]
  29. Nicolini, L.; Caporali, A. Investigation on reference frames and time systems in Multi-GNSS. Remote Sens. 2018, 10, 80. [Google Scholar] [CrossRef] [Green Version]
  30. Gurtner, W.; Estey, L. RINEX The Receiver Independent Exchange Format Version 3.03. In Proceedings of the IGS Workshop 2012, Olsztyn, Poland, 23–27 July 2012; Volume 104, pp. 1–74. [Google Scholar]
  31. Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-Based Positioning and Their Integration; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  32. Hofmann-Wellenhof, B.; Lichtenegger, H.; Wasle, E. GNSS—Global Navigation Satellite Systems: GPS, GLONASS, Galileo and More; Springer: Wien, Austria, 2008. [Google Scholar]
  33. Ellis, D.M.; Draper, N.P.; Smith, H. Applied Regression Analysis. Appl. Stat. 1968, 17, 83. [Google Scholar] [CrossRef]
  34. Chiang, K.W.; Chang, H.W.; Li, C.Y.; Huang, Y.W. An Artificial Neural Network Embedded Position and Orientation Determination Algorithm for Low Cost MEMS INS/GPS Integrated Sensors. Sensors 2009, 9, 2586–2610. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Global flowchart for the position estimation.
Figure 2. Number of visible satellites.
Figure 3. Position errors of the five scenarios at station ABMF.
Figure 4. HPLs of combined GPS, Galileo and GLONASS data at station ABMF.
Figure 5. HPLs of robust GPS/GAL/GLO data at station ABMF.
Figure 6. Comparison of position errors between robust-EKF and WLSE.
Figure 7. LSTM layered Architecture (source: mathworks/lstm).
Figure 8. Basic structure of a Long Short-Term Memory unit.
Figure 9. Architecture of LSTM network for regression.
Figure 10. rEKF-LSTM training architecture.
Figure 11. rEKF-LSTM Validation architecture.
Figure 12. rEKF-LSTM Prediction architecture.
Figure 13. Comparison of position errors between robust-EKF and rEKF-LSTM.
Table 1. Weighting functions [22].
Name        Weight Function
Huber       $w(r_i) = \begin{cases} 1 & |r_i| \le \alpha \\ \alpha/|r_i| & |r_i| > \alpha \end{cases}$
Bi-Tukey    $w(r_i) = \begin{cases} 1 & |r_i| \le \alpha \\ 0 & |r_i| > \alpha \end{cases}$
Bi-Square   $w(r_i) = \begin{cases} [1 - (|r_i|/\alpha)^2]^2 & |r_i| \le \alpha \\ 0 & |r_i| > \alpha \end{cases}$
Table 2. RMS Errors in Position for the five scenarios at station ABMF over 24 h.
RMS Error (m)GPSGALGLOGPS/GAL/GLORobust-EKF GPS/GAL/GLO
RMS-E1.421.3431.008.110.74
RMS-N1.461.26.652.740.75
RMS-U3.943.0825.79.501.82
3D-RMS4.433.5640.8112.732.1
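The 3D-RMS rows in Tables 2–5 are the quadratic sum of the East, North and Up component RMS errors. A quick consistency check (a sketch, not the paper's code), using the Robust-EKF column of Table 2:

```python
import math

def rms_3d(rms_e, rms_n, rms_u):
    """3D RMS error from the East, North and Up component RMS errors (m)."""
    return math.sqrt(rms_e ** 2 + rms_n ** 2 + rms_u ** 2)

# Robust-EKF column of Table 2: RMS-E = 0.74 m, RMS-N = 0.75 m, RMS-U = 1.82 m
value = rms_3d(0.74, 0.75, 1.82)  # ≈ 2.10 m, consistent with the 3D-RMS entry
```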
Table 3. RMS errors in position for the two methods using data from station ABMF over the last 12 h.

RMS Error (m)   Robust-EKF   rEKF-LSTM
RMS-E           0.82         0.39
RMS-N           0.83         0.32
RMS-U           2.04         0.36
3D-RMS          2.35         0.62
Table 4. RMS errors in position for the six scenarios at station ABMF over the last 12 h.

RMS Error (m)   GPS    GAL    GLO     GPS/GAL/GLO   Robust-EKF GPS/GAL/GLO   rEKF-LSTM
RMS-E           1.43   1.21   30.71   7.08          0.82                     0.39
RMS-N           1.58   1.19   6.18    2.27          0.83                     0.32
RMS-U           3.32   3.07   21.10   8.18          2.04                     0.36
3D-RMS          3.95   3.50   37.76   11.05         2.35                     0.62
Table 5. RMS errors in position at the three stations AJAC, GRAC and LMMF over the last 12 h.

Base Station   RMS Error (m)   GPS    GAL    GLO     GPS/GAL/GLO   Robust-EKF GPS/GAL/GLO   rEKF-LSTM
AJAC           RMS-E           1.12   1.36   17.76   3.40          0.72                     0.73
               RMS-N           1.65   1.25   9.30    4.57          0.88                     0.46
               RMS-U           2.52   2.40   3.40    4.75          1.33                     0.45
               3D-RMS          3.21   3.04   20.33   7.42          1.75                     0.97
GRAC           RMS-E           1.14   1.38   17.39   3.42          0.67                     0.75
               RMS-N           1.58   1.28   11.87   5.85          0.91                     0.54
               RMS-U           2.89   3.02   12.74   4.76          2.29                     1.66
               3D-RMS          3.49   3.56   24.60   8.28          2.55                     1.89
LMMF           RMS-E           1.59   1.07   33.12   6.62          0.68                     0.51
               RMS-N           1.36   0.96   11.81   1.82          0.63                     0.30
               RMS-U           3.38   2.20   4.32    8.17          1.80                     0.47
               3D-RMS          3.96   2.63   35.43   10.67         2.02                     0.76

Share and Cite

MDPI and ACS Style

Tan, T.-N.; Khenchaf, A.; Comblet, F.; Franck, P.; Champeyroux, J.-M.; Reichert, O. Robust-Extended Kalman Filter and Long Short-Term Memory Combination to Enhance the Quality of Single Point Positioning. Appl. Sci. 2020, 10, 4335. https://doi.org/10.3390/app10124335