Article

Improved GNSS Ambiguity Fast Estimation Reduction Algorithm

Xinzhong Li, Yongliang Xiong, Weiwei Chen, Shaoguang Xu and Rui Zhang
1 Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China
2 Department of Civil Engineering, Yibin Campus, Chengdu Technological University, Yibin 644000, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(20), 8568; https://doi.org/10.3390/s23208568
Submission received: 14 September 2023 / Revised: 11 October 2023 / Accepted: 17 October 2023 / Published: 18 October 2023
(This article belongs to the Topic GNSS Measurement Technique in Aerial Navigation)

Abstract
The fast and accurate resolution of the integer ambiguity is the key to achieving GNSS high-precision positioning. In lattice-theory-based high-dimensional ambiguity resolution, the reduction stage consumes far more time than the search stage, so improving the efficiency of the lattice basis reduction algorithm is especially important. Householder QR decomposition with minimal column pivoting is utilized to pre-sort the basis vectors, and the number of basis vector exchanges during the reduction is lowered through partial size reduction and a relaxed basis vector exchange condition, which together improve the reduction efficiency of the LLL algorithm. The improved algorithms are validated with simulated and measured data, and their performance is evaluated in terms of the extent of reduction basis orthogonality and the quality of the reduction basis size reduction. The results show that the improved LLL algorithms can significantly reduce the number of basis vector exchanges and the reduction time consumption. The HSLLL and PSLLL algorithms, which use the Siegel condition as the basis vector exchange condition, achieve a better reduction effect but are slightly less stable. The PLLLR algorithm significantly improves the efficiency of the ambiguity search, which is conducive to the rapid realization of ambiguity resolution.

1. Introduction

The resolution of the integer ambiguity has a significant impact on high-precision carrier phase navigation and positioning results. The LAMBDA method is currently recognized as the theoretically most rigorous, most efficient, and most widely used ambiguity resolution method [1]. It is based on the integer least squares model and shrinks the search space by reducing the correlation between the variance components of the ambiguity, thereby improving the search efficiency [2]. In addition, many scholars have carried out fruitful research on decorrelation algorithms: Liu et al. proposed an approach to united ambiguity decorrelation from the perspective of LU decomposition [3]; Xu proposed a decorrelation algorithm for inverse integer Cholesky decomposition using a pre-sorting strategy [4,5]; Zhou proposed the (inverse) paired Cholesky integer transformation algorithm using upper and lower triangular Cholesky decomposition [6,7]; and Chang et al. improved the LAMBDA algorithm by using a greedy algorithm and a lazy transformation strategy, and proposed the MLAMBDA algorithm [8].
Ambiguity resolution is an integer least-squares problem, which is essentially equivalent to the closest vector problem (CVP) in lattice theory, an NP-hard problem [9,10]. In order to obtain the nearest vector, it is usually necessary to reduce the basis vectors. That is, the integer Gaussian transform is utilized to reduce the correlation of the basis vectors, and the basis vectors are sorted according to a certain criterion in order to obtain the shortest possible reduced basis. Among such methods, the LLL reduction algorithm is the most popular [11]. Therefore, the related methods of ambiguity resolution can be placed in the framework of lattice theory, thus injecting new vitality into the in-depth study of integer ambiguity resolution. Hassibi and Boyd first introduced the LLL reduction algorithm to GNSS ambiguity resolution [12]. Grafarend introduced the solution principle of the LLL algorithm and carried out a detailed data analysis [13]. L.Z. Lou proposed to improve the original LLL algorithm by using a new judgment criterion in response to the defect of iterative non-convergence in the LLL algorithm [14]. Since the LLL algorithm introduces a large rounding error during the rounding process, Z.P. Liu et al. proposed an improved LLL algorithm based on overall matrix rounding [15]. R.H. Yang et al. improved the LLL algorithm by reordering the Gram–Schmidt orthogonal basis [16]. L. Fan and K. Xie improved the LLL algorithm from the perspective of reducing the length of the reduction basis vectors [17,18]. Jazaeri et al. compared and analyzed the performance differences between the LAMBDA algorithm and the LLL algorithm [19]. Ling and Howgrave-Graham pointed out that the core of the LLL algorithm lies in the basis vector exchange by analyzing the characteristics of the size reduction and basis vector exchange in the LLL algorithm, and on this basis proposed the ELLL (Effective LLL) algorithm with partial size reduction [20]. Xie et al. analyzed the effectiveness of the size reduction in the LLL algorithm and, in response to the large truncation error of the ELLL algorithm, proposed the PLLL (Partial LLL) algorithm that selectively performs the size reduction of a column vector [21]. L.G. Lu et al. improved the LLL algorithm using greedy selection of the basis vectors and partial column vector reduction to reduce the computational complexity of the LLL algorithm [22]. H. Lv et al. improved the original LLL algorithm by using delayed size reduction and partial size reduction in order to reduce the redundant size reduction during the reduction process [23]. Li et al. improved the LLL algorithm based on the Householder transform by using a symmetric pivoting strategy [24].
At the same time, relevant scholars have also conducted in-depth research on the relation between the LAMBDA decorrelation algorithm and the LLL reduction algorithm in lattice theory, as well as on the performance evaluation indexes of lattice basis reduction for assisting ambiguity resolution. J.N. Liu et al. theoretically proved the equivalence between decorrelation and lattice basis reduction [9]. Lannes further proved the equivalence of the LAMBDA decorrelation algorithm and the LLL algorithm [25]. Borno et al. pointed out through theoretical analysis that simple integer Gaussian transformations do not affect the efficiency of the search for the integer ambiguity [26]. Jazaeri et al. analyzed the relationship between commonly used reduction performance evaluation indexes (condition number and orthogonality defect) and search efficiency, and pointed out that these indexes cannot accurately measure how much lattice basis reduction methods accelerate the ambiguity search [27]. L.G. Lu et al. compared and analyzed the performance of the LAMBDA decorrelation algorithm and the LLL reduction algorithm under different decomposition methods, categorized and generalized the common decorrelation (reduction) evaluation indexes from a geometric perspective, and further illustrated that the different evaluation indexes are not directly related to the search efficiency of the ambiguity [28].
In view of this, this paper proposes new improved algorithms, built on the LLL and PLLL algorithms and tailored to the characteristics of ambiguity resolution, which appropriately relax the basis vector exchange condition in order to reduce the reduction time consumption and improve the computational efficiency of ambiguity resolution. The effectiveness and reduction performance of the improved algorithms are verified with simulated and measured data.

2. Methods and Improvement Strategies

The following describes the notation used in this paper. The sets of $n$-dimensional real and integer vectors are denoted by $\mathbb{R}^n$ and $\mathbb{Z}^n$, respectively. MATLAB notation is used to represent submatrices. Specifically, if $A = (a_{i,j})$, then $A_{i,:}$ denotes the $i$-th row, $A_{:,j}$ denotes the $j$-th column, and $A_{i_1:i_2,\,j_1:j_2}$ the submatrix formed by rows $i_1$ to $i_2$ and columns $j_1$ to $j_2$. The $(i,j)$ element of $A$ is denoted by $a_{i,j}$ or $A_{i,j}$.

2.1. Integer Least Squares Model

The GNSS observation equation is [29,30]:
$$y = Aa + Bb + e \tag{1}$$
where $a$ is the integer carrier phase ambiguity parameter vector, $b$ is the vector of baseline components to be estimated, $e$ is the observation noise, $y$ is the vector of carrier phase and pseudorange observations, and $A$ and $B$ are the design matrices.
Using the least squares criterion [31,32], it can be shown that
$$\min_{a \in \mathbb{Z}^m,\, b \in \mathbb{R}^n} \left\| y - Aa - Bb \right\|^2_{Q_y} \tag{2}$$
where $\| \cdot \|^2_{Q_y} = (\cdot)^T Q_y^{-1} (\cdot)$ and $Q_y$ is the variance-covariance matrix of the observations $y$. Considering that the ambiguity parameter is an integer vector, Equation (2) can be further decomposed as:
$$\left\| y - Aa - Bb \right\|^2_{Q_y} = \left\| \hat{e} \right\|^2_{Q_y} + \left\| \hat{a} - a \right\|^2_{Q_{\hat{a}}} + \left\| \hat{b}(a) - b \right\|^2_{Q_{\hat{b}|\hat{a}}} \tag{3}$$
and
$$\hat{e} = y - A\hat{a} - B\hat{b}, \qquad \hat{b}(a) = \hat{b} - Q_{\hat{b}\hat{a}} Q_{\hat{a}}^{-1} (\hat{a} - a), \qquad Q_{\hat{b}|\hat{a}} = Q_{\hat{b}} - Q_{\hat{b}\hat{a}} Q_{\hat{a}}^{-1} Q_{\hat{a}\hat{b}} \tag{4}$$
where $\hat{a}$ is the float ambiguity solution, $\hat{b}$ is the baseline component corresponding to the float ambiguity solution, and $\hat{b}(a)$ is the baseline component conditioned on the fixed ambiguity vector $a$. Since $b$ in Equation (1) is a real vector, the third term on the right of Equation (3) can be made zero by setting $b = \hat{b}(a)$. So when $b = \hat{b}(a)$ and $\| \hat{a} - a \|^2_{Q_{\hat{a}}}$ takes its minimum value, $\| y - Aa - Bb \|^2_{Q_y}$ takes its minimum value. Therefore, the minimization problem of Equation (2) is transformed into:
$$\min_{a \in \mathbb{Z}^m} \left\| \hat{a} - a \right\|^2_{Q_{\hat{a}}} = \min_{a \in \mathbb{Z}^m} (\hat{a} - a)^T Q_{\hat{a}}^{-1} (\hat{a} - a) \tag{5}$$
The Cholesky decomposition of $Q_{\hat{a}}$ reads
$$Q_{\hat{a}} = G^T G \tag{6}$$
where $G$ is an upper triangular matrix.
Substituting Equation (6) into Equation (5) gives
$$\min_{a \in \mathbb{Z}^m} \left\| G^{-T} (\hat{a} - a) \right\|^2 = \min_{a \in \mathbb{Z}^m} \left\| \bar{y} - G^{-T} a \right\|^2 \tag{7}$$
where $\bar{y} = G^{-T} \hat{a}$ is a constant vector.
Equation (7) is also known as the closest vector problem in lattice theory [33]. In order to obtain the integer solution of the ambiguity rapidly, a decorrelation process is usually applied to reduce the correlation between the variance components of $Q_{\hat{a}}$, which improves the efficiency of the search for the ambiguity in Equation (5).
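To make this transformation concrete, the following MATLAB sketch (our illustration with randomly generated data, not the paper's experimental models) builds the CVP form of Equation (7) from a float solution and verifies that the two objectives coincide:

```matlab
% Casting the integer least squares problem as a CVP: a minimal sketch,
% assuming a randomly generated positive-definite vc-matrix Qa.
n     = 5;
a_hat = 100*randn(n,1);              % float ambiguity solution
M     = randn(n);
Qa    = M*M' + n*eye(n);             % hypothetical vc-matrix (positive definite)
G     = chol(Qa);                    % upper triangular factor, Qa = G'*G
B     = inv(G');                     % G^{-T}: generator matrix of the lattice
ybar  = B*a_hat;                     % constant target vector of the CVP
% The two objective functions below coincide for any integer candidate a:
a0 = round(a_hat);                   % naive rounding, for illustration only
f1 = (a_hat-a0)'*(Qa\(a_hat-a0));    % Equation (5)
f2 = norm(ybar - B*a0)^2;            % Equation (7)
fprintf('ILS objective %.6f, CVP objective %.6f\n', f1, f2);
```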

2.2. LLL Algorithm Based on QR Decomposition

Let $g_1, g_2, \dots, g_n \in \mathbb{R}^n$ be a set of linearly independent basis vectors and let the lattice $L(g_1, g_2, \dots, g_n)$ represent the set consisting of all integer linear combinations of $g_1, g_2, \dots, g_n$, i.e.:
$$L(G) = \left\{ \sum_{i=1}^{n} x_i g_i \;\middle|\; x_i \in \mathbb{Z},\ 1 \le i \le n \right\} \tag{8}$$
where $G = [g_1, g_2, \dots, g_n]$ is called a basis of the lattice, $L(G)$ is the lattice generated by $G$, and $x_i$ is the integer combination coefficient of $g_i$.
The classical LLL algorithm implements the reduction on the basis of Gram–Schmidt orthogonalization (GSO) [11]. GSO is performed on the basis matrix $G = [g_1, g_2, \dots, g_n]$:
$$G = G^* U = [g_1^*, g_2^*, \dots, g_n^*] \begin{bmatrix} 1 & u_{1,2} & \cdots & u_{1,n} \\ 0 & 1 & \cdots & u_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \tag{9}$$
In the formula, $G^* = [g_1^*, g_2^*, \dots, g_n^*]$ with $g_i^* = g_i - \sum_{j=1}^{i-1} u_{j,i} g_j^*$, and $U = (u_{j,i})$ is the unit upper triangular matrix satisfying $u_{j,i} = \langle g_i, g_j^* \rangle / \| g_j^* \|^2$, $1 \le j < i \le n$. The matrices $G^*$ and $U$ must satisfy the following two reduction conditions:
$$\left| u_{j,i} \right| \le \frac{1}{2},\ 1 \le j < i \le n; \qquad \delta \left\| g_{i-1}^* \right\|^2 \le \left\| g_i^* \right\|^2 + u_{i-1,i}^2 \left\| g_{i-1}^* \right\|^2,\ \frac{1}{4} < \delta \le 1 \tag{10}$$
$G$ is then called an LLL-reduced basis with parameter $\delta$. The first condition is the size reduction and the second is the basis vector exchange condition.
In fact, in order to improve the floating-point accuracy of the lattice basis reduction, the LLL reduction algorithm based on QR decomposition is usually used [34]. The following decomposition is performed on the basis matrix $G$:
$$G = QR = [q_1, q_2, \dots, q_n] \begin{bmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,n} \\ 0 & r_{2,2} & \cdots & r_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & r_{n,n} \end{bmatrix} \tag{11}$$
In the equation, $Q$ is an orthogonal matrix with $q_i = g_i^* / \| g_i^* \|$, and $R = (r_{j,i})$ is the upper triangular matrix satisfying $u_{j,i} = r_{j,i} / r_{j,j}$ and $\| g_j^* \| = r_{j,j}$.
Thus, the reduction conditions of Equation (10) can be rewritten as:
$$\left| \frac{r_{j,i}}{r_{j,j}} \right| \le \frac{1}{2},\ 1 \le j < i \le n; \qquad \delta\, r_{i-1,i-1}^2 \le r_{i,i}^2 + r_{i-1,i}^2,\ \frac{1}{4} < \delta \le 1 \tag{12}$$
Equation (12) gives the reduction conditions of the LLL algorithm based on QR decomposition.
In order to satisfy the above reduction conditions, transformation matrices are usually constructed for the reduction operations (see the sketch after this list).
  • Size reduction: in order to realize the first condition in Equation (10), construct the unimodular matrix $Z_{j,i} = I_n - \lfloor r_{j,i} / r_{j,j} \rceil\, e_j e_i^T$ (where $\lfloor \cdot \rceil$ denotes rounding to the nearest integer), right-multiply the basis matrix $G$ by it to size-reduce the corresponding element, and at the same time update the upper triangular matrix $R$.
  • Basis vector exchange (Lovász condition): if the second condition in Equation (10) is not satisfied, construct the exchange matrix $P_{i-1,i}$ to swap the order of $g_{i-1}$ and $g_i$, and update the matrix $R$ to re-triangularize it.
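Putting the two operations together, the following MATLAB sketch is a compact, textbook-style QR-based LLL reduction (our illustration, not the authors' exact implementation; inside the loop only the superdiagonal element is size-reduced, in the spirit of the partial size reduction discussed in Section 2.3.2):

```matlab
function [R,Z] = lll_qr(G,delta)
% Plain QR-based LLL reduction sketch: returns the reduced R factor and the
% accumulated unimodular transformation Z, so that G*Z equals an orthogonal
% matrix times R.
if nargin < 2, delta = 0.75; end              % 1/4 < delta <= 1
n = size(G,2);
[~,R] = qr(G);
Z = eye(n);
i = 2;
while i <= n
    % size reduction of the superdiagonal element (first condition in Eq. (12))
    mu = round(R(i-1,i)/R(i-1,i-1));
    if mu ~= 0
        R(1:i-1,i) = R(1:i-1,i) - mu*R(1:i-1,i-1);
        Z(:,i)     = Z(:,i)     - mu*Z(:,i-1);
    end
    % Lovasz test (second condition in Eq. (12))
    if delta*R(i-1,i-1)^2 > R(i,i)^2 + R(i-1,i)^2
        R(:,[i-1 i]) = R(:,[i i-1]);          % basis vector exchange
        Z(:,[i-1 i]) = Z(:,[i i-1]);
        [~,Rk] = qr(R(i-1:i,i-1:n));          % re-triangularize the 2-row block
        R(i-1:i,i-1:n) = Rk;
        i = max(i-1,2);                       % step back and re-check
    else
        i = i + 1;
    end
end
end
```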

2.3. Improved LLL Algorithm

2.3.1. Householder QR Decomposition Based on Minimum Column Pivoting

The original LLL algorithm performs a QR decomposition of the basis matrix based on GSO. The Householder QR decomposition has lower computational complexity and better numerical stability compared to GSO [35]. Partial LLL reduction algorithms utilize the Householder QR decomposition with minimum column pivoting instead of the regular Householder QR decomposition. In general, the number of basis vector exchanges is a key factor in the time consumption of the whole LLL reduction, and the number of basis vector exchanges can be reduced if the matrix $R$ of the QR decomposition is already close to an LLL-reduced basis. From Equation (12) it can be obtained that
$$\left( \delta - \frac{1}{4} \right) r_{i-1,i-1}^2 \le r_{i,i}^2,\quad \frac{1}{4} < \delta \le 1 \tag{13}$$
In order to make it easier for the matrix $R$ to satisfy Equation (13), the minimum column pivoting strategy selects the column of smallest norm for exchange. In the $j$-th step of the QR decomposition, the column $i$ of $G_{j:n,\,j:n}$ with the shortest length is found and the $i$-th column of $G$ is exchanged with the $j$-th column. The subdiagonal elements $G_{j+1:n,\,j}$ are then eliminated by the Householder transformation. With the minimum column pivoting strategy, the Householder QR decomposition reads:
$$Q^T G P = R \tag{14}$$
where $P$ is the permutation (exchange) matrix and $Q^T = H_n H_{n-1} \cdots H_1$ is the product of $n$ Householder transformations.
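A minimal MATLAB sketch of this decomposition follows (our illustration; note that MATLAB's built-in qr with pivoting selects the column of largest norm, so the minimum pivoting is written out explicitly):

```matlab
function [R,p] = householder_qr_minpivot(G)
% Householder QR with minimum column pivoting: at step j, the trailing
% column of smallest norm is moved to position j before elimination, so
% that G(:,p) = Q*R (up to signs) with R as in Equation (14).
[n,m] = size(G);
R = G; p = 1:m;
for j = 1:min(n-1,m)
    nrm   = sqrt(sum(R(j:n,j:m).^2,1));   % norms of the trailing columns
    [~,k] = min(nrm); k = k+j-1;
    R(:,[j k]) = R(:,[k j]);              % minimum column to the front
    p([j k])   = p([k j]);
    v = R(j:n,j);                         % Householder reflection for column j
    s = norm(v);
    if s > 0
        if v(1) < 0, s = -s; end
        v(1) = v(1) + s;
        v = v/norm(v);
        R(j:n,j:m) = R(j:n,j:m) - 2*v*(v'*R(j:n,j:m));
    end
end
end
```

For a square basis matrix, [R,p] = householder_qr_minpivot(G) yields the triangular factor of Equation (14), with p the column permutation realized by $P$.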

2.3.2. Partial Size Reduction

It has been theoretically demonstrated in the literature that simple size reduction does not affect the number of candidate points of the ambiguity search, and that the basis vector exchange is the real mechanism by which lattice basis reduction accelerates the ambiguity search. In the LLL algorithm, only the size reduction of the superdiagonal element is therefore generally required. However, considering the lattice basis reduction efficiency and the numerical stability of the algorithm, it is necessary to also size-reduce part of the non-superdiagonal elements under certain conditions.
$$r_{i-1,i}' = r_{i-1,i} - \zeta\, r_{i-1,i-1}$$
where $\zeta = \lfloor r_{i-1,i} / r_{i-1,i-1} \rceil$. The size reduction is applied to the non-superdiagonal elements of column $i$ when $\left| \lfloor r_{i-1,i} / r_{i-1,i-1} \rceil \right| \ge 2$, viz.:
$$r_{k,i}' = r_{k,i} - \lfloor r_{k,i} / r_{k,k} \rceil\, r_{k,k}, \quad k = i-2, i-3, \dots, 1$$
It should be noted that matrix element size reduction is an integer transformation process, which not only reduces the size of the element itself, but also updates the rest of the column vector accordingly.
Givens rotation has better numerical stability than GSO. Therefore, the PLLL reduction algorithm uses Givens rotation for triangularization after a basis vector exchange. Suppose columns $k-1$ and $k$ of $R$ are exchanged, i.e.:
$$R P_{k-1,k} = \begin{bmatrix} R_{1,1} & \bar{R}_{1,2} & R_{1,3} \\ & \tilde{R}_{2,2} & R_{2,3} \\ & & R_{3,3} \end{bmatrix}$$
where the diagonal blocks have sizes $k-2$, $2$, and $n-k$, and
$$P_{k-1,k} = \begin{bmatrix} I_{k-2} & & \\ & P & \\ & & I_{n-k} \end{bmatrix}, \quad P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \tilde{R}_{2,2} = \begin{bmatrix} r_{k-1,k} & r_{k-1,k-1} \\ r_{k,k} & 0 \end{bmatrix}, \quad \bar{R}_{1,2} = \left[ R_{1:k-2,\,k} \;\; R_{1:k-2,\,k-1} \right]$$
It can be seen that the block matrix $\tilde{R}_{2,2}$ is not upper triangular. Therefore, it is triangularized using a Givens rotation. Denoting the Givens rotation matrix by $\Gamma$, we have:
$$\bar{R}_{2,2} := \Gamma \tilde{R}_{2,2} = \begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} r_{k-1,k} & r_{k-1,k-1} \\ r_{k,k} & 0 \end{bmatrix}$$
where
$$c = \frac{r_{k-1,k}}{\sqrt{r_{k-1,k}^2 + r_{k,k}^2}}, \qquad s = \frac{r_{k,k}}{\sqrt{r_{k-1,k}^2 + r_{k,k}^2}}$$
Therefore, it can be concluded that
$$\Gamma_{k-1,k}\, R\, P_{k-1,k} = \bar{R} = \begin{bmatrix} R_{1,1} & \bar{R}_{1,2} & R_{1,3} \\ & \bar{R}_{2,2} & \bar{R}_{2,3} \\ & & R_{3,3} \end{bmatrix}, \quad \Gamma_{k-1,k} = \begin{bmatrix} I_{k-2} & & \\ & \Gamma & \\ & & I_{n-k} \end{bmatrix}, \quad \bar{R}_{2,3} = \Gamma R_{2,3}$$
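In code, the whole exchange-and-retriangularize step amounts to one column swap and one rotation of a two-row block; a MATLAB sketch (our illustration, assuming a nonsingular upper triangular R) is:

```matlab
function R = swap_columns_givens(R,k)
% Exchange columns k-1 and k of the upper triangular R and restore
% triangularity with a single Givens rotation, in O(n) flops instead of
% a full re-factorization.
n = size(R,2);
R(:,[k-1 k]) = R(:,[k k-1]);            % R*P_{k-1,k}
% after the swap: R(k-1,k-1) = old r_{k-1,k}, R(k,k-1) = old r_{k,k}
r = hypot(R(k-1,k-1),R(k,k-1));
c = R(k-1,k-1)/r;
s = R(k,k-1)/r;
Gam = [c s; -s c];                      % Givens rotation Gamma
R(k-1:k,k-1:n) = Gam*R(k-1:k,k-1:n);    % triangularize rows k-1 and k
R(k,k-1) = 0;                           % remove rounding residue
end
```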

2.3.3. Improvement of the LLL Algorithm

From the PLLL reduction algorithm we note that size reduction is carried out only when a basis vector exchange occurs, so the resulting matrix $R$ is not fully size-reduced. Therefore, we add an additional size reduction pass at the end of the PLLL reduction algorithm and convert $R$ into a fully LLL-reduced matrix (see the sketch below). We denote the PLLL algorithm with this extra size reduction as PLLLR.
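A minimal MATLAB sketch of this extra pass (our illustration; Z is the accumulated unimodular transformation, as in the earlier sketches):

```matlab
function [R,Z] = full_size_reduce(R,Z)
% Extra size reduction appended by PLLLR: after this pass every
% off-diagonal element satisfies |r_{j,i}/r_{j,j}| <= 1/2.
n = size(R,2);
for i = 2:n
    for j = i-1:-1:1                          % bottom-up within column i
        mu = round(R(j,i)/R(j,j));
        if mu ~= 0
            R(1:j,i) = R(1:j,i) - mu*R(1:j,j);  % integer column operation
            Z(:,i)   = Z(:,i)   - mu*Z(:,j);    % keep the transform unimodular
        end
    end
end
end
```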
In addition, during LLL reduction it must be checked whether the basis vectors satisfy the exchange condition before deciding whether to enter the column exchange step. Obviously, the LLL reduction procedure can be simplified if the basis vector exchange condition is appropriately relaxed so as to reduce subsequent operations such as column exchanges. Inspired by the literature [36], we replace the Lovász condition in the LLL reduction with the Siegel condition, and Equation (13) becomes:
$$\left( \delta - \frac{1}{2} \right) r_{i-1,i-1}^2 \le r_{i,i}^2, \quad \frac{3}{4} \le \delta \le 1$$
We denote the LLL algorithm based on the Householder QR decomposition as HLLL, the HLLL algorithm with the Siegel condition as the basis vector exchange condition as HSLLL, and the PLLL algorithm with the Siegel condition as the basis vector exchange condition as PSLLL. The specific flow of the two improved algorithms is shown in Figure 1.
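In code, the only difference between the variants is the swap test; a hypothetical helper (our naming) makes the relaxation explicit:

```matlab
function tf = swap_needed(R,i,delta,use_siegel)
% Swap test for column i: Lovasz condition (Eq. (12)) or the relaxed
% Siegel condition; a swap is triggered when the condition is violated.
if use_siegel
    tf = (delta - 1/2)*R(i-1,i-1)^2 > R(i,i)^2;
else
    tf = delta*R(i-1,i-1)^2 > R(i,i)^2 + R(i-1,i)^2;
end
end
```

Since $r_{i-1,i}^2 \le r_{i-1,i-1}^2 / 4$ after size reduction, the Siegel test triggers no more exchanges than the Lovász test, which is precisely why the relaxed condition shortens the reduction.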

3. Experiments and Results Analysis

In order to verify the effectiveness of the improved LLL algorithms proposed in this paper for ambiguity resolution, simulation experiments and measured data are used to compare HLLL, HSLLL, PLLL, PSLLL, and PLLLR, and the performance of each algorithm is evaluated in terms of the extent of reduction basis orthogonality and the quality of the reduction basis size reduction. In the ambiguity resolution, the search process adopts the SE-VB strategy, which is widely used at present [10]. The experimental environment is a personal computer (Intel Core i7-9700 CPU, 2.80 GHz, 16.0 GB of RAM, 64-bit Windows 10 operating system) and the software is MATLAB R2017a.

3.1. Indicators for Evaluating the Quality of the Reduced Basis

In measuring the performance of lattice basis reduction, the orthogonality defect (OD) is usually used to reflect the orthogonality of the basis vectors, but it has an obvious disadvantage: the OD value alone does not intuitively indicate how close to orthogonal the reduced basis is [37,38,39]. Therefore, in this paper, the minimum angle $\theta$ between the reduced basis vectors is used instead of the orthogonality defect to measure the extent of the orthogonality of the reduced basis. Its expression is given as:
$$\theta(G) = \min \theta_{i,j}, \quad 1 \le i < j \le n$$
where
$$\theta_{i,j} = \min\left( \arccos \rho_{i,j},\ 180° - \arccos \rho_{i,j} \right), \qquad \rho_{i,j} = \frac{\langle g_i, g_j \rangle}{\| g_i \| \| g_j \|}$$
By definition, $0° \le \theta(G) \le 90°$; $\theta(G) = 90°$ means that all basis vectors are mutually orthogonal. As an alternative indicator of the extent of orthogonality, $\theta$ can be used to judge the orthogonality of the reduced basis roughly but intuitively. Moreover, the calculation of $\theta$ and OD is based only on the elements of the variance-covariance matrix $Q_{\hat{a}}$, so it does not increase the computational complexity.
The purpose of the lattice basis reduction is to make the reduced basis as orthogonal as possible and to make the length of the first basis vector as short as possible after the basis vector exchanges. Based on this property, the Hermite factor from lattice theory is introduced as another indicator for evaluating the performance of the reduction [40,41], defined as:
$$\kappa = \frac{\| g_1 \|}{\det(Q_{\hat{a}})^{\frac{1}{2n}}}$$
where $g_1$ denotes the first basis vector of the lattice basis $G$. Obviously, $\det Q_{\hat{a}}$ is a fixed value, so the size of the Hermite factor depends on the length of $g_1$: the smaller the value of $\kappa$, the shorter the first basis vector after the lattice basis reduction, the more adequate the basis vector exchange, and the better the quality of the reduction, and vice versa.
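Both indicators are cheap to compute; the MATLAB sketch below (our illustration) evaluates them for a basis matrix G and the corresponding vc-matrix, using the Cholesky factor for a numerically safe determinant in high dimensions:

```matlab
function [theta,kappa] = reduction_quality(G,Qa)
% theta: minimum pairwise angle (deg) between the basis vectors;
% kappa: Hermite factor ||g_1|| / det(Qa)^(1/(2n)).
n = size(G,2);
theta = 90;
for i = 1:n-1
    for j = i+1:n
        rho = (G(:,i)'*G(:,j))/(norm(G(:,i))*norm(G(:,j)));
        rho = max(-1,min(1,rho));          % guard against rounding
        ang = acosd(rho);
        theta = min(theta, min(ang, 180-ang));
    end
end
% det(Qa)^(1/(2n)) via the Cholesky factor to avoid overflow/underflow:
kappa = norm(G(:,1))/exp(sum(log(diag(chol(Qa))))/n);
end
```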

3.2. Simulation Experiment

The random simulation method of the literature [8] is used to construct 5–40-dimensional ambiguity float solutions $\hat{a}$ and variance-covariance matrices $Q_{\hat{a}}$. For each dimension, 100 groups of data are constructed and processed by the HLLL, HSLLL, PLLL, PSLLL, and PLLLR lattice basis reduction algorithms, and the average number of basis vector swaps, the average reduction time consumption, and the average number of ambiguity candidate points over the 100 groups are calculated. The specific construction is as follows:
$$\hat{a} = 100 \times \mathrm{randn}(n,1), \qquad Q_{\hat{a}} = L D L^T$$
  • Scheme 1: $L$ is a unit upper triangular matrix whose above-diagonal elements $l_{j,i}$ follow the standard normal distribution; $D = \mathrm{diag}(n^{-1}, (n-1)^{-1}, \dots, 1)$.
  • Scheme 2: $L$ is a random orthogonal matrix, obtained by the QR decomposition of the random matrix generated by $\mathrm{randn}(n,n)$; $d_1 = 2^{-\frac{n}{4}}$, $d_n = 2^{\frac{n}{4}}$, $d_i \in (d_1, d_n)$, $D = \mathrm{diag}(d_1, \dots, d_i, \dots, d_n)$. A MATLAB sketch of both schemes is given below.
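The following sketch (our illustration of the construction above; n ≥ 3 assumed) generates one group of data for either scheme, e.g., [a_hat,Qa] = simulate_case(20,2):

```matlab
function [a_hat,Qa] = simulate_case(n,scheme)
% Random models for the float solution and its vc-matrix, after [8].
a_hat = 100*randn(n,1);
if scheme == 1
    L = eye(n) + triu(randn(n),1);        % unit upper triangular
    D = diag(1./(n:-1:1));                % diag(n^-1,(n-1)^-1,...,1)
else
    [L,~] = qr(randn(n));                 % random orthogonal matrix
    d1 = 2^(-n/4); dn = 2^(n/4);
    d  = [d1, d1 + (dn-d1)*rand(1,n-2), dn];
    D  = diag(d);
end
Qa = L*D*L';
end
```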
Figure 2 shows the trend of the number of basis vector swaps for the five algorithms under the different schemes and dimensions. As seen in Figure 2, the number of basis vector swaps of the five algorithms is positively correlated with the dimension; overall, PSLLL has the fewest swaps, and PLLL and PLLLR have the same number of swaps.
By analyzing the results in Figure 2, it can be seen that PLLLR is equivalent to PLLL in terms of the number of basis vector swaps because PLLLR only adds an additional size reduction process, which has no effect on the ordering of the basis vectors, a phenomenon that is in line with the theory. HSLLL and PSLLL simplify the LLL reduction process by relaxing the swap condition of the basis vector, which reduces the number of basis vector swaps.
Figure 3 shows the reduction time consumption of the five algorithms under the different schemes and dimensions. It can be intuitively seen from Figure 3 that as the dimension increases, the overall trend of the reduction time consumption is upward, and PSLLL has the smallest reduction time consumption. From Figure 3a, it can be observed that the PLLLR reduction time consumption is lower than that of HLLL, except for the 16th dimension, and the PSLLL reduction time consumption is lower than that of HSLLL (except for the 6th and 11th dimensions). A similar conclusion can be drawn from Figure 3b. The likely explanation for these exceptions is that the reduction time is very small in low dimensions, so MATLAB timing noise becomes noticeable.
Figure 4 shows the number of search candidate points for the five algorithms under the different schemes and dimensions. As can be seen from Figure 4, the number of search candidate points follows the same overall trend as the dimension, that is, the number of candidate points increases as the dimension grows. PLLLR and PLLL have the same number of search candidate points, whereas the number of ambiguity candidate points of HSLLL and PSLLL is larger than that of HLLL and PLLL in most dimensions, which indicates that they may be more time consuming in the ambiguity search process.
In analyzing the results of Figure 4, since simple size reduction does not change the candidate integer vectors of the ambiguity search, the number of candidate points of PLLL is equivalent to that of the PLLLR algorithm, which is consistent with the theory. Since HSLLL and PSLLL adopt basis vector exchange conditions different from the regular LLL algorithms, fewer column exchange operations are performed, thus speeding up the lattice basis reduction procedure. Therefore, the final basis vector lengths obtained differ from those of HLLL and PLLL (whose basis vectors are obtained by exchanging them in a certain order), which results in a different number of search candidate vectors for HSLLL and PSLLL in different dimensions compared to HLLL and PLLL.
The minimum angle $\theta$ and the Hermite factor $\kappa$ of the basis vectors after the reduction of Schemes 1 and 2 using the HLLL, HSLLL, PLLL, PSLLL, and PLLLR algorithms are listed in Table 1 and Table 2, respectively. As can be seen from the average minimum angles $\theta$ in Table 1, all five algorithms achieve a good reduction effect in Scheme 1. Considering the extent of the orthogonality of the basis vectors, the HSLLL reduction performance is optimal, followed by PLLLR, PLLL, and PSLLL, and HLLL is the worst. Similar conclusions can be drawn from Table 2, but with slight differences in the ranking: PLLLR is the best, followed by PSLLL, HSLLL, and PLLL, and HLLL is the worst. The reason for this difference is that HSLLL and PSLLL are less stable compared to PLLLR. The minimum value of the PSLLL algorithm in Table 1 is 41.3540°, a large fluctuation that implies poor orthogonality in some cases, whereas the minimum values of HSLLL and PLLLR are both greater than 45° and their reduction performance is more stable. The same holds for HSLLL in Table 2, which will not be elaborated here. Combining Table 1 and Table 2, PLLLR is superior in terms of stability and extent of orthogonality combined.
From the Hermite factor κ of the five algorithms in Table 1 and Table 2, it can be observed that the Hermite factors of PLLLR and PLLL are basically the same, and the relative error is 0.0072%, which is negligible. This indicates that it is difficult to evaluate the performance advantages and disadvantages of the two algorithms from the Hermite factor indicator. There is little difference in the reduction performance between HSLLL and PSLLL, and both outperform the other three algorithms. HSLLL slightly outperforms PSLLL in Scheme 1, while the opposite is true for Scheme 2, which may be related to the type and randomness of the reduced basis. HLLL has the worst reduction performance.

3.3. Measured Experiment 1

To further validate the effectiveness of the algorithms and the reduction effect, GPS dual-frequency observation data of 2778 epochs from the US CORS stations LWES and DSTR on 15 March 2023 (DOY-074) are used; the baseline length is 7.79 km and the sampling interval is 30 s. The ambiguity dilution of precision (ADOP) is usually used to evaluate the accuracy of the ambiguity resolution [42]. Figure 5 shows the variation trend of the ambiguity dimension and ADOP in DOY-074. It can be observed from Figure 5 that the ambiguity dimension of DOY-074 ranges from 12 to 22, and the ADOP values are all less than 0.1. The dimensions of the first 200 epochs are about 20, and their ADOP values are all less than 0.06. Therefore, in this paper, we select the data of the first 200 epochs to verify the effectiveness and reduction performance of the improved algorithms.
Figure 6 shows the cumulative distribution functions of the number of basis vector swaps and reduction time consumption for the first 200 epochs of the five algorithms. It can be seen from Figure 6a that PLLL and PLLLR have the same number of basis vector swaps, and PSLLL has the smallest number of basis vector swaps, followed by HSLLL. This is consistent with the conclusion of the simulation experiments in Section 3.2. From Figure 6b, it can be observed that the reduction time consumption of PSLLL and HSLLL is significantly less than the other three algorithms, and PLLLR consumes slightly more reduction time than PLLL due to the extra added size reduction, which is in line with the theory in Section 2.3. The five algorithms in descending order of reduction efficiency are PSLLL, HSLLL, PLLL, PLLLR, and HLLL.
Figure 7 shows the variation of the number of ambiguity candidate points for the five reduction algorithms in the first 200 epochs, from which it can be seen that the number of ambiguity candidate points is exactly the same for PLLL and PLLLR, which is consistent with the conclusion of the simulation experiments, and will not be explained here. HLLL, HSLLL, and PSLLL have different numbers of candidate points for ambiguity, and the differences between HLLL and HSLLL can be clearly seen in the figure, while the overall trend of PSLLL is in line with PLLL and PLLLR.
Table 3 shows the statistical results for the five algorithmic basis vectors’ minimum angles (deg) and Hermite factors in the first 200 epochs. As seen in Table 3, the average basis vector minimum angle of all five algorithms is greater than 45°, and the PLLLR has the best reduction performance. From the Hermite factors κ of the five algorithms, it can be observed that the order of the reduction performance is consistent with Scheme 2 in the simulation experiments, PLLLR and PLLL have the same Hermite factor, and PSLLL outperforms several other methods.
Table 4 presents the solution time consumption (reduction time, search time, and total time) of the five algorithms for the first 200 epochs, from which it can be observed that the overall efficiency of the five algorithms, in descending order, is PSLLL, HSLLL, PLLLR, PLLL, and HLLL; PSLLL has the highest overall efficiency. The PLLLR algorithm attains the highest search efficiency through its further size reduction and has the best stability, which is favorable for improving the search efficiency of the ambiguity.

3.4. Measured Experiment 2

In order to further verify the reduction performance of the algorithms for multiple GNSS systems and higher dimensions, GPS/BDS data of 1210 epochs measured on a simulated railroad track at Southwest Jiaotong University on 16 August 2023 (DOY-228) are selected, with a baseline length of 9.80 m and a sampling interval of 1 s. Figure 8 shows the trend of the ambiguity dimension and ADOP for the 1210 epochs. From the figure, it can be seen that the ambiguity dimension is greater than 36 and the ADOP values are less than 0.07. Therefore, the accuracy of the float ambiguity solution is good.
Figure 9 shows the cumulative distribution functions of the number of basis vector swaps and the reduction time consumption for the five algorithms, from which it can be observed that PLLL and PLLLR have the same number of basis vector swaps. As the ambiguity dimension is close to 40, the variation of the reduction time consumption of PSLLL and HSLLL is small, and their reduction time consumption is significantly smaller than that of the other three algorithms. The reduction time consumption of PLLLR, which adds the extra size reduction, does not increase significantly compared with PLLL, because the extra size reduction has low complexity and its time consumption is basically negligible. There is no difference in the trend of the number of ambiguity candidate points of the five algorithms, which is not shown here.
Table 5 shows the basis vector minimum angle (deg) and Hermite factor of the five algorithms. It can be seen that the average basis vector minimum angle θ of the five algorithms is greater than 45°, and all of them have good reduction effects. The minimum value of θ of the PLLLR algorithm is greater than 45°, which indicates that PLLLR has the best robustness in avoiding the reduced basis of poor orthogonality. The Hermite factor κ of PLLLR and PLLL is basically the same, and the relative error is negligible. The superiority of the PLLLR and PLLL algorithms cannot be judged from the Hermite factor κ alone.
Table 6 shows the solution time consumption of the five algorithms (reduction time, search time, and total time); the conclusions are consistent with those of Table 4 in Measured Experiment 1 and will not be repeated here. Figure 10 illustrates the cumulative distribution functions of the total time consumed in the two measured experiments, from which it can be seen that the HSLLL, PSLLL, and PLLLR algorithms outperform HLLL and PLLL. The difference is that the relative performance of the HSLLL and PLLLR algorithms cannot be ascertained from Measured Experiment 1, whereas Measured Experiment 2 clearly shows that HSLLL is more efficient than PLLLR. This difference is probably related to the ambiguity dimension and to MATLAB timing noise.
In order to illustrate the performance difference between HLLL, HSLLL, PLLL, PSLLL, and PLLLR more clearly, we compare the speed, stability, and computational complexity of the five algorithms, and the results are shown in Table 7.

4. Discussion

The classical LLL algorithm is based on the QR decomposition of the basis matrix by GSO. The computational complexity of GSO is $2n^3$ flops, while the Householder QR decomposition does not require the formation of the orthogonality factor $Q$ during the reduction process and costs $\frac{4}{3} n^3$ flops. In addition, the GSO method has poor numerical properties because there is usually a severe loss of orthogonality in the computation of the orthogonality factor $Q$. Therefore, in this paper, the Householder QR decomposition is utilized instead of the conventional GSO decomposition, with lower computational complexity and better numerical stability. We propose corresponding improved algorithms based on the HLLL and PLLL algorithms, and verify the validity of the methods and the performance of the reduction through simulation and measured experiments. Figure 2, Figure 6a, and Figure 9a show the number of basis vector swaps for the simulation and measured experiments, from which it can be seen that PSLLL has the smallest number of basis vector swaps, and PLLLR has the same number of basis vector swaps as PLLL. From the reduction time consumption of the different algorithms in Figure 3, Figure 6b, and Figure 9b, it can be seen that the reduction time consumption of PSLLL and HSLLL is less than that of the other algorithms, whereas the extra size reduction makes the reduction time of PLLLR slightly higher than that of PLLL. Combined with the cumulative distribution functions of the total time consumption (the sum of the reduction time and the search time) of the two measured experiments in Figure 10, it can be seen that although the PLLLR reduction time consumption is slightly higher than that of PLLL, the further size reduction of the R matrix after exchanging the basis vectors greatly shortens the search time, which improves the overall efficiency of the ambiguity resolution. Figure 4 and Figure 7 compare the five algorithms in terms of the number of ambiguity candidate points: the number of ambiguity candidate points is exactly the same for PLLLR and PLLL, while the other algorithms differ slightly. Table 4 and Table 6 show the solution time consumption of the five algorithms, where PSLLL has the fastest reduction and PLLLR has the best stability. Table 7 summarizes the performance differences of the five algorithms.
There is a close correlation between lattice basis orthogonality and basis vector length, and the purpose of the ambiguity lattice basis reduction is precisely to better solve the CVP on the lattice. In measuring the performance of the lattice basis reduction, the extent of orthogonality between the reduced basis vectors cannot be intuitively determined from the orthogonality defect indicator. We introduce the minimum angle $\theta$ among the reduced basis vectors as an alternative to overcome this drawback. It can be found through Equation (20) that a good lattice basis reduction algorithm should ensure that $\theta$ is greater than 45°. As can be seen from the minimum angles among the basis vectors in Table 1, Table 2, Table 3 and Table 5, the minimum values of $\theta$ for the PLLLR algorithm are all greater than 45°, which suggests that it is the most robust in terms of avoiding a reduced basis with poor orthogonality. The Hermite factor reflects well the property of the first basis vector of the lattice basis reduction, that is, whether the length of the first basis vector is short enough. From the Hermite factor statistics in Table 1, Table 2, Table 3 and Table 5, it can be observed that HSLLL and PSLLL outperform the other three algorithms in terms of reduction performance. PLLLR and PLLL have basically the same Hermite factors, with negligible relative errors, so it is difficult to evaluate their relative performance from the Hermite factor.

5. Conclusions

In this paper, addressing the high dimensionality and high accuracy requirements of ambiguity resolution, and based on an analysis of the LLL reduction algorithm, we introduce the minimum-column-pivoting Householder QR decomposition, partial size reduction, and a relaxed basis vector exchange condition to improve the regular LLL algorithm. In order to visualize the extent of the orthogonality of the basis vectors, the minimum angle between the basis vectors is used to replace the conventional orthogonality defect, and the quality of the reduced basis size reduction is evaluated by the Hermite factor. The effectiveness of the improved algorithms and the reduction effect are verified with simulated and measured data. The experimental results show that the improved algorithms effectively reduce the size reduction operations and the number of basis vector exchanges in the process of lattice basis reduction, and can obtain a better reduction effect, which significantly improves the reduction performance of the LLL algorithm. HSLLL and PSLLL have a better reduction effect but are slightly less stable in the lattice basis reduction. The PLLLR algorithm spends a small amount of extra reduction time but improves the search efficiency of the ambiguity, which effectively improves the overall efficiency of the ambiguity resolution.

Author Contributions

Conceptualization, X.L. and Y.X.; methodology, X.L. and Y.X.; software, X.L.; validation, X.L., Y.X., W.C., S.X. and R.Z.; formal analysis, X.L. and Y.X.; investigation, Y.X.; resources, X.L.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and Y.X.; visualization, X.L.; supervision, Y.X.; project administration, Y.X.; funding acquisition, Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sichuan Science and Technology Program (Funder Name: Yongliang Xiong. Grant No. 2022YFG0169), and the National Natural Science Foundation of China (Funder Name: Yongliang Xiong. Grant No. 41674028 and 41274044).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset supporting this research can be found at the NOAA’s National Geodetic Survey (NGS).

Acknowledgments

The authors thank the NOAA’s National Geodetic Survey (NGS) for their products and datasets, as well as the researchers who provided the open-source software.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Teunissen, P.J.G. Least-squares estimation of the integer GPS ambiguities. In Proceedings of the Invited Lecture, Section IV Theory and Methodology, IAG General Meeting, Beijing, China, 8–13 August 1993; p. 16. [Google Scholar]
  2. Teunissen, P.J.G. The least-square ambiguity decorrelation adjustment: A method for fast GPS integer ambiguity estimation. J. Geod. 1995, 70, 65–82. [Google Scholar] [CrossRef]
  3. Liu, L.T.; Hsu, H.T.; Zhu, Y.Z.; Ou, J.K. A new approach to GPS ambiguity decorrelation. J. Geod. 1999, 73, 478–490. [Google Scholar] [CrossRef]
  4. Xu, P. Random simulation and GPS decorrelation. J. Geod. 2001, 75, 408–423. [Google Scholar] [CrossRef]
  5. Xu, P. Cholesky-based reduction for the weighted integer least squares problem. J. Geod. 2012, 86, 35–52. [Google Scholar] [CrossRef]
  6. Zhou, Y. A new practical approach to GNSS high-dimensional ambiguity decorrelation. GPS Solut. 2011, 15, 325–331. [Google Scholar] [CrossRef]
  7. Zhou, Y.; He, Z. Variance reduction of GNSS ambiguity in (inverse) paired Cholesky decorrelation transformation. GPS Solut. 2013, 18, 1–9. [Google Scholar]
  8. Chang, X.W.; Yang, X.; Zhou, T. MLAMBDA: A modified LAMBDA method for integer least-squares estimation. J. Geod. 2005, 79, 552–565. [Google Scholar] [CrossRef]
  9. Liu, J.; Yu, X.; Zhang, X. GNSS ambiguity resolution using the lattice theory. Acta Geod. Cartogr. Sin. 2012, 41, 636–645. [Google Scholar]
  10. Lv, H. Research on Key Methods of GNSS Integer Ambiguity Estimation; Information Engineering University: Zhengzhou, China, 2019. [Google Scholar]
  11. Lenstra, A.K.; Lenstra, H.W.; Lovász, L. Factoring polynomials with rational coefficients. Math. Ann. 1982, 261, 515–534. [Google Scholar] [CrossRef]
  12. Hassibi, A.; Boyd, S. Integer parameter estimation in linear models with applications to GPS. IEEE Trans. Signal Process. 1998, 46, 2938–2952. [Google Scholar] [CrossRef]
  13. Grafarend, E.W. Mixed integer-real valued adjustment (IRA) problems: GPS initial cycle ambiguity resolution by means of the LLL algorithm. GPS Solut. 2000, 4, 31–44. [Google Scholar] [CrossRef]
  14. Lou, L. One modified LLL algorithm in GPS decorrelation. J. Tongji Univ. 2004, 32, 237–241. [Google Scholar]
  15. Liu, Z.; He, X. An improved LLL algorithm for GPS ambiguity solution. Acta Geod. Cartogr. Sin. 2007, 36, 286–289. [Google Scholar]
  16. Yang, R.; Hua, X.; Li, Z.; Wu, J. An improved LLL algorithm for GPS ambiguity solution. Geomat. Inf. Sci. Wuhan Univ. 2010, 35, 21–24. [Google Scholar]
  17. Fan, L.; Zhai, G.; Chai, H. Ambiguity decorrelation with Integer block orthogonalization algorithm. Acta Geod. Cartogr. Sin. 2014, 43, 818–826. [Google Scholar]
  18. Xie, K.; Chai, H.; Fan, L.; Pan, Z. An improved LLL ambiguity decorrelation algorithm. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 1363–1368. [Google Scholar]
  19. Jazaeri, S.; Amiri-Simkooei, A.R.; Sharifi, M.A. Fast integer least-squares estimation for GNSS high-dimensional ambiguity resolution using lattice theory. J. Geod. 2012, 86, 123–136. [Google Scholar] [CrossRef]
  20. Ling, C.; Howgrave-Graham, N. Effective LLL reduction for lattice decoding. In Proceedings of the IEEE International Symposium on Information Theory, Nice, France, 24–29 June 2007. [Google Scholar]
  21. Xie, X.; Chang, X.W.; Borno, M.A. Partial LLL reduction. In Proceedings of the IEEE GLOBECOM 2011, Houston, TX, USA, 5–9 December 2011. [Google Scholar]
  22. Lu, L.; Liu, W.; Li, J. An effective LLL reduction algorithm. Geomat. Inf. Sci. Wuhan Univ. 2016, 41, 1118–1124. [Google Scholar]
  23. Lv, H.; Lv, Z.; Zhai, S. Improved LLL ambiguity reduction algorithm. J. Chin. Inert. Technol. 2017, 25, 611–617. [Google Scholar]
  24. Li, K.; Tian, C.; Jiao, Y.; Yue, Z. Improved HLLL lattice basis reduction algorithm to solve GNSS integer ambiguity. Int. J. Aerosp. Eng. 2023, 2023, 1–8. [Google Scholar] [CrossRef]
  25. Lannes, A. On the theoretical link between LLL-reduction and LAMBDA-decorrelation. J. Geod. 2013, 87, 323–335. [Google Scholar] [CrossRef]
  26. Borno, M.A.; Chang, X.W.; Xie, X.H. On ‘decorrelation’ in solving integer least-squares problems for ambiguity determination. Emp. Surv. Rev. 2014, 46, 37–49. [Google Scholar] [CrossRef]
  27. Jazaeri, S.; Amiri-Simkooei, A.R.; Sharifi, M.A. On lattice reduction algorithms for solving weighted integer least squares problems: Comparative study. GPS Solut. 2014, 18, 105–114. [Google Scholar] [CrossRef]
  28. Lu, L.; Liu, W.; Li, J. Impact of decorrelation on search efficiency of ambiguity resolution. Acta Geod. Cartogr. Sin. 2015, 44, 481–487. [Google Scholar]
  29. Teunissen, P.J.G. A new method for fast carrier phase ambiguity estimation. In Proceedings of the Position Location & Navigation Symposium, Las Vegas, NV, USA, 11–15 April 1994. [Google Scholar]
  30. Teunissen, P.J.G. Integer least-squares theory for the GNSS compass. J. Geod. 2010, 84, 433–447. [Google Scholar] [CrossRef]
  31. Xu, P.; Cannon, E.; Lachapelle, G. Mixed integer programming for the resolution of GPS carrier phase ambiguities. arXiv 2010, arXiv:1010.1052. [Google Scholar]
  32. Teunissen, P.J.G. The LAMBDA method for the GNSS compass. Artif. Satell. 2006, 41, 89–103. [Google Scholar] [CrossRef]
  33. Liu, W.; Lu, L.; Shan, H. A new block processing algorithm of LLL for fast high-dimension ambiguity resolution. Acta Geod. Cartogr. Sin. 2016, 45, 147–156. [Google Scholar]
  34. Schnorr, C.P. Fast LLL-type lattice reduction. Inf. Comput. 2006, 204, 1–25. [Google Scholar] [CrossRef]
  35. Golub, G.H.; Loan, C.F.V. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  36. Gestner, B.; Zhang, W.; Ma, X.; Anderson, D.V. Lattice reduction for MIMO detection: From theoretical analysis to hardware realization. IEEE Trans. Circuits Syst. I Regul. Pap. 2011, 58, 813–826. [Google Scholar] [CrossRef]
  37. Nguyen, P.Q.; Stehlé, D. LLL on the average. In Proceedings of the Algorithmic Number Theory, 7th International Symposium, ANTS-VII, Berlin, Germany, 23–28 July 2006. [Google Scholar]
  38. Xu, P. Experimental quality evaluation of lattice basis reduction methods for decorrelating low-dimensional integer least squares problems. Eurasip J. Adv. Signal Process. 2013, 2013, 1–29. [Google Scholar] [CrossRef]
  39. Teunissen, P.J.G. Success probability of integer GPS ambiguity rounding and bootstrapping. J. Geod. 1998, 72, 606–612. [Google Scholar] [CrossRef]
  40. Nguyen, P.Q.; Vallée, B. The LLL Algorithm: Survey and Applications; Springer Publishing Company: Berlin, Germany, 2010. [Google Scholar]
  41. Fontein, F.; Schneider, M.; Wagner, U. PotLLL: A polynomial time version of LLL with deep insertions. Des. Codes Cryptogr. 2014, 73, 355–368. [Google Scholar] [CrossRef]
  42. Odijk, D.; Teunissen, P.J.G. ADOP in closed form for a hierarchy of multi-frequency single-baseline GNSS models. J. Geod. 2008, 82, 473–492. [Google Scholar] [CrossRef]
Figure 1. (a) Flow chart of HSLLL algorithm; (b) Flow chart of PSLLL algorithm.
Figure 2. (a) Number of basis vector swaps in different dimensions of Scheme 1; (b) Number of basis vector swaps in different dimensions of Scheme 2.
Figure 3. (a) Reduction time consumption in different dimensions of Scheme 1; (b) Reduction time consumption in different dimensions of Scheme 2.
Figure 4. (a) The number of search candidate points in different dimensions of Scheme 1; (b) The number of search candidate points in different dimensions of Scheme 2.
Figure 5. Ambiguity dimensions and ADOP values for DOY-074.
Figure 6. (a) Plot of the cumulative distribution functions of the number of basis vector swaps for the five algorithms in the first 200 epochs; (b) Plot of the cumulative distribution functions of the reduction time consumption for the five algorithms in the first 200 epochs.
Figure 7. Number of ambiguity candidate points for the five algorithms in the first 200 epochs.
Figure 8. Trend of ambiguity dimension and ADOP for 1210 epochs of DOY-228.
Figure 9. (a) Plot of the cumulative distribution functions of the number of basis vector swaps for the five algorithms; (b) Plot of the cumulative distribution functions of the reduction time consumption for the five algorithms.
Figure 10. Plot of the cumulative distribution functions of the total time consumed by the five algorithms for the two measured experiments.
Table 1. Basis vector minimum angle (deg) and Hermite factor for the five algorithms of Scheme 1.

Methods   HLLL              HSLLL             PLLL              PSLLL             PLLLR
          θ        κ        θ        κ        θ        κ        θ        κ        θ        κ
Max       65.0848  0.7704   72.4634  0.6544   68.6206  0.7537   65.6931  0.6544   68.6206  0.7537
Min       41.1187  0.6279   46.2028  0.6277   43.5084  0.6277   41.3540  0.6277   45.6056  0.6277
Mean      51.2901  0.6622   58.7290  0.6355   54.7261  0.6599   52.5564  0.6361   57.1343  0.6598
Table 2. Basis vector minimum angle (deg) and Hermite factor for the five algorithms of Scheme 2.

Methods   HLLL              HSLLL             PLLL              PSLLL             PLLLR
          θ        κ        θ        κ        θ        κ        θ        κ        θ        κ
Max       67.2323  1.0223   67.5014  0.9893   67.3050  0.9893   67.3050  0.9893   67.3050  0.9893
Min       39.3641  0.7985   44.3996  0.4801   45.4427  0.6445   47.4158  0.4801   47.6255  0.6445
Mean      50.0438  0.8652   55.4014  0.7804   54.7020  0.8237   57.0028  0.7705   57.7065  0.8237
Table 3. Basis vector minimum angle (deg) and Hermite factor for the five algorithms of Measured Experiment 1.

Methods   HLLL              HSLLL             PLLL              PSLLL             PLLLR
          θ        κ        θ        κ        θ        κ        θ        κ        θ        κ
Max       52.0347  2.1952   55.8564  2.0917   53.4141  2.1952   52.1806  2.0917   56.5125  2.1952
Min       41.0043  1.1528   44.0579  1.1527   42.1093  1.1528   41.4327  1.1527   45.8418  1.1528
Mean      45.5869  1.7399   50.3149  1.7338   47.6742  1.7383   46.8228  1.7331   50.7791  1.7383
Table 4. Statistical results of the five algorithms' resolution times for Measured Experiment 1 (ms).

Time               HLLL      HSLLL     PLLL      PSLLL     PLLLR
Reduction  mean    18.7527   15.7789   17.2777   15.3097   17.3245
           max     20.4329   18.8352   18.6567   18.9781   18.4936
Search     mean    2.5271    2.2992    1.6384    2.0732    0.9467
           max     4.9388    4.3730    2.7543    4.7754    2.3275
Total      mean    21.2798   18.0781   18.9161   17.3829   18.2712
           max     24.8507   22.8804   20.8045   22.3506   20.5054
Table 5. Basis vector minimum angle (deg) and Hermite factor for the five algorithms of Measured Experiment 2.

Methods   HLLL              HSLLL             PLLL              PSLLL             PLLLR
          θ        κ        θ        κ        θ        κ        θ        κ        θ        κ
Max       53.1254  2.1676   55.1371  2.0435   56.4574  2.1676   55.6064  2.0434   58.1738  2.1675
Min       42.2783  1.1366   43.4854  1.1364   44.0098  1.1365   43.7539  1.1364   46.1105  1.1365
Mean      46.0438  1.7013   48.5130  1.6954   48.1327  1.6977   49.6103  1.6912   51.0883  1.6977
Table 6. Statistical results of the five algorithms' resolution times for Measured Experiment 2 (ms).

Time               HLLL      HSLLL     PLLL      PSLLL     PLLLR
Reduction  mean    27.1813   18.0880   21.3378   17.8412   21.3639
           max     29.0811   28.6279   22.6296   27.8739   22.5280
Search     mean    3.8941    3.3249    2.8544    2.9592    1.7886
           max     6.1841    5.2118    3.2176    5.1017    2.8402
Total      mean    31.0754   21.4129   24.1922   20.8004   23.1525
           max     34.2219   30.5801   25.6345   29.6266   24.5172
Table 7. Comparison of the five algorithms.

Method            HLLL                               HSLLL                PLLL           PSLLL                PLLLR
Reduction speed   Slow                               Faster               Fast           Fastest              Faster
Search speed      Slow                               Fast                 Faster         Faster               Fastest
Stability         Good                               Good in most cases   Better         Good in most cases   Best
Complexity        O(4/3·n³ + (n⁵ + n⁴)·log(α/β)) *   Same as HLLL         Same as HLLL   Same as HLLL         O(7/3·n³ + (n⁵ + n⁴)·log(α/β)) *

* α = max‖gᵢ‖ and β = min‖G⁻ᵀa‖.
