Article

A Variable Structure Multiple-Model Estimation Algorithm Aided by Center Scaling

School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(10), 2257; https://doi.org/10.3390/electronics12102257
Submission received: 27 March 2023 / Revised: 11 May 2023 / Accepted: 12 May 2023 / Published: 16 May 2023
(This article belongs to the Special Issue Advances in Intelligent Data Analysis and Its Applications)

Abstract

The accuracy of target tracking with the conventional interacting multiple-model (IMM) algorithm is limited. In this paper, a new variable-structure interacting multiple-model (VSIMM) algorithm aided by center scaling (VSIMM-CS) is proposed to solve this problem. The novel VSIMM-CS has two main steps. First, we estimate the approximate location of the true model. This step is aided by the expected-mode augmentation (EMA) algorithm, and a new method, the expected model optimization method, is proposed to further enhance the accuracy of EMA. Second, we move the original model set so that the current true model becomes the symmetry center of the current model set, and the model set is scaled down by a certain percentage. Considering the symmetry and linearity of the system, the errors produced by symmetric models offset each other well; furthermore, narrowing the distance between the true model and the default models is another effective way to reduce the error. The second step is based on two theories: the symmetric model set optimization method and the proportional reduction optimization method. All proposed theories aim to minimize errors as much as possible, and simulation results confirm the correctness and effectiveness of the proposed methods.

1. Introduction

Multiple-model (MM) estimation is an advanced method for solving many problems, especially the target tracking problem [1,2]. Compared with traditional algorithms combined with radar systems [3,4], the power of MM comes from the teamwork of multiple parallel estimators [5], rather than a single estimator. The MM approach has been studied for more than fifty years, and it was first proposed in [6,7]. It has a mature framework [8,9], and its parallel structure of Bayesian filters has demonstrated excellent performance. Usually, a model set designed in advance or generated in real time is used to cover the possible true models. The system dynamics can then be described as a hybrid system [10,11] with discrete modes and continuous states. The model set used during the tracking process strongly influences the estimation results [12]: a better model set usually leads to more precise tracking. The overall estimate is the combination of the estimates from all parallel-running Bayesian filters [13,14]. In recent decades, MM methods have developed rapidly [15], and MM has been used widely because of its methodological completeness and ease of implementation. Its development has gone through three stages [16]: static MM (SMM), interacting MM (IMM), and variable-structure interacting MM (VSIMM).
Compared with SMM, the biggest advantage of IMM, proposed by Blom and Bar-Shalom [17], is that it considers the jumps between models [6,7]. Many refinements of IMM have shown that tracking accuracy can be improved without increasing the computational burden [18,19,20]. The reweighted interacting multiple-model algorithm [21], a recursive implementation of a maximum a posteriori (MAP) state-sequence estimator, is a competitive alternative to the popular IMM algorithm and GPB methods. To handle non-Gaussian white noise, the interacting multiple model based on the maximum correntropy Kalman filter (IMM-MCKF) [22] combines IMM with the MCKF to deal with impulsive noise [23]; it also changes the kernel to overcome a small kernel bandwidth. Emerging techniques such as neural networks [24] combined with MM have also developed rapidly; for example, the multiple-model tracking algorithm with multiple process-noise switching is a reliable way to improve tracking precision [25]. However, the inherent defect of IMM, its fixed model structure, limits its development to a large extent. In many real-world scenarios, the true mode space is not discrete [26] but continuous and uncountable, so it is hard to design a model set that covers all possible models. It may seem natural to design a model set containing a huge number of models to fit the true mode space perfectly; however, too many models lead to competition between models and can perform even worse than a few models [5]. In addition, the surging computational burden is another severe problem. Loosely speaking, the major objective of MM is to achieve better tracking performance with as few models as possible.
Fixed-structure IMM (FIMM) and SMM both use a fixed model set at all times, without considering the properties of the true mode space. VSIMM [27,28,29] is a successful method that does. On the one hand, it uses a limited number of models to reduce the computational burden; on the other hand, the model set generated at the current time is closer to the true mode than the original model set. In general, VSIMM adapts to the continuous mode space, so tracking precision and an acceptable computational burden can both be achieved. The key problem of VSIMM is how to design a highly cost-effective model-set adaptation (MSA) mechanism. The model-group switching (MGS) algorithm has been widely used in a large class of problems with hybrid (continuous and discrete) uncertainties [30]. Compared with FIMM, the main advantage of MGS is a significant reduction in computation, but its performance improvement is limited. Expected-mode augmentation covers a large continuous mode space with a relatively small number of models at a given accuracy level [31]. Although the expected model is closer to the true mode than any other model, the results may still deteriorate because it is not precise enough. Similarly, the equivalent-model augmentation algorithm (EqMA) augments the model set with a variable model intended to best match the unknown true model [32]; compared with IMM and EMA, EqMA has stronger timeliness. The likely-model set (LMS) algorithm is another adaptive VSIMM method, and its cost-effectiveness is better than that of many other methods [33]; however, the three algorithms discussed in [33] are more difficult to implement. A method using hypersphere-symmetric and axis-symmetric model subsets has been presented as the fundamental model subset for multiple-model estimation with fixed structure, variable structure, and moving bank [34]. Different kinds of VSIMM algorithms provide reasonable ways to obtain model sets, and MSA is still an open topic of study.
With the gradual upgrading of radar detection systems, the tracking of maneuvering targets requires more accurate results. In this paper, a variable-structure multiple-model algorithm aided by center scaling (VSIMM-CS) is proposed to provide a rational method for generating model sets in real time; it meets the accuracy requirements well while adding almost no computation. Considering the properties of a linear system [35] and the effect of the distance between the model set and the true model on the final error, we provide a symmetric model set optimization method and a proportional reduction optimization method. The Kalman filter [36] is a linear estimator, so if the true model lies at the geometric center of the model set, any two symmetric models produce opposite errors; hence, for a model set with an even number of models, the overall error converges to 0. From another point of view, the error of a single filter is related to the distance between its model and the true model, so it is reasonable to scale the model set down by a certain percentage. These two theories assume knowledge of the current true model; therefore, to find a model closer to the true model, the expected model optimization method is proposed, whose modified model is closer to the real model than the original expected model. Compared with existing methods such as FIMM, VSIMM-CS has an excellent cost-performance ratio without extra computation in the design of the initial model set. Compared with many VSIMM algorithms, such as the LMS algorithm proposed in [33], VSIMM-CS is easier to implement at the same computational cost. In general, VSIMM-CS shows high precision and universality, and it is also easy to implement.
The remaining parts of the paper are organized as follows: Section 2 introduces the processes of the IMM and VSIMM. Section 3 provides three optimization methods, the symmetric model set optimization method, the proportional reduction optimization method, and the expected model optimization method, and proves their feasibility. Section 4 presents the process of VSIMM-CS. Section 5 presents the simulation results. Finally, Section 6 provides the conclusion.

2. Multiple-Model Algorithm

In this section, the processes of FIMM and VSIMM are briefly introduced.

2.1. The Process of FIMM

If the model set $M=\{m^{(1)},m^{(2)},\ldots,m^{(N)}\}$ is determined in advance, the model probability transition matrix $\Pi$ is also determined; the transition probability from the $i$-th model $m^{(i)}$ to the $j$-th model $m^{(j)}$ is $\pi_{ij}$. If a multiple-model system has $N$ models, each of them can be denoted as
$$\begin{aligned} x_k^{(i)} &= F_{k-1}^{(i)}x_{k-1}^{(i)}+G_{k-1}^{(i)}\big(a_{k-1}+w_{k-1}^{(i)}\big)\\ z_k^{(i)} &= H_k^{(i)}x_k^{(i)}+v_k^{(i)} \end{aligned}\qquad i=1,\ldots,N$$
where $x=(x,\dot x,y,\dot y)^{\mathrm T}$ is the target state; $a=(a_x,a_y)^{\mathrm T}$ is the acceleration; the process noise is $w_k\sim\mathcal N(0,Q)$; $z$ is the measurement and $v\sim\mathcal N(0,R)$ is its random measurement error; $F$ is the state transition matrix, $G$ is the acceleration input matrix, and $H$ is the observation matrix.
Assume that the best state estimate, the state estimation covariance matrix, and the model probability of $m^{(i)}$ at time $k$ are $\hat x_{k|k}^{(i)}$, $p_{k|k}^{(i)}$ and $u_k^{(i)}$, respectively. Then, the overall state estimate and state estimation covariance are
$$\hat x_{k|k}=\sum_i \hat x_{k|k}^{(i)}u_k^{(i)}$$
$$p_{k|k}=\sum_i u_k^{(i)}\big[p_{k|k}^{(i)}+(\hat x_{k|k}-\hat x_{k|k}^{(i)})(\hat x_{k|k}-\hat x_{k|k}^{(i)})^{\mathrm T}\big]$$
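The combination step above can be sketched in a few lines of NumPy. This is an illustrative implementation of Equations (2) and (3), not the authors' code, and the function name is our own.

```python
import numpy as np

def fuse_estimates(x_hats, P_hats, mu):
    """Combine per-model estimates into the overall IMM output.

    x_hats: (N, n) per-model state estimates x_hat_{k|k}^(i)
    P_hats: (N, n, n) per-model covariances p_{k|k}^(i)
    mu:     (N,) model probabilities u_k^(i), summing to 1
    """
    x = mu @ x_hats                            # probability-weighted mean state
    P = np.zeros_like(P_hats[0])
    for x_i, P_i, u_i in zip(x_hats, P_hats, mu):
        d = (x - x_i)[:, None]
        P += u_i * (P_i + d @ d.T)             # covariance plus spread-of-means term
    return x, P
```

Note that the second term of the covariance accounts for the disagreement between the per-model estimates, so the fused covariance is larger when the filters disagree.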

2.2. The Process of VSIMM

The biggest difference between FIMM and VSIMM is whether the model set changes during the target tracking process. For a maneuvering target, the mode space of acceleration $S$ is very large and even uncountable, and in most cases the real mode does not fall within the model set. Thus, using a limited model set $M$ to cover $S$ exactly is unreasonable. Simply increasing the number of models does not improve the results and has a high probability of degrading performance, even below that of very few models. The significant advantages of VSIMM are high precision, less computation, and strong adaptability.
Suppose the current model set $M_k=\{m_k^{(j)},\,j=1,\ldots,n_k\}$ is obtained through a specific method; the model set $M_k$ at any time is included in the total model set $M$. Then, the model transition probability is $\pi_{ij}=p\{m_k^{(j)}|m_{k-1}^{(i)}\}=p\{m_k^{(j)}|m_{k-1}^{(i)},s_k\in M\}$. The overall state estimate and state estimation covariance based on $\hat x_{k|k}^{(i)}|_{M_k}$, $p_{k|k}^{(i)}|_{M_k}$ and $u_k^{(i)}|_{M_k}$ are, respectively,
$$\hat x_{k|k}=\sum_{m^{(i)}\in M_k}\hat x_{k|k}^{(i)}|_{M_k}\,u_k^{(i)}|_{M_k}$$
$$p_{k|k}=\sum_{m^{(i)}\in M_k}\big[p_{k|k}^{(i)}|_{M_k}+(\hat x_{k|k}^{(i)}|_{M_k}-\hat x_{k|k})(\hat x_{k|k}^{(i)}|_{M_k}-\hat x_{k|k})^{\mathrm T}\big]u_{k|k}^{(i)}|_{M_k}$$

3. Model Optimization Method

In this section, we introduce three methods to optimize the model sets, including symmetric model set optimization method, proportional reduction optimization method and expected model optimization method.

3.1. Symmetric Model Set Optimization Method

A linear system can be described as shown in Figure 1: for input $X$ the output is $AX+B$, and for input $-X$ the output is $-AX+B$. Clearly, the sum of the outputs for $X$ and $-X$ is $2B$. If $AX$ is the error produced by the system, two opposite inputs cancel the error well.
Assume two model sets $M^{(1)}=\{m_1^{(1)},m_2^{(1)},\ldots,m_N^{(1)}\}$ and $M^{(2)}=\{m_1^{(2)},m_2^{(2)},\ldots,m_N^{(2)}\}$, and let the current motion mode be $s_k$. $M^{(1)}$ and $M^{(2)}$ are centrosymmetric and axially symmetric, respectively, but their centers of symmetry are different: the center of $M^{(1)}$ is $(0,0)$, while that of $M^{(2)}$ is $s_k$. The connection between $M^{(1)}$ and $M^{(2)}$, as shown in Figure 2, is
$$m_i^{(1)}+s_k=m_i^{(2)},\qquad i=1,\ldots,N$$
For $M^{(1)}$, the distance between $s_k$ and each $m_i^{(1)}$ is different; ordering the models by distance gives
$$|m_{q_1}^{(1)}-s_k|\le|m_{q_2}^{(1)}-s_k|\le\cdots\le|m_{q_N}^{(1)}-s_k|$$
For the IMM algorithm, the model probability $u^{(i)}$ is inversely related to the distance $|m^{(i)}-s_k|$:
$$|m^{(i)}-s_k|\propto\frac{1}{u^{(i)}}$$
Then, the following relation can be determined:
$$u_{q_1}^{(1)}\ge u_{q_2}^{(1)}\ge\cdots\ge u_{q_N}^{(1)}$$
Each model $m_i^{(1)}$ has a different error $\hat x_{k|k}^{(i)}|_{M^{(1)}}-s_k=\delta_i^{(1)}$; thus, the overall estimation error is
$$\mathrm{ERROR}^{(1)}=\sum_i u_i^{(1)}\delta_i^{(1)}$$
Clearly, according to (8), the overall estimation error is reduced to a certain extent. However, although the system is linear, $M^{(1)}$ is asymmetric with respect to $s_k$, so the error of each $m_i^{(1)}$ is not well canceled.
Since $M^{(2)}$ holds symmetric properties, the following relationships can be obtained:
$$\begin{aligned} m_{p_1(1)}^{(2)}-s_k&=\cdots=m_{p_1(N_1/2)}^{(2)}-s_k=s_k-m_{p_1(N_1/2+1)}^{(2)}=\cdots=s_k-m_{p_1(N_1)}^{(2)}=\delta_{p_1}^{(2)}\\ &\;\;\vdots\\ m_{p_n(1)}^{(2)}-s_k&=\cdots=m_{p_n(N_n/2)}^{(2)}-s_k=s_k-m_{p_n(N_n/2+1)}^{(2)}=\cdots=s_k-m_{p_n(N_n)}^{(2)}=\delta_{p_n}^{(2)} \end{aligned}$$
where $N=\sum_i N_i$ and each $N_i$ is an even number. In this linear system, $m_{p_j(i)}^{(2)}$ and $m_{p_j(i+N_j/2)}^{(2)}$ produce two opposite errors $\varepsilon_{p_j}^{(2)}$ and $-\varepsilon_{p_j}^{(2)}$, with
$$\varepsilon_{p_j}^{(2)}=\hat x_{k|k}^{(p_j)}|_{M^{(2)}}-s_k,\qquad -\varepsilon_{p_j}^{(2)}=\hat x_{k|k}^{(p_j+N_j/2)}|_{M^{(2)}}-s_k$$
Theoretically, if $M^{(2)}$ is strictly symmetric with respect to $s_k$, the overall error equals 0 when system noise is neglected.
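The cancellation argument can be checked numerically with a toy linear map. The matrices `A`, `b`, the mode `s_k`, and the offsets below are arbitrary values chosen purely for illustration.

```python
import numpy as np

# Arbitrary linear system y = A x + b (illustrative values)
A = np.array([[0.9, 0.1], [0.0, 1.1]])
b = np.array([0.5, -0.2])
f = lambda x: A @ x + b

s_k = np.array([3.0, -1.0])                      # true mode, center of symmetry
offsets = [np.array([2.0, 1.0]), np.array([-1.0, 4.0])]
# Model set symmetric about s_k: each offset appears with both signs
models = [s_k + d for d in offsets] + [s_k - d for d in offsets]

# Error of each model relative to the response at the true mode:
# f(s_k + d) - f(s_k) = A d, and f(s_k - d) - f(s_k) = -A d, so the sum is 0
errors = [f(m) - f(s_k) for m in models]
total = sum(errors)
```

Because the map is linear, the errors of any symmetric pair are exactly opposite, so `total` vanishes regardless of the particular `A`, `b`, or offsets.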

3.2. Proportional Reduction Optimization Method

Suppose two model sets $M^{(1)}=\{m_1^{(1)},m_2^{(1)},\ldots,m_N^{(1)}\}$ and $M^{(2)}=\{m_1^{(2)},m_2^{(2)},\ldots,m_N^{(2)}\}$, and let the current mode be $s_k$. The relationship between $M^{(1)}$ and $M^{(2)}$ is
$$\frac{m_i^{(1)}-s_k}{m_i^{(2)}-s_k}=\alpha>1$$
Model sets obeying the equation above can be called relative-position-invariant model sets. Their most important feature is that the position of the models relative to $s_k$ does not change, as shown in Figure 3. It is obvious that $M^{(2)}$ is likely to perform better than $M^{(1)}$. However, the precondition is that the topology formed by the model set still includes all possible real modes, as shown in Figure 4. If the topology is tangent to the true mode space $S$, this situation is called the critical point, namely, $\alpha=\alpha_0$.
Thus, if some noise can be tolerated, the following relationships hold:
$$u_1^{(1)}=u_1^{(2)},\quad u_2^{(1)}=u_2^{(2)},\quad\ldots,\quad u_N^{(1)}=u_N^{(2)}$$
For IMM, the distance between $m_i$ and $s_k$ directly affects the final error:
$$|m_i-s_k|\propto|\hat x_{k|k}^{(i)}-s_k|$$
From (13), if the value of $\alpha$ is suitable, it is easy to deduce that
$$|\hat x_{k|k}^{(i)}|_{M^{(2)}}-s_k|<|\hat x_{k|k}^{(i)}|_{M^{(1)}}-s_k|$$
so the overall estimation errors of $M^{(1)}$ and $M^{(2)}$ satisfy
$$|\mathrm{ERROR}^{(2)}|<|\mathrm{ERROR}^{(1)}|$$
Generally, if the model set is scaled down, its performance improves. However, the scale should not be too small, nor should it exceed the critical point; otherwise, the covariance matrix may become singular.
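Relation (13) can be sketched as follows; `shrink_toward` is a hypothetical helper name, and the concrete numbers are illustrative, not taken from the paper's simulations.

```python
import numpy as np

def shrink_toward(models, s_k, alpha):
    """Build M2 from M1 so that (m1_i - s_k) / (m2_i - s_k) = alpha,
    i.e. every model keeps its direction from s_k but moves closer
    to it by the factor alpha (alpha > 1)."""
    models = np.asarray(models, dtype=float)
    return s_k + (models - s_k) / alpha

M1 = np.array([[10.0, 10.0], [-10.0, 10.0], [-10.0, -10.0], [10.0, -10.0]])
s_k = np.array([2.0, 1.0])
M2 = shrink_toward(M1, s_k, alpha=2.0)   # each model halves its distance to s_k
```

Since every model moves along its own ray toward $s_k$, the relative geometry of the set is preserved, which is exactly the relative-position-invariant property described above.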

3.3. Expected Model Optimization Method

For the theories in Section 3.1 and Section 3.2 to be feasible, the common precondition is that the current mode $s_k$ is known, or that a model approaching $s_k$ as closely as possible can be found. An effective method to approach the current mode $s_k$, called expected-mode augmentation, is given in [31]. The current expected model $m_e$ is the weighted sum of all the models at the current time:
$$m_e=\sum_i u_i m_i$$
It is closer to $s_k$ than any single model, namely, $|m_e-s_k|<\min\{|m_1-s_k|,|m_2-s_k|,\ldots,|m_N-s_k|\}$. However, $|m_e-s_k|$ is still not small enough, as shown in Figure 5: the expected model $m_e$ may appear anywhere in the expected-model space. Writing $m_e+\gamma=s_k$, where $\gamma$ is the error, it is necessary to find a method to reduce $\gamma$ to a certain extent.
If $m_e$ is scaled a little, letting $m_e$ become $\lambda m_e$ with scaling factor $\lambda$, then (18) is rewritten as
$$s_k-\lambda m_e=\gamma-(\lambda-1)m_e$$
under the conditions
$$\lambda>1,\ \gamma<0\qquad\text{or}\qquad 0<\lambda<1,\ \gamma>0$$
Obviously, the error $\gamma$ is reduced to $\gamma-(\lambda-1)m_e$. If $\lambda$ is chosen reasonably, the error becomes small enough and $\lambda m_e$ becomes close enough to $s_k$, namely, $\lambda m_e-s_k\to 0$. Note that blindly increasing or decreasing $\lambda$ leads to serious mistakes, including even worse performance.
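The expected model and its scaled version can be computed directly. This is a minimal sketch assuming models and probabilities stored as arrays; the function name is ours, not from the paper.

```python
import numpy as np

def expected_model(models, mu, lam=1.0):
    """EMA expected model m_e = sum_i u_i m_i, optionally scaled by
    the factor lambda as in Section 3.3 (lam=1 recovers plain EMA)."""
    m_e = mu @ np.asarray(models, dtype=float)   # probability-weighted sum
    return lam * m_e
```

With the probabilities concentrated on one model, $m_e$ is pulled toward that model, and $\lambda$ then stretches or shrinks it toward $s_k$.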

4. Variable Structure of Interacting Multiple-Model Algorithm Aided by Center Scaling

In this section, we introduce a new VSIMM algorithm: VSIMM-CS. Section 3.1, Section 3.2 and Section 3.3 provided three model optimization theories. The main idea of VSIMM-CS is to find $\lambda m_e$ so as to approach the current real mode $s_k$ as closely as possible, and then to move and scale the original model set $M$ to obtain the current model set $M(k)$. The current model set $M(k)$ achieves better performance than the original model set $M$, whose main function is to locate the expected model $m_e$.
The VSIMM algorithm has a clear framework of inputs and outputs. The process of Section 2.2 can be denoted as
$$VSIMM[M(k-1),M(k)]:\ \{\hat x_{k|k}^i|_{M(k)},\,p_{k|k}^i|_{M(k)},\,u_{k|k}^i|_{M(k)}\}=VSIMM\big(\hat x_{k-1|k-1}^i|_{M(k-1)},\,p_{k-1|k-1}^i|_{M(k-1)},\,u_{k-1|k-1}^i|_{M(k-1)}\big)$$
Obviously, if $M(k)=M(k-1)$, the process reduces to FIMM; thus, FIMM is a special case of VSIMM:
$$FIMM[M]:\ \{\hat x_{k|k}^i|_M,\,p_{k|k}^i|_M,\,u_{k|k}^i|_M\}=FIMM\big(\hat x_{k-1|k-1}^i|_M,\,p_{k-1|k-1}^i|_M,\,u_{k-1|k-1}^i|_M\big)$$
The novel VSIMM-CS always keeps an original model set $M$, which takes part in the whole tracking process, and $M(k)$ is always generated from $M$; $M(k)$ and $M(k-1)$ may be completely different. The steps of VSIMM-CS are shown in Algorithm 1.
Algorithm 1 VSIMM-CS Process
  • S1: Increase the time counter $k$ by 1.
  • S2: Run the $FIMM[M]$ cycle and obtain the outputs $\hat x_{k+1|k+1}^i$, $u_{k+1|k+1}^i$ and $p_{k+1|k+1}^i$ based on the model set $M$.
  • S3: Obtain the expected model $m_e$ using the EMA algorithm, as the weighted sum of all models:
    $$m_e=\sum_i u_{k+1|k+1}^i m_i$$
  • S4: Select suitable values of $\alpha$ and $\lambda$ to generate the current model set $M(k+1)=\alpha\{m_i+\lambda m_e,\ i=1,2,\ldots,N\}$.
  • S5: Run the $VSIMM[M(k),M(k+1)]$ cycle to obtain the final results:
    $$\hat x_{k+1|k+1}=\sum_{m^{(i)}\in M(k+1)}\hat x_{k+1|k+1}^{(i)}|_{M(k+1)}\,u_{k+1}^{(i)}|_{M(k+1)}$$
    $$p_{k+1|k+1}=\sum_{m^{(i)}\in M(k+1)}\big[p_{k+1|k+1}^{(i)}|_{M(k+1)}+(\hat x_{k+1|k+1}^{(i)}|_{M(k+1)}-\hat x_{k+1|k+1})(\hat x_{k+1|k+1}^{(i)}|_{M(k+1)}-\hat x_{k+1|k+1})^{\mathrm T}\big]u_{k+1|k+1}^{(i)}|_{M(k+1)}$$
  • S6: Go to S1.
The algorithm complexity is $T_n=2n^2+16n$, where $n$ is the number of models.
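The model-set generation of step S4 is a one-liner. This sketch applies the formula $M(k+1)=\alpha\{m_i+\lambda m_e\}$ literally, with a function name of our own choosing.

```python
import numpy as np

def generate_model_set(M, m_e, alpha, lam):
    """Step S4 of Algorithm 1: shift every original model by the scaled
    expected model, then scale the whole set by alpha."""
    return alpha * (np.asarray(M, dtype=float) + lam * np.asarray(m_e))
```

In a tracking loop this would be called once per cycle, after the EMA step has produced $m_e$, to supply the model set for the next $VSIMM$ cycle.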
The most important part of the process is model set generation. The values of $\lambda$ and $\alpha$ have a huge impact on the performance of the model set, so it is unreasonable to choose them blindly. The biggest advantage of VSIMM-CS is that the model set is generated in real time without adding a huge computational burden. In addition, if the original model set is designed rationally and the number of models is small, the proposed VSIMM-CS can achieve rewarding results while keeping the computational load within a reasonable range. A rational combination of $\lambda$ and $\alpha$ achieves great performance in terms of precision.

5. Simulation Results

In this section, a reasonable simulation process is given: firstly, the target state, the measurement equations and their specific parameters are presented. Secondly, the original model set is given, including the probability transition matrix. Thirdly, the target motion state and performance criterion are presented. Finally, different simulation results are analyzed.
The target state and the measurement equations are, respectively, given as
x k + 1 = F k ( j ) x k + G k ( j ) a k ( j ) + w k ( j )
z k = H k ( j ) x k + v k ( j )
where $w_k^{(j)}\sim\mathcal N(0,0.01)$, $v_k^{(j)}\sim\mathcal N(0,1250)$, and $a_k^{(j)}$ is the target acceleration input;
$$F=\begin{bmatrix}1&T&0&0\\0&1&0&0\\0&0&1&T\\0&0&0&1\end{bmatrix},\qquad G=\begin{bmatrix}T^2/2&0\\T&0\\0&T^2/2\\0&T\end{bmatrix}$$
where $T$ is the time interval.
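For a general sampling interval $T$, the matrices above can be built as follows; this is a sketch for the state ordering $(x,\dot x,y,\dot y)$ used in Eq. (1), and the helper name is ours.

```python
import numpy as np

def cv_matrices(T):
    """Discrete constant-velocity matrices for the state (x, xdot, y, ydot)
    with acceleration input (a_x, a_y), matching the model of Eq. (1)."""
    F = np.array([[1.0, T,   0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, T  ],
                  [0.0, 0.0, 0.0, 1.0]])
    G = np.array([[0.5 * T**2, 0.0],
                  [T,          0.0],
                  [0.0, 0.5 * T**2],
                  [0.0,          T]])
    return F, G
```

With $T=1$ the entries of $G$ reduce to 0.5 and 1, which appears to match the values used in the simulation.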
The original model set $M$ includes four models:
$$m_1=[10,10],\quad m_2=[-10,10],\quad m_3=[-10,-10],\quad m_4=[10,-10]$$
and its topology structure is shown in Figure 6. Its probability transition matrix is
$$\Pi=\begin{bmatrix}0.85&0.05&0.05&0.05\\0.05&0.85&0.05&0.05\\0.05&0.05&0.85&0.05\\0.05&0.05&0.05&0.85\end{bmatrix}$$
Table 1 lists the ensemble of maneuver trajectories used to compare the existing and proposed algorithms.
If $\bar e$ represents the average position error over the whole process, it can be denoted as
$$\bar e=\frac{1}{k_N-k_1}\sum_{k=k_1}^{k_N}\sqrt{(\hat x_k-x_k)^{\mathrm T}D(\hat x_k-x_k)},\qquad D=\mathrm{diag}(1,0,1,0)$$
where $\hat x_k$ is the best estimate at time $k$, $x_k$ is the target state, $D$ selects the position components, and $k_N=300$. Different $\lambda$ and $\alpha$ produce different $\bar e$, as shown in Table 2; the unit of $\bar e$ is meters, and only the position error is considered.
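The average position error can be computed as below; a minimal sketch assuming $(K,4)$ state arrays with positions in columns 0 and 2, and a function name of our own.

```python
import numpy as np

def mean_position_error(x_hat, x_true):
    """Average position error e_bar over a track: the mean Euclidean
    distance between estimated and true positions, ignoring velocities."""
    d = np.asarray(x_hat) - np.asarray(x_true)
    pos = d[:, [0, 2]]                       # keep only the x and y positions
    return np.linalg.norm(pos, axis=1).mean()
```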
Different combinations of $\lambda$ and $\alpha$ give different performance. Assume $\lambda=0.8,1.9,3,4.1,5.2$ and $\alpha=0.5,0.6,0.7,0.8,0.9$. The root-mean-square errors (RMSE) over 50 Monte Carlo runs are shown in Figure 7.
In most cases, VSIMM-CS shows great target tracking performance, and its errors are far lower than those of FIMM with 4 models (IMM-4). In some cases, the error is close to 0, and the proposed VSIMM-CS achieves nearly perfect performance. Thus, sacrificing a certain amount of computation for high accuracy is an acceptable approach.
When $\lambda=0.8$, as shown in Figure 7a,b, most values of $\alpha$ do not perform better than IMM-4. In particular, $\alpha=0.5$ is the worst and produces twice as many errors as IMM-4. As $\alpha$ increases, the errors reduce gradually; at $\alpha=0.8$ the error curve is the same as that of IMM-4, and at $\alpha=0.9$ it becomes better than IMM-4. Obviously, for $\lambda=0.8$, VSIMM-CS does not show its high precision. When $\lambda=1.9$, in Figure 7c,d, the tracking precision improves compared with $\lambda=0.8$: for $\alpha=0.7$ or $0.8$ the errors are smaller than those of IMM-4, but $\alpha=0.5$ remains the worst. When $\lambda=3$, surprisingly, all values of $\alpha$ except $\alpha=0.5$ perform better than IMM-4; the relative order of the five settings does not change, with $\alpha=0.9$ still the best and $\alpha=0.5$ still the worst, and only $\alpha=0.5$ performs worse than IMM-4. The minimum position error is about two meters, still not close enough to 0, so $\lambda$ has clearly not yet reached its most suitable value. When $\lambda=4.1$, all VSIMM-CS settings outperform IMM-4, even $\alpha=0.5$; $\alpha=0.7$ and $\alpha=0.9$ have similar tracking precision, and $\alpha=0.9$ is no longer the best. For $\alpha=0.7$, the position error is smaller than in any other situation shown in Figure 7, and even when the true model does not equal $(0,0)$, the position error is very close to 0. It seems that the best combination in this system is $\lambda=4.1$ and $\alpha=0.8$; in this case, the conclusions of Section 3 are fully demonstrated. As shown in Figure 7i,j, for $\lambda=5.2$, VSIMM-CS still performs well: all error curves are less than half of IMM-4's, $\alpha=0.6$ has the highest accuracy, and $\alpha=0.7$ is very close to it. However, the best setting of Figure 7g,h does not maintain its advantage in Figure 7i,j.
For $\alpha=0.5$, its precision does not improve as $\lambda$ changes, except when $\lambda=5.2$; in these simulations it is always the worst among the VSIMM-CS settings, mainly because the value of $\alpha$ is too small. Similarly, $\alpha=0.6$ still does not meet the requirements, although it performs better than $\alpha=0.5$. It is worth noting that VSIMM-CS takes twice as much computation as IMM-4, yet in most cases the improvement for $\alpha=0.6$ is very limited; it is even worse than IMM-4 when $\lambda=0.8$ or $1.9$, and only when $\lambda=5.2$ does it perform best, as shown in Figure 7i,j. When $\alpha=0.7$, the precision becomes acceptable: though not the best in all situations, the performance is greatly improved, and it improves further as $\lambda$ increases. When $\alpha=0.8$ and $\lambda\le 3$, it performs worse only than $\alpha=0.9$; when $\lambda=4.1$, it becomes the best, but if $\lambda$ is too large, as in Figure 7i,j, its advantage disappears. When $\alpha=0.9$, its performance is always the best in Figure 7a-f; however, when $\lambda=5.2$, such a large value of $\lambda$ deteriorates the performance.
Table 2 also shows the average error $\bar e$ of the unworkable combinations. If $\alpha<0.7$, the critical-point condition is not met at certain times. When $\alpha=0.7$, $0.8$, or $0.9$, there is a high probability that the proposed algorithm achieves better performance.
In general, VSIMM-CS achieves a big performance boost compared with FIMM if $\lambda$ and $\alpha$ are selected rationally. Different systems tend to have different suitable combinations of $\lambda$ and $\alpha$; a smaller $\alpha$ or a larger $\lambda$ does not necessarily improve performance. For the current system, reasonable $\lambda$ and $\alpha$ always yield satisfactory results.

6. Conclusions

In this paper, a new variable-structure interacting multiple-model algorithm, VSIMM-CS, is proposed. Its model set is generated in real time from the original model set. Considering the error properties of a linear system and the symmetry of the model set structure, two theories, the proportional reduction optimization method and the symmetric model set optimization method, are presented; their main purpose is to reduce errors. Without considering the effect of noise, VSIMM-CS eliminates errors perfectly. To better locate the real model, the expected model optimization method is proposed; the expected model generated by this method is closer to the real model than any other model. Simulation results show that different combinations of $\alpha$ and $\lambda$ give different performance. In most cases, VSIMM-CS achieved better tracking results, and it is acceptable to sacrifice a certain amount of computation for high accuracy. A huge performance boost can be obtained by precise selection of $\alpha$ and $\lambda$; in the optimal situation, $\alpha$ and $\lambda$ are 0.8 and 4.1, respectively. In different simulation conditions, the results may differ; many factors may influence the values of $\alpha$ and $\lambda$, such as noise, the original model set, and the true mode space, and our following research will focus on these factors. However, an unreasonable selection of $\alpha$ and $\lambda$ leads to worse results. The simulation results also highlight the rationality and feasibility of this novel approach.

Author Contributions

Writing—original draft preparation, Q.W.; investigation, Q.W. and G.L.; writing—review and editing, G.L., W.J. and S.Z.; project administration, S.Z. and G.L.; supervision, W.S.; funding acquisition, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62001227, 61971224 and 62001232.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the reviewers for their great help with the article during the review process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gorji, A.A.; Tharmarasa, R.; Kirubarajan, T. Performance measures for multiple target tracking problems. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011; pp. 1–8. [Google Scholar]
  2. Poore, A.B.; Gadaleta, S. Some assignment problems arising from multiple target tracking. Math. Comput. Model. 2006, 43, 1074–1091. [Google Scholar] [CrossRef]
  3. Huang, X.; Tsoi, J.K.; Patel, N. mmWave Radar Sensors Fusion for Indoor Object Detection and Tracking. Electronics 2022, 11, 2209. [Google Scholar] [CrossRef]
  4. Wei, Y.; Hong, T.; Kadoch, M. Improved Kalman filter variants for UAV tracking with radar motion models. Electronics 2020, 9, 768. [Google Scholar] [CrossRef]
  5. Li, X.R.; Bar-Shalom, Y. Multiple-model estimation with variable structure. IEEE Trans. Autom. Control 1996, 41, 478–493. [Google Scholar]
  6. Magill, D. Optimal adaptive estimation of sampled stochastic processes. IEEE Trans. Autom. Control 1965, 10, 434–439.
  7. Lainiotis, D. Optimal adaptive estimation: Structure and parameter adaption. IEEE Trans. Autom. Control 1971, 16, 160–170.
  8. Tudoroiu, N.; Khorasani, K. Satellite fault diagnosis using a bank of interacting Kalman filters. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1334–1350.
  9. Kirubarajan, T.; Bar-Shalom, Y.; Pattipati, K.R.; Kadar, I. Ground target tracking with variable structure IMM estimator. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 26–46.
  10. Grossman, R.L.; Nerode, A.; Ravn, A.P.; Rischel, H. Hybrid Systems; Springer: Berlin/Heidelberg, Germany, 1993; Volume 736.
  11. Branicky, M.S. Introduction to hybrid systems. In Handbook of Networked and Embedded Control Systems; Birkhäuser: Basel, Switzerland, 2005; pp. 91–116.
  12. Li, X.R. Multiple-model estimation with variable structure. II. Model-set adaptation. IEEE Trans. Autom. Control 2000, 45, 2047–2060.
  13. Labbe, R. Kalman and Bayesian Filters in Python; 2014.
  14. Zhang, G.; Lian, F.; Gao, X.; Kong, Y.; Chen, G.; Dai, S. An Efficient Estimation Method for Dynamic Systems in the Presence of Inaccurate Noise Statistics. Electronics 2022, 11, 3548.
  15. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part V. Multiple-model methods. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 1255–1321.
  16. Bar-Shalom, Y. Multitarget-Multisensor Tracking: Applications and Advances; Artech House: Norwood, MA, USA, 2000; Volume III.
  17. Blom, H.A.P.; Bar-Shalom, Y. The interacting multiple model algorithm for systems with Markovian switching coefficients. IEEE Trans. Autom. Control 1988, 33, 780–783.
  18. Ma, Y.; Zhao, S.; Huang, B. Multiple-Model State Estimation Based on Variational Bayesian Inference. IEEE Trans. Autom. Control 2019, 64, 1679–1685.
  19. Wang, G.; Wang, X.; Zhang, Y. Variational Bayesian IMM-filter for JMSs with unknown noise covariances. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 1652–1661.
  20. Li, H.; Yan, L.; Xia, Y. Distributed robust Kalman filtering for Markov jump systems with measurement loss of unknown probabilities. IEEE Trans. Cybern. 2021, 52, 10151–10162.
  21. Johnston, L.; Krishnamurthy, V. An improvement to the interacting multiple model (IMM) algorithm. IEEE Trans. Signal Process. 2001, 49, 2909–2923.
  22. Fan, X.; Wang, G.; Han, J.; Wang, Y. Interacting Multiple Model Based on Maximum Correntropy Kalman Filter. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 3017–3021.
  23. Davis, R.R.; Clavier, O. Impulsive noise: A brief review. Hear. Res. 2017, 349, 34–36.
  24. Nie, X. Multiple model tracking algorithms based on neural network and multiple process noise soft switching. J. Syst. Eng. Electron. 2009, 20, 1227–1232.
  25. Mazor, E.; Averbuch, A.; Bar-Shalom, Y.; Dayan, J. Interacting multiple model methods in target tracking: A survey. IEEE Trans. Aerosp. Electron. Syst. 1998, 34, 103–123.
  26. Gao, W.; Wang, Y.; Homaifa, A. Discrete-time variable structure control systems. IEEE Trans. Ind. Electron. 1995, 42, 117–122.
  27. Li, X.R.; Bar-Shalom, Y. Mode-set adaptation in multiple-model estimators for hybrid systems. In Proceedings of the 1992 American Control Conference, Chicago, IL, USA, 24–26 June 1992; pp. 1794–1799.
  28. Pannetier, B.; Benameur, K.; Nimier, V.; Rombaut, M. VS-IMM using road map information for a ground target tracking. In Proceedings of the 2005 7th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005; Volume 1; 8p.
  29. Xu, L.; Li, X.R. Multiple model estimation by hybrid grid. In Proceedings of the 2010 American Control Conference, Baltimore, MD, USA, 30 June–2 July 2010; pp. 142–147.
  30. Li, X.R.; Zhi, X.; Zhang, Y. Multiple-model estimation with variable structure. III. Model-group switching algorithm. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 225–241.
  31. Li, X.R.; Jilkov, V.P.; Ru, J. Multiple-model estimation with variable structure—Part VI: Expected-mode augmentation. IEEE Trans. Aerosp. Electron. Syst. 2005, 41, 853–867.
  32. Lan, J.; Li, X.R. Equivalent-Model Augmentation for Variable-Structure Multiple-Model Estimation. IEEE Trans. Aerosp. Electron. Syst. 2013, 49, 2615–2630.
  33. Li, X.R.; Zhang, Y. Multiple-model estimation with variable structure. V. Likely-model set algorithm. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 448–466.
  34. Sun, F.; Xu, E.; Ma, H. Design and comparison of minimal symmetric model-subset for maneuvering target tracking. J. Syst. Eng. Electron. 2010, 21, 268–272.
  35. Callier, F.M.; Desoer, C.A. Linear System Theory; Springer Science & Business Media: Berlin, Germany, 2012.
  36. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
Figure 1. A linear system.
Figure 2. Example of M ( 1 ) and M ( 2 ) .
Figure 3. Topology structure of M ( 1 ) and M ( 2 ) .
Figure 4. Topology of a model set including all model space S.
Figure 5. Example of the mentioned hypothesis.
Figure 6. Topology structure of M.
Figure 7. Root mean square error of VSIMM-CS.
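The root mean square error plotted in Figure 7 is the standard Monte Carlo tracking metric: at each time step, the squared estimation errors of all runs are averaged and the square root is taken. As a minimal illustrative sketch (not the paper's implementation; array shapes are an assumption):

```python
import numpy as np

def rmse_per_step(errors):
    """Root mean square error at each time step.

    errors: array of shape (num_runs, num_steps), holding the position
    (or state-component) estimation error of each Monte Carlo run at
    each time step.
    """
    errors = np.asarray(errors, dtype=float)
    # Average the squared error over runs (axis 0), then take the root.
    return np.sqrt(np.mean(errors ** 2, axis=0))

# Toy example: 3 Monte Carlo runs, 4 time steps.
errs = np.array([[1.0, 2.0, 0.0, 1.0],
                 [3.0, 2.0, 0.0, 1.0],
                 [1.0, 2.0, 3.0, 1.0]])
print(rmse_per_step(errs))
```

For a full state vector, the same averaging is applied to the squared norm of the position-error vector per run and step.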
Table 1. Parameters of deterministic scenarios.

Time k (s)    a_x (m/s²)    a_y (m/s²)
1–50          0             0
50–100        5             5
100–150       3             −7
150–200       7             −2
200–250       4             1
250–300       −4            −2
300–350       0             0
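The scenario in Table 1 is a piecewise-constant acceleration profile, so the deterministic target trajectory can be generated by integrating standard constant-acceleration kinematics segment by segment. A minimal sketch, assuming a 1 s sampling interval and zero initial state (the segment values, including the reconstructed signs, follow Table 1 above; neither the function name nor the initial state is from the paper):

```python
import numpy as np

# Piecewise-constant acceleration segments from Table 1:
# (start time in s, end time in s, a_x in m/s^2, a_y in m/s^2)
segments = [(1, 50, 0.0, 0.0), (50, 100, 5.0, 5.0),
            (100, 150, 3.0, -7.0), (150, 200, 7.0, -2.0),
            (200, 250, 4.0, 1.0), (250, 300, -4.0, -2.0),
            (300, 350, 0.0, 0.0)]

def trajectory(segments, dt=1.0):
    """Integrate 2-D position/velocity under piecewise-constant acceleration."""
    pos = np.zeros(2)
    vel = np.zeros(2)
    track = [pos.copy()]
    for t0, t1, ax, ay in segments:
        a = np.array([ax, ay])
        for _ in range(int((t1 - t0) / dt)):
            # Constant-acceleration kinematic update over one step.
            pos = pos + vel * dt + 0.5 * a * dt ** 2
            vel = vel + a * dt
            track.append(pos.copy())
    return np.array(track)

track = trajectory(segments)
print(track.shape)
```

The resulting array can then drive any of the compared filters, with process and measurement noise added on top as the simulation requires.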
Table 2. The average error e̅ of the tracking process.

λ \ α    0.5      0.6      0.7      0.8      0.9
0.8      16.72    13.00    10.45    8.59     7.19
1.9      13.29    9.89     7.55     5.84     4.56
3.0      10.16    6.96     4.76     3.15     1.98
4.1      7.21     4.16     2.09     0.98     1.52
5.2      4.32     1.53     1.86     3.14     4.09
Wang, Q.; Li, G.; Jin, W.; Zhang, S.; Sheng, W. A Variable Structure Multiple-Model Estimation Algorithm Aided by Center Scaling. Electronics 2023, 12, 2257. https://doi.org/10.3390/electronics12102257