Article

Cooperative Localization Based on Augmented State Belief Propagation for Mobile Agent Networks

1 School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2 AVIC Xi’an Flight Automatic Control Research Institute, Xi’an 710076, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(13), 1959; https://doi.org/10.3390/electronics11131959
Submission received: 20 May 2022 / Revised: 14 June 2022 / Accepted: 20 June 2022 / Published: 22 June 2022
(This article belongs to the Collection Advance Technologies of Navigation for Intelligent Vehicles)

Abstract:
Belief propagation (BP) is widely used to solve the cooperative localization problem due to its excellent performance and naturally distributed implementation. For a mobile agent network, the factor graph inevitably contains loops. In this case, the BP algorithm becomes iterative and can only provide an approximate marginal probability density function of the estimate within a finite number of iterations. We propose an augmented-state BP algorithm for mobile agent networks to alleviate the effect of loops. By performing state augmentation, messages in the factor graph are allowed to propagate backward, which reduces the number of loops in the factor graph, increases the information available to each agent, and thus benefits the localization. Experimental results demonstrate the better performance of the proposed algorithm over the original BP method.

1. Introduction

Location awareness promotes the rapid development of autonomous driving [1], formation flying [2], robotics [3], etc. Accurate location estimation is a prerequisite for properly functioning control loops [4,5]. Traditionally, agents can perform localization through the global positioning system (GPS) or beacon localization. However, localization by GPS requires continuous and stable GPS signal reception, which is usually impossible in harsh scenarios, e.g., indoors or when the signal is jammed. On the other hand, beacon localization requires a large number of anchors with known locations and is impractical due to its high cost. Another way to perform localization is to use inertial measurement units (IMUs) and dead reckoning (DR), but the unbounded growth of accumulated errors is often unacceptable [6]. In recent years, cooperative localization (CL) has received extensive attention due to its broad applicability. CL uses relative measurements among neighboring agents to improve the absolute position estimate of each agent in the network. Compared with noncooperative localization (NCL), CL achieves higher-precision localization of the entire agent network at the cost of some additional computation and communication [7,8]. The advantages of CL are even more pronounced for heterogeneous agent networks. For example, a small number of agents equipped with high-precision sensors can share information with other agents to benefit the entire network through a master-slave CL algorithm [9].
Various CL algorithms have been investigated in the existing work [10,11,12,13,14,15], among which a message-passing algorithm called belief propagation (BP) [14,15] is the most popular for its high-precision localization and naturally distributed implementation. The original BP algorithm involves complex integration, product operations, and transmission of probability density functions (pdfs), so it cannot be directly applied in practice. For the case of linear models and Gaussian noise, Gaussian BP [16,17] is a reduced and simplified version that can be easily implemented. However, it cannot handle real-world scenarios with nonlinear measurements and non-Gaussian noise. For that case, one effective approach is nonparametric belief propagation (NBP) [18,19]. As a generalization of particle filters, NBP uses particle representations [20] to approximate the incoming and outgoing messages of agents and can thus handle nonlinear measurements and noise with arbitrary distributions. However, a small number of particles is inadequate to provide satisfactory estimation accuracy, and too many particles cause an unacceptable communication and computation burden. Ref. [21] proposed to propagate beliefs instead of messages, and used Gaussian mixture densities with fewer components to approximate the beliefs. In this way, each agent can broadcast instead of performing peer-to-peer communication, which greatly reduces the communication overhead at the cost of only a little extra computation. In Ref. [22], NBP was investigated in an anchor-free environment, and an intermittent BP framework based on dead reckoning was proposed, which naturally combined the CL and NCL algorithms. Since the accumulated error of the IMU was considered in the prediction process, the localization accuracy was further improved. In Refs. [23,24], the posterior linearization filter and the unscented Kalman filter were integrated with BP to deal with nonlinear measurements. Compared with NBP, these two methods achieve satisfactory performance with less computation and communication. In Refs. [25,26], two BP algorithms based on non-Gaussian ranging models were proposed to deal with non-Gaussian noise.
BP has demonstrated its powerful capabilities in CL. However, as a message-passing algorithm on factor graphs, BP provides optimal results only when there is no loop in the factor graph. Otherwise, BP becomes iterative and can only obtain approximate solutions [27]. Further, for mobile agent networks, existing BP-based methods only allow forward message passing and thus neglect the loops across time, which sacrifices localization accuracy. In this paper, we propose an augmented state BP algorithm for mobile agent networks, which realizes backward propagation of messages by performing state augmentation. This backward propagation (or retrodiction) in essence reduces the number of loops in the factor graph, increases the information available at each agent, and thus improves the localization accuracy. Considering the computational and communication burden, we suggest using a limited number of retrodiction steps. Besides the proposed augmented state BP, the contributions of this paper include the formulation of the CL problem under the assumption that each agent has a dynamic motion model. Different from the formulations in existing CL work, this assumption incorporates the prior motion information of agents. Our formulation reduces to the existing ones by setting an extremely large covariance of the process noise in the dynamic model, and is thus more general. We also provide the corresponding centralized fusion algorithm for the formulated CL problem, which acts as a benchmark for evaluating distributed CL algorithms. Experimental results demonstrate the effectiveness and advantage of our algorithm.
The paper is organized as follows. Section 2 describes the system model and formulates the CL problem. Section 3 provides the centralized fusion algorithm for the formulated CL problem. Section 4 briefly reviews the BP algorithm. Section 5 proposes the augmented state BP algorithm. Experimental results are given in Section 6. Section 7 concludes the paper.

2. System Model and Problem Formulation

Consider $N$ agents in a two-dimensional plane in a GPS-denied scenario without any anchors. The set of agents is denoted as $\mathcal{A}$. Each agent moves according to the following dynamic model
$$x_{k+1}^i = F_k^i x_k^i + G_k^i w_k^i, \quad i \in \mathcal{A} \tag{1}$$
where $x_k^i = [x_k^i, \dot{x}_k^i, y_k^i, \dot{y}_k^i]^T$ is the state of the $i$th agent at time $k$, and $x_k^i, \dot{x}_k^i$ and $y_k^i, \dot{y}_k^i$ are the horizontal and vertical positions and velocities, respectively. The process noise $w_k^i$ is assumed white and Gaussian-distributed with zero mean and covariance $Q_k^i$. The process noises are assumed independent across agents. The initial state $x_0^i$ is also assumed Gaussian-distributed and independent across agents. It is assumed that each agent $i \in \mathcal{A}$ can measure its own position and/or velocity increments with respect to a geostationary coordinate frame. The internal measurement of the $i$th agent at time $k$ is modeled as
$$z_k^i = H_k^i (x_k^i - x_{k-1}^i) + v_k^i, \quad i \in \mathcal{A} \tag{2}$$
where $H_k^i$ and $v_k^i$ denote the measurement matrix of the $i$th agent at time $k$ and the corresponding measurement noise, respectively. When $H_k^i = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$, $z_k^i$ is called a displacement measurement, which is a special case of model (2). The measurement noise $v_k^i$ is assumed to be white Gaussian with zero mean and covariance $R_k^i$. Certain pairs of agents, say $i, j \in \mathcal{A}$, can obtain inter-agent measurements $z_k^{i,j}$, called relative measurements. Two agents having relative measurements are called neighbors, and it is assumed that information can only be shared among neighbors. Denote by $\mathcal{N}_{i,k}$ the set of neighbors of agent $i$ at time $k$. The relative measurements have the following model
$$z_k^{i,j} = H_k^{i,j} (x_k^j - x_k^i) + v_k^{i,j}, \quad i \in \mathcal{A}, \ j \in \mathcal{N}_{i,k} \tag{3}$$
where $v_k^{i,j}$ is the corresponding measurement noise and is assumed to be white Gaussian with zero mean and covariance $R_k^{i,j}$. All measurement noises, process noises, and initial states are assumed mutually independent.
Let
$$Z_k \triangleq [(Z_k^{\text{self}})^T, (Z_k^{\text{rel}})^T]^T \tag{4}$$
where
$$Z_k^{\text{self}} \triangleq [(z_k^1)^T, \ldots, (z_k^N)^T]^T, \quad Z_k^{\text{rel}} \triangleq [(Z_k^{1,\text{rel}})^T, \ldots, (Z_k^{N,\text{rel}})^T]^T, \quad Z_k^{i,\text{rel}} \triangleq [z_k^{i,j_1}, \ldots, z_k^{i,j_{n_i}}]^T, \ j_1, \ldots, j_{n_i} \in \mathcal{N}_{i,k}$$
and $n_i$ is the number of neighbors of agent $i$. The problem considered is to use all available measurements, including the internal measurements and the inter-agent relative measurements, to estimate the states of all agents, i.e.,
$$\hat{X}_k = E[X_k | Z_k]$$
where $X_k \triangleq [(x_k^1)^T, \ldots, (x_k^N)^T]^T$ and $E[\cdot]$ denotes the minimum mean-squared error (MMSE) estimator. Note that the way the measurements are stacked in Equation (4) is for the convenience of developing the filtering algorithm. The linear Gaussian assumptions made in Equations (1)–(3) do not reduce the significance of our study, since our proposed algorithm can be directly combined with NBP to deal with nonlinear or non-Gaussian cases.
Remark 1.
The major difference between our formulation and the existing ones (see, e.g., Ref. [28]) lies in the assumption of dynamic models of agents, i.e., Equation (1). Traditionally, the displacement measurement model is commonly used as the dynamic model by viewing the displacement measurement $z_k^i$ as a known input, i.e.,
$$\begin{bmatrix} x_k^i \\ y_k^i \end{bmatrix} = \begin{bmatrix} x_{k-1}^i \\ y_{k-1}^i \end{bmatrix} + z_k^i - v_k^i$$
Thus, the information from displacement measurements is actually utilized in the prediction step of the filter. In our formulation, the internal measurement model of Equation (2), which includes the displacement model as a special case, is more general. Another advantage of our formulation is that the prior motion information of agents can be exploited in the filtering algorithm to improve the localization performance. If such information is unavailable, we can simply set a large enough $Q_k^i$ to represent that we have very little faith in the dynamic model, so that it has only a slight effect on the estimation results. Thus, intuitively, our formulation reduces to the existing ones by setting a large enough $Q_k^i$.

3. Centralized Fusion

The centralized fusion, which utilizes all measurements from all agents to estimate the states of all agents at the fusion center, can provide the optimal state estimates [29]. The optimal centralized fusion in the best linear unbiased estimation (BLUE) sense for the formulation without the assumption of agents’ dynamic models is given in Ref. [28]. Therein, the system states are viewed as non-random unknown quantities, which is different from ours. Next, we investigate the centralized fusion for our formulated problem.
Stacking the states and measurements of all agents, we have the following dynamic model
$$X_{k+1} = F_k X_k + G_k w_k \tag{5}$$
$$Z_k = H_k^c X_k - H_k^p X_{k-1} + V_k \tag{6}$$
where
$$F_k = \operatorname{diag}(F_k^1, \ldots, F_k^N), \quad G_k = \operatorname{diag}(G_k^1, \ldots, G_k^N), \quad w_k = [(w_k^1)^T, \ldots, (w_k^N)^T]^T$$
$$H_k^c = [(H_k^{\text{self}})^T, (H_k^{\text{rel}})^T]^T, \quad H_k^p = [(H_k^{\text{self}})^T, 0]^T$$
$$V_k = [(v_k^1)^T, \ldots, (v_k^N)^T, (v_k^{1,\text{rel}})^T, \ldots, (v_k^{N,\text{rel}})^T]^T, \quad v_k^{i,\text{rel}} = [v_k^{i,j_1}, \ldots, v_k^{i,j_{n_i}}]^T, \ j_1, \ldots, j_{n_i} \in \mathcal{N}_{i,k}$$
The matrix $H_k^{\text{self}}$ corresponding to the internal measurements is given by
$$H_k^{\text{self}} = \operatorname{diag}(H_k^1, \ldots, H_k^N)$$
The matrix $H_k^{\text{rel}}$ corresponding to the relative measurements is determined by the concrete topology of the network. Taking the chain topology as an example, we have
$$H_k^{\text{rel}} = \begin{bmatrix} -H_k^{1,2} & H_k^{1,2} & & & \\ H_k^{2,1} & -H_k^{2,1} & & & \\ & -H_k^{2,3} & H_k^{2,3} & & \\ & & \ddots & \ddots & \\ & & & H_k^{N,N-1} & -H_k^{N,N-1} \end{bmatrix}$$
The state estimation for the system of Equations (5) and (6) consists of two steps: prediction and update. The state prediction is calculated as
$$\hat{X}_{k+1|k} = F_k \hat{X}_{k|k} \tag{7}$$
$$P_{k+1|k} = F_k P_{k|k} F_k^T + G_k Q_k G_k^T \tag{8}$$
where $Q_k = \operatorname{diag}(Q_k^1, \ldots, Q_k^N)$ is the covariance matrix of $w_k$.
For the update step, the standard Kalman filter update cannot be used, since the measurement $Z_{k+1}$ is also related to $X_k$, as shown in Equation (6). We therefore use the linear MMSE (LMMSE) estimator [30], which is also optimal in the MMSE sense due to the linear Gaussian assumptions made in this paper. The update is calculated as
$$\hat{X}_{k+1|k+1} = E[X_{k+1} | Z_{1:k+1}] = \hat{X}_{k+1|k} + C_{k+1} S_{k+1}^{-1} (Z_{k+1} - \hat{Z}_{k+1|k}) \tag{9}$$
and
$$P_{k+1|k+1} = \operatorname{MSE}(\hat{X}_{k+1|k+1}) = P_{k+1|k} - C_{k+1} S_{k+1}^{-1} C_{k+1}^T \tag{10}$$
where MSE denotes the mean-square error matrix. From Equation (6), the measurement prediction is
$$\hat{Z}_{k+1|k} = H_{k+1}^c \hat{X}_{k+1|k} - H_{k+1}^p \hat{X}_{k|k} \tag{11}$$
and the innovation covariance is
$$S_{k+1} = E[(Z_{k+1} - \hat{Z}_{k+1|k})(Z_{k+1} - \hat{Z}_{k+1|k})^T] = E[(H_{k+1}^c \tilde{X}_{k+1|k} - H_{k+1}^p \tilde{X}_{k|k} + V_{k+1})(H_{k+1}^c \tilde{X}_{k+1|k} - H_{k+1}^p \tilde{X}_{k|k} + V_{k+1})^T] = R_{k+1} + H_{k+1}^c P_{k+1|k} (H_{k+1}^c)^T + H_{k+1}^p P_{k|k} (H_{k+1}^p)^T - H_{k+1}^c F_k P_{k|k} (H_{k+1}^p)^T - H_{k+1}^p P_{k|k} F_k^T (H_{k+1}^c)^T \tag{12}$$
where $\tilde{X}_{k+1|k} = X_{k+1} - \hat{X}_{k+1|k}$, $\tilde{X}_{k|k} = X_k - \hat{X}_{k|k}$, and $R_{k+1} = E[V_{k+1} V_{k+1}^T]$. The cross-covariance $C_{k+1}$ is
$$C_{k+1} = E[(X_{k+1} - \hat{X}_{k+1|k})(Z_{k+1} - \hat{Z}_{k+1|k})^T] = E[\tilde{X}_{k+1|k} (H_{k+1}^c \tilde{X}_{k+1|k} - H_{k+1}^p \tilde{X}_{k|k} + V_{k+1})^T] = P_{k+1|k} (H_{k+1}^c)^T - F_k P_{k|k} (H_{k+1}^p)^T \tag{13}$$
Substituting Equations (11)–(13) into Equations (9) and (10), we get
$$\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + W_{k+1} (Z_{k+1} - \hat{Z}_{k+1|k}) \tag{14}$$
$$P_{k+1|k+1} = P_{k+1|k} - W_{k+1} S_{k+1} W_{k+1}^T \tag{15}$$
where the filter gain $W_{k+1}$ is given by
$$W_{k+1} = C_{k+1} S_{k+1}^{-1} = \left(P_{k+1|k} (H_{k+1}^c)^T - F_k P_{k|k} (H_{k+1}^p)^T\right) S_{k+1}^{-1} \tag{16}$$
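To make the recursion of Equations (7)–(16) concrete, the following Python/NumPy sketch performs one prediction-update cycle of the centralized fusion. It is a minimal illustration, assuming the stacked matrices of Equations (5) and (6) have already been built; the function name and argument layout are ours, not part of the paper.

```python
import numpy as np

def centralized_fusion_step(X_hat, P, Z_next, Fk, Gk, Qk, Hc, Hp, R_next):
    """One prediction-update cycle of the centralized fusion, Eqs. (7)-(16).

    X_hat, P       : stacked estimate X_hat_{k|k} and covariance P_{k|k}
    Z_next         : stacked measurement Z_{k+1} (internal + relative), Eq. (4)
    Fk, Gk, Qk     : stacked dynamics of Eq. (5)
    Hc, Hp, R_next : stacked measurement matrices and noise covariance of Eq. (6)
    """
    # Prediction, Eqs. (7)-(8)
    X_pred = Fk @ X_hat
    P_pred = Fk @ P @ Fk.T + Gk @ Qk @ Gk.T

    # Measurement prediction and innovation covariance, Eqs. (11)-(12)
    Z_pred = Hc @ X_pred - Hp @ X_hat
    S = (R_next + Hc @ P_pred @ Hc.T + Hp @ P @ Hp.T
         - Hc @ Fk @ P @ Hp.T - Hp @ P @ Fk.T @ Hc.T)

    # Cross-covariance and filter gain, Eqs. (13) and (16)
    C = P_pred @ Hc.T - Fk @ P @ Hp.T
    W = C @ np.linalg.inv(S)

    # Update, Eqs. (14)-(15)
    X_upd = X_pred + W @ (Z_next - Z_pred)
    P_upd = P_pred - W @ S @ W.T
    return X_upd, P_upd
```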
The centralized fusion of Equations (9) and (10) achieves the best performance in the MMSE sense, but it requires transmitting all measurements to a fusion center, which is often impractical, especially for mobile agent networks.

4. BP-Based Cooperative Localization

In this section, we briefly introduce the BP algorithm, which serves as the basis for the augmented state BP discussed in the next section. The procedures of BP are presented in a general form with pdfs. For the linear Gaussian case, these pdfs are all Gaussian, and BP can be implemented straightforwardly with the first two moments.
The MMSE estimate of each agent is
$$\hat{x}_k^i = \int x_k^i f(x_k^i | Z_{1:k}) \, dx_k^i$$
where $Z_{1:k}$ denotes all measurements up to time $k$. Obtaining $f(x_k^i | Z_{1:k})$ involves the marginalization of the joint pdf $f(X_{0:k} | Z_{1:k})$, where $X_{0:k} \triangleq [(X_0)^T, \ldots, (X_k)^T]^T$. Since its computational complexity increases exponentially with the size of the agent network, direct marginalization is infeasible. BP provides an effective way to solve this problem. According to the independence assumptions, the joint pdf $f(X_{0:k} | Z_{1:k})$ can be factorized as
$$f(X_{0:k} | Z_{1:k}) \propto \prod_{i \in \mathcal{A}} f(x_0^i) \prod_{t=1}^{k} \prod_{i_1 \in \mathcal{A}} f(x_t^{i_1} | x_{t-1}^{i_1}) f(z_t^{i_1} | x_t^{i_1}, x_{t-1}^{i_1}) \prod_{j \in \mathcal{N}_{i_1,t}} f(z_t^{i_1,j} | x_t^{i_1}, x_t^j) \tag{17}$$
Equation (17) is mapped to the factor graph shown in Figure 1. A factor graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is composed of nodes $\mathcal{V}$ and edges $\mathcal{E}$. The nodes are divided into factor nodes and variable nodes. Factor nodes are represented by boxes and are associated with $f(z_k^i | x_k^i, x_{k-1}^i) f(x_k^i | x_{k-1}^i)$ or $f(z_k^{i,j}, z_k^{j,i} | x_k^i, x_k^j)$; variable nodes are represented by circles and are associated with $x_k^i$. Each edge connects a factor node with its related variable nodes. The BP algorithm propagates messages along the edges of the factor graph to calculate the belief $b(x_k^i)$ corresponding to each variable node $x_k^i$. For a loop-free factor graph, the beliefs are the exact conditional marginal pdfs, i.e., $b(x_k^i) = f(x_k^i | Z_{1:k})$. Otherwise, BP becomes iterative and can only provide an approximate solution. Convergence of the iterations is not guaranteed, although BP converges in many cases [23,27]. In practice, a fixed number of iterations is commonly performed.
For the $p$th iteration, the belief is given by
$$b^p(x_k^i) \propto m_k(x_k^i) \prod_{j \in \mathcal{N}_{i,k}} m_{j \to i}^p(x_k^i), \quad i \in \mathcal{A} \tag{18}$$
where $m_k(x_k^i)$ is the message propagated from variable node $x_{k-1}^i$ to variable node $x_k^i$,
$$m_k(x_k^i) \propto \int f(z_k^i | x_k^i, x_{k-1}^i) f(x_k^i | x_{k-1}^i) b^p(x_{k-1}^i) \, dx_{k-1}^i \tag{19}$$
and $m_{j \to i}^p(x_k^i)$ is the message propagated from the factor node $f(z_k^{i,j}, z_k^{j,i} | x_k^i, x_k^j)$ to variable node $x_k^i$,
$$m_{j \to i}^p(x_k^i) \propto \int f(z_k^{i,j}, z_k^{j,i} | x_k^i, x_k^j) \, n_{j \to i}^{p-1}(x_k^j) \, dx_k^j, \quad j \in \mathcal{N}_{i,k} \tag{20}$$
with
$$n_{j \to i}^{p-1}(x_k^j) \propto m_k(x_k^j) \prod_{l \in \mathcal{N}_{j,k} \setminus i} m_{l \to j}^{p-1}(x_k^j) \tag{21}$$
Note that since Equation (3) distinguishes between $z_k^{i,j}$ and $z_k^{j,i}$, the conditional joint pdf of $z_k^{i,j}$ and $z_k^{j,i}$ is used in Equation (20). The iteration starts with $n_{j \to i}^0(x_k^j) = m_k(x_k^j)$. Once $b(x_k^i)$, i.e., an approximation of the posterior $f(x_k^i | Z_{1:k})$, is obtained, the approximate MMSE estimate of $x_k^i$ can be computed.
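For intuition, the following Python sketch carries out one Gaussian-BP belief update of Equations (18)–(21) at a single agent for the linear Gaussian case. The information-form implementation, the function name, and the dictionary layout of incoming messages are illustrative assumptions, not the authors' implementation; the key step is that integrating out $x_k^j$ in Equation (20) turns the relative measurement into a pseudo-measurement of $x_k^i$ whose noise covariance is inflated by the neighbor's uncertainty.

```python
import numpy as np

def bp_belief_update(x_pred, P_pred, incoming, H_rel, R_rel):
    """One Gaussian-BP belief/message update at a single agent, Eqs. (18)-(21).

    x_pred, P_pred : mean and covariance of the prediction message m_k(x_k^i)
    incoming       : dict j -> (mu_j, P_j, z_ij), the moments of n_{j->i}^{p-1}
                     and the relative measurement z_k^{i,j} of Eq. (3)
    H_rel, R_rel   : relative measurement matrix and noise covariance
    """
    # Work in information (canonical) form: Lam = P^{-1}, eta = P^{-1} x.
    Lam = np.linalg.inv(P_pred)
    eta = Lam @ x_pred
    for j, (mu_j, P_j, z_ij) in incoming.items():
        # Eq. (20): integrating out x^j. Since z^{i,j} = H (x^j - x^i) + v,
        # the message acts as a pseudo-measurement of -H x^i with inflated
        # noise covariance R + H P_j H^T.
        S_j = R_rel + H_rel @ P_j @ H_rel.T
        Sj_inv = np.linalg.inv(S_j)
        Lam += H_rel.T @ Sj_inv @ H_rel
        eta += -H_rel.T @ Sj_inv @ (z_ij - H_rel @ mu_j)
    # Eq. (18): the belief is the product of the prediction message and all
    # incoming messages; in information form this is simply the sums above.
    P_belief = np.linalg.inv(Lam)
    x_belief = P_belief @ eta
    return x_belief, P_belief
```

The same routine computes an outgoing message $n_{i \to j}^p$ of Equation (21) if the incoming message from neighbor $j$ is simply left out of the dictionary.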

5. Augmented State BP Cooperative Localization

5.1. Motivation

As we can see, BP-based CL is recursive along time and iterative at each time instant due to the existence of loops in the factor graph. That is to say, the BP algorithm only accounts for the loops existing at one time instant and neglects the loops across time. This approximation gives BP-based CL its recursive structure, concise procedures, and straightforward implementation. However, the resulting error caused by neglecting the loops across time accumulates and degrades the localization accuracy. Completely eliminating this approximation requires considering all loops from the initial time to the current time, which is computationally infeasible, not to mention the tremendous communication cost. To deal with this problem, we propose to allow backward message passing in the factor graph via state augmentation.
To account for the loops across time, the messages in BP need to propagate in both forward and backward directions. However, due to the complexity of the factor graph structure, it is difficult to find a suitable message propagation order, i.e., a message propagation schedule [27]. Fortunately, for mobile agent networks, all agents naturally have historical states, which allows us to realize backward message propagation by state augmentation. The idea is similar to the fixed-lag smoother [31]; both utilize state augmentation. However, the optimal fixed-lag smoother only cares about the estimation of a previous state (at a fixed lag) and cannot improve the state estimate at the current time. The augmented state BP (AS-BP) aims to improve the state estimate at the current time and achieves this goal by considering the loops across time. We use the following example to show how the state augmentation accounts for the loops across time.
Example 1.
Consider two different agent network topologies, i.e., a chain topology and a fully connected topology, as shown in Figure 2. For brevity, the boxes representing factor nodes are omitted in the factor graphs of Figure 2; each link connecting two variable nodes acts as a factor node. Figure 2b,d show the factor graphs after state augmentation corresponding to Figure 2a and Figure 2c, respectively. For the chain topology shown in Figure 2a, the factor graph after state augmentation is loop-free, as shown in Figure 2b, and thus the exact conditional marginal pdf can be obtained without any iteration. For the fully connected topology shown in Figure 2c, loops exist both at one time instant and across time. After state augmentation, as shown in Figure 2d, only one loop is left. Iterations are still needed in BP for Figure 2d since there is still a loop. However, it is intuitive that BP on the augmented state (Figure 2d) outperforms BP acting on the factor graph of Figure 2c, since the number of loops is significantly reduced.
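As a small illustration of the two topologies used here and again in Section 6, the neighbor sets can be built as follows; the helper names and the 1-based agent indexing are our own convention.

```python
def chain_neighbors(N):
    """Chain topology: agent i has neighbors i-1 and i+1 (if they exist)."""
    return {i: [j for j in (i - 1, i + 1) if 1 <= j <= N]
            for i in range(1, N + 1)}

def fully_connected_neighbors(N):
    """Fully connected topology: every other agent is a neighbor."""
    return {i: [j for j in range(1, N + 1) if j != i]
            for i in range(1, N + 1)}
```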

5.2. Augmented State BP (AS-BP) Algorithm

For ease of presentation, we consider augmenting states at two successive times; that is, we only consider AS-BP with 1-step retrodiction. The $n$-step ($n \geq 2$) algorithms can be implemented analogously according to the reader's concrete application. Let $X_{k+2}^i = [(x_{k+2}^i)^T, (x_{k+1}^i)^T]^T$ and $Z_{k+2}^i = [(z_{k+2}^i)^T, (z_{k+1}^i)^T]^T$. For each agent $i \in \mathcal{A}$, the prediction message $m_{k+2}(X_{k+2}^i)$ is given by
$$m_{k+2}(X_{k+2}^i) \propto \int f(Z_{k+2}^i | X_{k+2}^i, x_k^i) f(X_{k+2}^i | x_k^i) b^p(x_k^i) \, dx_k^i \tag{22}$$
From Equations (1) and (2), we have
$$m_{k+2}(X_{k+2}^i) \propto \mathcal{N}(X_{k+2}^i; \hat{X}_{k+2|k}^{m,i}, P_{k+2|k}^{m,i}) \tag{23}$$
The calculation of the message $m_{k+2}(X_{k+2}^i)$ involves a prediction and an update. Note that, by state augmentation, the measurements used in the update involve not only $z_{k+2}^i$ but also $z_{k+1}^i$. The prediction is
$$\hat{X}_{k+2|k}^{p,i} = F_k^{p,i} \hat{X}_{k|k}^i \tag{24}$$
$$P_{k+2|k}^{p,i} = F_k^{p,i} P_{k|k}^i (F_k^{p,i})^T + G_k^{p,i} Q_k^{p,i} (G_k^{p,i})^T \tag{25}$$
where
$$F_k^{p,i} = \begin{bmatrix} F_{k+1}^i F_k^i & 0 \\ F_k^i & 0 \end{bmatrix}, \quad G_k^{p,i} = \begin{bmatrix} G_{k+1}^i & F_{k+1}^i G_k^i \\ 0 & G_k^i \end{bmatrix}, \quad Q_k^{p,i} = \operatorname{diag}(Q_{k+1}^i, Q_k^i)$$
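To make the block structure above concrete, here is a small NumPy sketch of the augmented prediction of Equations (24)–(25) for 1-step retrodiction; the function name and argument layout are ours, and the previous augmented estimate is assumed to be ordered as $[x_k^i; x_{k-1}^i]$.

```python
import numpy as np

def augmented_prediction(X_hat, P, F_k, F_kp1, G_k, G_kp1, Q_k, Q_kp1):
    """Augmented-state prediction of one agent, Eqs. (24)-(25).

    X_hat, P : previous augmented estimate [x_k; x_{k-1}] and its covariance
    F_k, F_kp1, G_k, G_kp1, Q_k, Q_kp1 : per-agent model matrices of Eq. (1)
    """
    d = F_k.shape[0]
    # F_k^{p,i}: only the x_k block of the previous augmented state is used.
    F_aug = np.block([[F_kp1 @ F_k, np.zeros((d, d))],
                      [F_k,         np.zeros((d, d))]])
    # G_k^{p,i} acts on the stacked process noise [w_{k+1}; w_k].
    G_aug = np.block([[G_kp1,                F_kp1 @ G_k],
                      [np.zeros_like(G_kp1), G_k]])
    # Q_k^{p,i} = diag(Q_{k+1}, Q_k)
    Q_aug = np.block([[Q_kp1, np.zeros((Q_kp1.shape[0], Q_k.shape[1]))],
                      [np.zeros((Q_k.shape[0], Q_kp1.shape[1])), Q_k]])

    X_pred = F_aug @ X_hat                                   # Eq. (24)
    P_pred = F_aug @ P @ F_aug.T + G_aug @ Q_aug @ G_aug.T   # Eq. (25)
    return X_pred, P_pred
```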
and the update is
$$\hat{X}_{k+2|k}^{m,i} = \hat{X}_{k+2|k}^{p,i} + W_{k+2}^i (Z_{k+2}^i - \hat{Z}_{k+2}^i) \tag{26}$$
$$P_{k+2|k}^{m,i} = P_{k+2|k}^{p,i} - W_{k+2}^i S_{k+2}^i (W_{k+2}^i)^T \tag{27}$$
where
$$\hat{Z}_{k+2}^i = H_{k+2}^{m1,i} \hat{X}_{k+2|k}^{p,i} - H_{k+2}^{m2,i} \hat{X}_{k|k}^i$$
$$S_{k+2}^i = R_{k+2}^{m,i} + H_{k+2}^{m1,i} P_{k+2|k}^{p,i} (H_{k+2}^{m1,i})^T + H_{k+2}^{m2,i} P_{k|k}^i (H_{k+2}^{m2,i})^T - H_{k+2}^{m1,i} F_k^{p,i} P_{k|k}^i (H_{k+2}^{m2,i})^T - H_{k+2}^{m2,i} P_{k|k}^i (F_k^{p,i})^T (H_{k+2}^{m1,i})^T$$
$$W_{k+2}^i = \left(P_{k+2|k}^{p,i} (H_{k+2}^{m1,i})^T - F_k^{p,i} P_{k|k}^i (H_{k+2}^{m2,i})^T\right) (S_{k+2}^i)^{-1}$$
with $H_{k+2}^{m1,i} = \begin{bmatrix} H_{k+2}^i & -H_{k+2}^i \\ 0 & H_{k+1}^i \end{bmatrix}$, $H_{k+2}^{m2,i} = \begin{bmatrix} 0 & 0 \\ H_{k+1}^i & 0 \end{bmatrix}$, and $R_{k+2}^{m,i} = \operatorname{diag}(R_{k+2}^i, R_{k+1}^i)$. Next, the message $n_{j \to i}^p(X_{k+2}^j)$ is calculated. Following Equations (20) and (21), we have
$$n_{j \to i}^p(X_{k+2}^j) = m_{k+2}(X_{k+2}^j) \prod_{l \in \mathcal{N}_{k+2}^j \setminus i} \int f(Z_{k+2}^{l,j} | X_{k+2}^l, X_{k+2}^j) \, n_{l \to j}^{p-1}(X_{k+2}^l) \, dX_{k+2}^l \tag{28}$$
where $Z_{k+2}^{l,j} = [(z_{k+2}^{l,j})^T, (z_{k+2}^{j,l})^T, (z_{k+1}^{l,j})^T, (z_{k+1}^{j,l})^T]^T$ and $\mathcal{N}_{k+2}^j = \mathcal{N}_{j,k+2} \cap \mathcal{N}_{j,k+1}$. Equation (28) can be calculated by performing the update step of the Kalman filter and is given by
$$n_{j \to i}^p(X_{k+2}^j) \propto \mathcal{N}(X_{k+2}^j; \hat{X}_{k+2}^{p,j \to i}, P_{k+2}^{p,j \to i}) \tag{29}$$
where
$$\hat{X}_{k+2}^{p,j \to i} = \hat{X}_{k+2|k}^{m,j} + W_{k+2}^{p,j \to i} (Z_{k+2}^{j \to i} - \hat{Z}_{k+2}^{p,j \to i}) \tag{30}$$
$$P_{k+2}^{p,j \to i} = P_{k+2|k}^{m,j} - W_{k+2}^{p,j \to i} S_{k+2}^{p,j \to i} (W_{k+2}^{p,j \to i})^T \tag{31}$$
and
$$Z_{k+2}^{j \to i} = \left[\{(Z_{k+2}^{j,l})^T\}_{l \in \mathcal{N}_{k+2}^j \setminus i}\right]^T, \quad \hat{Z}_{k+2}^{p,j \to i} = \left[\left\{\left(H_{k+2}^{jl} (\hat{X}_{k+2}^{p-1,l \to j} - \hat{X}_{k+2|k}^{m,j})\right)^T\right\}_{l \in \mathcal{N}_{k+2}^j \setminus i}\right]^T$$
$$R_{k+2}^{n,j \to i} = \operatorname{diag}\left(\{R_{k+2}^{jl} + H_{k+2}^{jl} P_{k+2}^{p-1,l \to j} (H_{k+2}^{jl})^T\}_{l \in \mathcal{N}_{k+2}^j \setminus i}\right), \quad R_{k+2}^{jl} = \operatorname{diag}(R_{k+2}^{j,l}, R_{k+2}^{l,j}, R_{k+1}^{j,l}, R_{k+1}^{l,j})$$
$$S_{k+2}^{p,j \to i} = H_{k+2}^{n,j \to i} P_{k+2|k}^{m,j} (H_{k+2}^{n,j \to i})^T + R_{k+2}^{n,j \to i}, \quad W_{k+2}^{p,j \to i} = P_{k+2|k}^{m,j} (H_{k+2}^{n,j \to i})^T (S_{k+2}^{p,j \to i})^{-1}$$
$$H_{k+2}^{n,j \to i} = \left[\{(H_{k+2}^{jl})^T\}_{l \in \mathcal{N}_{k+2}^j \setminus i}\right]^T, \quad H_{k+2}^{jl} = \begin{bmatrix} H_{k+2}^{j,l} & 0 \\ -H_{k+2}^{l,j} & 0 \\ 0 & H_{k+1}^{j,l} \\ 0 & -H_{k+1}^{l,j} \end{bmatrix}$$
For simplicity of notation, $\{(a^l)^T\}_{l \in A}$ denotes stacking all vectors $a^l$, $l = 1, \ldots, |A|$, into a row with a fixed index order of the elements of set $A$, where $|A|$ is the cardinality of $A$.
The initial message $n_{j \to i}^0(X_{k+2}^j)$ is $m_{k+2}(X_{k+2}^j)$. Since Equation (18) is analogous to Equation (21), the belief $b^p(X_{k+2}^i)$, which approximates the conditional marginal pdf $f(X_{k+2}^i | Z_{1:k+2})$, can be calculated in a similar way to the message $n_{j \to i}^p(X_{k+2}^j)$. The overall algorithm is given in Algorithm 1.
Algorithm 1 AS-BP algorithm.
Start with $\hat{x}_{0|0}^i = \bar{x}_0^i$, $P_{0|0}^i = P_0^i$, and compute at each agent $i$:
Step 1. Calculate the prediction message $m_k(X_k^i)$ as in Equations (23)–(27).
Step 2. Calculate the belief $b^p(X_k^i)$:
    1: for $p = 1$ to $P$ (the number of iterations) do
    2: Calculate the message $n_{i \to j}^{p-1}(X_k^i)$ as in Equations (29)–(31); its initial value $n_{i \to j}^0(X_k^i)$ is set to $m_k(X_k^i)$.
    3: Send $n_{i \to j}^{p-1}(X_k^i)$ and $z_k^{i,j}$ to each neighbor agent $j$.
    4: Receive $n_{j \to i}^{p-1}(X_k^j)$ and $z_k^{j,i}$, $j \in \mathcal{N}_k^i$.
    5: Calculate the belief $b^p(X_k^i)$ as in Equations (29)–(31), except that all neighbors are included, i.e., $l \in \mathcal{N}_k^i$.
    6: end for
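The following Python skeleton mirrors Step 2 of Algorithm 1 for one time instant. It is a structural sketch only: the message exchange of lines 3–4 is emulated with in-memory dictionaries rather than actual communication, the helper bp_belief_update is the Gaussian update sketched in Section 4 (applied here to the augmented states, measurements, and matrices defined above), and all names are our own.

```python
def as_bp_iterations(pred_msgs, neighbors, rel_meas, H_rel, R_rel, n_iter):
    """Iterative message passing of Algorithm 1, Step 2 (structural sketch).

    pred_msgs : dict i -> (X_pred, P_pred), prediction messages m_k(X^i)
    neighbors : dict i -> list of neighbor ids (N_k^i)
    rel_meas  : dict (i, l) -> stacked relative measurement of agent i w.r.t. l
    """
    # Line 2 (initialization): n^0_{i->j} is the prediction message of agent i.
    msgs = {(i, j): pred_msgs[i] for i in pred_msgs for j in neighbors[i]}
    beliefs = dict(pred_msgs)
    for p in range(n_iter):
        new_msgs = {}
        for i in pred_msgs:
            X_pred, P_pred = pred_msgs[i]
            # Outgoing message n^p_{i->j}: fuse all incoming messages except j's.
            for j in neighbors[i]:
                incoming = {l: (*msgs[(l, i)], rel_meas[(i, l)])
                            for l in neighbors[i] if l != j}
                new_msgs[(i, j)] = bp_belief_update(X_pred, P_pred, incoming,
                                                    H_rel, R_rel)
            # Line 5: belief b^p(X^i) fuses the messages from all neighbors.
            incoming_all = {l: (*msgs[(l, i)], rel_meas[(i, l)])
                            for l in neighbors[i]}
            beliefs[i] = bp_belief_update(X_pred, P_pred, incoming_all,
                                          H_rel, R_rel)
        msgs = new_msgs
    return beliefs
```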

5.3. Computation and Communication Overhead

From Equations (18)–(21), it can be seen that, for each agent $i$ in each iteration, the BP algorithm needs to calculate the belief $b(x_k^i)$ and the outgoing messages $n_{i \to j}^p(x_k^i)$ based on the incoming messages $m_{j \to i}^p(x_k^i)$ from all neighboring agents $j \in \mathcal{N}_{i,k}$ and its own prediction message $m_k(x_k^i)$. The computational complexity of the BP algorithm increases linearly with the scale of the network. Our proposed algorithm utilizes state augmentation to realize backward message propagation, while retaining these favorable characteristics of the BP algorithm. However, due to the use of state augmentation, the computational cost inevitably increases. The computational complexity is in general $O((r d_z)^3)$, where $d_z$ is the dimension of the relative measurements and $r$ is the number of retrodiction steps, since a matrix inverse is involved in Equations (30) and (31). This nonlinear increase of computation can be avoided by a sequential (or recursive) implementation of the Kalman filter update, owing to the assumption of uncorrelated relative measurement noises [32]. With such an implementation, the computational time of AS-BP increases linearly with the number of retrodiction steps, since it is equivalent to performing $r$ filtering steps of the original BP. On the other hand, even if the batch form of the update in Equations (30) and (31) is used directly, the computational time still increases almost linearly with the retrodiction step $r$ when $r$ is relatively small, because the matrix inverse does not dominate the whole filtering procedure.
For the communication cost, since Gaussian BP is used in this paper, the data that each agent $i$ needs to transmit to its neighbor $j$ are only the first two moments of the outgoing message $n_{i \to j}^p(x_k^i)$ and the relative measurements between them. The communication cost of the proposed algorithm is increased by $(r^2 - 1) d^2 + (r - 1) d$ compared with the original BP algorithm, where $d$ is the dimension of the agent state. The BP algorithm requires peer-to-peer communication, which imposes high communication requirements. To alleviate this, beliefs can be transmitted instead of messages [10,21], so that each agent can use broadcast communication, which greatly reduces the communication requirements. This is beyond the scope of this paper and thus is not discussed in detail here.

6. Illustrative Examples

Two CL scenarios are simulated to verify the performance of the proposed AS-BP algorithm. Both scenarios use the same agent motions but different network topologies: a chain topology and a fully connected topology. The network contains $N = 9$ agents, and each agent moves according to a nearly constant velocity model in a two-dimensional plane. The dynamic model of each agent is given by Equation (1) with
$$F_k^i = \operatorname{diag}(F, F), \quad G_k^i = \operatorname{diag}(G, G), \quad F = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad G = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}, \quad w_k^i \sim \mathcal{N}(0, Q)$$
where the sampling interval is $T = 1$ s and $Q = \operatorname{diag}(0.01\ \text{m}^2/\text{s}^4, 0.01\ \text{m}^2/\text{s}^4)$.
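For reference, the nearly constant velocity model above can be instantiated in a few lines of NumPy (state ordering $[x, \dot{x}, y, \dot{y}]^T$ as in Section 2):

```python
import numpy as np

T = 1.0                                  # sampling interval, s
F_axis = np.array([[1.0, T],
                   [0.0, 1.0]])          # per-axis constant-velocity block
G_axis = np.array([[T**2 / 2],
                   [T]])
F = np.kron(np.eye(2), F_axis)           # F_k^i = diag(F, F)
G = np.kron(np.eye(2), G_axis)           # G_k^i = diag(G, G)
Q = np.diag([0.01, 0.01])                # process-noise covariance, m^2/s^4
```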
In the simulation, the initial state $x_0^i$ is distributed as $\mathcal{N}(x_0^i; \bar{x}_0^i, P_0^i)$, where $\bar{x}_0^i$ is set as
$$\bar{x}_0^1 = [0\ \text{m}, 20\ \text{m/s}, 400\ \text{m}, 20\ \text{m/s}]^T, \quad \bar{x}_0^2 = [100\ \text{m}, 20\ \text{m/s}, 400\ \text{m}, 20\ \text{m/s}]^T$$
$$\bar{x}_0^3 = [200\ \text{m}, 20\ \text{m/s}, 400\ \text{m}, 20\ \text{m/s}]^T, \quad \bar{x}_0^4 = [300\ \text{m}, 20\ \text{m/s}, 400\ \text{m}, 20\ \text{m/s}]^T$$
$$\bar{x}_0^5 = [400\ \text{m}, 20\ \text{m/s}, 400\ \text{m}, 20\ \text{m/s}]^T, \quad \bar{x}_0^6 = [400\ \text{m}, 20\ \text{m/s}, 300\ \text{m}, 20\ \text{m/s}]^T$$
$$\bar{x}_0^7 = [400\ \text{m}, 20\ \text{m/s}, 200\ \text{m}, 20\ \text{m/s}]^T, \quad \bar{x}_0^8 = [400\ \text{m}, 20\ \text{m/s}, 100\ \text{m}, 20\ \text{m/s}]^T$$
$$\bar{x}_0^9 = [400\ \text{m}, 20\ \text{m/s}, 0\ \text{m}, 20\ \text{m/s}]^T$$
and the covariance is $P_0^i = \operatorname{diag}(0.002^2\ \text{m}^2, 0.002^2\ \text{m}^2/\text{s}^2, 0.002^2\ \text{m}^2, 0.002^2\ \text{m}^2/\text{s}^2)$. The internal measurement model and the relative measurement model are as in Equations (2) and (3) with $H_k^{i,j} = H_k^i = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$, $R_k^i = \operatorname{diag}(0.25\ \text{m}^2, 0.25\ \text{m}^2)$, and $R_k^{i,j} = \operatorname{diag}(4\ \text{m}^2, 4\ \text{m}^2)$, where $i = 1, 2, \ldots, N$ and $j \in \mathcal{N}_{i,k}$. We use the average root mean square error (ARMSE) over all agents to compare the different algorithms:
$$\text{ARMSE} = \frac{1}{N} \sum_{i=1}^{N} \text{RMSE}(x_k^i)$$
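As a small illustration of how this metric is evaluated, the snippet below computes the ARMSE at one time instant from per-run position errors; the array layout is an assumption of ours, not taken from the paper.

```python
import numpy as np

def armse(pos_errors):
    """pos_errors: array of shape (n_runs, N, 2) with the x/y position errors
    of every agent in every Monte Carlo run at one time instant."""
    # RMSE of each agent over the Monte Carlo runs, then average over agents.
    rmse_per_agent = np.sqrt(np.mean(np.sum(pos_errors**2, axis=-1), axis=0))
    return rmse_per_agent.mean()
```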
The absolute position refers to the position in the fixed global coordinate frame, while the relative position refers to a moving coordinate frame whose origin is the position of agent 1. For the chain topology, the number of iterations is set to 3 for both AS-BP and BP; for the fully connected scenario, it is set to 5. All results are averaged over 5000 Monte Carlo runs. The simulation results are shown in Figure 3 and Figure 4.
In all simulations, the AS-BP algorithm and the BP algorithm show better absolute localization accuracy than DR. This reflects the advantage of CL, in which the error accumulation of the IMU can be effectively alleviated through cooperation. In the chain topology scenario, the AS-BP algorithm has better absolute and relative localization accuracy than the BP algorithm. As the number of retrodiction steps increases, the localization accuracy of AS-BP gradually improves. This is because AS-BP performs backward message propagation by means of state augmentation, which effectively increases the information available to each agent and thereby reduces the accumulation of errors caused by the approximation of the beliefs. In the chain topology scenario, as the number of retrodiction steps increases, the number of loops in the factor graph approaches zero and the performance of the proposed algorithm approaches that of the centralized fusion. In the fully connected topology scenario, the BP algorithm and the AS-BP algorithm have a relative localization accuracy similar to that of the centralized fusion, because the entire network has adequate information from relative measurements. However, since the amount of absolute position information has not increased, the absolute localization accuracy of the BP algorithm and the AS-BP algorithm is still lower than that of the centralized fusion. On the other hand, due to the use of state augmentation, the AS-BP algorithm has a higher absolute localization accuracy than the BP algorithm. Note that, unlike in the chain topology, in the fully connected topology the state augmentation can only reduce the loops in the factor graph but cannot completely eliminate them. Therefore, in this case, the AS-BP algorithm can only improve upon the BP algorithm but cannot reach the performance of the centralized fusion. More loops exist in the fully connected scenario than in the chain scenario, so the accumulation of errors caused by the belief approximation is more severe, which is why the absolute localization accuracy of the BP and AS-BP algorithms in the fully connected topology scenario is lower than that in the chain topology scenario.
The relative computation times of the AS-BP algorithm and the BP algorithm in Matlab are provided in Table 1 and Table 2. It can be seen that the computational cost of the AS-BP algorithm increases almost linearly with the number of retrodiction steps.

7. Conclusions

In this paper, an augmented state BP algorithm for mobile agent networks has been proposed. Compared with the existing BP algorithms, AS-BP further considers the loops across time in the factor graph. By state augmentation, messages in the factor graph can be propagated backward, which reduces the number of loops in the factor graph, increases the information available to each agent, and improves the localization accuracy. When the network topology has no loops, AS-BP can obtain the optimal estimate; otherwise, AS-BP can only obtain approximate estimates. However, the large reduction in the number of loops allows AS-BP to retain higher estimation accuracy than the original BP. AS-BP can be directly combined with nonlinear algorithms to deal with nonlinear measurement scenarios. Compared with the original BP, AS-BP has higher computational and communication costs, so we suggest using a limited number of retrodiction steps. Experimental results demonstrate the performance of our algorithm. In addition, this paper has reformulated the CL problem in a more general form and has given the corresponding centralized fusion algorithm. Future work includes further reducing the computation and communication overhead of the algorithm.

Author Contributions

Formal analysis, G.G.; Investigation, B.Z. and Y.G.; Methodology, Y.G.; Validation, G.G.; Writing—original draft, B.Z.; Writing—review & editing, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (No. 61773306).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, K.; Lim, H.B.; Frazzoli, E.; Ji, H.; Lee, V.C.S. Improving positioning accuracy using GPS pseudorange measurements for cooperative vehicular localization. IEEE Trans. Veh. Technol. 2014, 63, 2544–2556.
  2. Sun, G.; Zhou, R.; Xu, K.; Weng, Z.; Zhang, Y.; Dong, Z.; Wang, Y. Cooperative formation control of multiple aerial vehicles based on guidance route in a complex task environment. Chin. J. Aeronaut. 2020, 33, 701–720.
  3. Ahmad, A.; Lawless, G.; Lima, P. An online scalable approach to unified multirobot cooperative localization and object tracking. IEEE Trans. Robot. 2017, 33, 1184–1199.
  4. Roman, R.C.; Precup, R.E.G.; Petriu, E.M. Hybrid data-driven fuzzy active disturbance rejection control for tower crane systems. Eur. J. Control 2021, 58, 373–387.
  5. Chi, R.H.; Li, H.Y.; Shen, D.; Hou, Z.; Huang, B. Enhanced P-type Control: Indirect Adaptive Learning from Set-point Updates. IEEE Trans. Autom. Control 2022.
  6. Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV navigation and localization: A review. IEEE J. Ocean. Eng. 2014, 39, 1184–1199.
  7. Roumeliotis, S.I.; Bekey, G.A. Distributed multirobot localization. IEEE Trans. Robot. Autom. 2002, 18, 781–795.
  8. Shen, Y.; Wymeersch, H.; Win, M.Z. Fundamental limits of wideband localization-part II: Cooperative networks. IEEE Trans. Inf. Theory 2010, 56, 4981–5000.
  9. Zhai, G.; Zhang, J.; Zhou, Z. Coordinated target localization base on pseudo measurement for clustered space robot. Chin. J. Aeronaut. 2013, 26, 1524–1533.
  10. Wymeersch, H.; Lien, J.; Win, M.Z. Cooperative localization in wireless networks. Proc. IEEE 2009, 97, 427–450.
  11. Nguyen, T.V.; Jeong, Y.; Shin, H.; Win, M.Z. Least square cooperative localization. IEEE Trans. Veh. Technol. 2015, 64, 1318–1330.
  12. Ouyang, R.W.; Wong, A.K.S.; Lea, C.T. Received signal strength-based wireless localization via semidefinite programming: Noncooperative and cooperative schemes. IEEE Trans. Veh. Technol. 2010, 59, 1307–1318.
  13. Vaghefi, R.M.; Buehrer, R.M. Cooperative joint synchronization and localization in wireless sensor networks. IEEE Trans. Signal Process. 2015, 63, 3615–3627.
  14. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Mateo, CA, USA, 1988.
  15. Wainwright, M.J.; Jaakkola, T.S.; Willsky, A.S. Tree-based reparameterization framework for analysis of sum-product and related algorithms. IEEE Trans. Inf. Theory 2003, 49, 1120–1146.
  16. Su, Q.; Wu, Y.C. Convergence analysis of the variance in Gaussian belief propagation. IEEE Trans. Veh. Technol. 2014, 62, 5119–5131.
  17. Yuan, W.; Wu, N.; Etzlinger, B.; Wang, H.; Kuang, J. Cooperative joint localization and clock synchronization based on Gaussian message passing in asynchronous wireless networks. IEEE Trans. Veh. Technol. 2016, 65, 7258–7273.
  18. Sudderth, E.B.; Ihler, A.T.; Freeman, W.T.; Willsky, A.S. Nonparametric belief propagation. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; pp. I-605–I-612.
  19. Ihler, A.T.; Fisher, J.W.; Moses, R.L.; Willsky, A.S. Nonparametric belief propagation for self-localization of sensor networks. IEEE J. Sel. Areas Commun. 2005, 23, 809–819.
  20. Ihler, A.T.; Sudderth, E.B.; Freeman, W.T.; Willsky, A. Efficient multiscale sampling from products of Gaussian mixtures. In Proceedings of the 17th Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 13–18 December 2004; pp. 1–8.
  21. Savic, V.; Zazo, S. Reducing communication overhead for cooperative localization using nonparametric belief propagation. IEEE Wirel. Commun. Lett. 2012, 1, 308–311.
  22. Li, Y.; Wang, Y.; Yu, W.; Guan, X. Multiple autonomous underwater vehicle cooperative localization in anchor-free environments. IEEE J. Ocean. Eng. 2019, 44, 895–911.
  23. Garcia-Fernandez, A.F.; Svensson, L.; Sarkka, S. Cooperative localization using posterior linearization belief propagation. IEEE Trans. Veh. Technol. 2018, 67, 832–836.
  24. Meyer, F.; Hlinka, O.; Hlawatsch, F. Sigma point belief propagation. IEEE Signal Process. Lett. 2014, 21, 145–149.
  25. Li, S.; Hedley, M.; Collings, I.B. New efficient indoor cooperative localization algorithm with empirical ranging error model. IEEE J. Sel. Areas Commun. 2015, 33, 1407–1417.
  26. Georges, H.M.; Xiao, Z.; Wang, D. Hybrid cooperative vehicle positioning using distributed randomized sigma point belief propagation on non-Gaussian noise distribution. IEEE Sens. J. 2016, 16, 7803–7813.
  27. Kschischang, F.R.; Frey, B.J.; Loeliger, H.A. Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory 2001, 47, 498–519.
  28. Barooah, P.; Russell, W.J.; Hespanha, J.P. Approximate distributed Kalman filtering for cooperative multi-agent localization. In Proceedings of the 6th IEEE International Conference on Distributed Computing in Sensor Systems, Santa Barbara, CA, USA, 21–23 June 2010; pp. 102–115.
  29. Li, X.R.; Zhu, Y.M.; Wang, J.; Han, C. Optimal linear estimation fusion—Part I: Unified fusion rules. IEEE Trans. Inf. Theory 2003, 49, 2192–2208.
  30. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software; Wiley: New York, NY, USA, 2001.
  31. Gao, Y.X.; Li, X.R. Optimal linear fusion of smoothed state estimates. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1236–1248.
  32. Li, X.R. Recursibility and optimal linear estimation and filtering. In Proceedings of the 43rd IEEE Conference on Decision and Control, Nassau, The Bahamas, 14–17 December 2004; pp. 1761–1766.
Figure 1. Factor graph of the joint pdf $f(X_{0:k} | Z_{1:k})$. For brevity, only the part related to $x_k^1$ is drawn in detail, and the factor nodes are represented by abbreviations, where $f_k^i = f(z_k^i | x_k^i, x_{k-1}^i) f(x_k^i | x_{k-1}^i)$ and $f_k^{i,j} = f(z_k^{i,j}, z_k^{j,i} | x_k^i, x_k^j)$. The arrows indicate the directions of message propagation.
Figure 2. Factor graphs.
Figure 3. Absolute and relative position RMSEs in chain topology. (a) Absolute position RMSE. (b) Relative position RMSE.
Figure 4. Absolute and relative position RMSEs in fully connected topology. (a) Absolute position RMSE. (b) Relative position RMSE.
Table 1. Relative computation time for the chain topology.
BP        AS-BP (1-Step Retrodiction)   AS-BP (2-Step Retrodiction)   AS-BP (3-Step Retrodiction)
1.0000    1.6697                        2.2094                        2.8575
Table 2. Relative computation time for the fully connected topology.
BP        AS-BP (1-Step Retrodiction)   AS-BP (2-Step Retrodiction)   AS-BP (3-Step Retrodiction)
1.0000    2.1122                        3.7309                        5.4305