1. Introduction
The problem of distributed estimation for linear dynamic systems via the cooperation and coordination of multiple fixed/mobile agents has been extensively explored by numerous researchers [1,2,3,4,5]. Distributed Kalman filtering has been broadly utilized as a powerful and efficient tool for this scenario [1,6,7,8,9,10]. An essential problem involving the cooperation of multiagent systems that has attracted much attention is the
consensus problem. In order to reconstruct the states of the dynamic system, a network of sensing agents is adopted; these may form either a homogeneous or a heterogeneous network. Each agent transmits state estimates to its immediate neighbors based on the communication topology. Through the consensus strategy, all agents eventually agree on a common estimate of the states of the dynamic system.
The early formal study of consensus problems [11] formed the basis of distributed computing [12], which has found wide utilization in sensor network applications [13,14]. The dynamic consensus problem appears frequently in the cooperation and coordination of multiagent systems, including scenarios such as formation control [5,9,15], self-alignment, flocking [4,16], and distributed filtering [1,17]. The typical consensus protocol and its performance analysis were first introduced by Olfati-Saber and Murray in the continuous-time model (see [3,7]). In [7], the authors considered a cluster of first-order integrators working cooperatively under the average consensus control algorithm, in which each agent ultimately agrees on a common value, namely, the average of the initial states of all agents. In [1], a distributed Kalman filtering (DKF) algorithm was proposed in which data fusion was achieved through dynamic consensus protocols [18]. Later, in [17], the same author extended the results of [1] to use two identical consensus filters for sensor fusion with different observation matrices, then presented an alternative distributed Kalman filtering algorithm [1] which applies consensus to the state estimates. This idea forms the foundation of the present paper, in which we propose an adaptive Kalman consensus filtering algorithm. In [9], the authors presented a different view towards designing consensus protocols based on the Kalman filter structure. By adjusting time-varying consensus gains, [9] proved that consensus can be achieved asymptotically under a no-noise condition. In addition, graph theory [7,19,20] has been adopted to construct the communication topology among distributed agents. In this paper, we assume a fixed topology; however, it need not be an all-to-all connection. This means that each agent is not required to communicate with every other agent, which is more practical in real-world applications.
Recently, many extensions of consensus protocols have been explored to improve the convergence rate of dynamic systems among cooperative agents, including communication topology design [21,22], optimal consensus-based estimation algorithms [2,23,24], and adaptive consensus algorithms [25,26,27] in both continuous-time and discrete-time settings. Other extensions for system control purposes aim to achieve finite-time consensus among agents using methods such as event-triggered and sliding-mode control [28,29]. These results have been incorporated into Kalman filtering algorithms [30,31], although the complexity of those algorithms makes practical implementation challenging, particularly when compared to the adaptive weight parameter method proposed in this paper.
The main contribution of this paper is to derive an adaptive Kalman consensus filtering (aKCF) strategy in a continuous-time model and analyze its stability and convergence properties. Extensive simulation results demonstrate the improved effectiveness of aKCF compared with the previous work of Olfati-Saber [17], along with a faster convergence rate of the estimation error when the consensus gains change adaptively based on the disagreement of the filters.
The remainder of this paper is organized as follows. In Section 2, we provide preliminaries on algebraic graph theory [20], which is the basis of the consensus strategy. In Section 3, we provide a retrospective view of the previous work of Olfati-Saber on the Kalman consensus filtering algorithm, upon which our analysis relies. In Section 4, we present the main results of this paper, namely, the derivation of the adaptive Kalman consensus filtering algorithm; the purpose is to adaptively adjust the consensus gain, i.e., the weight applied to the disagreement terms, in order to improve the convergence of the estimation error. Simulation results are presented in Section 5, and we conclude our work in Section 6.
The following notation is used throughout this paper: ${\mathbb{R}}^{n}$ and ${\mathbb{R}}^{m\times n}$ denote the set of n-dimensional real vectors and the set of all $m\times n$ real matrices, respectively. ${I}_{m}$ denotes the identity matrix of dimension $m\times m$. For a given vector or matrix A, ${A}^{T}$ represents its transpose. For a given square matrix A, $tr\left(A\right)$ denotes its trace, and the norm of A is defined by $‖A‖=\sqrt{tr\left({A}^{T}A\right)}$. Finally, $f\sim \mathcal{N}(a,\sigma )$ indicates a random variable f that follows a Gaussian distribution with mean a and variance $\sigma $.
2. Problem Statement
Consider a continuous-time dynamic system of the following form:
$\dot{x}\left(t\right)=Ax\left(t\right)+Bw\left(t\right),\phantom{\rule{1.em}{0ex}}x\left(0\right)={x}_{0},$
(1)
with m states measured by n distributed agents via sensing devices and local filters. Here, $x\left(t\right)\in {\mathbb{R}}^{m}$ denotes the states of this dynamic system, $A\in {\mathbb{R}}^{m\times m}$ represents the dynamical matrix, $w\left(t\right)$ represents the white Gaussian system noise, which is distributed by the matrix B and has zero mean and covariance $Q\left(t\right)=E\left[w\left(t\right)w{\left(t\right)}^{T}\right]$, and ${x}_{0}$ is the initial guess for the states of the dynamic system with error covariance ${P}_{0}$. The sensing capability of each agent is determined by the following equation:
${y}_{i}\left(t\right)={C}_{i}x\left(t\right)+{v}_{i}\left(t\right),$
(2)
with the measurement matrix ${C}_{i}\in {\mathbb{R}}^{\ell \times m}$. The sensing devices are not able to measure all the states of the system; thus, only partial information is available to the local filters for state estimation. Here,
${v}_{i}\left(t\right)$ represents the white Gaussian measurement noise of the ith agent, with zero mean and covariance matrix ${R}_{ij}\left(t\right)=E\left[{v}_{i}\left(t\right){v}_{j}{\left(t\right)}^{T}\right]$. Throughout this paper, we assume that there is no noise coupling effect among the agents; therefore, we can take both $Q\left(t\right)$ and ${R}_{ij}\left(t\right)$ as diagonal matrices. The
main purpose of this paper is to design an adaptive state estimation structure for each agent such that the state estimation of individuals can be exchanged among their immediate neighbors through the communication topology
$\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{A})$.
2.1. Graph Theory Preliminaries
We consider n distributed agents with inconsistent local information working cooperatively through a communication network/topology characterized by a weighted graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A})$, where $\mathcal{V}=\{1,2,\dots ,n\}$ represents the set of agents, $\mathcal{E}=\left[{\epsilon}_{ij}\right]$ denotes the set of edges in $\mathcal{G}$, and ${\epsilon}_{ij}$ represents the communication link of the ordered pair $(j,i)$. An edge $(j,i)\in \mathcal{E}$ means that the ith agent can receive information from the jth agent, but not vice versa. In such cases, we call j a neighbor of i and denote ${\mathcal{N}}_{i}=\{j\in \mathcal{V}:(j,i)\in \mathcal{E}\}$ as the set of neighbors of the ith agent. If $(j,i)\notin \mathcal{E}$, there is no communication link from the jth agent to the ith agent.
Let $\mathcal{A}=\left[{a}_{ij}\right]\in {\mathbb{R}}^{n\times n}$ represent the adjacency matrix of $\mathcal{G}$, defined as ${a}_{ij}=1$ if $i\ne j$ and $j\in {\mathcal{N}}_{i}$; otherwise, ${a}_{ij}=0$.
We define the in-degree of the ith agent as $de{g}_{in}\left(i\right)={\sum}_{j=1}^{n}{a}_{ij}$ and the out-degree of the ith agent as $de{g}_{out}\left(i\right)={\sum}_{j=1}^{n}{a}_{ji}$; then, the graph Laplacian of $\mathcal{G}$ can be defined as $L=\mathsf{\Delta}-\mathcal{A}$, where $\mathsf{\Delta}=diag(de{g}_{in}\left(1\right),\dots ,de{g}_{in}\left(n\right))\in {\mathbb{R}}^{n\times n}$.
One important property of L (see [19] for more details) is that all of its eigenvalues are nonnegative and at least one of them is zero. If we denote ${\lambda}_{i}\left(L\right)$ as the ith eigenvalue of L, then the following relation is valid for any graph $\mathcal{G}$: $0={\lambda}_{1}\left(L\right)\le {\lambda}_{2}\left(L\right)\le \dots \le {\lambda}_{n}\left(L\right)$ [32].
In the special case where the in-degree of every agent satisfies $de{g}_{in}\left(i\right)=n-1,\ \forall i=1,\dots ,n$, we call the graph $\mathcal{G}$ an all-to-all connected topology. In such cases, the graph Laplacian L is a symmetric positive-semidefinite matrix with only one zero eigenvalue.
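These definitions can be made concrete in a few lines (a minimal numerical sketch with a hypothetical 4-agent digraph; NumPy assumed):

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-agent directed graph:
# a_ij = 1 means agent i receives information from agent j.
A_adj = np.array([
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

deg_in = A_adj.sum(axis=1)       # in-degree: row sums of the adjacency matrix
deg_out = A_adj.sum(axis=0)      # out-degree: column sums
L = np.diag(deg_in) - A_adj      # graph Laplacian L = Delta - A

# Every row of L sums to zero, so L always has a zero eigenvalue
# with eigenvector (1, ..., 1)^T.
assert np.allclose(L.sum(axis=1), 0)

# All-to-all special case: deg_in(i) = n - 1 for every agent.
n = 4
A_full = np.ones((n, n)) - np.eye(n)
L_full = np.diag(A_full.sum(axis=1)) - A_full
eigs = np.sort(np.linalg.eigvalsh(L_full))   # symmetric => real eigenvalues
# Exactly one zero eigenvalue; the remaining eigenvalues equal n.
assert np.isclose(eigs[0], 0) and np.allclose(eigs[1:], n)
```

In the all-to-all case the Laplacian is $nI-J$ (with J the all-ones matrix), which makes the single zero eigenvalue and the positive semidefiniteness explicit.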
2.2. Consensus Protocols
Several types of consensus protocols have been explored and utilized in many different scenarios over the years. In this paper, we mainly adopt the “consensus in network” strategy mentioned in [17].
Using the background information provided in Section 2.1 and assuming that there are n integrator agents working cooperatively with dynamics ${\dot{x}}_{i}={u}_{i}$, the “consensus in network” strategy forces those agents to reach an agreement on their states by ${\dot{x}}_{i}\left(t\right)={\sum}_{j\in {\mathcal{N}}_{i}}{a}_{ij}({x}_{j}\left(t\right)-{x}_{i}\left(t\right))$.
This distributed dynamic structure shows that the ith agent updates its state by penalizing the state disagreement between its immediate neighbors $j\in {\mathcal{N}}_{i}$ and itself in order to ensure that all n agents finally agree on a common value as their ultimate state. Via this protocol, which [17] proved to be stable and convergent, consensus is achieved through communication and cooperation among the agents.
Furthermore, we can explore the collective dynamics of the individual agents in the graph $\mathcal{G}$, which can be expressed as $\dot{x}\left(t\right)=-Lx\left(t\right)$, where L is the graph Laplacian introduced in Section 2.1.
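A minimal numerical sketch of the collective dynamics $\dot{x}\left(t\right)=-Lx\left(t\right)$ (an assumed 4-agent all-to-all topology; for such a balanced graph, the protocol drives every state to the average of the initial states):

```python
import numpy as np

n = 4
A_adj = np.ones((n, n)) - np.eye(n)        # all-to-all topology
L = np.diag(A_adj.sum(axis=1)) - A_adj     # graph Laplacian

x = np.array([1.0, -2.0, 5.0, 0.0])        # initial agent states
avg = x.mean()                             # expected consensus value

dt, T = 0.01, 10.0                         # explicit Euler integration of xdot = -L x
for _ in range(int(T / dt)):
    x = x + dt * (-L @ x)

# All agents converge to the average of the initial states.
assert np.allclose(x, avg, atol=1e-6)
```

The disagreement modes decay at rates given by the nonzero Laplacian eigenvalues, while the zero eigenvalue preserves the average, which is exactly the average-consensus behavior described in [7].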
3. Kalman Consensus Filtering Algorithm
In this section, we provide a retrospective view of the previous work of Olfati-Saber [17], in which the author presented a continuous-time distributed Kalman filtering strategy that does not rely on consensus filters for the measurements but instead enforces consensus directly on the state estimates. We treat this work as the fundamental background for designing our adaptive Kalman consensus filters.
In [17], the author considered n distributed agents working cooperatively on the common task of estimating the state of a linear system defined in Equation (1), with the sensing capability defined as in Equation (2). Each agent shares its instantaneous state estimates with its immediate neighbors ${\mathcal{N}}_{i}$ through the communication topology $\mathcal{G}$. Olfati-Saber [17] proposed that each agent apply the following distributed estimation algorithm:
${\dot{\widehat{x}}}_{i}=A{\widehat{x}}_{i}+{K}_{i}({y}_{i}-{C}_{i}{\widehat{x}}_{i})+\gamma {P}_{i}{\sum}_{j\in {\mathcal{N}}_{i}}({\widehat{x}}_{j}-{\widehat{x}}_{i}),\phantom{\rule{1.em}{0ex}}{K}_{i}={P}_{i}{C}_{i}^{T}{R}_{i}^{-1},\phantom{\rule{1.em}{0ex}}{\dot{P}}_{i}=A{P}_{i}+{P}_{i}{A}^{T}+BQ{B}^{T}-{K}_{i}{R}_{i}{K}_{i}^{T},$
(3)
with initial conditions ${P}_{i}\left(0\right)={P}_{0}$ and ${\widehat{x}}_{i}\left(0\right)=x\left(0\right)$.
Through this distributed algorithm, it was shown that the estimation errors ${e}_{i}={\widehat{x}}_{i}-x$ converge to zero through analysis of the Lyapunov function $V\left(e\right)={\sum}_{i=1}^{n}{e}_{i}^{T}{P}_{i}^{-1}{e}_{i}$ (the noise-free assumption is made to ease the complexity of the proof; later work showed that the stability and convergence properties remain valid even when process and measurement noise are present). Furthermore, all agents with the estimator structure in Equation (3) asymptotically agree with each other on their state estimates, which match the true states of the linear system in Equation (1); that is, ${\widehat{x}}_{1}=\dots ={\widehat{x}}_{n}=x$. See the proof in [17] for more detail.
4. Adaptive Kalman Consensus Filtering
In this section, we present the main contribution of this paper, in which we add an adaptation mechanism to the Kalman consensus filtering algorithm from Section 3. We name this adaptive Kalman consensus filtering, or aKCF. The proposed estimation structure of the ith agent, with adaptive gain ${\gamma}_{ij}\left(t\right)$ on the consensus term, is as follows:
${\dot{\widehat{x}}}_{i}\left(t\right)={A}_{i}{\widehat{x}}_{i}\left(t\right)+{K}_{i}{y}_{i}\left(t\right)+{W}_{i}{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}\left(t\right)({\widehat{x}}_{j}\left(t\right)-{\widehat{x}}_{i}\left(t\right)),$
(4)
where ${\widehat{x}}_{i}\left(t\right)$ represents the state estimate of the ith agent, ${A}_{i}=A-{K}_{i}{C}_{i}$, and ${\gamma}_{ij}\left(t\right)$ is the scalar adaptive gain on the estimation difference between the ith agent and its neighbors. The consensus weighting matrix ${W}_{i}$ is designed based on Lyapunov stability analysis, as detailed in the following part.
Considering a scenario in which no noise affects the error dynamics, as proposed in [17], we can derive the local error dynamics of the ith agent in a similar fashion, using the fact that ${\widehat{x}}_{j}\left(t\right)-{\widehat{x}}_{i}\left(t\right)={e}_{j}\left(t\right)-{e}_{i}\left(t\right)$, as shown below:
${\dot{e}}_{i}\left(t\right)={A}_{i}{e}_{i}\left(t\right)+{W}_{i}{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}\left(t\right)({e}_{j}\left(t\right)-{e}_{i}\left(t\right)).$
(5)
The basic idea for seeking the adaptive gain ${\gamma}_{ij}\left(t\right)$ involves consideration of two aspects:
Lemma 1. The proposed distributed estimator structure of the ith agent with adaptive gains ${\gamma}_{ij},\forall j\in {\mathcal{N}}_{i}$ for each disagreement term is provided by Equation (4). By analyzing the Lyapunov function ${V}_{i}={e}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{e}_{i}+{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}^{2}$, where ${\mathsf{\Pi}}_{i}$ is a symmetric positive definite matrix, the local adaptation law can be derived by satisfying the stability conditions on both ${V}_{i}$ and ${\dot{V}}_{i}$; it turns out to be ${\dot{\gamma}}_{ij}=-{\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({\widehat{x}}_{j}-{\widehat{x}}_{i})$. Proof.
We consider the following local Lyapunov function for the ith agent based on Lyapunov redesign methods:
${V}_{i}={e}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{e}_{i}+{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}^{2},$
(6)
where ${\mathsf{\Pi}}_{i}$ is a symmetric positive definite solution of the following Lyapunov equation:
${A}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}+{\mathsf{\Pi}}_{i}^{-1}{A}_{i}=-{\mathsf{\Omega}}_{i},$
(7)
with the assumption that ${\mathsf{\Omega}}_{i}$ is a symmetric positive definite matrix and all eigenvalues of the matrix ${A}_{i}$ have negative real parts.
Because ${A}_{i}=A-{K}_{i}{C}_{i}$, there are many ways to design ${K}_{i}$ in order to satisfy the aforementioned assumption. One possible way is to use the pole placement method to choose ${K}_{i}$ such that ${A}_{i}$ generates an exponentially stable local system.
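This design step can be sketched numerically (a sketch assuming SciPy; the desired pole locations are an illustrative choice, and the matrices are borrowed from the example in Section 5):

```python
import numpy as np
from scipy.signal import place_poles
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[2.0, 1.0], [0.0, 1.0]])    # dynamical matrix (Section 5 example)
C1 = np.array([[1.0, 0.0]])               # agent 1 measurement matrix

# Observer-gain pole placement: by duality, placing the poles of
# A^T - C^T F places the poles of A_i = A - K_i C_i with K_i = F^T.
desired_poles = [-2.0, -3.0]              # assumed illustrative choice
K1 = place_poles(A.T, C1.T, desired_poles).gain_matrix.T
A1 = A - K1 @ C1
assert np.all(np.linalg.eigvals(A1).real < 0)   # A_i is Hurwitz

# Solve the Lyapunov equation A_i^T X + X A_i = -Omega_i for X = Pi_i^{-1}.
Omega1 = np.eye(2)
X = solve_continuous_lyapunov(A1.T, -Omega1)
assert np.all(np.linalg.eigvalsh(X) > 0)        # X is symmetric positive definite

Pi1 = np.linalg.inv(X)
W1 = Pi1 @ C1.T @ C1                            # weighted consensus matrix
```

Since ${A}_{i}$ is Hurwitz and ${\mathsf{\Omega}}_{i}\succ 0$, the Lyapunov solution is guaranteed positive definite, which yields a valid ${\mathsf{\Pi}}_{i}$ and hence ${W}_{i}$.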
Now, the derivative of ${V}_{i}$ is provided by ${\dot{V}}_{i}={\dot{e}}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{e}_{i}+{e}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{\dot{e}}_{i}+2{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}{\dot{\gamma}}_{ij}$. By substituting ${\dot{e}}_{i}$ from Equation (5) into Equation (6), we obtain the following expression:
${\dot{V}}_{i}={e}_{i}^{T}({A}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}+{\mathsf{\Pi}}_{i}^{-1}{A}_{i}){e}_{i}+2{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}{e}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{W}_{i}({e}_{j}-{e}_{i})+2{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}{\dot{\gamma}}_{ij}.$
(8)
In order to further simplify Equation (8), we can choose the weighted consensus matrix ${W}_{i}={\mathsf{\Pi}}_{i}{C}_{i}^{T}{C}_{i}$. Thus, the expression for ${\dot{V}}_{i}$ becomes
${\dot{V}}_{i}=-{e}_{i}^{T}{\mathsf{\Omega}}_{i}{e}_{i}+2{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}{\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({e}_{j}-{e}_{i})+2{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}{\dot{\gamma}}_{ij}.$
(9)
Therefore, by setting the sum of the last two terms in Equation (9) equal to zero, we can derive the adaptation law shown below:
${\dot{\gamma}}_{ij}=-{\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({e}_{j}-{e}_{i}).$
(10)
Therefore, using the fact that ${e}_{j}-{e}_{i}={\widehat{x}}_{j}-{\widehat{x}}_{i}$, the local adaptation law is now provided by
${\dot{\gamma}}_{ij}=-{\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({\widehat{x}}_{j}-{\widehat{x}}_{i}).$
(11)
□
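The cancellation enabled by the choice ${W}_{i}={\mathsf{\Pi}}_{i}{C}_{i}^{T}{C}_{i}$, namely ${e}_{i}^{T}{\mathsf{\Pi}}_{i}^{-1}{W}_{i}({e}_{j}-{e}_{i})={\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({e}_{j}-{e}_{i})$, can be verified numerically (a sketch with random data; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(42)

m, ell = 3, 2
Ci = rng.standard_normal((ell, m))        # arbitrary measurement matrix

# Any symmetric positive definite Pi_i (random SPD construction).
M = rng.standard_normal((m, m))
Pi = M @ M.T + m * np.eye(m)

Wi = Pi @ Ci.T @ Ci                       # proposed consensus weighting matrix

ei = rng.standard_normal(m)
ej = rng.standard_normal(m)

# Pi^{-1} cancels the Pi inside W_i, leaving only measurable quantities.
lhs = ei @ np.linalg.inv(Pi) @ Wi @ (ej - ei)
rhs = (Ci @ ei) @ (Ci @ (ej - ei))
assert np.isclose(lhs, rhs)
```

This is precisely what makes the cross terms in $\dot{V}_i$ cancellable by the adaptation law using only the available signals ${C}_{i}{e}_{i}$ and ${C}_{i}({\widehat{x}}_{j}-{\widehat{x}}_{i})$.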
Remark 1. The choice of the structure of ${W}_{i}$ is not necessarily unique; it is only necessary that it cancel the ${\mathsf{\Pi}}_{i}^{-1}$ term in Equation (8), together with an arbitrary matrix ${Y}_{i}$ of appropriate dimension such that ${Y}_{i}{e}_{i}$ and ${Y}_{i}{e}_{j}$ are available signals for the estimation process. Here, we choose ${Y}_{i}={C}_{i}$ for simplicity. Furthermore, if we examine ${\dot{V}}_{i}$ under the adaptation law, we find that
${\dot{V}}_{i}=-{e}_{i}^{T}{\mathsf{\Omega}}_{i}{e}_{i}\le 0.$
However, we can only argue that ${e}_{i}$ and ${\gamma}_{ij}$ remain bounded. This means that we need additional arguments to guarantee that ${e}_{i}$ converges to zero, as ${\dot{V}}_{i}$ is only negative semidefinite. Thus, in order to show that each individual error dynamic asymptotically converges to zero, we need to examine the collective error dynamics of all agents.
Let us denote $\mathbb{E}={\left[\begin{array}{cccc}{e}_{1}& {e}_{2}& \dots & {e}_{n}\end{array}\right]}^{T}$ as the collective error vector; now, we can write the collective error dynamics in the following form (without noise):
where ${A}_{i}=A-{K}_{i}{C}_{i}$ and
Remark 2. The row sum of Γ is zero; furthermore, the sum of the off-diagonal entries in each row is equal to the negative of the corresponding diagonal entry. The operator ⊗ denotes the Kronecker product, while ${I}_{m}$ denotes an $m\times m$ identity matrix.
Theorem 1. Consider a wireless sensor network consisting of n agents which form a communication topology $\mathcal{G}(\mathcal{V},\mathcal{E},\mathcal{A})$. Each agent adaptively estimates the states of a linear dynamic system with the structure governed by Equation (4) using the local adaptation law in Equation (11), and each ${A}_{i}=A-{K}_{i}{C}_{i}$ satisfies the Lyapunov equation in Equation (7). Then, the collective dynamics of the estimation errors in Equation (14) (without noise) represent a stable system. Furthermore, the adaptive Kalman consensus filter (aKCF) and the adaptation law can be implemented through Algorithm 1, which generates a stable estimation system. Proof.
The error dynamics can now be written as
where
and
$\mathsf{\Gamma}$ is defined as in Equation (
15).
We now can write the collective Lyapunov function as follows:
We denote
$\mathbb{D}=diag({\mathsf{\Pi}}_{1},...,{\mathsf{\Pi}}_{n})$, where each entry in
$\mathbb{D}$ is a symmetric positive definite solution of Equation (
7) with respect to a different index
i.
Now, we can use the collective error dynamics to represent
V in Equation (
18) as follows:
Then, we take the derivative of
V and combine it with the adaptation law in Equation (
11), resulting in the following expression:
where
$\mathsf{\Omega}=diag({\mathsf{\Omega}}_{1},...,{\mathsf{\Omega}}_{n})$ and each entry of
$\mathsf{\Omega}$ must satisfy Equation (
7) for the corresponding
ith index.
Therefore, we can argue that the collective error dynamics
$\mathbb{E}$ and
$\mathsf{\Gamma}$ from Equation (
20) are bounded; thus, we can say that
$\mathbb{E}\in {L}_{2}\cap {L}_{\infty}$. From Equation (
16), it can be concluded that
$\dot{\mathbb{E}}$ is bounded as well, that is,
$\dot{\mathbb{E}}\in {L}_{\infty}$. Therefore, using Barbalat’s Lemma, we can write
$\mathbb{E}\to 0$ as
$t\to \infty $; in other words, the error dynamics of each agent asymptotically converge to zero. □
Algorithm 1 Adaptive Kalman Consensus Filter
1: Initialize: ${\widehat{x}}_{i}\left(0\right)=x\left(0\right)$, ${\widehat{P}}_{0}={P}_{0}$, ${e}_{i}\left(0\right)=e\left(0\right)$.
2: Compute the weighted consensus matrix ${W}_{i}={\mathsf{\Pi}}_{i}{C}_{i}^{T}{C}_{i}$, where ${\mathsf{\Pi}}_{i}$ is a symmetric positive definite matrix that satisfies Equation (7).
3: Compute the Kalman gain ${K}_{i}={P}_{i}{C}_{i}^{T}{R}_{i}^{-1}$, chosen such that ${A}_{i}=A-{K}_{i}{C}_{i}$ generates an exponentially stable local system.
4: Compute the adaptive gain: ${\dot{\gamma}}_{ij}=-{\left({C}_{i}{e}_{i}\right)}^{T}{C}_{i}({\widehat{x}}_{j}-{\widehat{x}}_{i})$
5: Compute the estimated state: ${\dot{\widehat{x}}}_{i}\left(t\right)={A}_{i}{\widehat{x}}_{i}\left(t\right)+{K}_{i}{y}_{i}\left(t\right)+{W}_{i}{\sum}_{j\in {\mathcal{N}}_{i}}{\gamma}_{ij}\left(t\right)({\widehat{x}}_{j}\left(t\right)-{\widehat{x}}_{i}\left(t\right))$
6: return ${\widehat{x}}_{i}$
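Algorithm 1 can be sketched with explicit Euler integration. The following is an illustrative, noise-free setup with assumed values (not the paper's exact configuration): a stable A, three agents in an all-to-all topology, the simple stabilizing choice ${K}_{i}={C}_{i}^{T}$, and fixed gains in place of the Riccati dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# --- illustrative noise-free setup (assumed values) ---
A = np.array([[-2.0, 1.0], [0.0, -1.0]])
Cs = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[0.5, 0.5]])]
n, m = 3, 2
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # all-to-all topology

# Steps 2-3: gains K_i = C_i^T (stabilizing here) and W_i = Pi_i C_i^T C_i.
Ks, As, Ws = [], [], []
for Ci in Cs:
    Ki = Ci.T
    Ai = A - Ki @ Ci
    assert np.all(np.linalg.eigvals(Ai).real < 0)     # A_i is Hurwitz
    X = solve_continuous_lyapunov(Ai.T, -np.eye(m))   # X = Pi_i^{-1}, Eq. (7)
    Ks.append(Ki)
    As.append(Ai)
    Ws.append(np.linalg.inv(X) @ Ci.T @ Ci)

# Step 1: initialize the true state, the estimates, and the adaptive gains.
x = np.array([1.0, -1.0])
e0 = (np.array([0.6, 0.0]), np.array([2.0, 1.0]), np.array([2.0, 0.5]))
xh = [x + d for d in e0]
gamma = {(i, j): 0.1 for i in range(n) for j in neighbors[i]}

dt, T = 0.001, 20.0
for _ in range(int(T / dt)):
    ys = [Ci @ x for Ci in Cs]            # measurements (no noise)
    es = [xh[i] - x for i in range(n)]    # estimation errors
    new_xh, new_gamma = [], {}
    for i in range(n):
        cons = sum(gamma[(i, j)] * (xh[j] - xh[i]) for j in neighbors[i])
        # Step 5: state estimate update, Equation (4).
        new_xh.append(xh[i] + dt * (As[i] @ xh[i] + Ks[i] @ ys[i] + Ws[i] @ cons))
        # Step 4: adaptation law, Equation (11).
        for j in neighbors[i]:
            dg = -(Cs[i] @ es[i]) @ (Cs[i] @ (xh[j] - xh[i]))
            new_gamma[(i, j)] = gamma[(i, j)] + dt * dg
    x = x + dt * (A @ x)                  # true system dynamics, Equation (1)
    xh, gamma = new_xh, new_gamma

# The estimation errors of all agents decay towards zero (Theorem 1).
assert all(np.linalg.norm(xh[i] - x) < 1e-2 for i in range(n))
```

In line with Theorem 1, the errors of all three filters decay toward zero while the adaptive gains remain bounded; in this sketch, the true error $e_i$ is available to the adaptation law because the simulation propagates the true state alongside the filters.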
Special case: when the adaptive gain ${\gamma}_{ij}={\gamma}_{i}$
If we assume that the adaptive gains ${\gamma}_{ij}$ for all neighbors of the ith agent are identical, i.e., ${\gamma}_{ij}={\gamma}_{i},\forall j\in {\mathcal{N}}_{i}$, then we can take the ${\gamma}_{i}$ term out of the summation in Equation (4); in this manner, we can further simplify the structure of $\mathsf{\Gamma}$ in Equation (15) to $\mathsf{\Gamma}=diag({\gamma}_{1},\dots ,{\gamma}_{n})$. Therefore, the error dynamics in Equation (16) change to
where
L is the graph Laplacian of the communication topology
$\mathcal{G}$.
Therefore, we can rewrite the adaptation law in this special case as follows:
Remark 3. A careful examination of the above analysis reveals that it does not directly address the agreement evolution of our adaptive Kalman consensus filters. Thus, it is necessary to show the convergence of the individual aKCFs to a common value, which implies that each agent agrees with the others on their estimates. To this end, we define a quantity that measures the disagreement level of the proposed aKCFs.
Lemma 2. In a similar manner to [17], we define the measure of disagreement of the n aKCFs as follows:
${\delta}_{i}\left(t\right)={\widehat{x}}_{i}\left(t\right)-\mu \left(t\right),$
(23)
where $\mu \left(t\right)=\frac{1}{n}{\sum}_{i=1}^{n}{\widehat{x}}_{i}\left(t\right)$ is the mean estimate of the n aKCFs. Therefore, ${\delta}_{i}$ represents the deviation of each agent’s estimate from the mean estimate. Furthermore, by examining the disagreement dynamics of Equation (23), it can be proved that the disagreement converges to zero asymptotically. Proof.
The measure of disagreement can be transformed into the following expression:
Then, we can write the collective dynamics of ${\delta}_{i}\left(t\right)$ as follows:
It can immediately be recognized that the convergence of $\delta \left(t\right)$ is related to the convergence of $\mathbb{E}\left(t\right)$. We have proved that $\mathbb{E}\left(t\right)$ converges to zero asymptotically; however, due to the zero eigenvalue of the graph Laplacian L, Equation (24) does not directly imply that $\delta \left(t\right)$ converges to zero. Therefore, we need an alternative approach to prove this lemma.
Let us define ${\widehat{x}}_{ij}={\widehat{x}}_{i}-{\widehat{x}}_{j}$ as the estimate difference between the ith and jth agents and denote ${e}_{ij}={e}_{i}-{e}_{j}$ as the estimation error difference between the ith and jth agents. It is immediately clear that ${e}_{ij}={\widehat{x}}_{ij}$. Now, we can rewrite Equation (23) into the following form:
${\delta}_{i}\left(t\right)=\frac{1}{n}{\sum}_{j=1}^{n}{\widehat{x}}_{ij}\left(t\right)=\frac{1}{n}{\sum}_{j=1}^{n}{e}_{ij}\left(t\right).$
(26)
From Equation (26), it can be argued that if we are able to prove that the dynamics of ${\widehat{x}}_{ij}$ or ${e}_{ij}$ converge to zero, then we can conclude that ${\delta}_{i}$ converges to zero as $t\to \infty $.
For simplification, consider the special case when
${\gamma}_{ij}={\gamma}_{i}$. According to Equation (
5), we can find the dynamics of
${e}_{ij}$ as follows:
In Section 4, we showed that $A-{K}_{i}{C}_{i}$ generates an exponentially stable linear dynamic system for the ith agent, while both ${\gamma}_{i}{W}_{i}$ and ${\gamma}_{j}{W}_{j}$, along with ${K}_{j}{C}_{j}-{K}_{i}{C}_{i}$, are bounded quantities. In the proof of Theorem 1, we concluded that $\mathbb{E}\in {L}_{2}\cap {L}_{\infty}$; hence, ${e}_{j}\in {L}_{2}\cap {L}_{\infty}$. To complete this proof, we need to prove that both ${e}_{ki}$ and ${e}_{lj}$ are ${L}_{2}$ bounded.
We now proceed to study the ${L}_{2}$ norm of ${e}_{ij}$, which is denoted as ${\int}_{0}^{\infty}{‖{e}_{j}-{e}_{i}‖}^{2}d\tau $; then,
From the above, it can be proved that ${e}_{ij}\in {L}_{2}\cap {L}_{\infty}$; therefore, we can conclude that ${e}_{ij}$ converges to zero as $t\to \infty $. Furthermore, ${\delta}_{i}\left(t\right)$ in Equation (26) converges to zero asymptotically. □
5. An Example
Consider the linear dynamic system in Equation (1) with $A=\left[\begin{array}{cc}2& 1\\ 0& 1\end{array}\right]$, and assume that there are three agents distributed in a 2D domain. Each agent embeds a sensing device with measurement capability ${C}_{i}$, namely, ${C}_{1}=\left[\begin{array}{cc}1& 0\end{array}\right],\phantom{\rule{1.em}{0ex}}{C}_{2}=\left[\begin{array}{cc}0& 1\end{array}\right],\phantom{\rule{1.em}{0ex}}{C}_{3}=\left[\begin{array}{cc}0.5& 0\end{array}\right]$.
It is clear that, while the system itself is not observable by the individual agents, it is observable through all of them collectively. In this simulation, we compare the performance and effectiveness of the Kalman consensus filtering algorithm proposed by Olfati-Saber in [17] with our adaptive Kalman consensus filtering algorithm from Theorem 1.
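The observability claim can be checked numerically with the matrices given above (a sketch; `obsv_rank` is a helper introduced here for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 1.0]])
C2 = np.array([[0.0, 1.0]])                              # agent 2 alone
C_all = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.0]])   # all three agents stacked

def obsv_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(m-1)]."""
    m = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in range(m)]
    return np.linalg.matrix_rank(np.vstack(blocks))

assert obsv_rank(A, C2) == 1        # agent 2 alone cannot observe the full state
assert obsv_rank(A, C_all) == 2     # the network collectively can
```

For instance, agent 2 only senses the second state, so its local observability matrix is rank-deficient, while the stacked measurement matrix of the whole network has full rank.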
We set up our simulation with the following parameter configuration:
In both cases, we assume the same initial conditions for the error dynamics, namely, ${e}_{1}\left(0\right)={[0.6,0]}^{T}$, ${e}_{2}\left(0\right)={[2,1]}^{T}$, and ${e}_{3}\left(0\right)={[2,0.5]}^{T}$. In order to construct a fair comparison between the KCF and aKCF algorithms, we assume that the dynamics of the error covariance matrix in Equation (3) comply with the algebraic Riccati equation, and we use the same filter gains ${K}_{i}$ in both cases.
The simulation results are shown in Figure 1, Figure 2, Figure 3 and Figure 4. From a comparison of Figure 1a,b, we can conclude that the aKCF achieves a faster error convergence rate than the KCF in [17]. Figure 2a,b illustrates the evolution of the disagreement term in both the KCF and aKCF algorithms. According to the simulation results in Figure 3a,b, both the state estimation error and the disagreement term converge faster in the aKCF case. Figure 4 shows the dynamics of the adaptive gain.