Article

Trajectory Modeling by Distributed Gaussian Processes in Multiagent Systems

School of Electronic Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 7887; https://doi.org/10.3390/s22207887
Submission received: 20 September 2022 / Revised: 8 October 2022 / Accepted: 13 October 2022 / Published: 17 October 2022

Abstract

This paper considers a trajectory modeling problem for a multi-agent system using Gaussian processes. The Gaussian process, as a typical data-driven method, is well suited to characterizing model uncertainties and perturbations in a complex environment. To address model uncertainties and noise disturbances, a distributed Gaussian process is proposed to characterize the system model through local information exchange among neighboring agents, in which a number of agents cooperate without central coordination to estimate a common Gaussian process function based on local measurements and data received from neighbors. In addition, both the continuous-time and the discrete-time system models are considered: we design a control Lyapunov function to learn the continuous-time model, and a distributed model predictive control-based approach is used to learn the discrete-time model. Furthermore, we apply a Kullback–Leibler average consensus fusion algorithm to fuse the local prediction results (mean and variance) of the desired Gaussian process. The performance of the proposed distributed Gaussian process is analyzed and verified by two trajectory tracking examples.

1. Introduction

Trajectory tracking is a common problem in control and robotics, and the systems that generate trajectories represent a large class of dynamical physical models. In the past few decades, various control schemes have been investigated, and most of them can be considered a subset of computed torque control laws [1]. Generally speaking, in order to track trajectories, one needs to know the system model, such as the kinematic model, the observation model, and the motion model [2]. However, in many practical applications, this model information/knowledge is unavailable, or the system model is dynamic and difficult to characterize. The system model is often filled with a high degree of uncertainty, nonlinearity, and dependency, which makes it difficult to model accurately. Therefore, traditional modeling methods are no longer suitable for actual dynamic environments [3]. More recently, data-driven approaches have been attracting increasing attention in many fields, such as the control and machine learning communities [4,5]. Since data-driven methods can train the system model with high efficiency and precision, they have become the most popular choice for system modeling [6,7]. In particular, the Gaussian process (GP) is the most representative one, and it has been successfully applied in many fields.
A research frontier in the realm of GP is the trajectory modeling problem. Due to its capability to tackle complex perturbations, uncertainties, dependencies, and nonlinearities, GP is becoming a popular choice in various systems, such as solar power forecasting [8], permanent magnet spherical motors [9], iterative learning control [10], and swarm kinematic models [11]. In particular, GP has been proven effective in improving the learning accuracy and effectiveness for uncertainties and dependencies in low-data regimes [12]. More recently, a non-parametric Gaussian process was proposed for modeling with quantifiable uncertainty and nonlinearity [13,14] based on an implicit variance trade-off [15,16]. This bridges system modeling and data-driven methods. However, the computational burden and hardware requirements make GP impractical for big data sets, and this high cost severely hinders its application to actual physical systems. The engineering community has acknowledged these limitations and has attempted to address them. Since the learning process can be decomposed into local subproblems, it is natural to address it in a distributed manner. Accordingly, a distributed GP is urgently needed [17,18].
Generally speaking, processing is called distributed if it is carried out by a cooperative strategy among nodes without central coordination [19]. A distributed method aims at minimizing the amount of computation and communication required by each node as well as making these requirements scalable in the number of nodes [20]. Distributed methods are available for parameter estimation [14], Kalman filtering [21], control [22], optimization [23], learning [13], etc. A major division among distributed methods is based on whether all nodes estimate the full system state [24] or each node only estimates a subset of the state variables [25]. The challenge lies in how to execute the update and fusion steps in a distributed manner. Existing fusion strategies are usually designed from the perspective of the state estimate and the estimation error covariance. Since a GP is indeed a Gaussian probability density function (PDF), the trajectory model constructed by GP requires us to consider the fusion strategy from the viewpoint of PDFs [26]. Therefore, this paper aims to design a novel GP fusion strategy for multi-agent systems. The strategy is organized as follows: after obtaining the local GP predictions, we perform Kullback–Leibler average consensus fusion on the local predictions among neighbors. The distributed GP model can then be developed and successfully applied in large-scale multi-agent systems.

1.1. Related Works

Gaussian process-based modeling and trajectory tracking have been widely investigated and applied over the past two decades. In the first place, most studies focus on the centralized GP and the multi-input–output GP. In addition, they are developed based on the needs of engineering applications in the learning and control fields, such as GP-based tracking control, state space model learning, and their applications to trajectory tracking. For example, Beckers et al. studied stable Gaussian process-based tracking control of Lagrangian systems [1]. Umlauft et al. learned stable Gaussian process state space models [27], while Mohammad et al. learned stable nonlinear dynamical systems with Gaussian mixture models [5]. In addition, Pushpak et al. designed control barrier functions for unknown nonlinear systems using Gaussian processes [28]. Umlauft et al. considered human motion tracking with stable Gaussian process state space models in ref. [29] and proposed an uncertainty-based control Lyapunov approach for control-affine systems modeled by the Gaussian process [30]. They also derived uniform error bounds of Gaussian process regression with application to safe control [31]. Although Gaussian process-based trajectory tracking and control are becoming research hotspots, these works focus mainly on a single agent and are seldom involved with multi-agent systems. In the second place, distributed and centralized GPs are flourishing in data-driven learning algorithms for multi-agent systems. Generally speaking, the main research results are organized as follows: (1) For contributions to models and theories, the unknown map function was modeled and characterized as a GP but with a zero-mean assumption, and a distributed parametric and non-parametric Gaussian regression was proposed by using the Karhunen–Loève expansion in refs. [13,14]. To scale GP to large data sets, Deisenroth et al. introduced a robust Bayesian committee machine, a practical and scalable product-of-experts model for large-scale distributed GP regression [32]. To address the hyperparameter optimization problem in big data processing, Xie et al. proposed an alternative distributed GP hyperparameter optimization scheme using the efficient proximal alternating direction method of multipliers [33]. Multiple-task GP was studied in ref. [34], while multi-output regression by GP was studied in ref. [35]. Both of them were centralized approaches and could not be extended to large-scale problems. GP networks combined with variational inference and distributed variational inference were shown to be flexible and effective for multi-output regression in ref. [36], with applications to nonlinear dimension reduction and regression, providing a powerful tool to address uncertainty and over-fitting problems. (2) For engineering applications, Nerurkar et al. [37] presented a distributed conjugate gradient algorithm for cooperative localization. Franceschelli and Gasparri [38] presented a distributed gossip-based approach to address the pose estimation problem. Cunningham et al. [39] developed an approach for robot smoothing and mapping by using Gaussian elimination. Distributed localization from distance measurements was studied in [40]. Distributed position estimation was considered in [41]. Distributed rotation estimation algorithms were developed for various engineering settings [42,43,44]. A distributed Gauss–Seidel algorithm was studied in [45]. GP for data learning in robotic control was considered in [46].
(3) For trajectory tracking in multi-agent systems, an efficient algorithm was presented in ref. [47] to generate trajectories. Gaussian mixture models were used to learn stable trajectories in ref. [5]. Centralized GP for human motion tracking was studied in ref. [48]. (4) For distributed model predictive control (MPC), an overview and future research opportunities were discussed in ref. [49]. Cooperative distributed model predictive control was studied for nonlinear systems in ref. [50], for tracking in ref. [51], for linear systems in ref. [52], and for event-based communication and parallel optimization in ref. [53]. Additionally, non-cooperative distributed model predictive control was investigated in ref. [54]. More recently, explicit distributed and localized model predictive control via system-level synthesis was investigated in refs. [55,56] and was applied to the trajectory generation of a multi-agent system in ref. [57]. In short, studies of distributed GP are scarce, especially for the trajectory modeling problem.

1.2. Contributions

More recently, GP has been widely used to model tracking systems and applied to track targets in real-world environments, such as speed racing with quadrotors [58], trajectory tracking for wheeled mobile robots [59], 3D people tracking [60], and Simultaneous Localization and Mapping (SLAM) [61,62]. However, these applications focus on one agent, which ignores the advantages of multi-agent systems. After surveying these related references, we find that the trajectory tracking problem is mainly solved by control methods, not data-driven methods, and that existing work focuses on a single agent, not on multi-agent collaboration. In addition, the existing GP-based learning algorithms are limited by their training manners. Motivated by the above discussion, we investigate the distributed GP to learn the trajectory system model in this paper. More specifically, the main contributions of the paper are four-fold. (1) Compared with GP in Lagrangian systems [1,58], this paper considers a general state-space model for both discrete time and continuous time. (2) Compared with centralized GP in the state space model [27,28,29,30], PD control and model predictive control are combined with GP to achieve tracking system modeling and estimation, which makes the estimation error globally uniformly bounded. (3) Compared with existing centralized approaches based on multiple GPs, such as collaborative GP [32,33,34,35,63,64], this paper achieves state estimation in a distributed GP manner, and we apply Kullback–Leibler (KL) average consensus to fuse the local training results of the GPs, which differs from the Wasserstein metric for measuring GPs [65]. (4) Compared with the centralized GP without a performance bound [32,33,34,35,63,64] or with only a Kullback–Leibler average analysis [26], this paper analyzes the probabilistic globally ultimate bound of the distributed GP.

1.3. Paper Structure

The remainder of the paper is organized as follows. Section 2 introduces some preliminaries, including notations, graph theory, Gaussian process, and Kullback–Leibler average consensus. Section 3 states the considered systems. Section 4 designs the local control strategy and proposes a Kullback–Leibler (KL) average consensus to fuse the local predictions of GP. Section 5 provides two tracking experiments. Finally, Section 6 concludes the paper.

2. Preliminaries

2.1. Notation

Throughout the paper, vectors and vector-valued functions are denoted with bold characters. Matrices are described with capital letters. $\mathrm{trace}(\cdot)$, $\log(\cdot)$, $\det(\cdot)$, $\langle\cdot,\cdot\rangle$, $\|\cdot\|$, $\oplus$, $\otimes$, $\mathcal{N}(\cdot,\cdot)$, and $\mathcal{GP}(\cdot,\cdot)$ denote, respectively, the trace operation, the logarithm operation, the determinant operation, the inner product, the 2-norm of a matrix or vector, the addition operation of probability density functions (PDFs), the multiplication operation of PDFs, a Gaussian distribution, and a Gaussian process. Moreover, $\dot{x}$, $\ddot{x}$, $\hat{x}$, and $\bar{x}$ denote, respectively, the first-order differential of $x$, the second-order differential, the prediction, and the mean of $x$. In addition, $D_{\mathrm{KL}}(p\|q)$, $\mathbb{E}$, $\mathbb{V}$, and $I_n$ denote, respectively, the KL divergence/distance between probabilities $p$ and $q$, the expectation operation, the variance operation, and the $n \times n$ identity matrix. Finally, $\frac{dx}{du}$, $\frac{\partial x}{\partial u}$, and $O(\cdot)$ denote, respectively, the derivative operation, the partial derivative operation, and the complexity.

2.2. Graph Theory

A graph is defined as $G = (\mathcal{N}, \varepsilon)$, where $\mathcal{N}$ is a set of nodes and $\varepsilon \subseteq \mathcal{N} \times \mathcal{N}$ a set of edges. In particular, graph $G$ is undirected iff $(u,v) \in \varepsilon \Leftrightarrow (v,u) \in \varepsilon$ for all $u, v \in \mathcal{N}$. The order of $G$ is $|\mathcal{N}|$ and its size is $|\varepsilon|$. Further, let $\mathcal{N}_i = \{ j \in \mathcal{N} : (j,i) \in \varepsilon \}$ denote the set of neighbors of node $i$.
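To make the notation concrete, the following minimal Python sketch (ours, purely illustrative) builds the neighbor sets $\mathcal{N}_i$ of an undirected graph; these sets are what each agent consults in the consensus steps of Section 4.

```python
# Illustrative sketch: neighbor sets N_i of an undirected graph G = (N, E).
from collections import defaultdict

def neighbor_sets(nodes, edges):
    """Return {i: set of j such that (j, i) is an edge} for an undirected edge set."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)  # undirected: (u, v) in E implies (v, u) in E
        nbrs[v].add(u)
    return {i: nbrs[i] for i in nodes}

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a ring of four agents
print(neighbor_sets(nodes, edges))          # {0: {1, 3}, 1: {0, 2}, ...}
```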

2.3. Gaussian Process

Definition 1 ([66]). A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.
A Gaussian process is completely specified by its mean function and covariance function. We define the mean function $m(x): \chi \to \mathbb{R}$ and the covariance function (kernel) $k(x, x'): \chi \times \chi \to \mathbb{R}$ of a real process $f(x)$ as $m(x) = \mathbb{E}(f(x))$ and $k(x, x') = \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))]$, and denote the Gaussian process as $f(x) \sim \mathcal{GP}(m(x), k(x, x'))$. When $f: \chi \to \mathbb{R}^n$ is an n-dimensional map, the GP can be denoted by $f_j(x) \sim \mathcal{GP}(m_j(x), k_j(x, x'))$, $j \in \{1, \dots, n\}$.
Then, the Gaussian process can be organized as [29]
$$f(x) = \begin{cases} f_1(x) \sim \mathcal{GP}(m_1(x), k_1(x, x')) \\ \quad\vdots \\ f_n(x) \sim \mathcal{GP}(m_n(x), k_n(x, x')) \end{cases} \qquad (1)$$
In addition, the covariance function (kernel) measures the similarity between any two states/variables $x, x' \in \chi$; common kernel functions include the linear kernel, the squared-exponential (SE) kernel, the polynomial kernel, the Gaussian kernel, and the Matérn kernel.
Assumption 1. Suppose the measurement equation is $y = f(x) + \epsilon$, where $y \in \mathbb{R}^m$ is the observed vector, $x \in X \subset \mathbb{R}^n$ is the state vector defined in a compact set $X$, and $\epsilon$ is the measurement noise obeying a Gaussian distribution with zero mean and variance $\sigma^2 I_n$ (denoted by $\epsilon \sim \mathcal{N}(0, \sigma^2 I_n)$). In addition, $f$ is the unknown mapping function ($f: \chi \to \mathbb{R}^n$) and is assumed to be a GP (denoted by $f \sim \mathcal{GP}(0, k_\theta(x, x'))$). Here, $k_\theta(x, x')$ is a kernel function with hyper-parameters $\theta$, $K_{XX} = k_\theta(X, X)$ denotes the covariance matrix of the set $X$, and $K_{xX} = k_\theta(x, X)$ denotes the covariance matrix between $x$ and $X$.
Generally speaking, given a training set $D = (X, y)$ (input $X$ and output $y$) [29], the log-likelihood can be computed by
$$\log p(y|X,\theta) = -\frac{1}{2} y^T (K_{XX} + \sigma^2 I_n)^{-1} y - \frac{1}{2}\log|K_{XX} + \sigma^2 I_n| - \frac{n}{2}\log 2\pi. \qquad (2)$$
Then, when a new input $x^*$ is introduced, the posterior prediction of the Gaussian process [29] is $p(f(x^*)) = \mathcal{N}(\mu(x^*), \Sigma(x^*))$, where
$$\mu(x^*) = K_{x^*X}(K_{XX} + \sigma^2 I_n)^{-1} y, \qquad \Sigma(x^*) = K_{x^*x^*} - K_{x^*X}(K_{XX} + \sigma^2 I_n)^{-1} K_{Xx^*}. \qquad (3)$$
To summarize, the likelihood maximization of (2) is performed to compute gradients for training, and the mean and covariance functions (3) are used for fast predictions.
More specifically, given an arbitrary new test input $x^* \in \chi$ conditioned on the dataset $D$ described above, the predicted response $y_j^*$ is jointly Gaussian distributed with the training set:
$$\begin{bmatrix} y_j \\ y_j^* \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} m_j \\ m_j(x^*) \end{bmatrix}, \begin{bmatrix} K_j + \sigma^2 I & k_j \\ k_j^T & k_j^* \end{bmatrix} \right), \qquad (4)$$
where $y_j = [y_j^{(1)} \cdots y_j^{(N)}]^T \in \mathbb{R}^N$, $m_j = [m_j(x^{(1)}) \cdots m_j(x^{(N)})]^T \in \mathbb{R}^N$, $k_j^* = k_j(x^*, x^*)$, $k_j = [k_j(x^{(1)}, x^*) \cdots k_j(x^{(N)}, x^*)]^T$, and
$$K_j = \begin{bmatrix} k_j(x^{(1)}, x^{(1)}) & \cdots & k_j(x^{(1)}, x^{(N)}) \\ \vdots & \ddots & \vdots \\ k_j(x^{(N)}, x^{(1)}) & \cdots & k_j(x^{(N)}, x^{(N)}) \end{bmatrix} \in \mathbb{R}^{N \times N}.$$
For $j = 1, \dots, n$, the posterior distribution of $f_j(\cdot)$ at $x^*$ is Gaussian with mean and variance
$$\mathbb{E}[y_j^* | D, x^*] = m_j(x^*) + k_j^T (K_j + \sigma^2 I_N)^{-1}(y_j - m_j), \qquad \mathbb{V}[y_j^* | D, x^*] = k_j^* - k_j^T (K_j + \sigma^2 I_N)^{-1} k_j. \qquad (5)$$
Furthermore, in order to learn the hyper-parameters $\theta_j$ for a chosen kernel, we can maximize the likelihood function based on Bayes' rule:
$$\max_{\theta_j} \log p(y_j | x^{(1:N)}, \theta_j) = \max_{\theta_j}\left( -\frac{1}{2} y_j^T K_j^{-1} y_j - \frac{1}{2}\log\det K_j - \frac{N}{2}\log(2\pi) \right), \qquad (6)$$
which can be solved by gradient-based approaches [29].
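To make the above training and prediction steps concrete, the following self-contained NumPy sketch (ours, with an assumed squared-exponential kernel and made-up data) implements the posterior mean/covariance of (3) and the log marginal likelihood of (2)/(6):

```python
# A minimal NumPy sketch (ours, illustrative) of GP prediction and training:
# squared-exponential kernel, posterior mean/covariance as in Eq. (3),
# and the log marginal likelihood of Eq. (2)/(6). All data are made up.
import numpy as np

def se_kernel(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(x, x') = sf^2 exp(-||x - x'||^2 / (2 ell^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, sigma=0.1):
    """Posterior mean and covariance at test inputs Xs, Eq. (3)."""
    K = se_kernel(X, X, ell, sf) + sigma**2 * np.eye(len(X))  # K_XX + sigma^2 I
    Ks = se_kernel(Xs, X, ell, sf)                            # K_{x*X}
    Kss = se_kernel(Xs, Xs, ell, sf)                          # K_{x*x*}
    alpha = np.linalg.solve(K, y)                             # (K_XX + sigma^2 I)^{-1} y
    mu = Ks @ alpha
    Sigma = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mu, Sigma

def log_marginal_likelihood(X, y, ell=1.0, sf=1.0, sigma=0.1):
    """Log marginal likelihood, Eq. (2); maximized over (ell, sf, sigma) for training."""
    K = se_kernel(X, X, ell, sf) + sigma**2 * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return -0.5 * y @ alpha - 0.5 * np.linalg.slogdet(K)[1] - 0.5 * len(X) * np.log(2 * np.pi)

X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.randn(20)
mu, Sigma = gp_posterior(X, y, np.array([[2.5]]))
print(mu, Sigma, log_marginal_likelihood(X, y))
```

Maximizing the log marginal likelihood over the kernel parameters with any gradient-based optimizer reproduces the training step in (6).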

2.4. Kullback–Leibler Average Consensus Algorithm

This section introduces the consensus/fusion algorithm for GPs. A Gaussian process is described by a Gaussian probability density function through its mean function and covariance function. Therefore, the fusion of GPs is indeed a fusion of probabilities. This raises a question: how can consensus/fusion of probabilities be achieved among multiple agents?
Before proceeding on, we first introduce some definitions.
Definition 2 (Probability space [26]). Let
$$\mathcal{P} \triangleq \left\{ p(\cdot): \mathbb{R}^n \to \mathbb{R} \ \text{ such that } \int_{\mathbb{R}^n} p(x)\,dx = 1 \ \text{ and } \ p(x) \ge 0, \ \forall x \in \mathbb{R}^n \right\}$$
denote the set of probabilities (PDFs) over $\mathbb{R}^n$, and let $p^i(\cdot) \in \mathcal{P}$ ($i \in \mathcal{N}$) denote the local probability/PDF of agent $i$.
Definition 3 (Kullback–Leibler divergence [26]). In statistics, the Kullback–Leibler divergence $D_{\mathrm{KL}}(p\|q)$ (also called relative entropy) is a statistical distance: a measure of how the probability distribution $p(\cdot)$ differs from the probability distribution $q(\cdot)$. For distributions $p(\cdot)$ and $q(\cdot)$ of a continuous random variable $x$, it is defined as
$$D_{\mathrm{KL}}(p \| q) = \int p(x) \log\frac{p(x)}{q(x)}\,dx. \qquad (7)$$
Definition 4 (Probabilistic operations [26]). Define the addition and multiplication operators over probabilities $p(\cdot)$ and $q(\cdot)$, for a variable $x$ and a real constant $a$, as
$$p(x) \oplus q(x) \triangleq \frac{p(x)\,q(x)}{\int p(x)\,q(x)\,dx}, \qquad a \otimes p(x) \triangleq \frac{[p(x)]^a}{\int [p(x)]^a\,dx}. \qquad (8)$$
Then, we attempt to find a Kullback–Leibler average consensus/fusion algorithm over probabilities obtained by multiple agents.
First, according to [26], the Kullback–Leibler average (KLA) is an average over probabilities. Motivated by this, we define the weighted KLA $\bar{p}$ of the probabilities $\{p^i\}_{i=1}^N$ as
$$\bar{p} = \arg\inf_{p \in \mathcal{P}} \sum_{i \in \mathcal{N}} a^i D_{\mathrm{KL}}(p \| p^i), \qquad (9)$$
where $a^i \ge 0$ denotes the weight of agent $i$ and satisfies $\sum_{i \in \mathcal{N}} a^i = 1$.
Then, the average consensus/fusion problem is to achieve
$$\lim_{l \to \infty} p_l^i = \bar{p} \qquad (10)$$
for all agents $i \in \mathcal{N}$, where $l$ is the consensus step and $\bar{p}$ represents the asymptotic KLA with uniform weights.
Second, we attempt to find the solution of the average consensus $\bar{p}$ in (9). Based on [26], the solution is
$$\bar{p}(x) = \frac{\prod_{i \in \mathcal{N}} [p^i(x)]^{a^i}}{\int \prod_{i \in \mathcal{N}} [p^i(x)]^{a^i}\,dx} \equiv \bigoplus_{i \in \mathcal{N}}\left( a^i \otimes p^i(x) \right), \qquad (11)$$
with $a^i = 1/|\mathcal{N}|$. In addition, the local consensus of agent $i$ at the $l$-th consensus step can be obtained by
$$p_l^i(x) = \bigoplus_{j \in \mathcal{N}}\left( a^{i,j} \otimes p_{l-1}^j(x) \right), \quad i \in \mathcal{N}, \qquad (12)$$
where $a^{i,j}$ is the consensus weight satisfying $a^{i,j} \ge 0$ and $\sum_{j \in \mathcal{N}} a^{i,j} = 1$, and $a^{i,j}$ represents the $(i,j)$-th component of the consensus matrix $A$ (with $a^{i,j} = 0$ if $j \notin \mathcal{N}_i$). Therefore, when the consensus algorithm is initialized by $p_0^i(\cdot) = p^i(\cdot)$, after $l$ iterations we finally obtain the consensus
$$p_l^i(x) = \bigoplus_{j \in \mathcal{N}}\left( a_l^{i,j} \otimes p^j(x) \right), \quad i \in \mathcal{N}, \qquad (13)$$
where $a_l^{i,j}$ denotes the $(i,j)$-th component of $A^l$.
Third, for the special Gaussian case, the local probability $p^i(\cdot)$ takes the form
$$p^i(x) = \mathcal{N}(x; \mu^i, \Sigma^i) \triangleq \frac{1}{\sqrt{\det(2\pi\Sigma^i)}}\, e^{-\frac{1}{2}(x - \mu^i)^T (\Sigma^i)^{-1}(x - \mu^i)}, \qquad (14)$$
where $\mu^i \in \mathbb{R}^n$ and $\Sigma^i \in \mathbb{R}^{n \times n}$ denote the mean vector and the covariance matrix, respectively. In this case, the KLA can be obtained by operating directly on the means and covariances instead of on the probabilities. The following lemma states the KLA of Gaussian distributions.
Lemma 1 ([26]). Given $N$ Gaussian distributions $p^i(x)$, $i = 1, \dots, N$, defined in (14), with corresponding weights $a^i$, the weighted KLA $\bar{p}(\cdot) = \mathcal{N}(\cdot; \bar{\mu}, \bar{\Sigma})$ can be calculated by directly fusing the means $\mu^i$ and the covariance matrices $\Sigma^i$ as
$$\bar{\Sigma}^{-1} = \sum_{i=1}^{N} a^i (\Sigma^i)^{-1}, \qquad \bar{\Sigma}^{-1}\bar{\mu} = \sum_{i=1}^{N} a^i (\Sigma^i)^{-1}\mu^i. \qquad (15)$$
Lemma 1 indicates that the consensus/fusion of Gaussian probabilities can be performed directly on their means and covariance matrices. Note that a Gaussian process is indeed a Gaussian probability. Therefore, the KLA consensus/fusion of GPs can be obtained directly by fusing the mean functions and the covariance functions.
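As a quick illustration of Lemma 1, the sketch below (ours; the three agent posteriors are made up) fuses Gaussian means and covariances in information form exactly as in (15):

```python
# Hedged sketch (ours) of Lemma 1 / Eq. (15): weighted Kullback-Leibler
# average of N Gaussians computed in information (inverse-covariance) form.
import numpy as np

def kla_fuse(mus, Sigmas, weights):
    """Return (mu_bar, Sigma_bar) of the weighted KLA of Gaussians."""
    info = sum(a * np.linalg.inv(S) for a, S in zip(weights, Sigmas))
    vec = sum(a * np.linalg.inv(S) @ m for a, S, m in zip(weights, Sigmas, mus))
    Sigma_bar = np.linalg.inv(info)       # (sum_i a_i (Sigma_i)^{-1})^{-1}
    return Sigma_bar @ vec, Sigma_bar     # mu_bar solves Eq. (15)

# Three agents with uniform weights a_i = 1/3 (made-up local posteriors)
mus = [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([1.2, -0.1])]
Sigmas = [np.eye(2), 2.0 * np.eye(2), 0.5 * np.eye(2)]
mu_bar, Sigma_bar = kla_fuse(mus, Sigmas, [1 / 3] * 3)
print(mu_bar, Sigma_bar)
```

Iterating the local step (12) with row-stochastic weights drives every agent's pair $(\mu^i, \Sigma^i)$ toward this fused value.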

2.5. Uniform Error Bounds

This section analyzes the probabilistic uniform error bounds.
Definition 5 (Probabilistic uniform error bound [31]). On a compact set $X \subset \mathbb{R}^n$, a GP has a uniformly bounded error if there exists a function $\eta(x)$ such that $\|\mu(x) - f(x)\| \le \eta(x)$, $\forall x \in X$. A probabilistic uniform error bound is one that holds with a probability of at least $1 - \delta$ for any $\delta \in (0, 1)$.
Definition 6 (Lipschitz constant of the kernel [64]). The Lipschitz constant of a differentiable covariance kernel $k(\cdot, \cdot)$ is
$$L_k := \max_{x, x' \in X}\left\| \left[ \frac{\partial k(x, x')}{\partial x_1} \ \cdots \ \frac{\partial k(x, x')}{\partial x_n} \right]^T \right\|. \qquad (16)$$
Next, we show that the posterior prediction (3) of the GP is continuous. Given the continuous unknown $f$ with Lipschitz constant $L_f$ and the Lipschitz continuous kernel $k$ with Lipschitz constant $L_k$, we have the following theorem.
Theorem 1 ([31]). Consider a GP defined by a continuous covariance kernel $k$ with Lipschitz constant $L_k$, a continuous unknown map $f$ with Lipschitz constant $L_f$, and measurements satisfying Assumption 1. Then, the posterior predictions $\mu(\cdot)$ and $\Sigma(\cdot)$ of the GP conditioned on the training data set $D = \{x_t, y_t\}_{t=1,\dots,N}$ are continuous with Lipschitz constant $L_\mu$ and modulus of continuity $\omega$ such that
$$L_\mu \le L_k \sqrt{N}\,\|(K + \sigma^2 I)^{-1} y\|, \qquad \omega(\tau) \le \sqrt{2\tau L_k\left(1 + N\|(K + \sigma^2 I)^{-1}\| \max_{x, x' \in X} k(x, x')\right)} \qquad (17)$$
for any $\tau \in \mathbb{R}_+$ and $\delta \in (0, 1)$, with
$$\beta(\tau) = 2\log\left(\frac{M(\tau, X)}{\delta}\right), \qquad \gamma(\tau) = (L_\mu + L_f)\tau + \sqrt{\beta(\tau)}\,\omega(\tau). \qquad (18)$$
In addition, $\forall x \in X$, it follows that
$$p\left( \|f(x) - \mu(x)\| \le \gamma(\tau) + \sqrt{\beta(\tau)}\,\Sigma(x) \right) \ge 1 - \delta. \qquad (19)$$
Proof of Theorem 1.
The proof is given in Appendix A. □
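For intuition, the bound of Theorem 1 can be evaluated numerically once the constants are fixed; in the sketch below (ours) every constant, namely $L_f$, $L_\mu$, $\omega(\tau)$, and the covering number $M(\tau, X)$, is an assumed placeholder value:

```python
# Illustrative evaluation of the probabilistic uniform error bound (19);
# all constants below are assumed placeholder values, not from the paper.
import numpy as np

def error_bound(Sigma_x, tau, delta, L_f, L_mu, M_tau, omega_tau):
    beta = 2 * np.log(M_tau / delta)                     # Eq. (18)
    gamma = (L_mu + L_f) * tau + np.sqrt(beta) * omega_tau
    return gamma + np.sqrt(beta) * Sigma_x               # bound in Eq. (19)

# A grid with M(tau, X) = 1e4 points and confidence 1 - delta = 0.99
print(error_bound(Sigma_x=0.05, tau=1e-3, delta=0.01,
                  L_f=2.0, L_mu=5.0, M_tau=1e4, omega_tau=0.1))
```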

Asymptotic Analysis

The asymptotic analysis of the error bound (19) in the limit $N \to \infty$ is given in the following theorem.
Theorem 2 ([31]). Consider a GP defined by a continuous covariance kernel $k$ with Lipschitz constant $L_k$, and an infinite stream of measurements $(x_t, y_t)$ of the continuous unknown map $f$ with Lipschitz constant $L_f$ and maximum absolute value $f_{max}$. The first $N$ measurements yield the posterior predictions $\mu_N(\cdot)$ and $\Sigma_N(\cdot)$ of the GP. If there exists an $\epsilon > 0$ such that $\Sigma_N(x) \in O\left(\log(N)^{-\frac{1}{2}-\epsilon}\right)$, $\forall x \in X$, then for any $\delta \in (0, 1)$ it follows that
$$p\left( \sup_{x \in X} \|f(x) - \mu_N(x)\| \in O\left(\log(N)^{-\epsilon}\right) \right) \ge 1 - \delta. \qquad (20)$$
Proof of Theorem 2.
The proof is given in Appendix B. □

3. Problem Formulation

The trajectories are generated by a continuous-time dynamical system
$$\dot{x} = f(x, u) + w, \qquad (21)$$
where $x \in \chi \subset \mathbb{R}^n$ denotes the state (location) in a compact set $\chi$, $u \in U \subset \mathbb{R}^n$ denotes the control input, $w$ denotes the process noise with $w \sim \mathcal{N}(0, \sigma_w^2 I)$, and the initial state is $x(0) = x_0$. We have $N$ agents/sensors connected over a network to acquire measurements (location or velocity). In particular, sensor $i$ measures
$$y_j^i = f^i(x) + \epsilon_j^i, \qquad (22)$$
where $y_j^i$ is the observed vector of sensor $i$ ($i = 1, \dots, N$) at the $j$-th step ($j = 1, \dots, N$), and $\epsilon_j^i$ is the measurement noise with $\epsilon_j^i \sim \mathcal{N}(0, \sigma^2 I)$.
Suppose that a training data set $D$ of trajectories is given. $D$ contains the states (current locations) and the measurements, denoted by $D = \{x^i, y_j^i\}_{i=1,\dots,N,\ j=1,\dots,N}$. The nonlinear map function $f: \chi \to \mathbb{R}^n$ is unknown and is assumed to be a Gaussian process. In addition, the following assumption is satisfied.
Assumption 2. Suppose $f(x)$ is Lipschitz continuous and has a bounded RKHS (reproducing kernel Hilbert space) norm with respect to a fixed common kernel $k$, i.e., $\|f\|_k = \sqrt{\langle f, f \rangle_k} < \infty$.
The objective is to find an estimate $\hat{f}$ of $f$ for which the output trajectory $x$ tracks the desired trajectory $x_d = [x_d\ \dot{x}_d]^T$ such that the tracking error $e = x - x_d = [e_1\ e_2]^T$ vanishes over time, i.e., $\lim_{t\to\infty}\|e\| = 0$. Since the noises $w$ and $\epsilon_j^i$ and the uncertain dynamics affect the system and the control, we use multiple agents to eliminate the influence of stochastic uncertainty: given the local estimates $\hat{f}^i$, the goal is also to fuse them and to find a fused/consensus $\bar{f}$ such that the uncertainty vanishes over time as well.

4. Control Design and Analysis

Classical control uses static feedback gains. Low feedback gains are designed to avoid saturation of the actuators and to achieve good noise suppression. However, with the considered unknown dynamics, a fixed low feedback gain cannot keep the tracking error under a defined limit. After performing a training procedure, we use the mean function of the GP to adapt the gains. For this purpose, the uncertainty of the GP and multiple agents are employed to scale the feedback gains.
Before proceeding on, the following natural assumptions and lemmas are given.
Assumption 3. The desired trajectory $x_d(t)$ is bounded, i.e., $\|x_d\| = \|[x_d\ \dot{x}_d]^T\| \le \|q_d\| = \|[q_d\ \dot{q}_d]^T\|$.
Lemma 2 ([67]). If there exists a positive constant $b \in \mathbb{R}_+$ such that, for every $a \in \mathbb{R}_+$, there exists a $T = T(a, b)$ satisfying $\|x(t_0)\| \le a \Rightarrow \|x(t)\| \le b$, $\forall t \ge t_0 + T$, then the trajectory $x(t)$ of the dynamics (21) is globally ultimately bounded.
Lemma 3 ([67]). If there exists a Lyapunov function $V$ such that $\dot{V}(x) < 0$ for all $x \in X \setminus B$, the dynamical system $\dot{x} = f(x, u)$ is globally ultimately bounded to a set $B \subset X$.
Next, we design the controller and the control law such that stability and high-performance tracking are achieved. The controller is designed as
$$u = -\hat{f}(x) + \rho, \qquad (23)$$
where $\hat{f}$ is the model estimate of the nonlinear dynamics $f$, obtained from the posterior mean function $\mu_N$ of the GP trained on the data set $D$, and $\rho$ is the control law.
In addition, $\rho$ is designed as a Proportional–Derivative (PD) type control law
$$\rho = \ddot{x}_d - k_d r - k_p e_2, \qquad (24)$$
where $r = k_p e_1 + e_2$ is the filtered state with dynamics $\dot{r} = f(x) - \hat{f}(x) - k_c r$ (here $k_c = k_d$), and $k_p, k_d \in \mathbb{R}_+$ are the control gains.
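Before stating the guarantee, a hedged closed-loop sketch (ours) may help: a scalar double integrator $\ddot{x} = f(x) + u$ is driven by the controller (23) with the PD law (24), where f_hat stands in for the GP posterior mean $\mu_N$ and both the dynamics and gains are illustrative assumptions:

```python
# Hedged closed-loop sketch (ours): scalar double integrator xddot = f(x) + u
# under the controller (23) with PD law (24). f, f_hat, and all gains are
# illustrative assumptions; f_hat stands in for the GP posterior mean mu_N.
import numpy as np

f = lambda x: np.sin(x)              # "unknown" true dynamics
f_hat = lambda x: np.sin(x) - 0.05   # GP mean with a small model error
kp, kd = 4.0, 6.0                    # control gains
dt, T = 1e-3, 10.0

x, xdot = 1.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    xd, xd_dot, xd_ddot = np.sin(t), np.cos(t), -np.sin(t)   # desired trajectory
    e1, e2 = x - xd, xdot - xd_dot
    r = kp * e1 + e2                 # filtered state
    rho = xd_ddot - kd * r - kp * e2 # PD law (24)
    u = -f_hat(x) + rho              # controller (23)
    xddot = f(x) + u
    x, xdot = x + dt * xdot, xdot + dt * xddot
print(abs(x - np.sin(T)))            # small, bounded tracking error
```

The residual tracking error stays bounded by a ball whose radius scales with the model error, which is what Theorem 3 below formalizes.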
Given the above controller, one needs to verify the effectiveness of the model estimation f ^ and the choices of the parameters k d and k p . The following theorem states the control law with guaranteed boundedness of the tracking error.
Theorem 3. Consider the system (21) with the controller (23), where $f$ satisfies Assumption 2 and admits a Lipschitz constant $L_f$. If Assumption 3 is satisfied, then the controller with $\hat{f} = \mu_N$ and the control law (24) guarantee that the tracking error is globally ultimately bounded and converges to a ball $B = \left\{ \|e\| \le \frac{\gamma(\tau) + \sqrt{\beta(\tau)}\,\Sigma_N(x)}{k_c\sqrt{\lambda^2 + 1}} \right\}$, $\forall x \in X$, where $\beta$ and $\gamma$ are given in Theorem 1, with a probability of at least $1 - \delta$, $\delta \in (0, 1)$.
Proof of Theorem 3.
The proof is given in Appendix C. □
Remark 1.
From Theorem 3, it can be seen that trajectory tracking with high probability is achieved with the proposed GP-based controller. Compared with most existing results, where only uniform ultimate boundedness of the trajectory tracking errors was achieved [1,68], the proposed control law ensures high control precision in the presence of the estimation errors from the GP.

4.1. Consensus

The aforementioned control law focuses on a single agent/sensor. Since the noises $w$ and $\epsilon_j^i$ affect the measurements and the dynamical system, the proposed controller will fluctuate across agents. Furthermore, since the dynamics are uncertain, the proposed controller may also differ across agents. Therefore, this section fuses the local results and drives them to a consensus: given the local estimates $\hat{f}^i$ (i.e., $\mu_N^i$ and $\Sigma_N^i$), the goal is to find a fused/consensus $\bar{f}$ (i.e., $\bar{\mu}_N$ and $\bar{\Sigma}_N$) such that the uncertainty and the disturbances vanish over time. Obviously, the controllers (23) and the control laws (24) of different agents can then reach a consensus built on $\bar{f}$.
More specifically, after training the local $\hat{f}^i$ by using the GP, node $i$ sends the result to its neighbors $\mathcal{N}_i$. After collecting the training results from its neighbors, it performs the following dynamic consensus/fusion step. Given weights $a^i$ satisfying $a^i \ge 0$ and $\sum_{i \in \mathcal{N}_i} a^i = 1$, based on the Kullback–Leibler average consensus given in Section 2.4, the desired weighted KLA takes the Gaussian form $\bar{f}(\cdot) = \mathcal{N}(\cdot; \bar{\mu}, \bar{\Sigma})$, in which the fused mean function $\bar{\mu}$ and the fused covariance function $\bar{\Sigma}$ can be calculated by
$$\bar{\Sigma}^{-1} = \sum_{i=1}^{M} a^i (\Sigma_N^i)^{-1}, \qquad \bar{\Sigma}^{-1}\bar{\mu} = \sum_{i=1}^{M} a^i (\Sigma_N^i)^{-1}\mu_N^i, \qquad (25)$$
where $M = |\mathcal{N}_i|$, while the global/centralized fusion uses $i = 1, \dots, N$. The flowchart is given in Figure 1.
After obtaining the consensus mean function, the controllers of the different agents reduce to a unified controller.
Remark 2.
The main advantage of the distributed method is that local nodes may receive only part of the training data, or even have missing data. Through information interaction and the consensus algorithm, neighboring nodes can make predictions faster while keeping high accuracy, which also avoids processor failure caused by data loss or node/sensor failure.

4.2. GP-Based Model Predictive Control for Discrete-Time System

The preceding discussion addressed the continuous-time system. In an actual physical system, we usually need to discretize the dynamics. This section designs the control strategy for the discrete-time case by using GP-based model predictive control (MPC).
First, the considered system (21) is assumed to be discrete-time and is modeled by a GP, where the control tuple $\tilde{x}_k = [x_k\ u_k]^T$ and the state difference $\delta x_k = x_{k+1} - x_k$ are designed as the training input and the training target, respectively. Given the training data set $D = \{\tilde{x}_k, y_k = \delta x_k\}_{k=1}^{N}$, according to (4) and (5), at a new test input $\tilde{x}^*$ we can obtain the mean and variance as follows:
$$\mathbb{E}[\delta x_k | D, \tilde{x}^*] = k_*^T (K + \sigma^2 I_N)^{-1} y, \qquad \mathbb{V}[\delta x_k | D, \tilde{x}^*] = k_{**} - k_*^T (K + \sigma^2 I_N)^{-1} k_*, \qquad (26)$$
where $k_{**} = k(\tilde{x}^*, \tilde{x}^*)$, $k_* = [k(\tilde{x}^{(1)}, \tilde{x}^*) \cdots k(\tilde{x}^{(N)}, \tilde{x}^*)]^T$, and $K$ is defined in (4). Therefore, (26) is used to predict the next step. By using the moment matching approach [46], the mean function and covariance function of the training target at time $k$ can be calculated by
$$\mu_k^\delta = \mathbb{E}[\mathbb{E}[\delta x_k]], \qquad \Sigma_k^\delta = \begin{bmatrix} k(\delta x_k^1, \delta x_k^1) & \cdots & k(\delta x_k^1, \delta x_k^n) \\ \vdots & \ddots & \vdots \\ k(\delta x_k^n, \delta x_k^1) & \cdots & k(\delta x_k^n, \delta x_k^n) \end{bmatrix}. \qquad (27)$$
At time $k+1$, the mean and covariance functions are updated as
$$\mu_{k+1} = \mu_k + \mu_k^\delta, \qquad \Sigma_{k+1} = \Sigma_k + \Sigma_k^\delta + \Sigma(x_k, \delta x_k) + \Sigma(\delta x_k, x_k), \qquad (28)$$
where $\Sigma(x_k, \delta x_k)$ denotes the cross-covariance between $x_k$ and $\delta x_k$.
For more details, please refer to [46].
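A minimal sketch of the one-step propagation (28) is given below (ours; the increment statistics and the cross-covariance term from the moment-matching step [46] are illustrative placeholders):

```python
# Minimal sketch (ours) of the one-step propagation in Eq. (28); the increment
# statistics and cross-covariance from the moment-matching step are placeholders.
import numpy as np

def propagate(mu_k, Sigma_k, mu_delta, Sigma_delta, cov_x_delta):
    """mu_{k+1} = mu_k + mu_k^delta; Sigma_{k+1} adds increment and cross terms."""
    mu_next = mu_k + mu_delta
    Sigma_next = Sigma_k + Sigma_delta + cov_x_delta + cov_x_delta.T
    return mu_next, Sigma_next

mu, Sigma = np.zeros(2), 0.01 * np.eye(2)
mu_d, Sigma_d = np.array([0.1, 0.0]), 0.001 * np.eye(2)
C = np.zeros((2, 2))   # cov(x_k, delta x_k), placeholder value
mu, Sigma = propagate(mu, Sigma, mu_d, Sigma_d, C)
print(mu, Sigma)
```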
Then, based on (6), we next attempt to learn the hyper-parameters $\theta$. A distributed GP-based MPC scheme is presented to address this problem. First, we design the objective function as
$$J_k = \min_{u} \mathbb{E}[V(x_k, u_{k-1})], \qquad (29)$$
where the cost function is
$$\mathbb{E}[V(x, u)] = \sum_{l=1}^{L} \mathbb{E}\left[ (x_{k+l} - p_{k+l})^T Q (x_{k+l} - p_{k+l}) + u_{k+l-1}^T R\, u_{k+l-1} \right], \qquad (30)$$
where $p$ is the desired trajectory (desired state), $Q$ and $R$ are positive definite weight matrices, and $L$ is the prediction horizon as well as the control horizon. Taking the expectation with the GP statistics from Section 2, (30) can be rewritten as
$$\mathbb{E}[V(x, u)] = \sum_{l=1}^{L}\left[ (\mu_{k+l} - p_{k+l})^T Q (\mu_{k+l} - p_{k+l}) + \mathrm{trace}(Q\Sigma_{k+l}) + u_{k+l-1}^T R\, u_{k+l-1} \right]. \qquad (31)$$
Next, to solve the optimization problem (29), a gradient-based method is used. Set $F_l = (\mu_{k+l} - p_{k+l})^T Q (\mu_{k+l} - p_{k+l}) + \mathrm{trace}(Q\Sigma_{k+l}) + u_{k+l-1}^T R\, u_{k+l-1}$ so that $\mathbb{E}[V(x, u)] = \sum_{l=1}^{L} F_l$. Using the chain rule, the gradient can be calculated by
$$\frac{d}{du_{k-1}}\mathbb{E}[V(x_k, u_{k-1})] = \sum_{l=1}^{L} \frac{dF_l}{du_{k+l-1}}, \qquad \frac{dF_l}{du_{k+l-1}} = \frac{\partial F_l}{\partial \mu_{k+l}}\frac{d\mu_{k+l}}{du_{k+l-1}} + \frac{\partial F_l}{\partial \Sigma_{k+l}}\frac{d\Sigma_{k+l}}{du_{k+l-1}} + \frac{\partial F_l}{\partial u_{k+l-1}}, \qquad (32)$$
where $\frac{\partial F_l}{\partial \mu_{k+l}}$, $\frac{\partial F_l}{\partial \Sigma_{k+l}}$, and $\frac{\partial F_l}{\partial u_{k+l-1}}$ are easy to calculate. In addition,
$$\frac{d\mu_{k+l}}{du_{k+l-1}} = \frac{\partial \mu_{k+l}}{\partial \mu_{k+l-1}}\frac{d\mu_{k+l-1}}{du_{k+l-1}}, \qquad \frac{d\Sigma_{k+l}}{du_{k+l-1}} = \frac{\partial \Sigma_{k+l}}{\partial \Sigma_{k+l-1}}\frac{d\Sigma_{k+l-1}}{du_{k+l-1}}, \qquad (33)$$
where $\frac{d\mu_{k+l-1}}{du_{k+l-1}}$ and $\frac{d\Sigma_{k+l-1}}{du_{k+l-1}}$ are easy to calculate.
Finally, the gradient-based algorithm is formulated as Algorithm 1.
Algorithm 1 Gradient-based optimization method
 Input: learned GP, $L$, $p$, $Q$, and $R$
 Output: optimal control $u^*$
 1: Initialization: max iteration number $N = 1000$, threshold $\varepsilon = 10^{-8}$, initial input $u_0$, and optimal control $u^* = u_0$;
 2: for $k = 1$ to $N$ do
 3:   if $\mathbb{E}[V] < \varepsilon$ then
 4:     $u^* = u_k$; break;
 5:   else
 6:     Calculate the gradient $\frac{d\mathbb{E}[V(u_k)]}{du_{k-1}}$ by (32);
 7:     Update the search step size $\alpha_k$ based on [69];
 8:     Update the control $u_{k+1} = u_k - \alpha_k \frac{d\mathbb{E}[V(u_k)]}{du_{k-1}}$;
 9:   end if
 10: end for
 11: return optimal control $u^*$.
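The following hedged Python rendering of Algorithm 1 (ours) runs gradient descent on the expected cost; a quadratic surrogate stands in for $\mathbb{E}[V(u)]$ and its gradient, which in the paper come from (31) and (32):

```python
# Hedged Python rendering (ours) of Algorithm 1: gradient descent on the
# expected MPC cost. A quadratic surrogate stands in for E[V(u)] and its
# gradient, which in the paper come from Eqs. (31) and (32).
import numpy as np

def optimize(cost, grad, u0, alpha=0.1, max_iter=1000, eps=1e-8):
    u = u0.copy()
    for _ in range(max_iter):
        if cost(u) < eps:        # threshold test (Algorithm 1, line 3)
            break
        u = u - alpha * grad(u)  # gradient step (Algorithm 1, line 8)
    return u

# Surrogate cost E[V(u)] = ||u - u_ref||^2 with known minimizer u_ref
u_ref = np.array([0.5, -0.2, 0.1])
cost = lambda u: float(np.sum((u - u_ref) ** 2))
grad = lambda u: 2.0 * (u - u_ref)
u_star = optimize(cost, grad, np.zeros(3))
print(u_star)                    # approximately u_ref
```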
Remark 3.
Similarly, due to the stochastic uncertainty caused by the noises and model perturbations, we can use multiple agents to address this problem. The consensus/fusion algorithm is the same as that given above for the continuous-time system, so we do not repeat it here.
Remark 4.
The GP has been widely applied in various real-world applications such as quadrotor tracking, 3D people tracking, localization and mapping, and control-based application models. These applications have attracted much attention from engineers and researchers. As for the limitations, in our opinion, the first is that the model needs real-world data to achieve adequate training and application. The second is that the dynamics must be Gaussian distributed or approximately Gaussian distributed.

5. Simulations

To evaluate the performance and verify the effectiveness of the proposed algorithms, this section provides two trajectory tracking examples: one is the trajectory tracking of a robotic manipulator, and the other is the trajectory tracking of an unmanned quadrotor. All simulations are conducted on a computer with a 2.6 GHz Intel(R) Core(TM) i7-5600U CPU and MATLAB R2015b.

5.1. Trajectory Tracking of Robotic Manipulator

First, we consider the trajectory of a Puma 560 robot arm manipulator in the x-y-z space with 6 degrees of freedom (DoFs), as shown in Figure 2. The Puma 560 robot was designed to have approximately the dimensions and reach of a human worker. It also has a spherical joint at the wrist, just as humans do. Roboticists use terms like waist, shoulder, elbow, and wrist when describing serial-link manipulators. For the Puma, these terms correspond, respectively, to joints 1, 2, 3, and 4–6 (Figure 2).
For the considered robot arm, $\tau_1$, $\tau_2$, and $\tau_3$ are the control torques of the motors controlling the joint angles $\phi$, $\theta$, $\psi$. The trajectory of the robotic manipulator can be controlled by these torques. The motion can be described by the following Lagrangian system [1]:
$$H(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) + \kappa(\breve{q}) = \tau, \qquad (34)$$
where $q$ denotes the generalized coordinates, with time derivatives $\dot{q}$, $\ddot{q}$, and $\tau$ denotes the generalized input. $H$ is the mass matrix, $C$ is the Coriolis matrix, and $G$ is the potential energy matrix. An additional unknown dynamic $\kappa(\breve{q})$ (whose form is obtained by training), which depends on $\breve{q} = [\ddot{q}^T, \dot{q}^T, q^T]^T$, affects the system as a generalized force; the reader can refer to [2] for details. The process methodology is illustrated in Figure 3.
The tracking error is the difference between the actual joint angles or velocities and their desired values:
$$\tilde{q} = \begin{bmatrix} \phi - \phi_d \\ \theta - \theta_d \\ \psi - \psi_d \end{bmatrix} = q(t) - q_d(t), \qquad \dot{\tilde{q}} = \dot{q}(t) - \dot{q}_d(t). \qquad (35)$$
The following controllers are tested. (1) Computed torque (CT) controller: $\tau_{in} = H(q)\ddot{q}_d + C(q, \dot{q})\dot{q}_d + G(q) - K_p\tilde{q} - K_d\dot{\tilde{q}}$. The gains for this controller are $K_p = 50$ and $K_d = 40$. (2) The proposed PD controller (24): the composite error is $S = \dot{\tilde{q}} + \lambda\tilde{q}$, the reference velocity is $\dot{q}_r = \dot{q}_d - \lambda\tilde{q}$ with $\tilde{q} = q(t) - q_d(t)$, and the control torque is $\tau_{in} = H(q)\ddot{q}_r + C(q, \dot{q})\dot{q}_r + G(q) - K_p\tilde{q} - K_d S$. The gains for this controller are $\lambda = 30$ and $K_d = 20$. (3) The adaptive controller: the control torque is $\tau_{in} = Y_0 + Y_1\hat{m}_3 + Y_2\hat{I}_{xx3} + Y_3\hat{I}_{yy3} + Y_4\hat{I}_{zz3} - K_d S$, where $\hat{m}_3$, $\hat{I}_{xx3}$, $\hat{I}_{yy3}$, $\hat{I}_{zz3}$ are the estimates of the unknown parameters and $Y_0, Y_1, Y_2, Y_3, Y_4$ are the regressor vectors defined as in [70]. The estimates are updated as $\dot{\hat{m}}_3 = -(S^T Y_1)\gamma_1$, $\dot{\hat{I}}_{xx3} = -(S^T Y_2)\gamma_2$, $\dot{\hat{I}}_{yy3} = -(S^T Y_3)\gamma_3$, $\dot{\hat{I}}_{zz3} = -(S^T Y_4)\gamma_4$, where $\gamma_1, \gamma_2, \gamma_3, \gamma_4$ are controller gains set to $\gamma_1 = 50$, $\gamma_2 = 30$, $\gamma_3 = 20$, $\gamma_4 = 50$, and $K_d = 20$.
Data: completion times (speeds) of 5 s, 10 s, 15 s, and 20 s; 4 paths × 4 speeds, giving 16 different trajectories; 15 loads (0.2 kg to 3.0 kg) of various shapes and sizes; 10 agents. Training data: one desired trajectory common to the handling of all loads; one trajectory with no data for any context; sixteen unique training trajectories, one for each load. Test data: interpolation data sets for testing on the reference trajectory and the unique trajectory for each load; extrapolation data sets for testing on all trajectories.
From Figure 4, it can be seen that the PD controller is able to hold the position of the robot arm at the desired joint angles; $\lambda = 30$ and $K_d = 20$ are the gains associated with holding the respective positions. Convergence is achieved in about 10 s. The first couple of runs can be used for tuning the robot, after which the robot should have good repeatability. In addition, from the position and velocity plots of the computed torque controller (Figure 5) and the adaptive controller (Figure 6), it is observed that both controllers are able to achieve convergence of the parameters to the desired values. However, the proposed PD torque controller has a quicker convergence time and requires smaller gains than the computed torque controller. The error results in Table 1 further verify its effectiveness. Therefore, the proposed PD torque controller is better suited for the considered application. In addition, we can also conclude that the distributed GP can effectively eliminate the uncertainty and disturbance caused by the system model and the noises.
To further compare the multi-agent processing methods, the following approaches are tested: (1) independent GP (IGP), with a model trained independently for each input [6]; (2) combined GP (CGP), with one agent training a GP by combining data across inputs [34]; (3) the proposed distributed GP with the BIC (Bayesian Information Criterion) criterion. The training results (interpolation and extrapolation settings) in terms of NMSE (normalized mean square error) versus the number of training data points are shown in Figure 7. Note that IGP and CGP, i.e., the existing GPs, are centralized methods, whereas the proposed GP is trained in a distributed manner. In Figure 7, the first row displays the training results in the interpolation setting for the three methods. For joint 1, the proposed distributed GP achieves the best performance for any number of training points. For joint 4 and joint 6, the performance of the proposed GP is close to IGP and better than CGP as the number of training points increases. The second row displays the training results in the extrapolation setting. For joint 1, the proposed distributed GP achieves the best performance for small numbers of training points (<500); when the training points increase further, CGP becomes better than the proposed distributed GP (although they remain very close). For joint 4 and joint 6, the performance of the proposed GP is close to IGP and better than CGP as the training points increase. To sum up, the proposed distributed GP model can reach the performance of the centralized methods and is close to (and sometimes outperforms) the existing state-of-the-art centralized multiple-combined and multi-task methods.

5.2. Trajectory Tracking of an Unmanned Quadrotor

This section tests the proposed distributed GP-based model predictive control (GPMPC) on an unmanned quadrotor. The trajectory of the unmanned quadrotor is generated by a discrete-time Euler–Lagrange dynamical system [2]. The goal is to track its positions (X, Y, Z) and Euler angles ($\phi$, $\theta$, $\psi$). To compare with state-of-the-art controllers, the efficient MPC (EMPC) [71] and the efficient nonlinear MPC (ENMPC) [72] are also tested in the simulations. The parameters are selected as $L = 5$ and $Q = R = \mathrm{diag}(1, 1, 1)$.
In the first scenario, the unmanned quadrotor tracks a "Lorenz" trajectory with Gaussian white noise (zero mean and unit variance), as shown in Figure 8. To train the system model, we use the efficient MPC design proposed in [71]. One hundred seventy measurements, states, and controls are used to train the GP. The data from the rotational system lie in the range [0, 1], the angle $\varphi$ lies in the range [−1.6, 1.6], and the input lies in the range [4 × 10^8, 7 × 10^8]. The training of the GP takes 5 s, and we use 10 agents for training. The values of the mean squared error (MSE) obtained by the GP are very small: the MSE for the positions is 4.3618 × 10^{-4}, close to that of the stable GPMPC (GPMPC1) [73] with 4.3498 × 10^{-4}, and the MSE for the angles is 1.5743 × 10^{-8}, also close to GPMPC1 with 4.3030 × 10^{-9}. This indicates that the proposed distributed GP (GPMPC2) is efficient and well trained, as illustrated in Figure 9 (with different confidence levels). Note that the stable GPMPC (GPMPC1) is currently the best recent method and is also a single-agent method. The training results are therefore very close, which indicates that the proposed distributed GP can achieve a good training performance.
The position and attitude tracking results are demonstrated in Figure 10, and the tracking errors are displayed in Figure 11 and Table 2. As we can see from Figure 10 and Figure 11, the proposed distributed GP can learn the system model well and track the trajectories with high precision, close to the state-of-the-art controllers. This also indicates that as long as training sets are available, we can track the trajectory without model knowledge (model-free), i.e., the proposed GP can learn the system model well. In addition, as long as multiple agents are introduced, the model uncertainties and noise disturbances can be eliminated and suppressed.
Furthermore, the covariances of the positions and attitudes obtained by the different GP models (the stable GPMPC (GPMPC1) [73] and the proposed distributed GPMPC (GPMPC2)) are displayed in Figure 12. This indicates that the proposed distributed GP can also reach the performance of the state-of-the-art GP model.
In the second scenario, the unmanned quadrotor tracks an "Elliptical" trajectory with Gaussian white noise (zero mean and unit variance), as shown in Figure 13. The tracking performance and the tracking errors are shown in Figure 14 and Figure 15 and Table 3. From Figure 13, Figure 14 and Figure 15, we can also see that the proposed distributed GP learns the trajectory model effectively: the tracked trajectory is very close to the desired reference trajectory and to those of the state-of-the-art controllers. The covariance results of GPMPC1 and GPMPC2 are shown in Figure 16, which further verifies the effectiveness of the proposed distributed GP.

6. Conclusions

This paper used the Gaussian process to learn the trajectory model, and a distributed GP-based model learning strategy was proposed. For the continuous-time and discrete-time systems, we designed a GP-based PD controller and a GP-based MPC controller, respectively. To address the model uncertainties and noise disturbances, a distributed multi-agent system was used to train the model. In addition, since data-driven algorithms need a large number of training sets, the distributed GP model could also be employed to address this problem by using a Kullback–Leibler average consensus fusion criterion.
The proposed GP can solve the actual model-free problem as long as the training data sets are given. Since the considered multi-agent system is interconnected and is only used to eliminate the model uncertainties and noise disturbances, future research will mainly focus on the efficiency of the distributed Gaussian process and the robustness of the multi-agent network. In particular, we will focus on the application deployment of unmanned aerial vehicles (UAVs) and their use in UAV detection and localization; UAV racing is a challenging problem to overcome.

Author Contributions

Conceptualization, L.S. and D.X.; methodology, D.X.; software, D.X.; validation, D.X.; formal analysis, L.S. and D.X.; investigation, L.S. and D.X.; resources, D.X.; data curation, D.X.; writing—original draft preparation, D.X.; writing—review and editing, L.S. and D.X.; supervision, L.S.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Shaanxi Provincial Fund under Grant 2020JM-185 and the National Natural Science Foundation of China under grant 62171338.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

Proof.
First, we prove the Lipschitz constant of $\mu(\cdot)$ and the modulus of continuity of $\Sigma(\cdot)$. At two different states $x$ and $x'$, the norm of the difference of $\mu(\cdot)$ is $\|\mu(x) - \mu(x')\| = \|(K_{xX} - K_{x'X})\alpha\|$ with $\alpha = (K_{XX} + \sigma^2 I)^{-1} y$. Then, based on the Lipschitz continuity of the kernel and the Cauchy–Schwarz inequality, we have $\|\mu(x) - \mu(x')\| \le L_k \sqrt{N}\|\alpha\|\|x - x'\|$, which proves that $\mu$ is Lipschitz continuous.
Similarly, we have
$$|\Sigma(x) - \Sigma(x')| \le 2L_k\|x - x'\| + \|K_{xX} - K_{x'X}\|\,\|(K_{XX} + \sigma^2 I)^{-1}\|\,\|K_{Xx} + K_{Xx'}\|.$$
Due to the Lipschitz continuity of the kernel $k$ (and hence of $K$), we have
$$\|K_{xX} - K_{x'X}\| \le L_k\sqrt{N}\|x - x'\|, \qquad \|K_{Xx} + K_{Xx'}\| \le 2\sqrt{N}\max_{x, x' \in X} k(x, x').$$
Therefore, the modulus of continuity $\omega$ can be obtained by combining the above inequalities and taking the square root.
Finally, we prove the probabilistic uniform error bound. According to [74], for every grid $X_\tau$ with $|X_\tau|$ grid points and $\max_{x \in X}\min_{x' \in X_\tau}\|x - x'\| \le \tau$, it follows that, $\forall x \in X_\tau$,
$$\|f(x) - \mu(x)\| \le \sqrt{\beta(\tau)}\,\Sigma(x), \qquad (36)$$
with a probability of at least $1 - |X_\tau| e^{-\beta(\tau)/2}$. Then, setting $\beta(\tau) = 2\log\left(\frac{|X_\tau|}{\delta}\right)$, (36) holds with a probability of at least $1 - \delta$. Furthermore, since $f$, $\mu$, and $\Sigma$ are continuous, for every $x \in X$ there exists an $x' \in X_\tau$ with $\|x - x'\| \le \tau$, so that $\|f(x) - f(x')\| \le \tau L_f$, $\|\mu(x) - \mu(x')\| \le \tau L_\mu$, and $|\Sigma(x) - \Sigma(x')| \le \omega(\tau)$. In addition, based on [74], the minimum number of grid points is denoted by the covering number $M(\tau, X)$. Therefore, $\forall x \in X$, it follows that $p\left(\|f(x) - \mu(x)\| \le \gamma(\tau) + \sqrt{\beta(\tau)}\,\Sigma(x)\right) \ge 1 - \delta$, where $\gamma(\tau) = (L_\mu + L_f)\tau + \sqrt{\beta(\tau)}\,\omega(\tau)$ and $\beta(\tau) = 2\log\left(\frac{M(\tau, X)}{\delta}\right)$. This concludes the proof. □

Appendix B. Proof of Theorem 2

Proof.
According to Theorem 1, given $\beta_N(\tau) = 2\log\left(\frac{M(\tau, X)\pi^2 N^2}{3\delta}\right)$ and $N > 0$, it follows that
$$\sup_{x \in X} |f(x) - \mu_N(x)| \le \gamma_N(\tau) + \sqrt{\beta_N(\tau)}\,\Sigma_N(x) \qquad (37)$$
with a probability of at least $1 - \delta/2$. In addition, given the distance $r = \max_{x, x' \in X}\|x - x'\|$, we can obtain the trivial bound $M(\tau, X) \le \left(1 + \frac{r}{\tau}\right)^n$. Then, $\beta_N(\tau) \le 2n\log\left(1 + \frac{r}{\tau}\right) + 4\log(\pi N) - 2\log(3\delta)$.
On the one hand, the Lipschitz constant $L_\mu$ is bounded by
$$L_\mu \le L_k\sqrt{N}\,\|(K_{X_N X_N} + \sigma^2 I)^{-1} y_N\|.$$
On the other hand, since $f$ is bounded by $f_{max}$ and $K_{X_N X_N}$ is positive semi-definite, $\|(K_{X_N X_N} + \sigma^2 I)^{-1} y_N\|$ is bounded by
$$\|(K_{X_N X_N} + \sigma^2 I)^{-1} y_N\| \le \frac{\|y_N\|}{\rho_{min}(K_{X_N X_N} + \sigma^2 I)} \le \frac{\sqrt{N} f_{max} + \|\varphi_N\|}{\sigma^2},$$
where $\rho_{min}(\cdot)$ denotes the minimum eigenvalue,
and the vector $\varphi_N$, composed of $N$ variables, is a Gaussian disturbance with zero mean and covariance $\sigma^2 I$. This indicates that $\|\varphi_N\|^2/\sigma^2$ obeys a chi-square distribution, i.e., $\|\varphi_N\|^2/\sigma^2 \sim \chi_N^2$. Note that with a probability of at least $1 - e^{-\eta_N}$ we have
$$\|\varphi_N\|^2 \le \left(2\sqrt{N\eta_N} + 2\eta_N + N\right)\sigma^2.$$
Then, by using the union bound over all $N > 0$ and setting $\eta_N = \log\left(\frac{\pi^2 N^2}{3\delta}\right)$, we can obtain
$$\|(K_{X_N X_N} + \sigma^2 I_N)^{-1} y_N\| \le \frac{\sqrt{N} f_{max} + \sqrt{2\sqrt{N\eta_N} + 2\eta_N + N}\,\sigma}{\sigma^2}$$
with a probability of at least $1 - \frac{\delta}{2}$. Therefore, the Lipschitz constant $L_\mu$ of the posterior mean function $\mu_N(\cdot)$ satisfies
$$L_\mu \le L_k\,\frac{N f_{max} + \sqrt{N\left(2\sqrt{N\eta_N} + 2\eta_N + N\right)}\,\sigma}{\sigma^2}.$$
In addition, since $\eta_N$ grows logarithmically with the number of training samples $N$, it follows that $L_\mu \in O(N)$ with a probability of at least $1 - \frac{\delta}{2}$.
Furthermore, based on (17) and $\|(K_{X_N X_N} + \sigma^2 I_N)^{-1}\| \le \sigma^{-2}$, we can bound the modulus of continuity $\omega_N(\cdot)$ as
$$\omega_N(\tau) \le \sqrt{2\tau L_k\left(1 + \frac{N\max_{x, x' \in X} k(x, x')}{\sigma^2}\right)}.$$
According to (37), the uniform bound holds with a probability of at least $1 - \delta$ with
$$\gamma_N(\tau) \le \sqrt{2\tau L_k \beta_N(\tau)\left(1 + \frac{N\max_{x, x' \in X} k(x, x')}{\sigma^2}\right)} + L_f\tau + L_k\tau\,\frac{N f_{max} + \sqrt{N\left(2\sqrt{N\eta_N} + 2\eta_N + N\right)}\,\sigma}{\sigma^2}.$$
Note that if the error is to vanish, the above expression must converge to 0 as $N \to \infty$. This requires $\tau(N)$ to decrease faster than $O\left((N\log(N))^{-1}\right)$. Therefore, we can set $\tau(N) \in O(N^{-2})$, which yields $\lim_{N\to\infty}\gamma_N(\tau(N)) = 0$ and, furthermore, $\beta_N(\tau(N)) \in O(\log(N))$. Hence, if there exists an $\epsilon > 0$ such that $\Sigma_N(x) \in O\left(\log(N)^{-\frac{1}{2}-\epsilon}\right)$, it follows that $\sqrt{\beta_N(\tau(N))}\,\Sigma_N(x) \in O\left(\log(N)^{-\epsilon}\right)$, which concludes the proof. □

Appendix C. Proof of Theorem 3

Proof.
According to Lemmas 2 and 3, and recalling that the noise $w$ is a stationary Gaussian process, we use the Lyapunov candidate $V(x) = \frac{1}{2}r^2$. For $|r| > \frac{\|f(x) - \mu_N(x)\|}{k_c}$, it follows that $\dot{V}(x) = \frac{\partial V}{\partial r}\dot{r} = r\left(f(x) - \hat{f}(x) - k_c r\right) \le |r|\,\|f(x) - \mu_N(x)\| - k_c|r|^2 < 0$. According to Theorem 1, the model error is bounded, so that $p(\dot{V}(x) < 0) \ge 1 - \delta$. According to Lemma 3, we obtain the global ultimate boundedness. □

References

  1. Beckers, T.; Umlauft, J.; Kulic, D.; Hirche, S. Stable Gaussian process based tracking control of Lagrangian systems. In Proceedings of the 2017 IEEE 56th Annual Conference on Decision and Control (CDC), Melbourne, Australia, 12–15 December 2017; pp. 5180–5185. [Google Scholar]
  2. Corke, P.I.; Khatib, O. Robotics, Vision and Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011; Volume 73. [Google Scholar]
  3. Wang, D.; Mu, C. Adaptive-critic-based robust trajectory tracking of uncertain dynamics and its application to a spring-mass-damper system. IEEE Trans. Ind. Electron. 2018, 65, 654–663. [Google Scholar] [CrossRef]
  4. Choi, J.; Jung, J.; Park, I. Area-efficient approach for generating quantized gaussian noise. IEEE Trans. Circuits Syst. I Regul. Pap. 2016, 63, 1005–1013. [Google Scholar] [CrossRef]
  5. Khansari-Zadeh, S.M.; Billard, A. Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Trans. Robot. 2011, 27, 943–957. [Google Scholar] [CrossRef] [Green Version]
  6. Choi, J. Data-aided sensing for Gaussian process regression in iot systems. IEEE Internet Things 2021, 8, 7717–7726. [Google Scholar] [CrossRef]
  7. Diaz-Rozo, J.; Bielza, C.; Larranaga, P. Clustering of data streams with dynamic Gaussian Mixture Models: An IoT application in industrial processes. IEEE Internet Things J. 2018, 5, 3533–3547. [Google Scholar] [CrossRef] [Green Version]
  8. Sheng, H.; Xiao, J.; Cheng, Y.; Ni, Q.; Wang, S. Short-term solar power forecasting based on weighted Gaussian process regression. IEEE Trans. Ind. Electron. 2018, 65, 300–308. [Google Scholar] [CrossRef]
  9. Wen, Y.; Li, G.; Wang, Q.; Guo, X.; Cao, W. Modeling and analysis of permanent magnet spherical motors by a multi-task Gaussian process method and finite element method for output torque. IEEE Trans. Ind. Electron. 2021, 68, 8540–8549. [Google Scholar] [CrossRef]
  10. Jin, X. Fault tolerant nonrepetitive trajectory tracking for mimo output constrained nonlinear systems using iterative learning control. IEEE Trans. Cybern. 2019, 49, 3180–3190. [Google Scholar] [CrossRef] [PubMed]
  11. Fedele, G.; D’Alfonso, L. A kinematic model for swarm finite-time trajectory tracking. IEEE Trans. Cybern. 2019, 49, 3806–3815. [Google Scholar] [CrossRef]
  12. Wilson, A.G.; Knowles, D.A.; Ghahramani, Z. Gaussian process regression networks. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, UK, 26 June–1 July 2012. [Google Scholar]
  13. Pillonetto, G.; Schenato, L.; Varagnolo, D. Distributed multi-agent gaussian regression via finite-dimensional approximations. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2098–2111. [Google Scholar] [CrossRef]
  14. Varagnolo, D.; Pillonetto, G.; Schenato, L. Distributed parametric and nonparametric regression with on-line performance bounds computation. Automatica 2012, 48, 2468–2481. [Google Scholar] [CrossRef]
  15. Krivec, T.; Papa, G.; Kocijan, J. Simulation of variational Gaussian process NARX models with GPGPU. ISA Trans. 2021, 109, 141–151. [Google Scholar] [CrossRef]
  16. Aman, K.; Kocijan, J. Application of Gaussian processes for black-box modelling of biosystems. ISA Trans. 2007, 46, 443–457. [Google Scholar] [CrossRef] [PubMed]
  17. Hensman, J.; Durrande, N.; Solin, A. Variational fourier features for Gaussian processes. J. Mach. Learn. Res. 2017, 18, 5537–5588. [Google Scholar]
  18. Damianou, A.C.; Titsias, M.K.; Lawrence, N.D. Variational inference for latent variables and uncertain inputs in Gaussian processes. J. Mach. Learn. Res. 2016, 17, 1425–1486. [Google Scholar]
  19. Meng, Z.; Lin, Z.; Ren, W. Robust cooperative tracking for multiple non-identical second-order nonlinear systems. Automatica 2013, 49, 2363–2372. [Google Scholar] [CrossRef]
  20. Pu, S.; Yu, X.; Li, J. Distributed Kalman filter for linear system with complex multichannel stochastic uncertain parameter and decoupled local filters. Int. J. Adapt. Control. Signal Process. 2021, 35, 1498–1512. [Google Scholar] [CrossRef]
  21. Yu, X.; Li, J. Adaptive Kalman filtering for recursive both additive noise and multiplicative noise. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 1634–1649. [Google Scholar] [CrossRef]
  22. Huang, Y.; Meng, Z. Bearing-based distributed formation control of multiple vertical take-off and landing UAVs. IEEE Trans. Control. Netw. Syst. 2021, 8, 1281–1292. [Google Scholar] [CrossRef]
  23. Yang, T.; Yi, X.; Wu, J.; Yuan, Y.; Wu, D.; Meng, Z.; Hong, Y.; Wang, H.; Lin, Z.; Johansson, K.H. A survey of distributed optimization. Annu. Rev. Control. 2019, 47, 278–305. [Google Scholar] [CrossRef]
  24. Li, X.; Caimou, H.; Haoji, H. Distributed filter with consensus strategies for sensor networks. J. Appl. Math. 2013, 2013, 683249. [Google Scholar] [CrossRef] [Green Version]
  25. Zhou, T. Coordinated one-step optimal distributed state prediction for a networked dynamical system. IEEE Trans. Autom. Control. 2013, 58, 2756–2771. [Google Scholar] [CrossRef]
  26. Battistelli, G.; Chisci, L. Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability. Automatica 2014, 50, 707–718. [Google Scholar] [CrossRef]
27. Umlauft, J.; Lederer, A.; Hirche, S. Learning stable Gaussian process state space models. In Proceedings of the 2017 American Control Conference (ACC), Seattle, WA, USA, 24–26 May 2017; pp. 1499–1504.
28. Jagtap, P.; Pappas, G.J.; Zamani, M. Control barrier functions for unknown nonlinear systems using Gaussian processes. In Proceedings of the 2020 59th IEEE Conference on Decision and Control (CDC), Jeju Island, Korea, 14–18 December 2020; pp. 3699–3704.
29. Pöhler, L.D.; Umlauft, J.; Hirche, S. Uncertainty-based human motion tracking with stable Gaussian process state space models. IFAC-PapersOnLine 2019, 51, 8–14.
30. Umlauft, J.; Pöhler, L.D.; Hirche, S. An uncertainty-based control Lyapunov approach for control-affine systems modeled by Gaussian process. IEEE Control. Syst. Lett. 2018, 2, 483–488.
31. Lederer, A.; Umlauft, J.; Hirche, S. Uniform error bounds for Gaussian process regression with application to safe control. Adv. Neural Inf. Process. Syst. 2019, 32, 659–669.
32. Deisenroth, M.; Ng, J.W. Distributed Gaussian processes. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1481–1490.
33. Xie, A.; Yin, F.; Xu, Y.; Ai, B.; Chen, T.; Cui, S. Distributed Gaussian processes hyperparameter optimization for big data using proximal ADMM. IEEE Signal Process. Lett. 2019, 26, 1197–1201.
34. Bonilla, E.V.; Chai, K.M.; Williams, C. Multi-task Gaussian process prediction. In Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS 2007), Vancouver, BC, Canada, 3–5 December 2008; pp. 153–160.
35. Alvarez, M.; Lawrence, N.D. Sparse convolved Gaussian processes for multi-output regression. In Proceedings of the Advances in Neural Information Processing Systems 21 (NIPS 2008), Vancouver, BC, Canada, 8 December 2009.
36. Gal, Y.; van der Wilk, M.; Rasmussen, C.E. Distributed variational inference in sparse Gaussian process regression and latent variable models. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 3257–3265.
37. Nerurkar, E.D.; Roumeliotis, S.I.; Martinelli, A. Distributed maximum a posteriori estimation for multi-robot cooperative localization. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 6 July 2009; pp. 1402–1409.
38. Franceschelli, M.; Gasparri, A. On agreement problems with gossip algorithms in absence of common reference frames. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 15 July 2010; pp. 4481–4486.
39. Cunningham, A.; Indelman, V.; Dellaert, F. DDF-SAM 2.0: Consistent distributed smoothing and mapping. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 5220–5227.
40. Anderson, B.D.; Shames, I.; Mao, G.; Fidan, B. Formal theory of noisy sensor network localization. SIAM J. Discret. Math. 2010, 24, 684–698.
41. Carron, A.; Todescato, M.; Carli, R.; Schenato, L. An asynchronous consensus-based algorithm for estimation from noisy relative measurements. IEEE Trans. Control. Netw. Syst. 2014, 1, 283–295.
42. Thunberg, J.; Montijano, E.; Hu, X. Distributed attitude synchronization control. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 1962–1967.
43. Piovan, G.; Shames, I.; Fidan, B.; Bullo, F.; Anderson, B.D. On frame and orientation localization for relative sensing networks. Automatica 2013, 49, 206–213.
44. Sarlette, A.; Sepulchre, R. Consensus optimization on manifolds. SIAM J. Control. Optim. 2009, 48, 56–76.
45. Choudhary, S.; Carlone, L.; Nieto, C.; Rogers, J.; Christensen, H.I.; Dellaert, F. Distributed trajectory estimation with privacy and communication constraints: A two-stage distributed Gauss–Seidel approach. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5261–5268.
46. Deisenroth, M.P.; Fox, D.; Rasmussen, C.E. Gaussian processes for data-efficient learning in robotics and control. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 408–423.
47. Robinson, D.R.; Mar, R.T.; Estabridis, K.; Hewer, G. An efficient algorithm for optimal trajectory generation for heterogeneous multi-agent systems in non-convex environments. IEEE Robot. Autom. Lett. 2018, 3, 1215–1222.
48. Wang, J.M.; Fleet, D.J.; Hertzmann, A. Gaussian process dynamical models for human motion. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 30, 283–298.
49. Negenborn, R.R.; Maestre, J.M. Distributed model predictive control: An overview and roadmap of future research opportunities. IEEE Control. Syst. Mag. 2014, 34, 87–97.
50. Stewart, B.T.; Wright, S.J.; Rawlings, J.B. Cooperative distributed model predictive control for nonlinear systems. J. Process Control. 2011, 21, 698–704.
51. Ferramosca, A.; Limón, D.; Alvarado, I.; Camacho, E.F. Cooperative distributed MPC for tracking. Automatica 2013, 49, 906–914.
52. Conte, C.; Jones, C.N.; Morari, M.; Zeilinger, M.N. Distributed synthesis and stability of cooperative distributed model predictive control for linear systems. Automatica 2016, 69, 117–125.
53. Groß, D.; Stursberg, O. A cooperative distributed MPC algorithm with event-based communication and parallel optimization. IEEE Trans. Control. Netw. Syst. 2015, 3, 275–285.
54. Alrifaee, B.; Heßeler, F.J.; Abel, D. Coordinated non-cooperative distributed model predictive control for decoupled systems using graphs. IFAC-PapersOnLine 2016, 49, 216–221.
55. Alonso, C.A.; Matni, N. Distributed and localized closed loop model predictive control via system level synthesis. In Proceedings of the 2020 59th IEEE Conference on Decision and Control (CDC), Jeju, Korea, 14–18 December 2020; pp. 5598–5605.
56. Alonso, C.A.; Matni, N.; Anderson, J. Explicit distributed and localized model predictive control via system level synthesis. In Proceedings of the 2020 59th IEEE Conference on Decision and Control (CDC), Jeju, Korea, 14–18 December 2020; pp. 5606–5613.
57. Luis, C.E.; Schoellig, A.P. Trajectory generation for multiagent point-to-point transitions via distributed model predictive control. IEEE Robot. Autom. Lett. 2019, 4, 375–382.
58. Torrente, G.; Kaufmann, E.; Föhn, P.; Scaramuzza, D. Data-driven MPC for quadrotors. IEEE Robot. Autom. Lett. 2021, 6, 3769–3776.
59. Liu, D.; Tang, M.; Fu, J. Robust adaptive trajectory tracking for wheeled mobile robots based on Gaussian process regression. Syst. Control. Lett. 2022, 163, 105210.
60. Akbari, B.; Zhu, H. Tracking dependent extended targets using multi-output spatiotemporal Gaussian processes. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18301–18314.
61. Hidalgo-Carrió, J.; Hennes, D.; Schwendner, J.; Kirchner, F. Gaussian process estimation of odometry errors for localization and mapping. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5696–5701.
62. Brossard, M.; Bonnabel, S. Learning wheel odometry and IMU errors for localization. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 291–297.
63. Nguyen, T.V.; Bonilla, E.V. Collaborative multi-output Gaussian processes. In Proceedings of the UAI’14: Thirtieth Conference on Uncertainty in Artificial Intelligence, Quebec City, QC, Canada, 23–27 July 2014; pp. 643–652.
64. Carron, A.; Todescato, M.; Carli, R.; Schenato, L.; Pillonetto, G. Multi-agents adaptive estimation and coverage control using Gaussian regression. In Proceedings of the 2015 European Control Conference (ECC), Linz, Austria, 15–17 July 2015; pp. 2490–2495.
65. Mallasto, A.; Feragen, A. Learning from uncertain curves: The 2-Wasserstein metric for Gaussian processes. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
66. Rasmussen, C.E.; Williams, C.K. Gaussian Processes for Machine Learning. In Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
67. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
68. Umlauft, J.; Hirche, S. Feedback linearization based on Gaussian processes with event triggered online learning. IEEE Trans. Autom. Control. 2020, 65, 4154–4169.
69. Zhou, B.; Gao, L.; Dai, Y.H. Gradient methods with adaptive step-sizes. Comput. Optim. Appl. 2006, 35, 69–86.
70. Ivanov, S.E.; Zudilova, T.; Voitiuk, T.; Ivanova, L.N. Mathematical modeling of the dynamics of 3-DOF robot-manipulator with software control. Procedia Comput. Sci. 2020, 178, 311–319.
71. Abdolhosseini, M. Model Predictive Control of an Unmanned Quadrotor Helicopter: Theory and Flight Tests. Ph.D. Thesis, Concordia University, Montreal, QC, Canada, 2012.
72. Cannon, M. Efficient nonlinear model predictive control algorithms. Annu. Rev. Control. 2004, 28, 229–237.
73. Beckers, T.; Kulic, D.; Hirche, S. Stable Gaussian process based tracking control of Euler–Lagrange systems. Automatica 2019, 103, 390–397.
74. Srinivas, N.; Krause, A.; Kakade, S.M.; Seeger, M.W. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Trans. Inf. Theory 2012, 58, 3250–3265.
Figure 1. The flowchart of consensus/fusion.
Figure 2. The diagram of the Puma 560 robot arm manipulator (6 DoFs).
Figure 3. The process methodology and flowchart.
Figure 4. The proposed PD controller: (a) Position Plot; (b) Velocity Plot.
Figure 5. The CT controller: (a) Position Plot; (b) Velocity Plot.
Figure 6. The Adaptive controller: (a) Position Plot; (b) Velocity Plot.
Figure 7. Training results using different methods.
Figure 8. Lorenz trajectory tracking.
Figure 9. Training performance with 60%, 80%, and 90% confidence: (a) $Y_1(k)$; (b) $Y_2(k)$.
Figure 10. Tracking results: (a) Positions tracking; (b) Attitudes tracking.
Figure 11. Tracking errors: (a) Positions errors; (b) Attitudes errors.
Figure 12. Covariance results: (a) Positions covariance; (b) Attitudes covariance.
Figure 13. Elliptical trajectory tracking.
Figure 14. Tracking results: (a) Positions tracking; (b) Attitudes tracking.
Figure 15. Tracking errors: (a) Positions errors; (b) Attitudes errors.
Figure 16. Covariance results: (a) Positions covariance; (b) Attitudes covariance.
Table 1. Mean absolute error.

Controller | ϕ | θ | ψ | ϕ̇ | θ̇ | ψ̇
CT | 0.0077 | 0.0284 | 0.0488 | 0.0135 | 0.0239 | 0.0289
Adaptive | 0.0216 | 0.1226 | 0.1009 | 0.0378 | 0.0451 | 0.1969
The Proposed PD | 0.0209 | 0.0205 | 0.1392 | 0.0072 | 0.0137 | 0.0349
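For reference, the entries of Table 1 follow the standard definition of the mean absolute error. A minimal formulation, assuming the trajectory of a generic coordinate $q$ is sampled at steps $k = 1, \dots, N$ with desired value $q_{\mathrm{des}}(k)$ (the symbols $q$, $q_{\mathrm{des}}$, and $N$ are generic notation introduced here, not taken from the tables):

$$\mathrm{MAE}(q) = \frac{1}{N}\sum_{k=1}^{N}\left| q_{\mathrm{des}}(k) - q(k) \right|$$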
Table 2. Training errors and tracking errors.

Training Methods \ Training Errors | Positions | Attitudes
GPMPC1 | 4.3496 × 10⁻⁴ | 4.3030 × 10⁻⁹
GPMPC2 | 4.3618 × 10⁻⁴ | 1.5746 × 10⁻⁸

Controllers \ Mean Absolute Errors | Positions | Attitudes
EMPC | 0.0287 | 9.6239 × 10⁻¹¹
ENMPC | 0.0050 | 7.6412 × 10⁻¹¹
GPMPC | 0.0049 | 2.2407 × 10⁻¹¹
Table 3. Training errors and tracking errors.

Training Methods \ Training Errors | Positions | Attitudes
GPMPC1 | 6.1494 × 10⁻⁸ | 3.8282 × 10⁻¹⁰
GPMPC2 | 5.1161 × 10⁻⁷ | 1.1889 × 10⁻⁹

Controllers \ Mean Absolute Errors | Positions | Attitudes
EMPC | 0.0287 | 0.0263
ENMPC | 0.0050 | 1.8769 × 10⁻⁴
GPMPC | 0.0049 | 8.4033 × 10⁻⁵
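The position and attitude columns of Tables 2 and 3 are per-dimension mean absolute errors, which can be reproduced from logged trajectories. The snippet below is a minimal sketch, assuming reference and estimated trajectories are stored as N × d NumPy arrays; the function name, array names, and synthetic data are hypothetical and for illustration only.

```python
import numpy as np

def mean_absolute_error(reference, estimate):
    """Per-dimension mean absolute error between two trajectories.

    Both inputs are arrays of shape (N, d): one row per time step,
    one column per coordinate (e.g., x, y, z positions).
    Returns a length-d vector of column-wise MAEs.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return np.mean(np.abs(reference - estimate), axis=0)

# Hypothetical usage with synthetic data standing in for logged trajectories.
rng = np.random.default_rng(0)
ref_positions = rng.standard_normal((200, 3))          # desired x, y, z
est_positions = ref_positions + 1e-2 * rng.standard_normal((200, 3))
print(mean_absolute_error(ref_positions, est_positions))  # on the order of 1e-2 per axis
```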
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
