Article

Two Compensation Strategies for Optimal Estimation in Sensor Networks with Random Matrices, Time-Correlated Noises, Deception Attacks and Packet Losses

by Raquel Caballero-Águila 1, Jun Hu 2 and Josefa Linares-Pérez 3,*

1 Department of Statistics and Operations Research, University of Jaén, Campus Las Lagunillas, 23071 Jaén, Spain
2 Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
3 Department of Statistics and Operations Research, University of Granada, Av. Fuentenueva, 18071 Granada, Spain
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8505; https://doi.org/10.3390/s22218505
Submission received: 12 October 2022 / Revised: 28 October 2022 / Accepted: 31 October 2022 / Published: 4 November 2022
(This article belongs to the Special Issue Algorithms, Systems and Applications of Smart Sensor Networks)

Abstract

Due to its great importance in several applied and theoretical fields, the signal estimation problem in multisensor systems has grown into a significant research area. Networked systems are known to suffer random flaws, which, if not appropriately addressed, can deteriorate the performance of the estimators substantially. Thus, the development of estimation algorithms accounting for these random phenomena has received a lot of research attention. In this paper, the centralized fusion linear estimation problem is discussed under the assumption that the sensor measurements are affected by random parameter matrices, perturbed by time-correlated additive noises, exposed to random deception attacks and subject to random packet dropouts during transmission. A covariance-based methodology and two compensation strategies based on measurement prediction are used to design recursive filtering and fixed-point smoothing algorithms. The measurement differencing method—typically used to deal with the measurement noise time-correlation—is unsuccessful for these kinds of systems with packet losses because some sensor measurements are randomly lost and, consequently, cannot be processed. Therefore, we adopt an alternative approach based on the direct estimation of the measurement noises and the innovation technique. The two proposed compensation scenarios are contrasted through a simulation example, in which the effect of the different uncertainties on the estimation accuracy is also evaluated.

1. Introduction

As a fundamental topic in the fields of control and signal processing, the estimation problem in networked systems has attracted great research attention in recent years. The presence of different uncertainty sources—errors in the measurement devices, limitations in transmission processes or vulnerability of the network, among others—often causes certain limitations, such as lack of signal information (commonly referred to as uncertain or missing observations), fading measurements or transmission delays and packet dropouts, which are usually random in nature. The performance of the estimators proposed in conventional systems can significantly degrade due to these constraints, which typically result in random imperfections in the data available for estimation. As a result, new problems arise when studying the estimation problem in multisensor systems with network-induced phenomena. A comprehensive review of the main results and new challenges related to this problem can be found in [1,2,3,4].
Some of the aforementioned network-induced phenomena that occur in a great variety of application fields—e.g., digital control of chemical processes, radar control, navigation systems and economic systems—can be modeled by including stochastic parameters in the measurement equations. Consequently, the use of random parameter matrices in the mathematical model of the sensor measurements offers a unified framework for describing such random events. Systems with multiplicative noises are a specific example of systems with random measurement matrices and are of tremendous interest due to their applicability in various areas of communication, image processing, etc. Systems with uncertain observations or sensor gain degradation serve as another example. In addition, networked systems with random delays can be transformed into systems with random matrices. These facts, among others, explain why research on the estimation problem in these kinds of systems with random parameter matrices has become increasingly popular over the past years. For some representative contributions, see, for example, [5,6,7,8,9,10] and references therein.
Over the last few decades, a considerable amount of research has been reported on the estimation problem in networked systems with packet dropouts. Random packet losses during the process of data transmission can be caused by congestion-related buffer overflows, transmission failures in the physical network links or long transmission delays that cause the discarding of outdated packets, among other reasons. A major topic in these kinds of systems is how to compensate for the data packets that are lost. The most popular compensation strategies are the zero-input mechanism and the hold-input mechanism, under which, when the current data are lost, the filter input is either set to zero or held at the most recent data that were received, respectively [11,12,13,14]. In [15], a packet dropout compensation framework that includes the popular zero-input and hold-input mechanisms as special cases is proposed. An alternative prediction compensation methodology has drawn the attention of several authors in recent years; this methodology involves compensating each missed measurement packet by its predictor (see, e.g., [16,17,18] and references therein). Using this compensation strategy, centralized fusion estimators, including filter, predictor and smoother, in the linear unbiased minimum variance sense are designed in [16]. The problem of self-tuning distributed fusion state estimation in networked systems with unknown packet receiving rates, noise variances and model parameters is addressed in [17]. A solution to the distributed fusion estimation problem in systems with random parameter matrices is proposed in [18], under the assumption that transmissions to local processors may experience one-step delays and packet dropouts, and either one or two measurements can be simultaneously processed at each time instant.
In practice, time-correlated measurement and channel noises are found in many engineering applications, such as radar systems, global navigation satellite systems, or wireless networks, where the sampling frequency is usually high enough to make the measurement noises significantly correlated over two or more consecutive sampling periods. The estimation problem under the assumption that the time-correlated measurement noises are the output of a linear system model with white noise has been an important research focus during the past years. The most popular methods to deal with this kind of noise correlation are the state augmentation method—which is simple and direct, but computationally expensive—and the measurement differencing method, which avoids increasing dimensions but needs two consecutive measurements to compute the difference (see, e.g., [19,20,21,22,23]). It should be noted that most of the published results are concerned with the estimation problem in single-sensor systems, whereas the fusion estimation problem in networked systems has received significantly less attention (see, e.g., [24,25]).
When the system is subject to random delays or packet dropouts, the sensor measurement may not be received on time at the processor, and consequently, the measurement differencing method cannot be used. In other words, for systems having packet losses, the measurement differencing method will not work, and finding new non-augmentation methods to deal with the time-correlated measurement noise in these kinds of systems is a challenging issue. The state estimation problem for stochastic uncertain systems with time-correlated additive noises and random packet dropouts in transmission is addressed in [26]—for linear systems—and [27]—for non-linear systems—using the predictor of a sensor measurement as compensation when such measurement is lost.
Despite their undeniable benefits, sensor networks have some weaknesses that must be considered when dealing with the estimation problem to guarantee the accuracy of the designed estimators. One of the most common dangers that make a network less reliable is the possibility of suffering cyber-attacks, and analyzing the success rate of such attacks launched by adversaries has recently become a research topic of great interest. In practice, successful attacks can usually be understood as intermittent or random in their implementation. A comprehensive literature review related to cyber-attacks on networked systems can be found in [28].
One of the most common types of attacks is the so-called deception attack, which violates data integrity by purposefully altering sensor measurements. The complexity and significance of studying the estimation problem in networked systems subject to deception attacks have inspired many fruitful efforts by the scientific community. In [29], an integrated approach is proposed to simultaneously address the problems of detection and estimation in discrete-time stochastic systems with event-triggered transmission, subject to random disturbances and deception attacks. The problem of detection against deception attacks in a remote estimation framework in multi-sensor systems is addressed in [30]. The distributed filtering problem for discrete-time systems with multiplicative noises and deception attacks has been studied in [31]. In [32], a cluster-based approach is used to address the distributed fusion estimation problem for networked systems when measurements are subject to random deception attacks, and the distributed estimation problem in networked systems with a given topology has been studied in [33]—under false data injection attacks—and in [34,35]—under deception attacks.

Main Contributions and Related Work

To the best of our knowledge, the optimal linear estimation problem in multi-sensor systems with random parameter matrices and time-correlated additive noises has not been fully investigated when the measurements are subject to random deception attacks, and packet dropouts may occur during transmission. Motivated by the above discussion, we consider a system model with the following characteristics: (a) the evolution model of the signal to be estimated does not need to be known, since a covariance-based estimation approach is used; (b) the model for the sensor-measured output under consideration includes random parameter matrices, offering a broad framework for many network-induced phenomena; (c) the sensor measured outputs are randomly affected by deception attacks; (d) random packet dropouts may occur during data transmissions from the sensors to the processing center, and two different compensation models—based on measurement prediction—are proposed to describe the measurements available after transmission. Apart from the consideration of a comprehensive system model that covers many general situations, the most significant contribution of this paper lies in the fact that, in contrast to the existing state-augmentation and measurement differencing methods, a non-augmentation technique is used to deal with the time-correlation of the additive noises. More precisely, through the direct estimation of the measurement noises and using the innovation technique under a covariance-based approach, recursive centralized fusion filtering and fixed-point smoothing algorithms are designed.
Some of the most closely related papers in the literature are [18,24,25,26]. In [18], systems with random parameter matrices, one-step delays, packet dropouts and multi-packet processing are considered; the main difference between the system model in [18] and the current one is the presence of deception attacks and time-correlated noises, apart from the fact that [18] allows the possibility that two packets can be received at each instant of time. In [24], using the measurement-differencing method, centralized and distributed fusion filtering and fixed-point smoothing algorithms are designed for networked systems with random parameter matrices and time-correlated channel noise; so, in addition to the fact that we do not use the measurement-differencing approach, the key distinction with the current results resides in the presence of random deception attacks and the possibility of random packet dropouts in the transmission. The centralized and sequential fusion filtering problems for networked uncertain systems, where the measurement noises are time-correlated, are addressed in [25]; our study also considers time-correlated additive noises and it extends the results in [25] by including random parameter matrices and deception attacks in the measurement model, as well as random packet dropouts in the transmission. Finally, random packet dropouts in transmission and time-correlated additive noises are simultaneously considered in [26], but neither random parameter matrices nor random deception attacks are considered in the measurement equation. It is also noteworthy that the derivation in the estimation algorithms in [25,26] is based on the knowledge of the state-space model, whereas the algorithms proposed in the current paper do not require the signal evolution equation, but just the covariance function factorization into a separable form (covariance information). These considerations are summarized in Figure 1.
Paper structure. The paper’s structure is outlined as follows. The problem under consideration and the characteristics of the observation model are described in Section 2, with special emphasis on the two compensation strategies proposed. The main results are presented in Section 3, where some auxiliary lemmas are firstly introduced before the optimal linear filtering and smoothing estimation algorithms are derived under the two compensation frameworks considered. In Section 4, a simulation example is presented to show the feasibility of the proposed estimators. Finally, some conclusions are drawn in Section 5.
Notation. As far as possible, standard mathematical notation will be used. If not explicitly stated, all vectors and matrices are assumed to be of suitable dimensions, compatible with algebraic operations.
$\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$: Set of $n$-dimensional real vectors and set of $m\times n$ real matrices
$\delta_{k,h}$: Kronecker delta function
$M^T$ and $M^{-1}$: Transpose and inverse of a matrix $M$
$M^{(a)T}$ and $M^{(a)-1}$: Shorthand for $(M^{(a)})^T$ and $(M^{(a)})^{-1}$
$(M_1,\dots,M_k)$: Partitioned matrix whose blocks are the submatrices $M_1,\dots,M_k$
$\mathrm{Diag}(N_1,\dots,N_m)$: Block diagonal matrix whose main-diagonal blocks are $N_1,\dots,N_m$
$I_n$ and $\mathbf{1}_n$: $n\times n$ identity matrix and $n\times 1$ all-ones vector
$0$: Zero scalar or matrix of compatible dimension
$\otimes$ and $\circ$: Kronecker and Hadamard products of matrices, respectively
$E[a]=\bar{a}$: Mathematical expectation of a random vector or matrix $a$
$P(\star)$: Probability of an event $\star$
$\Sigma_{k,h}^{ab(ij)}$: Covariance of the random vectors $a_k^{(i)}$ and $b_h^{(j)}$, that is, $\Sigma_{k,h}^{ab(ij)}=\mathrm{Cov}[a_k^{(i)},b_h^{(j)}]=E[(a_k^{(i)}-\bar{a}_k^{(i)})(b_h^{(j)}-\bar{b}_h^{(j)})^T]$ (with $\Sigma_{k,h}^{a(ij)}=\Sigma_{k,h}^{aa(ij)}$)
$G_k=G_{k,k}$: Function $G_{k,h}$, depending on the time instants $k$ and $h$, when $h=k$
$F^{(i)}=F^{(ii)}$: Function $F^{(ij)}$, depending on the sensors $i$ and $j$, when $i=j$
$\hat{a}_{k/s}^{(*)}$: Optimal linear estimator of the vector $a_k$ based on $y_1^{(*)},\dots,y_s^{(*)}$

2. Problem Statement and Observation Model

The goal of this work is to design recursive algorithms for the centralized fusion least-squares (LS) linear filtering and fixed-point smoothing problems, under the assumption that the sensor measurements of the signal to be estimated are transmitted over unreliable communication channels and random packet dropouts may occur during the process of data transmission. In addition, the output measurements—which are perturbed by time-correlated additive noises and subject to stochastic deception attacks—may randomly contain different uncertainties. By incorporating random parameter matrices into the measurement model, a general framework to model multiple random phenomena is proposed, including sensor gain degradation, missing or fading measurements, uncertainties brought on by the presence of multiplicative noise, or both multiplicative noises and missing measurements.
In order to compensate the lost data, two prediction compensation mechanisms are proposed, using either the predictor of the lost data packet, or the predictor of the data packet obtained by the sensor before the deception attack is launched.

2.1. Signal Process

The signal evolution equation does not need to be known, as the design of the proposed estimation algorithms will not be based on the state-space model—which requires an explicit mathematical model to express both the time variation of the signal and the relationship of that signal to the observations used for estimation. We will use a covariance-based estimation approach for the design of the algorithms that requires only the first- and second-order moments of the processes involved in the model describing the observations of all sensors. The advantage of this approach is that it is not necessary to derive a different estimation algorithm when the signal evolution model varies; instead, the mean function of the signal is assumed to be zero and its covariance function is expressed in a separable form. More specifically, the following assumption on the signal process is required:
Hypothesis (H1).
The signal $\{x_k\}_{k\ge 1}$ is an $n_x$-dimensional second-order zero-mean random process whose covariance function can be factorized in a separable form: $\Sigma_{k,h}^x=E[x_kx_h^T]=A_kB_h^T$, $h\le k$, where $A_k,B_h\in\mathbb{R}^{n_x\times n}$ are known matrices.
Remark 1.
The separable form of the signal covariance function required in assumption (H1) covers many practical situations. For instance, when the state-space model is available, $x_k=\Phi_{k-1}x_{k-1}+w_{k-1}$, $k\ge 1$, assuming non-singular transition matrices and a white noise independent of the initial state, the covariance function of the signal can be expressed as $E[x_kx_h^T]=\Phi_{k,h}E[x_hx_h^T]$, $h\le k$, where $\Phi_{k,h}=\Phi_{k-1}\cdots\Phi_h$, and assumption (H1) is fulfilled taking, for example, $A_k=\Phi_{k,0}$ and $B_h=E[x_hx_h^T](\Phi_{h,0}^{-1})^T$ (see, e.g., [24]). Likewise, for the state-space model with stationary signals, $x_k=\Phi x_{k-1}+w_{k-1}$, $k\ge 1$, under the same assumptions of non-singularity and independence, the covariance function can be expressed as $E[x_kx_h^T]=\Phi^{k-h}E[x_hx_h^T]$, $h\le k$; then, taking $A_k=\Phi^k$ and $B_h=E[x_hx_h^T](\Phi^h)^{-T}$, assumption (H1) is clearly true. Hence, the separability assumption on the signal autocovariance function required in (H1) covers different types of stationary and non-stationary signals and, as a result, the estimation problem approach based on such hypothesis provides a unifying approach to obtain general algorithms that are applicable to a wide range of practical situations, regardless of whether or not the state-space model is fully known. Consequently, the covariance-based estimation approach provides us with a large variety of options for dealing with different signal models without the need to design specific algorithms for each one. The signal of linear systems or that of uncertain systems with state-dependent multiplicative noise are examples of processes satisfying (H1), as will be shown in Section 4.
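As a purely illustrative check of assumption (H1) for the stationary state-space case discussed above, the following minimal sketch builds $A_k=\Phi^k$ and $B_h=E[x_hx_h^T](\Phi^h)^{-T}$ and verifies numerically that $A_kB_h^T$ reproduces the signal covariance. The transition matrix, noise covariance and initial covariance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch: verify the separable factorization of (H1) for a stationary model
# x_k = Phi x_{k-1} + w_{k-1}. Phi, Q and Sigma0 are illustrative choices.
Phi = np.array([[0.95, 0.10],
                [0.00, 0.80]])          # non-singular transition matrix
Q = 0.1 * np.eye(2)                     # white-noise covariance
Sigma0 = np.eye(2)                      # covariance of the initial state

K = 20
Sigma = [Sigma0]                        # Sigma[h] = E[x_h x_h^T]
for _ in range(K):
    Sigma.append(Phi @ Sigma[-1] @ Phi.T + Q)

# Factorization suggested in Remark 1: A_k = Phi^k, B_h = E[x_h x_h^T](Phi^h)^{-T}
A = [np.linalg.matrix_power(Phi, k) for k in range(K + 1)]
B = [Sigma[h] @ np.linalg.inv(np.linalg.matrix_power(Phi, h)).T for h in range(K + 1)]

k, h = 12, 7                            # any pair with h <= k
cov_direct = np.linalg.matrix_power(Phi, k - h) @ Sigma[h]   # Phi^{k-h} E[x_h x_h^T]
cov_factor = A[k] @ B[h].T
print(np.allclose(cov_direct, cov_factor))                   # True: Sigma_{k,h}^x = A_k B_h^T
```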

2.2. Measurement Model with Random Parameter Matrices and Time-Correlated Additive Noises

The measured outputs of the discrete-time random signal x k are assumed to be perturbed by random parameter matrices and time-correlated additive noises, according to the following model:
$$z_k^{(i)}=C_k^{(i)}x_k+v_k^{(i)},\quad k\ge 1;\ i=1,\dots,m, \qquad (1)$$
where $z_k^{(i)}\in\mathbb{R}^{n_z}$ is the output measurement of the $i$-th sensor at time $k$. The following hypotheses about the matrices $\{C_k^{(i)}\}_{k\ge 1}$ and the additive noises $\{v_k^{(i)}\}_{k\ge 1}$ are required:
Hypothesis (H2).
$\{C_k^{(i)}\}_{k\ge 1}$, $i=1,\dots,m$, are independent sequences of independent random parameter matrices. Denoting by $c_{pq}^{(i)}(k)$ the $(p,q)$-th entry of $C_k^{(i)}$, the first and second-order moments $E[c_{pq}^{(i)}(k)]$ and $E[c_{pq}^{(i)}(k)c_{p'q'}^{(j)}(k)]$, for $i,j=1,\dots,m$, $p,p'=1,\dots,n_z$ and $q,q'=1,\dots,n_x$, are assumed to be known.
From (H2), it is clear that the means $\bar{C}_k^{(i)}=E[C_k^{(i)}]$ are known and their $(p,q)$-th entries are $E[c_{pq}^{(i)}(k)]$, $p=1,\dots,n_z$, $q=1,\dots,n_x$. Moreover, for any deterministic matrix $G\in\mathbb{R}^{n_x\times n_x}$, the $(p,q)$-th entry of $E[C_k^{(i)}GC_k^{(j)T}]$ is given by $\sum_{a=1}^{n_x}\sum_{b=1}^{n_x}E[c_{pa}^{(i)}(k)c_{qb}^{(j)}(k)]G_{ab}$.
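The entrywise formula above can be evaluated directly from the known second-order moments. The following sketch illustrates this computation; the array of moments and the example values are assumptions made for the illustration, not data from the paper.

```python
import numpy as np

# Sketch of the entrywise formula: (E[C^(i) G C^(j)T])_{pq} = sum_{a,b} E[c_pa^(i) c_qb^(j)] G_ab.
# 'second_moment[p, a, q, b]' stands for the known moments E[c_pa c_qb].
def expected_CGCt(second_moment, G):
    nz, nx = second_moment.shape[0], second_moment.shape[1]
    out = np.zeros((nz, nz))
    for p in range(nz):
        for q in range(nz):
            out[p, q] = np.sum(second_moment[p, :, q, :] * G)
    return out

# Hypothetical example: entries of C are independent, with means c_bar and variance 0.04,
# so E[c_pa c_qb] = c_bar_pa * c_bar_qb, plus 0.04 when (p, a) = (q, b).
nz, nx = 2, 3
c_bar = np.arange(1, nz * nx + 1).reshape(nz, nx) / 10.0
m2 = np.einsum('pa,qb->paqb', c_bar, c_bar)
for p in range(nz):
    for a in range(nx):
        m2[p, a, p, a] += 0.04
print(expected_CGCt(m2, np.eye(nx)))
```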
Hypothesis (H3).
The measurement noises $\{v_k^{(i)}\}_{k\ge 0}$, $i=1,\dots,m$, are time-correlated sequences satisfying
$$v_k^{(i)}=D_{k-1}^{(i)}v_{k-1}^{(i)}+\xi_{k-1}^{(i)},\quad k\ge 1;\ i=1,\dots,m, \qquad (2)$$
where $\{D_k^{(i)}\}_{k\ge 0}$ are given deterministic parameter matrices and the following assumptions are required:
Hypothesis (H3(a)).
$v_0^{(i)}$, $i=1,\dots,m$, are zero-mean, second-order random vectors with known covariance matrices: $E[v_0^{(i)}v_0^{(j)T}]=\Sigma_0^{v(ij)}$, $i,j=1,\dots,m$.
Hypothesis (H3(b)).
$\{\xi_k^{(i)}\}_{k\ge 0}$, $i=1,\dots,m$, are zero-mean second-order white processes. They are independent of each other, except at the same time instant, with known covariance matrices: $E[\xi_k^{(i)}\xi_h^{(j)T}]=\Sigma_k^{\xi(ij)}\delta_{k,h}$, $k,h\ge 0$; $i,j=1,\dots,m$.
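For illustration, a minimal sketch of how a time-correlated noise sequence satisfying (H3) can be generated for a single scalar sensor is given below; all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: generate v_k = D*v_{k-1} + xi_{k-1} for one scalar sensor (illustrative values).
rng = np.random.default_rng(0)
D = 0.8                       # deterministic parameter D_k^(i) (constant here)
sigma_xi = 0.5                # standard deviation of the white noise xi_k
K = 200

v = np.zeros(K + 1)
v[0] = rng.normal(0.0, 1.0)   # v_0 with known variance, as in (H3(a))
for k in range(1, K + 1):
    v[k] = D * v[k - 1] + sigma_xi * rng.normal()

# Empirical check of the one-step correlation: E[v_k v_{k-1}] / E[v_{k-1}^2] should be close to D
print(np.mean(v[1:] * v[:-1]) / np.mean(v[:-1] ** 2))
```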

2.3. Stochastic Deception Attacks

Let us consider that the sensor measured outputs are randomly perturbed by deception attacks that are launched by a potential adversary, who injects some false information involving two components: one that neutralizes the actual measurements and a noise component, which is the blurred deceptive information added by the adversary. Specifically, at the $i$-th sensor, $i=1,\dots,m$, the deception signal is modeled as
$$\breve{z}_k^{(i)}=-z_k^{(i)}+w_k^{(i)},\quad k\ge 1;\ i=1,\dots,m,$$
where the following assumption on the noises $\{w_k^{(i)}\}_{k\ge 1}$ is required:
Hypothesis (H4).
The noises $\{w_k^{(i)}\}_{k\ge 1}$, $i=1,\dots,m$, are zero-mean, second-order white processes. They are independent of each other, except at the same time instant, with known covariance matrices: $E[w_k^{(i)}w_h^{(j)T}]=\Sigma_k^{w(ij)}\delta_{k,h}$, $k,h\ge 1$; $i,j=1,\dots,m$.
At every sensor $i$ and every sampling time $k$, the deception attacks may randomly succeed or fail, which can be modeled by a Bernoulli random variable, $\lambda_k^{(i)}$, taking the value one if the attack is successful and the value zero if it is unsuccessful. More precisely, the sensor measurements, $\breve{y}_k^{(i)}$, subject to random deception attacks, are modeled by:
$$\breve{y}_k^{(i)}=z_k^{(i)}+\lambda_k^{(i)}\breve{z}_k^{(i)},\quad k\ge 1;\ i=1,\dots,m, \qquad (3)$$
or, equivalently,
$$\breve{y}_k^{(i)}=(1-\lambda_k^{(i)})z_k^{(i)}+\lambda_k^{(i)}w_k^{(i)},\quad k\ge 1;\ i=1,\dots,m.$$
Hence, a successful attack ($\lambda_k^{(i)}=1$) means that only the noise injected by the adversary, $w_k^{(i)}$, will be transmitted to the processing center, while an unsuccessful attack ($\lambda_k^{(i)}=0$) means that the actual measured output, $z_k^{(i)}$, remains unchanged and will be transmitted intact. The following assumption on these Bernoulli random variables is required:
Hypothesis (H5).
$\{\lambda_k^{(i)}\}_{k\ge 1}$, $i=1,\dots,m$, are independent white sequences of Bernoulli random variables with known probabilities $P(\lambda_k^{(i)}=1)=\bar{\lambda}_k^{(i)}$, $k\ge 1$.
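A minimal sketch of the attack model above, in its equivalent form $\breve{y}_k^{(i)}=(1-\lambda_k^{(i)})z_k^{(i)}+\lambda_k^{(i)}w_k^{(i)}$, is given below for a single sensor; all numerical values are illustrative assumptions.

```python
import numpy as np

# Sketch: with probability lambda_bar the adversary's noise w_k replaces the true output z_k.
rng = np.random.default_rng(1)
K = 6
z = rng.normal(size=K)                       # actual sensor outputs z_k (illustrative)
w = 0.5 * rng.normal(size=K)                 # adversary's injected noise w_k
lam_bar = 0.5                                # success probability of the attack
lam = rng.binomial(1, lam_bar, size=K)       # Bernoulli variables lambda_k

y_breve = (1 - lam) * z + lam * w            # measurements after the attack
print(np.column_stack([lam, z, y_breve]))    # lam = 0 -> z kept; lam = 1 -> only w transmitted
```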

2.4. Observation Model after Transmission: Two Packet Dropout Compensation Strategies

Let us consider that there exist random packet dropouts during data transmissions from the sensors to the processing center. This will be modeled by a Bernoulli random variable, $\gamma_k^{(i)}$, taking the value one if the transmission is successful and the value zero if the corresponding data packet is lost during transmission. The following assumption on these Bernoulli random variables is required:
Hypothesis (H6).
$\{\gamma_k^{(i)}\}_{k\ge 1}$, $i=1,\dots,m$, are independent white sequences of Bernoulli random variables with known probabilities $P(\gamma_k^{(i)}=1)=\bar{\gamma}_k^{(i)}$, $k\ge 1$.
When the current measurement is not received at the processing center, a compensating measurement will be used instead. Among the different compensation strategies proposed in the literature, we focus on the prediction compensation approach. Taking into account that the observation model under consideration is subject to stochastic deception attacks, two possible compensation models naturally arise:
Model I: Compensation with the prediction estimator of the measurement that is lost (i.e., the one transmitted by the sensor after the attack is launched), $\hat{\breve{y}}_{k/k-1}^{(i)I}$. The observations used at the processing center in this case are given by:
$$y_k^{(i)I}=\gamma_k^{(i)}\breve{y}_k^{(i)}+(1-\gamma_k^{(i)})\hat{\breve{y}}_{k/k-1}^{(i)I},\quad k\ge 1;\ i=1,\dots,m. \qquad (4)$$
Model II: Compensation with the prediction estimator of the actual sensor measured output (the one obtained by the sensor before the attack is launched), $\hat{z}_{k/k-1}^{(i)II}$. The observation model after transmission in this case is given by:
$$y_k^{(i)II}=\gamma_k^{(i)}\breve{y}_k^{(i)}+(1-\gamma_k^{(i)})\hat{z}_{k/k-1}^{(i)II},\quad k\ge 1;\ i=1,\dots,m. \qquad (5)$$
Finally, the following independence assumption is required:
Hypothesis (H7).
For $i=1,\dots,m$, the signal process $\{x_k\}_{k\ge 1}$, the vector $v_0^{(i)}$ and the processes $\{C_k^{(i)}\}_{k\ge 1}$, $\{\xi_k^{(i)}\}_{k\ge 0}$, $\{w_k^{(i)}\}_{k\ge 1}$, $\{\lambda_k^{(i)}\}_{k\ge 1}$ and $\{\gamma_k^{(i)}\}_{k\ge 1}$ are mutually independent.
Remark 2.
The proposed compensation strategies work as follows:
  • If $\gamma_k^{(i)}=1$ (successful transmission from sensor $i$ with no loss at the sampling time $k$), $\breve{y}_k^{(i)}$ will be used for the estimation under both models.
  • If $\gamma_k^{(i)}=0$ (at the sampling time $k$, the data packet from sensor $i$ is lost):
    -
    The predictor of $\breve{y}_k^{(i)}$, given by $\hat{\breve{y}}_{k/k-1}^{(i)I}=(1-\bar{\lambda}_k^{(i)})\hat{z}_{k/k-1}^{(i)I}$, will be used for the estimation under Model I. Consequently, the compensating measurement considered is the predictor of $z_k^{(i)}$ weighted by the probability that $z_k^{(i)}$ has not been attacked at time $k$.
    -
    The predictor of $z_k^{(i)}$, that is, $\hat{z}_{k/k-1}^{(i)II}$, will be used for the estimation under Model II. Consequently, the compensating measurement considered in this case does not take into account the possibility of an attack.
Remark 3.
For the considered kind of systems with time-correlated additive noises and transmission random losses, the measurement differencing method—typically used to deal with the time-correlation phenomena—is not successful, since some sensor measurements are randomly lost and, consequently, cannot be processed (see [26]). Therefore, an alternative approach, based on the direct estimation of the measurement noises and the innovation technique, will be used to address the centralized fusion optimal linear estimation problem in the next section.
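Before moving to the main results, the per-sensor compensation logic of Section 2.4 (see also Remark 2) can be summarized by the following sketch; the predictor value and the probabilities are hypothetical placeholders, since in the actual algorithms they are produced recursively by the estimator itself.

```python
# Sketch of the two packet-dropout compensation strategies (Models I and II).
# z_pred stands for the one-step predictor of the pre-attack output z_k of one sensor.
def compensate(gamma, y_breve, z_pred, lam_bar, model="I"):
    """Observation used by the processing centre for one sensor at time k."""
    if gamma == 1:                       # packet received: use the transmitted measurement
        return y_breve
    if model == "I":                     # lost packet: predictor of y_breve = (1 - lam_bar) * z_pred
        return (1.0 - lam_bar) * z_pred
    return z_pred                        # Model II: predictor of z, attack possibility ignored

# Hypothetical values: lost packet (gamma = 0), attack-success probability 0.5
print(compensate(0, y_breve=1.2, z_pred=1.0, lam_bar=0.5, model="I"))   # 0.5
print(compensate(0, y_breve=1.2, z_pred=1.0, lam_bar=0.5, model="II"))  # 1.0
```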

3. Main Results

Our aim is to design centralized fusion optimal linear filtering and fixed-point smoothing algorithms, based on the observations within the compensation frameworks proposed in the previous section. For this purpose, at every sampling time, $k\ge 1$, we consider the vector constituted by the measurements of all sensors, $z_k=(z_k^{(1)T},\dots,z_k^{(m)T})^T$, satisfying the following stacked measurement equation, which is easily derived from (1):
$$z_k=C_kx_k+v_k,\quad k\ge 1, \qquad (6)$$
where $C_k=(C_k^{(1)T},\dots,C_k^{(m)T})^T$ and $v_k=(v_k^{(1)T},\dots,v_k^{(m)T})^T$. Let us observe that, from (2), the additive noise $\{v_k\}_{k\ge 0}$ is a time-correlated sequence, satisfying
$$v_k=D_{k-1}v_{k-1}+\xi_{k-1},\quad k\ge 1, \qquad (7)$$
with $D_k=\mathrm{Diag}(D_k^{(1)},\dots,D_k^{(m)})$ and $\xi_k=(\xi_k^{(1)T},\dots,\xi_k^{(m)T})^T$.
Using (3) and denoting
$$\breve{y}_k=(\breve{y}_k^{(1)T},\dots,\breve{y}_k^{(m)T})^T,\quad \Lambda_k=\mathrm{Diag}(\lambda_k^{(1)},\dots,\lambda_k^{(m)})\otimes I_{n_z}\quad\text{and}\quad w_k=(w_k^{(1)T},\dots,w_k^{(m)T})^T,$$
it is straightforward to check that the stacked vector $\breve{y}_k$, constituted by the measurements subject to random deception attacks, satisfies the following equation:
$$\breve{y}_k=(I_{mn_z}-\Lambda_k)z_k+\Lambda_kw_k,\quad k\ge 1. \qquad (8)$$
According to (4) and (5), and denoting
$$y_k^{(I)}=(y_k^{(1)I\,T},\dots,y_k^{(m)I\,T})^T,\qquad y_k^{(II)}=(y_k^{(1)II\,T},\dots,y_k^{(m)II\,T})^T,$$
$$\hat{\breve{y}}_{k/k-1}^{(I)}=(\hat{\breve{y}}_{k/k-1}^{(1)I\,T},\dots,\hat{\breve{y}}_{k/k-1}^{(m)I\,T})^T,\qquad \hat{z}_{k/k-1}^{(II)}=(\hat{z}_{k/k-1}^{(1)II\,T},\dots,\hat{z}_{k/k-1}^{(m)II\,T})^T$$
and $\Gamma_k=\mathrm{Diag}(\gamma_k^{(1)},\dots,\gamma_k^{(m)})\otimes I_{n_z}$, the observations used for the estimation can be modeled by the following compact equations:
Stacked Model I:
$$y_k^{(I)}=\Gamma_k\breve{y}_k+(I_{mn_z}-\Gamma_k)\hat{\breve{y}}_{k/k-1}^{(I)},\quad k\ge 1. \qquad (9)$$
Stacked Model II:
$$y_k^{(II)}=\Gamma_k\breve{y}_k+(I_{mn_z}-\Gamma_k)\hat{z}_{k/k-1}^{(II)},\quad k\ge 1. \qquad (10)$$
Then, the centralized fusion optimal linear estimation (filtering and fixed-point smoothing) problem is reformulated as the task of finding the LS linear estimator of the signal, $x_k$, based on the observations up to the time instant $k+N$, $N\ge 0$, given in (9) or (10).

3.1. Preliminary Lemmas

Before deriving the centralized fusion filtering and fixed-point smoothing algorithms, we analyze the statistical properties of the processes involved in the observation models under consideration, as they will be necessary to address the LS linear estimation problem. These properties will be given in the following preliminary lemmas, whose proof is omitted since they follow quite easily from hypotheses (H1)–(H7).
Lemma 1.
The processes involved in (6) and (7) satisfy the following properties:
(a)
$\{C_k\}_{k\ge 1}$ is a sequence of independent random parameter matrices with means $\bar{C}_k=E[C_k]=(\bar{C}_k^{(1)T},\dots,\bar{C}_k^{(m)T})^T$, $k\ge 1$. Moreover, for any deterministic matrix $G\in\mathbb{R}^{n_x\times n_x}$, the $(i,j)$-th block of the matrix $E[C_kGC_k^T]$ is $E[C_k^{(i)}GC_k^{(j)T}]$, $i,j=1,\dots,m$.
(b)
$v_0$ is a zero-mean, second-order random vector with $\Sigma_0^v=E[v_0v_0^T]=(\Sigma_0^{v(ij)})_{i,j=1,\dots,m}$.
(c)
$\{\xi_k\}_{k\ge 0}$ is a zero-mean, second-order white process with $\Sigma_k^{\xi}=E[\xi_k\xi_k^T]=(\Sigma_k^{\xi(ij)})_{i,j=1,\dots,m}$.
(d)
The covariance function of the time-correlated noise $\{v_k\}_{k\ge 0}$, $\Sigma_{k,h}^v=E[v_kv_h^T]$, $h\le k$, admits the following factorization: $\Sigma_{k,h}^v=\mathcal{D}_kF_h^T$, in which $\mathcal{D}_k=D_{k,0}$, $F_h^T=D_{h,0}^{-1}\Sigma_h^v$, $D_{k,h}=D_{k-1}\cdots D_h$ and $\Sigma_h^v$ is recursively computed as follows:
$$\Sigma_h^v=D_{h-1}\Sigma_{h-1}^vD_{h-1}^T+\Sigma_{h-1}^{\xi},\quad h\ge 1.$$
Lemma 2.
The following properties hold for the processes involved in (8):
(a)
$\{w_k\}_{k\ge 1}$ is a zero-mean, second-order white process with $\Sigma_k^w=E[w_kw_k^T]=(\Sigma_k^{w(ij)})_{i,j=1,\dots,m}$.
(b)
$\{\Lambda_k\}_{k\ge 1}$ is a sequence of independent random matrices with
$$\bar{\Lambda}_k=E[\Lambda_k]=\mathrm{Diag}(\bar{\lambda}_k^{(1)},\dots,\bar{\lambda}_k^{(m)})\otimes I_{n_z},\quad k\ge 1.$$
Denoting $\lambda_k=(\lambda_k^{(1)}\mathbf{1}_{n_z}^T,\dots,\lambda_k^{(m)}\mathbf{1}_{n_z}^T)^T=(\lambda_k^{(1)},\dots,\lambda_k^{(m)})^T\otimes\mathbf{1}_{n_z}$, $k\ge 1$, the covariance matrices $K_k^{\lambda}=E[\lambda_k\lambda_k^T]$ and $K_k^{1-\lambda}=E[(\mathbf{1}_{mn_z}-\lambda_k)(\mathbf{1}_{mn_z}-\lambda_k)^T]$ are known, and their entries can be computed taking into account that $E[\lambda_k^{(i)}\lambda_k^{(j)}]=\bar{\lambda}_k^{(i)}$ if $i=j$, and $E[\lambda_k^{(i)}\lambda_k^{(j)}]=\bar{\lambda}_k^{(i)}\bar{\lambda}_k^{(j)}$ if $i\neq j$.
(c)
The autocovariance function $\Sigma_k^{\breve{y}}=E[\breve{y}_k\breve{y}_k^T]$ is given by:
$$\Sigma_k^{\breve{y}}=K_k^{1-\lambda}\circ\big(E[C_kA_kB_k^TC_k^T]+\mathcal{D}_kF_k^T\big)+K_k^{\lambda}\circ\Sigma_k^w,\quad k\ge 1. \qquad (11)$$
Lemma 3.
The stochastic process $\{\Gamma_k\}_{k\ge 1}$ modeling the packet dropout phenomena in the stacked observation models (9) and (10) is a sequence of independent random matrices with
$$\bar{\Gamma}_k=E[\Gamma_k]=\mathrm{Diag}(\bar{\gamma}_k^{(1)},\dots,\bar{\gamma}_k^{(m)})\otimes I_{n_z},\quad k\ge 1.$$
Denoting $\gamma_k=(\gamma_k^{(1)}\mathbf{1}_{n_z}^T,\dots,\gamma_k^{(m)}\mathbf{1}_{n_z}^T)^T=(\gamma_k^{(1)},\dots,\gamma_k^{(m)})^T\otimes\mathbf{1}_{n_z}$, $k\ge 1$, the covariance matrices $K_k^{\gamma}=E[\gamma_k\gamma_k^T]$ are known, and their entries can be computed taking into account that $E[\gamma_k^{(i)}\gamma_k^{(j)}]=\bar{\gamma}_k^{(i)}$ if $i=j$, and $E[\gamma_k^{(i)}\gamma_k^{(j)}]=\bar{\gamma}_k^{(i)}\bar{\gamma}_k^{(j)}$ if $i\neq j$.
Moreover, the signal process $\{x_k\}_{k\ge 1}$, the vector $v_0$ and the processes $\{C_k\}_{k\ge 1}$, $\{\xi_k\}_{k\ge 0}$, $\{w_k\}_{k\ge 1}$, $\{\Lambda_k\}_{k\ge 1}$ and $\{\Gamma_k\}_{k\ge 1}$ are mutually independent.

3.2. Optimal Filtering and Fixed-Point Smoothing Algorithms (Model I)

We will use an innovation approach to obtain $\hat{x}_{k/L}^{(I)}$, the LS linear estimator of the signal $x_k$ based on the observations $\{y_1^{(I)},\dots,y_L^{(I)}\}$ defined by (9); more specifically, we aim at obtaining the signal filtering ($L=k$) and fixed-point smoothing ($L=k+N$, $N\ge 1$) estimators. According to this approach, the LS linear estimators of the signal, $\hat{x}_{k/L}^{(I)}$, based on a set of observations $\{y_h^{(I)}\}_{h\le L}$, can be expressed as a linear combination of the innovations $\{\mu_h^{(I)}\}_{h\le L}$ as follows:
$$\hat{x}_{k/L}^{(I)}=\sum_{h=1}^{L}X_{k,h}^{(I)}\Pi_h^{(I)-1}\mu_h^{(I)},\quad k,L\ge 1, \qquad (12)$$
where $X_{k,h}^{(I)}=E[x_k\mu_h^{(I)T}]$, and the innovations and their covariances are $\mu_h^{(I)}=y_h^{(I)}-\hat{y}_{h/h-1}^{(I)}$ and $\Pi_h^{(I)}=E[\mu_h^{(I)}\mu_h^{(I)T}]$, respectively.
So, the first key point is to find an appropriate expression for the one-stage observation predictors $\hat{y}_{h/h-1}^{(I)}$—or, equivalently, for the innovations $\mu_h^{(I)}$—that allows us to obtain the coefficients $X_{k,h}^{(I)}$ and the innovation covariances $\Pi_h^{(I)}$ appearing in expression (12) of $\hat{x}_{k/L}^{(I)}$.
From (9), taking into account the properties set out in Lemma 3 and the Orthogonal Projection Lemma (OPL), the observation predictors are given by:
$$\hat{y}_{k/k-1}^{(I)}=\bar{\Gamma}_k\hat{\breve{y}}_{k/k-1}^{(I)}+(I_{mn_z}-\bar{\Gamma}_k)\hat{\breve{y}}_{k/k-1}^{(I)}=\hat{\breve{y}}_{k/k-1}^{(I)},\quad k\ge 1.$$
Using (8), Lemma 2 and the OPL, we obtain:
$$\hat{\breve{y}}_{k/k-1}^{(I)}=(I_{mn_z}-\bar{\Lambda}_k)\hat{z}_{k/k-1}^{(I)},\quad k\ge 1, \qquad (13)$$
and from (6), Lemma 1 and using the OPL again, we obtain:
$$\hat{z}_{k/k-1}^{(I)}=\bar{C}_k\hat{x}_{k/k-1}^{(I)}+\hat{v}_{k/k-1}^{(I)},\quad k\ge 1. \qquad (14)$$
Consequently, the innovations can be written as:
$$\mu_k^{(I)}=\Gamma_k\big[\breve{y}_k-(I_{mn_z}-\bar{\Lambda}_k)\big(\bar{C}_k\hat{x}_{k/k-1}^{(I)}+\hat{v}_{k/k-1}^{(I)}\big)\big],\quad k\ge 1. \qquad (15)$$
Hence, we need the one-stage predictors of both the signal, $\hat{x}_{k/k-1}^{(I)}$, and the noise, $\hat{v}_{k/k-1}^{(I)}$. Note that, similarly to (12), defining now $V_{k,h}^{(I)}=E[v_k\mu_h^{(I)T}]$, the noise estimators, $\hat{v}_{k/L}^{(I)}$, are expressed as a linear combination of the innovations as follows:
$$\hat{v}_{k/L}^{(I)}=\sum_{h=1}^{L}V_{k,h}^{(I)}\Pi_h^{(I)-1}\mu_h^{(I)},\quad k,L\ge 1. \qquad (16)$$
Expressions (12) and (16) for the LS linear estimators, as linear combinations of the innovations, are the starting points to derive the recursive algorithms for the centralized LS linear filter, $\hat{x}_{k/k}^{(I)}$, and smoothers, $\hat{x}_{k/k+N}^{(I)}$, at the fixed point $k$, for $N\ge 1$, which are presented in the following theorem.
Theorem 1.
Under hypotheses (H1) to (H7), the centralized LS linear filtering estimators, $\hat{x}_{k/k}^{(I)}$, and the corresponding error covariance matrices, $\hat{\Sigma}_{k/k}^{(I)}=E[(x_k-\hat{x}_{k/k}^{(I)})(x_k-\hat{x}_{k/k}^{(I)})^T]$, are computed by:
$$\hat{x}_{k/k}^{(I)}=(A_k,0)\,O_k^{(I)},\quad k\ge 1,\qquad \hat{\Sigma}_{k/k}^{(I)}=A_kB_k^T-(A_k,0)\,r_k^{(I)}(A_k,0)^T,\quad k\ge 1, \qquad (17)$$
where
$$O_k^{(I)}=O_{k-1}^{(I)}+J_k^{(I)}\Pi_k^{(I)-1}\mu_k^{(I)},\quad k\ge 1;\quad O_0^{(I)}=0,$$
$$J_k^{(I)}=\big[(\bar{C}_kB_k,F_k)-(\bar{C}_kA_k,\mathcal{D}_k)\,r_{k-1}^{(I)}\big]^T(I_{mn_z}-\bar{\Lambda}_k)\bar{\Gamma}_k,\quad k\ge 1,$$
$$r_k^{(I)}=r_{k-1}^{(I)}+J_k^{(I)}\Pi_k^{(I)-1}J_k^{(I)T},\quad k\ge 1;\quad r_0^{(I)}=0. \qquad (18)$$
The innovations, $\mu_k^{(I)}$, and their covariance matrices, $\Pi_k^{(I)}=E[\mu_k^{(I)}\mu_k^{(I)T}]$, are obtained by:
$$\mu_k^{(I)}=y_k^{(I)}-(I_{mn_z}-\bar{\Lambda}_k)(\bar{C}_kA_k,\mathcal{D}_k)\,O_{k-1}^{(I)},\quad k\ge 1,$$
$$\Pi_k^{(I)}=K_k^{\gamma}\circ\big[\Sigma_k^{\breve{y}}-(I_{mn_z}-\bar{\Lambda}_k)\,\Sigma_k^{\hat{z}(I)}(I_{mn_z}-\bar{\Lambda}_k)\big],\quad k\ge 1, \qquad (19)$$
with $\Sigma_k^{\breve{y}}$ given in (11) and
$$\Sigma_k^{\hat{z}(I)}=(\bar{C}_kA_k,\mathcal{D}_k)\,r_{k-1}^{(I)}(\bar{C}_kA_k,\mathcal{D}_k)^T,\quad k\ge 1. \qquad (20)$$
Additionally, at any sampling time $k\ge 1$, by starting from the filter, $\hat{x}_{k/k}^{(I)}$, and its error covariance matrix, $\hat{\Sigma}_{k/k}^{(I)}$, as initial conditions, the centralized LS linear smoothers, $\hat{x}_{k/k+N}^{(I)}$, and their error covariances, $\hat{\Sigma}_{k/k+N}^{(I)}=E[(x_k-\hat{x}_{k/k+N}^{(I)})(x_k-\hat{x}_{k/k+N}^{(I)})^T]$, are recursively obtained as follows:
$$\hat{x}_{k/k+N}^{(I)}=\hat{x}_{k/k+N-1}^{(I)}+X_{k,k+N}^{(I)}\Pi_{k+N}^{(I)-1}\mu_{k+N}^{(I)},\quad N\ge 1,$$
$$\hat{\Sigma}_{k/k+N}^{(I)}=\hat{\Sigma}_{k/k+N-1}^{(I)}-X_{k,k+N}^{(I)}\Pi_{k+N}^{(I)-1}X_{k,k+N}^{(I)T},\quad N\ge 1, \qquad (21)$$
where $X_{k,k+N}^{(I)}=E[x_k\mu_{k+N}^{(I)T}]$ is computed by:
$$X_{k,k+N}^{(I)}=\big[(B_k,0)-M_{k,k+N-1}^{(I)}\big](\bar{C}_{k+N}A_{k+N},\mathcal{D}_{k+N})^T(I_{mn_z}-\bar{\Lambda}_{k+N})\bar{\Gamma}_{k+N},\quad N\ge 1, \qquad (22)$$
and the matrices $M_{k,k+N}^{(I)}=E[\hat{x}_{k/k+N}^{(I)}O_{k+N}^{(I)T}]$ are obtained from the recursive formula
$$M_{k,k+N}^{(I)}=M_{k,k+N-1}^{(I)}+X_{k,k+N}^{(I)}\Pi_{k+N}^{(I)-1}J_{k+N}^{(I)T},\quad N\ge 1;\qquad M_{k,k}^{(I)}=(A_k,0)\,r_k^{(I)}. \qquad (23)$$
Proof. 
We first obtain the signal prediction and filtering estimators $\hat{x}_{k/s}^{(I)}$, $s\le k$. From (15), it is clear that the coefficients $X_{k,h}^{(I)}=E[x_k\mu_h^{(I)T}]$ verify
$$X_{k,h}^{(I)}=\Big\{E[x_k\breve{y}_h^T]-\big(E[x_k\hat{x}_{h/h-1}^{(I)T}]\bar{C}_h^T+E[x_k\hat{v}_{h/h-1}^{(I)T}]\big)(I_{mn_z}-\bar{\Lambda}_h)\Big\}\bar{\Gamma}_h,\quad 1\le h\le k.$$
Now, using (8) and the properties of the processes involved in this equation, we have that $E[x_k\breve{y}_h^T]=A_kB_h^T\bar{C}_h^T(I_{mn_z}-\bar{\Lambda}_h)$, and from (12) and (16), we obtain
$$E[x_k\hat{x}_{h/h-1}^{(I)T}]=\sum_{j=1}^{h-1}X_{k,j}^{(I)}\Pi_j^{(I)-1}X_{h,j}^{(I)T};\qquad E[x_k\hat{v}_{h/h-1}^{(I)T}]=\sum_{j=1}^{h-1}X_{k,j}^{(I)}\Pi_j^{(I)-1}V_{h,j}^{(I)T},\quad h\ge 2.$$
Hence,
$$X_{k,h}^{(I)}=\Big\{A_kB_h^T\bar{C}_h^T-(1-\delta_{h,1})\sum_{j=1}^{h-1}X_{k,j}^{(I)}\Pi_j^{(I)-1}\big(\bar{C}_hX_{h,j}^{(I)}+V_{h,j}^{(I)}\big)^T\Big\}(I_{mn_z}-\bar{\Lambda}_h)\bar{\Gamma}_h,\quad 1\le h\le k.$$
Then, by defining:
$$J_h^{x(I)}=\Big\{B_h^T\bar{C}_h^T-(1-\delta_{h,1})\sum_{j=1}^{h-1}J_j^{x(I)}\Pi_j^{(I)-1}\big(\bar{C}_hA_hJ_j^{x(I)}+V_{h,j}^{(I)}\big)^T\Big\}(I_{mn_z}-\bar{\Lambda}_h)\bar{\Gamma}_h,\quad h\ge 1, \qquad (24)$$
we conclude that $X_{k,h}^{(I)}=A_kJ_h^{x(I)}$, $h\le k$, and denoting:
$$O_s^{x(I)}=\sum_{h=1}^{s}J_h^{x(I)}\Pi_h^{(I)-1}\mu_h^{(I)},\quad s\ge 1;\qquad O_0^{x(I)}=0, \qquad (25)$$
we have that the signal predictors and filter are given by:
$$\hat{x}_{k/s}^{(I)}=A_kO_s^{x(I)},\quad 1\le s\le k. \qquad (26)$$
Secondly, to obtain the prediction and filtering estimators of the noise, $\hat{v}_{k/s}^{(I)}$, $s\le k$, we follow an analogous reasoning to that used for the signal estimators, which leads us to:
$$V_{k,h}^{(I)}=\Big\{\mathcal{D}_kF_h^T-(1-\delta_{h,1})\sum_{j=1}^{h-1}V_{k,j}^{(I)}\Pi_j^{(I)-1}\big(\bar{C}_hX_{h,j}^{(I)}+V_{h,j}^{(I)}\big)^T\Big\}(I_{mn_z}-\bar{\Lambda}_h)\bar{\Gamma}_h,\quad 1\le h\le k,$$
and defining:
$$J_h^{v(I)}=\Big\{F_h^T-(1-\delta_{h,1})\sum_{j=1}^{h-1}J_j^{v(I)}\Pi_j^{(I)-1}\big(\bar{C}_hX_{h,j}^{(I)}+\mathcal{D}_hJ_j^{v(I)}\big)^T\Big\}(I_{mn_z}-\bar{\Lambda}_h)\bar{\Gamma}_h,\quad h\ge 1, \qquad (27)$$
we obtain that $V_{k,h}^{(I)}=\mathcal{D}_kJ_h^{v(I)}$, $h\le k$. Hence, the noise predictors and filter are given by:
$$\hat{v}_{k/s}^{(I)}=\mathcal{D}_kO_s^{v(I)},\quad 1\le s\le k, \qquad (28)$$
where
$$O_s^{v(I)}=\sum_{h=1}^{s}J_h^{v(I)}\Pi_h^{(I)-1}\mu_h^{(I)},\quad s\ge 1;\qquad O_0^{v(I)}=0. \qquad (29)$$
Now, by substituting (26) and (28) for $s=k-1$ in (15), we have the following expression for the innovation:
$$\mu_k^{(I)}=y_k^{(I)}-(I_{mn_z}-\bar{\Lambda}_k)\big(\bar{C}_kA_kO_{k-1}^{x(I)}+\mathcal{D}_kO_{k-1}^{v(I)}\big),\quad k\ge 1. \qquad (30)$$
Next, from (25) and (29), the following recursive relations for the vectors $O_k^{x(I)}$ and $O_k^{v(I)}$ are straightforward:
$$O_k^{a(I)}=O_{k-1}^{a(I)}+J_k^{a(I)}\Pi_k^{(I)-1}\mu_k^{(I)},\quad k\ge 1;\qquad O_0^{a(I)}=0,\quad (a=x,v),$$
and from (24) and (27), it is clear that:
$$J_k^{x(I)}=\big[\bar{C}_kB_k-\bar{C}_kA_kr_{k-1}^{x(I)}-\mathcal{D}_kr_{k-1}^{vx(I)}\big]^T(I_{mn_z}-\bar{\Lambda}_k)\bar{\Gamma}_k,\quad k\ge 1,$$
$$J_k^{v(I)}=\big[F_k-\bar{C}_kA_kr_{k-1}^{xv(I)}-\mathcal{D}_kr_{k-1}^{v(I)}\big]^T(I_{mn_z}-\bar{\Lambda}_k)\bar{\Gamma}_k,\quad k\ge 1, \qquad (31)$$
where
$$r_k^{ab(I)}=E[O_k^{a(I)}O_k^{b(I)T}]=\sum_{h=1}^{k}J_h^{a(I)}\Pi_h^{(I)-1}J_h^{b(I)T},\quad k\ge 1;\qquad r_0^{ab(I)}=0,\quad (a,b=x,v), \qquad (32)$$
which clearly satisfies
$$r_k^{ab(I)}=r_{k-1}^{ab(I)}+J_k^{a(I)}\Pi_k^{(I)-1}J_k^{b(I)T},\quad k\ge 1;\qquad r_0^{ab(I)}=0,\quad (a,b=x,v). \qquad (33)$$
From now on, we use the following notations for simplicity:
$$O_k^{(I)}=\begin{pmatrix}O_k^{x(I)}\\ O_k^{v(I)}\end{pmatrix},\qquad J_k^{(I)}=\begin{pmatrix}J_k^{x(I)}\\ J_k^{v(I)}\end{pmatrix},\qquad r_k^{(I)}=E[O_k^{(I)}O_k^{(I)T}]=\begin{pmatrix}r_k^{x(I)} & r_k^{xv(I)}\\ r_k^{vx(I)} & r_k^{v(I)}\end{pmatrix}.$$
Derivation of expressions (17)–(20):
  • From (26), it is clear that $\hat{x}_{k/k}^{(I)}=(A_k,0)O_k^{(I)}$ and, hence, its covariance is given by $E[\hat{x}_{k/k}^{(I)}\hat{x}_{k/k}^{(I)T}]=(A_k,0)r_k^{(I)}(A_k,0)^T$. Then, using the OPL to write the filtering error covariances as $\hat{\Sigma}_{k/k}^{(I)}=E[x_kx_k^T]-E[\hat{x}_{k/k}^{(I)}\hat{x}_{k/k}^{(I)T}]$ and taking into account that, from (H1), $E[x_kx_k^T]=A_kB_k^T$, expression (17) is proven.
  • From (31)–(33), expression (18) is directly obtained.
  • From (30), the expression of $\mu_k^{(I)}$ in (19) is straightforward.
    To obtain $\Pi_k^{(I)}=E[\mu_k^{(I)}\mu_k^{(I)T}]=E[\Gamma_k(\breve{y}_k-\hat{\breve{y}}_{k/k-1}^{(I)})(\breve{y}_k-\hat{\breve{y}}_{k/k-1}^{(I)})^T\Gamma_k]$, we apply the Hadamard product properties and the OPL to express these matrices as:
    $$\Pi_k^{(I)}=K_k^{\gamma}\circ\big(\Sigma_k^{\breve{y}}-E[\hat{\breve{y}}_{k/k-1}^{(I)}\hat{\breve{y}}_{k/k-1}^{(I)T}]\big),\quad k\ge 1,$$
    and using (13) for $\hat{\breve{y}}_{k/k-1}^{(I)}$, the expression of $\Pi_k^{(I)}$ in (19) is proven.
  • Substituting (26) and (28) in (14), we obtain $\hat{z}_{k/k-1}^{(I)}=(\bar{C}_kA_k,\mathcal{D}_k)O_{k-1}^{(I)}$, and expression (20) for its covariance $\Sigma_k^{\hat{z}(I)}$ is easily obtained.
Derivation of expressions (21)–(23):
  • Expression (21) for the smoothers $\hat{x}_{k/k+N}^{(I)}$ is easily derived using (12), and from it, the recursive formula for the fixed-point smoothing error covariance matrices, $\hat{\Sigma}_{k/k+N}^{(I)}$, is immediately deduced.
  • Expression (22) for $X_{k,k+N}^{(I)}=E[x_k\mu_{k+N}^{(I)T}]=\big\{E[x_k\breve{y}_{k+N}^T]-E[x_k\hat{\breve{y}}_{k+N/k+N-1}^{(I)T}]\big\}\bar{\Gamma}_{k+N}$ is derived as follows:
    -
    On the one hand, the independence properties, together with (H1) and (8), lead us to $E[x_k\breve{y}_{k+N}^T]=B_kA_{k+N}^T\bar{C}_{k+N}^T(I_{mn_z}-\bar{\Lambda}_{k+N})$, which can be written as:
    $$E[x_k\breve{y}_{k+N}^T]=(B_k,0)(\bar{C}_{k+N}A_{k+N},\mathcal{D}_{k+N})^T(I_{mn_z}-\bar{\Lambda}_{k+N}),\quad N\ge 1.$$
    -
    On the other hand, using expression (13) for the one-stage predictors $\hat{\breve{y}}_{k+N/k+N-1}^{(I)}$ with $\hat{z}_{k+N/k+N-1}^{(I)}=(\bar{C}_{k+N}A_{k+N},\mathcal{D}_{k+N})O_{k+N-1}^{(I)}$, it is clear that
    $$E[x_k\hat{\breve{y}}_{k+N/k+N-1}^{(I)T}]=E[x_kO_{k+N-1}^{(I)T}](\bar{C}_{k+N}A_{k+N},\mathcal{D}_{k+N})^T(I_{mn_z}-\bar{\Lambda}_{k+N}),\quad N\ge 1.$$
    Therefore, denoting $M_{k,k+N}^{(I)}=E[x_kO_{k+N}^{(I)T}]$, $N\ge 0$, expression (22) holds.
  • Using that $O_{k+N}^{(I)}=O_{k+N-1}^{(I)}+J_{k+N}^{(I)}\Pi_{k+N}^{(I)-1}\mu_{k+N}^{(I)}$, the recursive expression (23) for the matrices $M_{k,k+N}^{(I)}$ is directly obtained. Its initial condition, $M_{k,k}^{(I)}=(A_k,0)r_k^{(I)}$, is easily derived taking into account that, from the OPL, $M_{k,k}^{(I)}=E[\hat{x}_{k/k}^{(I)}O_k^{(I)T}]$. □
Remark 4.
The simultaneous consideration of random parameter matrices, time-correlated additive noises and random deception attacks in the sensor measured outputs, together with the presence of random packet dropouts during transmission, is a novelty in itself and involves some difficulties in the derivation of the proposed algorithms. One of the main challenges concerns the time-correlation of the noise. Since the measurement differencing approach has not been used to transform the original measurements into an equivalent set of observations that do not depend on the time-correlated noise, a first difficulty was to obtain the noise estimators. On the one hand, since a covariance-based estimation approach is used, it has been necessary to obtain a factorization—in a separable form—of the noise covariance function (Lemma 1 (d)). On the other hand, even though the derivation of expression (28) for the noise estimators, $\hat{v}_{k/s}^{(I)}$, is analogous to that of (26) for the signal estimators, $\hat{x}_{k/s}^{(I)}$, additional difficulties are met when deducing simple formulas for the innovation covariance matrix, $\Pi_k^{(I)}$, from the innovation $\mu_k^{(I)}$ (expression (30)). The definition of the vectors $O_k^{(I)}$ and the matrices $J_k^{(I)}$ and $r_k^{(I)}$ made the derivation of $\Pi_k^{(I)}$—and, consequently, the design of the algorithms—significantly simpler. In addition, the proposed filtering and smoothing algorithms have an attractive recursive structure thanks to the above.
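To make the structure of the algorithm in Theorem 1 more concrete, the following sketch implements its recursions for a simplified scalar, single-sensor instance ($m=n_x=n_z=1$), in which every Hadamard product reduces to an ordinary product. All model parameters are illustrative assumptions, and this sketch is not the simulation program used in Section 4.

```python
import numpy as np

# Minimal sketch of the Model I filter of Theorem 1 (scalar signal, one sensor).
rng = np.random.default_rng(0)
K = 50
phi, q = 0.9, 1.0                     # signal: x_{k+1} = phi*x_k + eps_k, var(eps) = q
theta_bar = 0.5                       # C_k = 0.9*theta_k, theta_k ~ Bernoulli(theta_bar)
C_bar, C_m2 = 0.9 * theta_bar, 0.81 * theta_bar   # E[C_k], E[C_k^2]
d, var_xi = 0.8, 0.25                 # noise: v_k = d*v_{k-1} + xi_{k-1}
var_w = 0.25                          # variance of the attack noise w_k
lam_bar, gam_bar = 0.5, 0.5           # attack-success and packet-arrival probabilities

# true signal and time-correlated noise, plus their variances Sigma_k^x, Sigma_k^v
x = np.zeros(K + 1); x[0] = rng.normal(0, np.sqrt(0.1))
v = np.zeros(K + 1); v[0] = rng.normal(0, 1.0)
Sx = np.zeros(K + 1); Sx[0] = 0.1
Sv = np.zeros(K + 1); Sv[0] = 1.0
for k in range(1, K + 1):
    x[k] = phi * x[k - 1] + rng.normal(0, np.sqrt(q))
    v[k] = d * v[k - 1] + rng.normal(0, np.sqrt(var_xi))
    Sx[k] = phi ** 2 * Sx[k - 1] + q
    Sv[k] = d ** 2 * Sv[k - 1] + var_xi

O = np.zeros(2)                       # O_k = (O_k^x, O_k^v)
r = np.zeros((2, 2))                  # r_k = E[O_k O_k^T]
xf = np.zeros(K + 1); Pf = np.zeros(K + 1)
for k in range(1, K + 1):
    A, B = phi ** k, Sx[k] / phi ** k          # Sigma_{k,h}^x = A_k * B_h        (H1)
    Dk, F = d ** k, Sv[k] / d ** k             # Sigma_{k,h}^v = Dk * F_h, Dk = d^k (Lemma 1(d))
    G = np.array([C_bar * A, Dk])              # block row (C_bar*A_k, D_k)
    H = np.array([C_bar * B, F])               # block row (C_bar*B_k, F_k)
    z_pred = G @ O                             # one-stage predictor of z_k

    # one realization of the observation received at the processing centre (Model I)
    z = 0.9 * rng.binomial(1, theta_bar) * x[k] + v[k]
    lam = rng.binomial(1, lam_bar)
    y_breve = (1 - lam) * z + lam * rng.normal(0, np.sqrt(var_w))
    gam = rng.binomial(1, gam_bar)
    y = gam * y_breve + (1 - gam) * (1 - lam_bar) * z_pred     # compensation, Model I

    # filter recursions of Theorem 1, equations (17)-(20), scalar case
    Sy = (1 - lam_bar) * (C_m2 * Sx[k] + Sv[k]) + lam_bar * var_w        # (11)
    Sz_hat = G @ r @ G                                                   # (20)
    mu = y - (1 - lam_bar) * z_pred                                      # (19)
    Pi = gam_bar * (Sy - (1 - lam_bar) ** 2 * Sz_hat)                    # (19)
    J = (H - G @ r) * (1 - lam_bar) * gam_bar                            # (18)
    O = O + J * mu / Pi                                                  # (18)
    r = r + np.outer(J, J) / Pi                                          # (18)
    xf[k] = A * O[0]                                                     # filter (17)
    Pf[k] = A * B - A ** 2 * r[0, 0]                                     # error variance (17)

print(np.mean((x[1:] - xf[1:]) ** 2), np.mean(Pf[1:]))   # rough single-run check
```

The single-run empirical squared error and the theoretical error variance are only expected to agree roughly; averaging over many independent runs would tighten the comparison.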

3.3. Optimal Filtering and Fixed-Point Smoothing Algorithms (Model II)

Recursive algorithms for the centralized LS linear filter, $\hat{x}_{k/k}^{(II)}$, and smoothers, $\hat{x}_{k/k+N}^{(II)}$, at the fixed point $k$ for any $N\ge 1$, based on the observations (10) within the compensation framework proposed in Model II, are presented in the following theorem.
Theorem 2.
Under hypotheses (H1) to (H7), the centralized LS linear filtering estimators, $\hat{x}_{k/k}^{(II)}$, and the corresponding error covariance matrices, $\hat{\Sigma}_{k/k}^{(II)}=E[(x_k-\hat{x}_{k/k}^{(II)})(x_k-\hat{x}_{k/k}^{(II)})^T]$, can be obtained by replacing the superscript "(I)" with "(II)" in (17). The vectors $O_k^{(II)}$ and the matrices $r_k^{(II)}$ and $J_k^{(II)}$ are computed by replacing the superscript "(I)" with "(II)" in (18).
The innovations, $\mu_k^{(II)}$, and their covariance matrices, $\Pi_k^{(II)}=E[\mu_k^{(II)}\mu_k^{(II)T}]$, are obtained by:
$$\mu_k^{(II)}=y_k^{(II)}-(I_{mn_z}-\bar{\Gamma}_k\bar{\Lambda}_k)(\bar{C}_kA_k,\mathcal{D}_k)\,O_{k-1}^{(II)},\quad k\ge 1,$$
$$\Pi_k^{(II)}=K_k^{\gamma}\circ\big[\Sigma_k^{\breve{y}}-(I_{mn_z}-\bar{\Lambda}_k)\Sigma_k^{\hat{z}(II)}+\Sigma_k^{\hat{z}(II)}\bar{\Lambda}_k\big]-\bar{\Gamma}_k\bar{\Lambda}_k\Sigma_k^{\hat{z}(II)}\bar{\Lambda}_k\bar{\Gamma}_k,\quad k\ge 1,$$
where $\Sigma_k^{\breve{y}}$ is given in (11) and $\Sigma_k^{\hat{z}(II)}$ is computed by replacing the superscript "(I)" with "(II)" in (20).
The recursive algorithm for the centralized LS linear fixed-point smoothers, $\hat{x}_{k/k+N}^{(II)}$, and their error covariances, $\hat{\Sigma}_{k/k+N}^{(II)}=E[(x_k-\hat{x}_{k/k+N}^{(II)})(x_k-\hat{x}_{k/k+N}^{(II)})^T]$, is provided by (21)–(23), just replacing the superscript "(I)" by "(II)".
Proof. 
The filtering and fixed-point smoothing algorithms in Theorem 2 can be proven in a similar way to those in Theorem 1; therefore, the details are omitted to save space. □

4. Simulation Example

In this section, we illustrate the implementation of the proposed centralized fusion filtering and fixed-point smoothing algorithms by a simulation example.
Scalar signal process. As in [18], let us consider a discrete-time scalar signal process $\{x_k\}_{k\ge 0}$ described by an AR(1) model. Specifically, we consider the following model (perturbed by both additive noise and signal-dependent multiplicative noise) to generate the signal:
$$x_{k+1}=(0.9+0.01\,\alpha_k)x_k+\varepsilon_k,\quad k\ge 0,$$
where $\{\alpha_k\}_{k\ge 0}$ and $\{\varepsilon_k\}_{k\ge 0}$ are standard Gaussian white processes and the initial signal, $x_0$, is a zero-mean Gaussian variable with variance $0.1$. These noise sequences and the initial signal are assumed to be mutually independent; then, it is easy to establish that the signal covariance function is given by $\Sigma_{k,h}^x=E[x_kx_h]=0.9^{k-h}E[x_h^2]$, $h\le k$; hence, it is clear that it can be expressed in a separable form as $\Sigma_{k,h}^x=A_kB_h$, with $A_k=0.9^k$ and $B_h=0.9^{-h}\Sigma_h^x$, where $\Sigma_h^x=E[x_h^2]$ is recursively obtained by $\Sigma_h^x=0.8101\,\Sigma_{h-1}^x+1$, $h\ge 1$, with initial condition $\Sigma_0^x=0.1$.
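As a quick numerical check of the previous paragraph, the following sketch generates the signal with multiplicative noise and compares its empirical variance with the recursion $\Sigma_h^x=0.8101\,\Sigma_{h-1}^x+1$; the number of Monte Carlo runs is an arbitrary choice.

```python
import numpy as np

# Sketch: Monte-Carlo check of the variance recursion Sigma_h^x = 0.8101*Sigma_{h-1}^x + 1,
# using E[(0.9 + 0.01*alpha_k)^2] = 0.81 + 0.0001 = 0.8101 for standard Gaussian alpha_k.
rng = np.random.default_rng(0)
n_runs, K = 200_000, 10
x = rng.normal(0, np.sqrt(0.1), size=n_runs)      # x_0 with variance 0.1

Sigma = 0.1
for k in range(1, K + 1):
    x = (0.9 + 0.01 * rng.standard_normal(n_runs)) * x + rng.standard_normal(n_runs)
    Sigma = 0.8101 * Sigma + 1.0
    print(k, np.var(x), Sigma)                    # empirical vs. recursive variance
```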
Sensor measured outputs. Let us consider a three-sensor network providing scalar measurements of the signal that fit the following model:
$$z_k^{(i)}=C_k^{(i)}x_k+v_k^{(i)},\quad k\ge 1,\ i=1,2,3.$$
In addition, let us assume, in accordance with the theoretical study, that different uncertainties are present:
  • At each sensor, $i=1,2,3$, the random parameter sequences $\{C_k^{(i)}\}_{k\ge 1}$ are chosen to model different kinds of network-induced uncertainties, namely:
    -
    $C_k^{(1)}=0.8\,\theta_k^{(1)}$, in which $\{\theta_k^{(1)}\}_{k\ge 1}$ is a sequence of independent random variables with uniform distribution over the interval $[0.3,0.7]$ (continuous fading measurements in sensor 1).
    -
    $C_k^{(2)}=0.7\,\theta_k^{(2)}$, where $\{\theta_k^{(2)}\}_{k\ge 1}$ is a sequence of independent random variables with probability mass function $P(\theta_k^{(2)}=0)=0.1$, $P(\theta_k^{(2)}=0.5)=0.5$, $P(\theta_k^{(2)}=1)=0.4$ (discrete fading measurements in sensor 2).
    -
    $C_k^{(3)}=0.9\,\theta_k^{(3)}$, in which $\{\theta_k^{(3)}\}_{k\ge 1}$ is a sequence of independent Bernoulli random variables with $P(\theta_k^{(3)}=1)=\bar{\theta}^{(3)}$, $k\ge 1$ (missing measurements in sensor 3).
    Moreover, $\{\theta_k^{(i)}\}_{k\ge 1}$, $i=1,2,3$, are assumed to be mutually independent white sequences.
  • The noise processes $\{v_k^{(i)}\}_{k\ge 0}$, $i=1,2,3$, are defined by $v_k^{(i)}=D^{(i)}v_{k-1}^{(i)}+\xi_{k-1}^{(i)}$, $k\ge 1$, in which:
    -
    $D^{(1)}=D^{(3)}=0.8$ and $D^{(2)}=0.7$;
    -
    $\xi_k^{(i)}=a_i\xi_k$, $k\ge 0$, $i=1,2,3$, with $a_1=a_3=0.5$, $a_2=0.25$ and $\{\xi_k\}_{k\ge 0}$ a standard Gaussian white process;
    -
    $v_0^{(i)}=v_0$ is a standard Gaussian variable, for $i=1,2,3$.
In addition, according to the theoretical model under consideration, it is assumed that the measurements at each sensor are affected by deception attacks. The data injected by the attackers are described by $\breve{z}_k^{(i)}=-z_k^{(i)}+w_k^{(i)}$, in which the attack noises are defined as $w_k^{(i)}=w^{(i)}\zeta_k$, for all $i=1,2,3$, where $w^{(1)}=0.5$, $w^{(2)}=0.25$, $w^{(3)}=0.75$, and $\{\zeta_k\}_{k\ge 1}$ is a standard Gaussian white process. Clearly, these attack noises are correlated and $\Sigma_k^{w(ij)}=w^{(i)}w^{(j)}$, $i,j=1,2,3$. The attacks are considered to be randomly successful or frustrated and, once the attacks are launched, the available measurements are described by (3):
$$\breve{y}_k^{(i)}=z_k^{(i)}+\lambda_k^{(i)}\breve{z}_k^{(i)},\quad k\ge 1,\ i=1,2,3,$$
where the Bernoulli random variables $\{\lambda_k^{(i)}\}_{k\ge 1}$, $i=1,2,3$, modeling whether the deception attacks are actually successful or not, are independent and identically distributed. The probability of success is assumed to be time-invariant and to take the same value for the three sensors, namely, $P(\lambda_k^{(i)}=1)=\bar{\lambda}$, $k\ge 1$, $i=1,2,3$.
Observations with random packet dropouts. The observations used for the estimation, $y_k^{(i)}$, are described using one of the following two models:
$$\text{Model I:}\quad y_k^{(i)I}=\gamma_k^{(i)}\breve{y}_k^{(i)}+(1-\gamma_k^{(i)})\hat{\breve{y}}_{k/k-1}^{(i)I},\quad k\ge 1,\ i=1,2,3,$$
$$\text{Model II:}\quad y_k^{(i)II}=\gamma_k^{(i)}\breve{y}_k^{(i)}+(1-\gamma_k^{(i)})\hat{z}_{k/k-1}^{(i)II},\quad k\ge 1,\ i=1,2,3, \qquad (34)$$
where the sequences modeling the transmission packet losses, $\{\gamma_k^{(i)}\}_{k\ge 1}$, $i=1,2,3$, are independent sequences of independent Bernoulli random variables with time-invariant packet arrival probabilities, which are the same for the three sensors: $P(\gamma_k^{(i)}=1)=\bar{\gamma}$, $k\ge 1$, $i=1,2,3$.
A MATLAB program has been developed to obtain the centralized fusion estimators and the corresponding error variances, and fifty iterations of the estimation algorithms proposed in Theorems 1 and 2 have been run for the above observation models with random packet dropouts. The estimation accuracy has been analyzed by examining the error variances for several probabilities $\bar{\theta}^{(3)}$ of the Bernoulli random variables modeling the missing measurement phenomenon of the third sensor and for different values of $\bar{\lambda}$ and $\bar{\gamma}$, which determine the probabilities of successful attacks and of packet dropouts during the data transmission process, respectively.
Performance of the centralized fusion filtering and fixed-point smoothing estimators. Let us assume the same value, 0.5, for the probabilities $\bar{\theta}^{(3)}$, $\bar{\lambda}$ and $\bar{\gamma}$. First, the error variances are analyzed to compare the proposed filtering and fixed-point smoothing estimators considering both observation models—Model I and Model II—given in (34). The results of this comparison are displayed in Figure 2, which shows, on the one hand, that both estimators (filter and smoother) present lower error variances under Model I than under Model II. On the other hand, it can be seen that, for both observation models, the smoothing error variances are smaller than the filtering ones. Additionally, it can be inferred that, as the number of available observations increases, the smoothers at each fixed point $k$ become more accurate; this improvement is more pronounced for $N\le 5$, since the difference is practically negligible for $N>5$.
Influence of the missing measurement probabilities (sensor 3). Let us assume that, as in Figure 2, the attack and packet arrival probabilities are $\bar{\lambda}=0.5$ and $\bar{\gamma}=0.5$, respectively. Figure 3 displays the filtering and smoothing error variances for both models, considering several values of the probability $\bar{\theta}^{(3)}$ (more precisely, $\bar{\theta}^{(3)}=0.3$, 0.5, 0.7 and 0.9) to illustrate the impact of the missing measurement phenomenon in sensor 3. For both models, the error variances show a similar behavior, and we can draw the conclusion that the probability $\bar{\theta}^{(3)}$ that the signal is present in the measured outputs of sensor 3 indeed has an effect on the estimators’ performance. Actually, as could be predicted, the estimation error variances drop as the values of this probability rise; as a result, the filtering and smoothing estimators perform better when the probability of missing data, $1-\bar{\theta}^{(3)}$, decreases. As in Figure 2, this figure also shows that the error variances corresponding to the smoothers for both models are lower than those of the filters and that the smoother with lag $N=3$ performs better than the smoother with lag $N=1$.
Effect of the successful deception attack probabilities. We examine the impact of the deception attacks on the estimation accuracy considering, as in Figure 2, $\bar{\theta}^{(3)}=0.5$ and $\bar{\gamma}=0.5$. For this purpose, we compare the filtering error variances for several values of the successful attack probability of the three sensors, $\bar{\lambda}$. Under Model I, Figure 4a shows the filtering error variances for $\bar{\lambda}=0.1$ to 0.9; it can be seen from this figure that the filter performance is indeed affected by this probability, showing—as expected—a deterioration when the attack probability, $\bar{\lambda}$, rises (similar results are obtained under Model II). Taking into account that, from $k=25$ onwards, the behavior of the error variances is analogous in all the iterations, Figure 4b only displays the filtering error variances at iteration $k=50$, versus $\bar{\lambda}$, to better illustrate this growing trend of the error variances under both Model I and Model II. Similar outcomes, and hence the same conclusions, are drawn for the smoothing error variances.
Influence of the transmission loss probabilities. For $\bar{\theta}^{(3)}=0.5$ and $\bar{\lambda}=0.5$, different values for the probability $\bar{\gamma}$ of the Bernoulli variables modeling the packet loss phenomenon have been considered to analyze their influence on the filter performance. Under Model I, Figure 5a compares the filtering error variances for $\bar{\gamma}$ ranging from 0.1 to 0.9. This figure leads to the conclusion that the error variances decrease as $\bar{\gamma}$ increases, meaning that, as expected, better estimations are obtained when the probability of packet dropouts during transmission decreases. Similar results—and, consequently, analogous conclusions—are obtained in all the considered scenarios when assuming different packet arrival probabilities and comparing the error variances for Model II. As in Figure 4b, Figure 5b only displays the filtering error variances at iteration $k=50$, versus $\bar{\gamma}$, to better visualize the decreasing trend of such error variances under both Model I and Model II. Similar outcomes, and hence the same conclusions, are drawn for the smoothing error variances.

5. Conclusions

Recursive algorithms for the centralized fusion optimal linear filtering and fixed-point smoothing estimation problems are proposed from multi-sensor measurements perturbed by random parameter matrices, time-correlated additive noises and random deception attacks. Under the assumption that some data packets may be randomly lost during the transmission process from the sensors to the processing center, two compensation scenarios—both on the basis of the prediction compensation methodology—are analyzed. The first one consists of using the prediction estimator of the lost measurement (i.e., the one transmitted by the sensor after the attack is launched) as a compensator, whereas the second one uses the prediction estimator of the actual output measured by the sensor before the attack is launched. Both scenarios are compared by some numerical results, which show the performance of the proposed estimators and illustrate how the theoretical system model under consideration covers some common network-induced phenomena (namely, missing measurements and fading measurements). Furthermore, the influence of the missing measurement probabilities, the effect of the deception attack success probabilities and the impact of the transmission dropout probabilities on the estimation accuracy are also analyzed in the context of the numerical results.
An interesting task to be considered in the future would be the theoretical study of the influence of the successful attack probabilities and the transmission loss probabilities on the estimation accuracy. In addition, the theoretical and experimental comparison between the proposed compensation models and some other compensation schemes—such as the zero-input or the hold-input schemes—could be considered. The design of quadratic or polynomial estimation algorithms that outperform the widely used linear ones is also an interesting further research topic.

Author Contributions

Conceptualization, R.C.-Á., J.H. and J.L.-P.; funding acquisition, R.C.-Á. and J.L.-P.; investigation, R.C.-Á., J.H. and J.L.-P.; methodology, R.C.-Á., J.H. and J.L.-P.; resources, R.C.-Á., J.H. and J.L.-P.; software, R.C.-Á., J.H. and J.L.-P.; writing—original draft, R.C.-Á., J.H. and J.L.-P.; writing—review and editing, R.C.-Á., J.H. and J.L.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación” and European Regional Development Fund (ERDF), grant number PID2021-124486NB-I00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hu, J.; Wang, Z.; Chen, D.; Alsaadi, F.E. Estimation, filtering and fusion for networked systems with network-induced phenomena: New progress and prospects. Inf. Fusion 2016, 31, 65–75.
2. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked systems: A review paper. Inf. Fusion 2017, 38, 122–134.
3. He, S.; Shin, H.S.; Xu, S.; Tsourdos, A. Distributed estimation over a low-cost sensor network: A review of state-of-the-art. Inf. Fusion 2020, 54, 21–43.
4. Hu, Z.; Hu, J.; Yang, G. A survey on distributed filtering, estimation and fusion for nonlinear systems with communication constraints: New advances and prospects. Syst. Sci. Control Eng. 2020, 8, 189–205.
5. Yang, Y.; Liang, Y.; Pan, Q.; Qin, Y.; Yang, F. Distributed fusion estimation with square-root array implementation for Markovian jump linear systems with random parameter matrices and cross-correlated noises. Inf. Sci. 2016, 370–371, 446–462.
6. Wang, W.; Zhou, J. Optimal linear filtering design for discrete time systems with cross-correlated stochastic parameter matrices and noises. IET Control Theory Appl. 2017, 11, 3353–3362.
7. Han, F.; Dong, H.; Wang, Z.; Li, G.; Alsaadi, F.E. Improved Tobit Kalman filtering for systems with random parameters via conditional expectation. Signal Process. 2018, 147, 35–45.
8. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.; Wang, Z. A new approach to distributed fusion filtering for networked systems with random parameter matrices and correlated noises. Inf. Fusion 2019, 45, 324–332.
9. Liu, W.; Xie, X.; Qian, W.; Xu, X.; Shi, Y. Optimal linear filtering for networked control systems with random matrices, correlated noises, and packet dropouts. IEEE Access 2020, 8, 59987–59997.
10. Sun, S. Distributed optimal linear fusion predictors and filters for systems with random parameter matrices and correlated noises. IEEE Trans. Signal Process. 2020, 68, 1064–1074.
11. Sun, S.; Xie, L.; Xiao, W.; Soh, Y. Optimal linear estimation for systems with multiple packet dropouts. Automatica 2008, 44, 1333–1342.
12. Schenato, L. To zero or to hold control inputs with lossy links? IEEE Trans. Autom. Control 2009, 54, 1093–1099.
13. Liang, Y.; Chen, T.W.; Pan, Q. Optimal linear state estimator with multiple packet dropouts. IEEE Trans. Autom. Control 2010, 55, 1428–1433.
14. Gao, J.; Wu, H.; Fu, M. Two schemes of data dropout compensation for LQG control of networked control systems. Asian J. Control 2015, 17, 55–63.
15. Ma, J.; Sun, S. A general packet dropout compensation framework for optimal prior filter of networked multi-sensor systems. Inf. Fusion 2019, 45, 128–137.
16. Ding, J.; Sun, S.; Ma, J.; Li, N. Fusion estimation for multi-sensor networked systems with packet loss compensation. Inf. Fusion 2019, 45, 138–149.
17. Wang, M.; Sun, S. Self-tuning distributed fusion filter for multi-sensor networked systems with unknown packet receiving rates, noise variances, and model parameters. Sensors 2019, 19, 4436.
18. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Networked distributed fusion estimation under uncertain outputs with random transmission delays, packet losses and multi-packet processing. Signal Process. 2019, 156, 71–83.
19. Liu, W. Recursive filtering for discrete-time linear systems with fading measurement and time-correlated channel noise. J. Comput. Appl. Math. 2016, 298, 123–137.
20. Li, W.; Jia, Y.; Du, J. Distributed filtering for discrete-time linear systems with fading measurements and time-correlated noise. Digit. Signal Process. 2017, 60, 211–219.
21. Liu, W.; Shi, P.; Pan, J.S. State estimation for discrete-time Markov jump linear systems with time-correlated and mode-dependent measurement noise. Automatica 2017, 85, 9–21.
22. Geng, H.; Wang, Z.; Cheng, Y.; Alsaadi, F.; Dobaie, A.M. State estimation under non-Gaussian Lévy and time-correlated additive sensor noises: A modified Tobit Kalman filtering approach. Signal Process. 2019, 154, 120–128.
23. Liu, W.; Shi, P. Convergence of optimal linear estimator with multiplicative and time-correlated additive measurement noises. IEEE Trans. Autom. Control 2019, 64, 2190–2197.
24. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Networked fusion estimation with multiple uncertainties and time-correlated channel noise. Inf. Fusion 2020, 54, 161–171.
25. Ma, J.; Liu, S.; Zhang, Q. Globally optimal centralized and sequential fusion filters for uncertain systems with time-correlated measurement noises. IEEE Access 2022, 10, 89011–89021.
26. Ma, J.; Sun, S. Optimal linear recursive estimators for stochastic uncertain systems with time-correlated additive noises and packet dropout compensations. Signal Process. 2020, 176, 107704.
27. Cheng, G.; Ma, M.; Tan, L.; Song, S. Gaussian estimation for non-linear stochastic uncertain systems with time-correlated additive noises and packet dropout compensations. IET Control Theory Appl. 2022, 176, 600–614.
28. Sánchez, H.S.; Rotondo, D.; Escobet, T.; Puig, V.; Quevedo, J. Bibliographical review on cyber attacks from a control oriented perspective. Annu. Rev. Control 2019, 48, 103–128.
29. Li, Y.; Wu, Q.; Peng, L. Simultaneous event-triggered fault detection and estimation for stochastic systems subject to deception attacks. Sensors 2018, 18, 321.
30. Li, Y.; Shi, L.; Chen, T. Detection against linear deception attacks on multi-sensor remote state estimation. IEEE Trans. Control Netw. Syst. 2018, 5, 846–856.
31. Han, F.; Dong, H.; Wang, Z.; Li, G. Local design of distributed H∞-consensus filtering over sensor networks under multiplicative noises and deception attacks. Int. J. Robust Nonlinear Control 2019, 29, 2296–2314.
32. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Covariance-based estimation for clustered sensor networks subject to random deception attacks. Sensors 2019, 19, 3112.
33. Yang, W.; Zhang, Y.; Chen, G.; Yang, C.; Shi, L. Distributed filtering under false data injection attacks. Automatica 2019, 102, 34–44.
34. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. A two-phase distributed filtering algorithm for networked uncertain systems with fading measurements under deception attacks. Sensors 2020, 20, 6445.
35. Xiao, S.; Han, Q.; Ge, X.; Zhang, Y. Secure distributed finite-time filtering for positive systems over sensor networks under deception attacks. IEEE Trans. Cybern. 2020, 50, 1200–1228.
Figure 1. Related work [18,24,25,26]: comparison with the current paper.
Figure 2. Error variance comparison of the centralized fusion filtering and smoothing estimators considering observations from Model I and Model II.
Figure 3. Centralized fusion filtering and smoothing error variances for θ ¯ ( 3 ) = 0.3 , 0.5 , 0.7 , 0.9 under Model I and Model II.
Figure 4. Centralized fusion filtering error variances: (a) under Model I when λ ¯ = 0.1 to 0.9; (b) under Model I and Model II at k = 50 versus λ ¯ .
Figure 5. Centralized fusion filtering error variances: (a) under Model I when γ ¯ = 0.1 to 0.9; (b) under Model I and Model II at k = 50 , versus γ ¯ .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
