sensors
Article
Optimal Fusion Estimation with Multi-Step Random
Delays and Losses in Transmission
Raquel Caballero-Águila 1, *, Aurora Hermoso-Carazo 2 and Josefa Linares-Pérez 2
1 Dpto. de Estadística, Universidad de Jaén, Paraje Las Lagunillas, 23071 Jaén, Spain
2 Dpto. de Estadística, Universidad de Granada, Avda. Fuentenueva, 18071 Granada, Spain;
ahermoso@ugr.es (A.H.-C.); jlinares@ugr.es (J.L.-P.)
* Correspondence: raguila@ujaen.es; Tel.: +34-953-212-926

Academic Editors: Xue-Bo Jin, Shuli Sun, Hong Wei and Feng-Bao Yang
Received: 7 April 2017; Accepted: 15 May 2017; Published: 18 May 2017
Sensors 2017, 17, 1151; doi:10.3390/s17051151

Abstract: This paper is concerned with the optimal fusion estimation problem in networked stochastic
systems with bounded random delays and packet dropouts, which unavoidably occur during
the data transmission in the network. The measured outputs from each sensor are perturbed by
random parameter matrices and white additive noises, which are cross-correlated between the
different sensors. Least-squares fusion linear estimators including filter, predictor and fixed-point
smoother, as well as the corresponding estimation error covariance matrices are designed via the
innovation analysis approach. The proposed recursive algorithms depend on the delay probabilities
at each sampling time, but do not need to know whether a particular measurement is delayed or not.
Moreover, the knowledge of the signal evolution model is not required, as the algorithms need only
the first and second-order moments of the processes involved. Some of the practical situations covered
by the proposed system model with random parameter matrices are analyzed, and the influence of
the delays on the estimation accuracy is examined in a numerical example.

Keywords: recursive fusion estimation; sensor networks; random parameter matrices; random delays;
packet dropouts

1. Introduction
Over the last few decades, research on the estimation problem for networked stochastic systems
has gained considerable attention, due to the undeniable advantages of networked systems, whose
applicability is encouraged, among other causes, by the development and advances in communication
technology and the growing use of wireless networks. As is well known, the Kalman filter
provides a recursive algorithm for the optimal least-squares estimator in stochastic linear systems,
assuming that the system model is exactly known and all of the measurements are instantly updated.
The development of sensor networks motivates the necessity of designing new estimation algorithms
that integrate the information of all the sensors to achieve a satisfactory performance; thus, using
different fusion techniques, the measurements from multiple sensors are combined to obtain more
accurate estimators than those obtained when a single sensor is used. In this framework, important
extensions of the Kalman filter have been proposed for conventional sensor networks in which the
measured outputs of the sensors always contain the actual signal contaminated by additive noises,
and the transmissions are carried through perfect connections (see, e.g., [1–3] and references therein).
However, in a network environment, usually the standard observation models are not suitable
due to the existence of network-induced uncertainties that can occur in both of the sensor measured
outputs, and during the data transmission through the network. Accordingly, the consideration of
appropriate observation models is vitally important to address the estimation problem in networked
systems. Random failures in the transmission of measured data, together with the inaccuracy of


the measurement devices, often cause the degradation of the estimator performance in networked
systems. In light of these concerns, the estimation problem with one or even several network-induced
uncertainties is recently attracting considerable attention, and the design of new fusion estimation
algorithms has become an active research topic (see, e.g., [4–13] and references therein). In
addition, some recent advances on the estimation, filtering and fusion for networked systems with
network-induced phenomena can be reviewed in [14,15], where a detailed overview of this field
is presented.
One of the most common network-induced uncertainties in the measured outputs of the different
sensors is the presence of multiplicative noise, due to different reasons, such as interferences or
intermittent sensor failures. Specifically, in situations involving random observation losses (see [5]),
sensor gain degradation (see [6]), missing or fading measurements (see [13,16], respectively), the
sensor observation equations include multiplicative noises. A unified framework to model these
random phenomena is provided by the use of random measurement matrices in the sensor observation
model. For this reason, the estimation problem in networked systems with random measurement
matrices has become a fertile research subject, since this class of systems allows for covering
different network-induced random uncertainties such as those mentioned above (see, e.g., [17–23],
and references therein).
In relation to the network-induced uncertainties during the data transmission, it must be indicated
that sudden changes in the environment and the unreliability of the communication network, together
with the limited bandwidths of communication channels, cause unavoidable random failures during
the transmission process. Generally, random communication delays and/or transmission packet
dropouts are two essential issues that must be taken into account to model the measurements, which,
being available after transmission, will be used for the estimation. Several estimation algorithms
have been proposed in multisensor systems considering either transmission delays or packet losses
(see, e.g., [24,25]) and also taking into account random delays and packet dropouts simultaneously
(see, e.g., [4,23,26]). By using the state augmentation method, systems with random delays and packet
dropouts can be transformed into systems with random parameter matrices (see, e.g., [8–10,20]).
Hence, systems with random parameter measurement matrices also provide an appropriate unified
context for modelling these random phenomena in the transmission.
Nevertheless, it must be indicated that the state augmentation method leads to a rise of the
computational burden, due to the increase of the state dimension. Actually, in models with more than
one or two-step random delays, the computational cost can be excessive and alternative ways to model
and address the estimation problem in this class of systems need to be investigated. Recently, a great
variety of models have been used to describe the phenomena of multi-step random delays and packet
losses during the data transmission in networked systems, and fusion estimation algorithms have been
proposed based on different approaches—for example, the recursive matrix equation method in [6],
the measurement reorganisation approach in [27], the innovation analysis approach in [28] and the
state augmentation approach in [29–31]. It should be noted that, in the presence of multi-step random
delays and packet losses during the data transmission, many difficulties can arise in the design of the
optimal estimators when the state augmentation approach is not used.
In view of the above considerations, this paper is concerned with the optimal fusion estimation
problem, in the least-squares linear sense, for sensor networks featuring stochastic uncertainties in
the sensor measurements, together with multi-step random delays and packet dropouts during the
data transmission. The derivation of the estimation algorithms will be carried out without using
the evolution model generating the signal process. The uncertainties in the measured outputs of the
different sensors are described by random measurement matrices. The multi-step random delays in the
transmissions are modeled by using a collection of Bernoulli sequences with known distributions and
different characteristics at each sensor; the exact value of these Bernoulli variables is not required, and
only the information about the probability distribution is needed. To the best of the authors’ knowledge,
the optimal estimation problem (including prediction, filtering and fixed-point smoothing) has not
been investigated for systems involving random measurement matrices and transmission multi-step
random delays simultaneously, and, therefore, it constitutes an interesting research challenge. The main
contributions of this research can be highlighted as follows: (a) even though our approach, based on
covariance information, does not require the signal evolution model, the proposed algorithms are
also applicable in situations based on the state-space model (see Remark 1); (b) random measurement
matrices are considered in the measured outputs, thus providing a unified framework to address
different network-induced phenomena (see Remark 2); (c) besides the stochastic uncertainties in the
sensor measurements, simultaneous multi-step random delays and losses with different rates are
considered in the data transmission; (d) unlike most papers about multi-step random delays, in which
only the filtering problem is considered, we propose recursive algorithms for the prediction, filtering
and fixed-point smoothing estimators under the innovation approach, which are computationally
very simple and suitable for online applications; and (e) optimal estimators are obtained without
using the state augmentation approach, thus reducing the computational cost in comparison with the
augmentation method.
The rest of the paper is organized as follows. In Section 2, we present the sensor network and
the assumptions under which the optimal linear estimation problem will be addressed. In Section 3,
the observation model is rewritten in a compact form and the innovation approach to the least-squares
linear estimation problem is formulated. In Section 4, recursive algorithms for the prediction, filtering
and fixed-point smoothing estimators are derived. A simulation example is given in Section 5 to show
the performance of the proposed estimators. Finally, some conclusions are drawn in Section 6.
Notations. The notations used throughout the paper are standard. $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ denote the
$n$-dimensional Euclidean space and the set of all $m \times n$ real matrices, respectively. For a matrix
$A$, $A^T$ and $A^{-1}$ denote its transpose and inverse, respectively. The shorthand $\mathrm{Diag}(A_1, \dots, A_m)$ stands
for a block-diagonal matrix whose diagonal blocks are $A_1, \dots, A_m$. $1_n = (1, \dots, 1)^T$ denotes the
all-ones $n \times 1$ vector and $I_n$ represents the $n \times n$ identity matrix. If the dimensions of matrices are not
explicitly stated, they are assumed to be compatible with algebraic operations. The Kronecker and
Hadamard products of matrices will be denoted by $\otimes$ and $\circ$, respectively. $\delta_{k,s}$ denotes the Kronecker
delta function. For any $a, b \in \mathbb{R}$, $a \wedge b$ is used to mean the minimum of $a$ and $b$.

2. Observation Model and Preliminaries


In this paper, the optimal fusion estimation problem of multidimensional discrete-time
random signals from measurements obtained by a sensor network is addressed. At each sensor,
the measured outputs are perturbed by random parameter matrices and white additive noises that are
cross-correlated at the same sampling time between the different sensors. The estimation is performed
in a processing centre connected to all sensors, where the complete set of sensor data is combined,
but due to eventual communication failures, congestion or other causes, random delays and packet
dropouts are unavoidable during the transmission process. To reduce the effect of such delays and
packet dropouts without overloading the network traffic, each sensor measurement is transmitted over a fixed
number of consecutive sampling times and, when several packets arrive at the same time, the receiver
discards the oldest ones, so that only one measured output is processed for the estimation at each
sampling time.

2.1. Signal Process


The optimal estimators will be obtained using the least-squares (LS) criterion and without
requiring the evolution model generating the signal process. Actually, the proposed estimation
algorithms, based on covariance information, only need the mean vectors and covariance functions of
the processes involved, and the only requirement will be that the signal covariance function must be
factorizable according to the following assumption:

(A1) The $n_x$-dimensional signal process $\{x_k;\ k \ge 1\}$ has zero mean and its autocovariance
function is expressed in a separable form, $E[x_k x_s^T] = A_k B_s^T$, $s \le k$, where $A_k, B_s \in \mathbb{R}^{n_x \times n}$
are known matrices.

Remark 1 (on assumption (A1)). The estimation problems based on the state-space model require new
estimation algorithms when the signal evolution model is modified; therefore, the algorithms designed for
stationary signals driven by $x_{k+1} = \Phi x_k + \xi_k$ cannot be applied to non-stationary signals generated by
$x_{k+1} = \Phi_k x_k + \xi_k$, and these, in turn, cannot be used in uncertain systems where $x_{k+1} = (\Phi_k + e_k\hat{\Phi}_k)x_k + \xi_k$.
A great advantage of assumption (A1) is that it covers situations in which the signal evolution model is
known, for both stationary and non-stationary signals (see, e.g., [23]). In addition, in uncertain systems with
state-dependent multiplicative noise, such as those considered in [6,32], the signal covariance function is factorizable,
as shown in Section 5. Hence, assumption (A1) on the signal autocovariance function provides a unified
context to deal with different situations based on the state-space model, avoiding the derivation of specific
algorithms for each of them.
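As a brief illustration, consider the stationary case $x_{k+1} = \Phi x_k + \xi_k$ with $\Phi$ assumed nonsingular, and denote $D_s = E[x_s x_s^T]$. Since $x_k = \Phi^{k-s} x_s + (\text{terms independent of } x_s)$ for $s \le k$,
$$E[x_k x_s^T] = \Phi^{k-s} D_s = \Phi^k\big(\Phi^{-s} D_s\big) = A_k B_s^T, \quad \text{with } A_k = \Phi^k, \ B_s = D_s\,(\Phi^T)^{-s},$$
so assumption (A1) holds. The scalar signal used in Section 5 follows the same pattern, with $\Phi$ replaced by the mean parameter 0.9 and $D_s$ obtained from a simple recursion.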

2.2. Multisensor Observation Model


Assuming that there are $m$ different sensors, the measured outputs before transmission, $z_k^{(i)} \in \mathbb{R}^{n_z}$,
are described by the following observation model:
$$z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \quad k \ge 1; \ i = 1, \dots, m, \qquad (1)$$
where the measurement matrices, $H_k^{(i)}$, and the noise vectors, $v_k^{(i)}$, satisfy the following assumptions:

(A2) $\{H_k^{(i)};\ k \ge 1\}$, $i = 1, \dots, m$, are independent sequences of independent random parameter
matrices, whose entries have known means and second-order moments; we will denote
$\bar{H}_k^{(i)} \equiv E[H_k^{(i)}]$, $k \ge 1$.

(A3) $\{v_k^{(i)};\ k \ge 1\}$, $i = 1, \dots, m$, are white noise sequences with zero mean and known second-order
moments, satisfying $E[v_k^{(i)} v_s^{(j)T}] = R_k^{(ij)}\delta_{k,s}$, $i, j = 1, \dots, m$.

Remark 2 (on assumption (A2)). Usually, in network environments, the measurements are subject to
different network-induced random phenomena and new estimation algorithms must be designed to incorporate
the effects of these random uncertainties. For example, in systems with stochastic sensor gain degradation or
missing measurements, such as those considered in [6,7], respectively, or in networked systems involving stochastic
multiplicative noises in the state and measurement equations (see, e.g., [31,32]), new estimation algorithms are
proposed since the conditions necessary to implement the conventional ones are not met. The aforementioned
systems are particular cases of systems with random measurement matrices, and, hence, assumption (A2) allows
for designing a unique estimation algorithm, which is suitable to address all of these situations involving random
uncertainties. In addition, based on an augmentation approach, random measurement matrices can be used to
model the measured outputs of sensor networks with random delays and packet dropouts (see, e.g., [8–10,20]).
Therefore, assumption (A2) provides a unified framework to deal with a great variety of network-induced random
phenomena, such as those mentioned above.

2.3. Measurement Model with Transmission Random Delays and Packet Losses
Assuming that the maximum time delay is $D$, the measured output of the $i$-th sensor at time $r$, $z_r^{(i)}$,
is transmitted during the sampling times $r, r+1, \dots, r+D$, but, at each sampling time $k > D$, only one
of the measurements $z_{k-D}^{(i)}, \dots, z_k^{(i)}$ is processed. Consequently, at any time $k > D$, the measurement
processed can either arrive on time or be delayed by $d = 1, \dots, D$ sampling periods, while at any time
$k \le D$, the measurement processed can be delayed only by $d = 1, \dots, k-1$ sampling periods, since
only $z_1^{(i)}, \dots, z_k^{(i)}$ are available. Assuming, moreover, that the transmissions are perturbed by additive
noises, the measurements received at the processing centre, impaired by random delays and packet
losses, can be described by the following model:
$$y_k^{(i)} = \sum_{d=0}^{(k-1)\wedge D} \gamma_{d,k}^{(i)}\, z_{k-d}^{(i)} + w_k^{(i)}, \quad k \ge 1, \qquad (2)$$
where the following assumptions on the random variables modelling the delays, $\gamma_{d,k}^{(i)}$, and the
transmission noise, $w_k^{(i)}$, are required:

(A4) For each $d = 0, 1, \dots, D$, $\{\gamma_{d,k}^{(i)};\ k > d\}$, $i = 1, \dots, m$, are independent sequences of independent
Bernoulli random variables with $P[\gamma_{d,k}^{(i)} = 1] = \bar{\gamma}_{d,k}^{(i)}$ and $\sum_{d=0}^{(k-1)\wedge D} \gamma_{d,k}^{(i)} \le 1$, $k \ge 1$.

(A5) $\{w_k^{(i)};\ k \ge 1\}$, $i = 1, \dots, m$, are white noise sequences with zero mean and known second-order
moments, satisfying $E[w_k^{(i)} w_s^{(j)T}] = Q_k^{(ij)}\delta_{k,s}$, $i, j = 1, \dots, m$.

Remark 3 (on assumption (A4)). For $i = 1, \dots, m$, when $\gamma_{0,k}^{(i)} = 1$, the transmission of the $i$-th sensor is
perfect and neither delay nor loss occurs at time $k$; that is, with probability $\bar{\gamma}_{0,k}^{(i)}$, the $k$-th measurement of the
$i$-th sensor is received and processed on time. Since $\sum_{d=0}^{(k-1)\wedge D} \gamma_{d,k}^{(i)} \le 1$, if $\gamma_{0,k}^{(i)} = 0$, there then exists at most one
$d = 1, \dots, D$ such that $\gamma_{d,k}^{(i)} = 1$. If there exists $d$ such that $\gamma_{d,k}^{(i)} = 1$ (which occurs with probability $\bar{\gamma}_{d,k}^{(i)}$),
then the measurement is delayed by $d$ sampling periods. Otherwise, $\gamma_{d,k}^{(i)} = 0$ for all $d$, meaning that the
measurement gets lost during the transmission at time $k$ with probability $1 - \sum_{d=0}^{(k-1)\wedge D} \bar{\gamma}_{d,k}^{(i)}$.
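To make the delay and dropout mechanism concrete, the following fragment is a minimal sketch of how Equation (2) can be simulated for a single scalar sensor under assumption (A4); the function names, probabilities and seed are purely illustrative and not part of the estimation algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_indicators(k, gamma_bar, D):
    """Draw the Bernoulli indicators gamma_{d,k} of assumption (A4): at most one
    of them equals 1 (measurement received with delay d); all zero means the
    packet is lost at time k."""
    dmax = min(k - 1, D)                        # only z_1, ..., z_k exist, so d <= (k-1) ^ D
    probs = list(gamma_bar[:dmax + 1])
    probs.append(1.0 - sum(probs))              # remaining mass = dropout probability
    outcome = rng.choice(dmax + 2, p=probs)     # outcome dmax + 1 means "packet lost"
    gamma = np.zeros(D + 1)
    if outcome <= dmax:
        gamma[outcome] = 1.0
    return gamma

def received_measurement(k, z, w_k, gamma_bar, D):
    """Measurement y_k of Equation (2); z[j] stores the sensor output z_{j+1}."""
    gamma = delay_indicators(k, gamma_bar, D)
    y_k = w_k + sum(gamma[d] * z[k - 1 - d] for d in range(min(k - 1, D) + 1))
    return y_k, gamma

# Example: D = 3, on-time probability 0.6, delay probabilities 0.1 each (dropout 0.1).
gamma_bar = np.array([0.6, 0.1, 0.1, 0.1])
z = rng.standard_normal(10)                     # placeholder sensor outputs z_1, ..., z_10
y5, gamma5 = received_measurement(5, z, 0.5 * rng.standard_normal(), gamma_bar, 3)
```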

Finally, the following independence hypothesis is assumed:


(A6) For $i = 1, \dots, m$ and $d = 0, 1, \dots, D$, the processes $\{x_k;\ k \ge 1\}$, $\{H_k^{(i)};\ k \ge 1\}$, $\{v_k^{(i)};\ k \ge 1\}$,
$\{w_k^{(i)};\ k \ge 1\}$ and $\{\gamma_{d,k}^{(i)};\ k > d\}$ are mutually independent.

3. Problem Statement
Given the observation Equations (1) and (2) with random measurement matrices and transmission
random delays and packet dropouts, our purpose is to find the LS linear estimator, $\hat{x}_{k/L}$, of the signal $x_k$
based on the observations from the different sensors $\{y_1^{(i)}, \dots, y_L^{(i)},\ i = 1, \dots, m\}$. Specifically, our aim
is to obtain recursive algorithms for the predictor ($L < k$), filter ($L = k$) and fixed-point smoother
($k$ fixed and $L > k$).

3.1. Stacked Observation Model


Since the measurements coming from the different sensors are all gathered and jointly processed
at each sampling time $k$, we will consider the vector constituted by the measurements from all sensors,
$y_k = \big(y_k^{(1)T}, \dots, y_k^{(m)T}\big)^T$. More specifically, the observation Equations (1) and (2) of all sensors are
combined, yielding the following stacked observation model:
$$z_k = H_k x_k + v_k, \quad k \ge 1, \qquad y_k = \sum_{d=0}^{(k-1)\wedge D} \Gamma_{d,k}\, z_{k-d} + w_k, \quad k \ge 1, \qquad (3)$$
where $z_k = \big(z_k^{(1)T}, \dots, z_k^{(m)T}\big)^T$, $H_k = \big(H_k^{(1)T}, \dots, H_k^{(m)T}\big)^T$, $v_k = \big(v_k^{(1)T}, \dots, v_k^{(m)T}\big)^T$,
$w_k = \big(w_k^{(1)T}, \dots, w_k^{(m)T}\big)^T$ and $\Gamma_{d,k} = \mathrm{Diag}\big(\gamma_{d,k}^{(1)}, \dots, \gamma_{d,k}^{(m)}\big) \otimes I_{n_z}$.
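The stacked quantities in Equation (3) are assembled directly from the per-sensor pieces; the fragment below is a minimal sketch of that construction, with argument names and shapes assumed only for illustration.

```python
import numpy as np

def stacked_quantities(H_list, v_list, w_list, gamma_d, n_z):
    """Stack the per-sensor pieces of Equations (1) and (2) as in Equation (3).

    H_list[i]: measurement matrix H_k^(i) of shape (n_z, n_x)
    v_list[i], w_list[i]: noise vectors of length n_z
    gamma_d[i]: delay indicator gamma_{d,k}^(i) of sensor i (0 or 1)
    """
    H_k = np.vstack(H_list)                             # (m*n_z) x n_x
    v_k = np.concatenate(v_list)
    w_k = np.concatenate(w_list)
    Gamma_dk = np.kron(np.diag(gamma_d), np.eye(n_z))   # Diag(gamma_d) ⊗ I_{n_z}
    return H_k, v_k, w_k, Gamma_dk
```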

Hence, the problem is to obtain the LS linear estimator of the signal, xk , based on the measurements
{y1 , . . . , y L }, given in the observation Equation (3). Next, we present the statistical properties of the
processes involved in Equation (3), from which the LS estimation algorithms of the signal will be
derived; these properties are easily inferred from the assumptions (A1)–(A6).

(P1) $\{H_k;\ k \ge 1\}$ is a sequence of independent random parameter matrices with known means,
$\bar{H}_k \equiv E[H_k] = \big(\bar{H}_k^{(1)T}, \dots, \bar{H}_k^{(m)T}\big)^T$, and
$$E[H_k x_k x_s^T H_s^T] = E[H_k A_k B_s^T H_s^T] = \Big(E\big[H_k^{(i)} A_k B_s^T H_s^{(j)T}\big]\Big)_{i,j=1,\dots,m}, \quad s \le k,$$
where $E\big[H_k^{(i)} A_k B_s^T H_s^{(j)T}\big] = \bar{H}_k^{(i)} A_k B_s^T \bar{H}_s^{(j)T}$, for $j \ne i$ or $s \ne k$, and the entries of
$E\big[H_k^{(i)} A_k B_k^T H_k^{(i)T}\big]$ are computed as follows:
$$\Big(E\big[H_k^{(i)} A_k B_k^T H_k^{(i)T}\big]\Big)_{pq} = \sum_{a=1}^{n_x}\sum_{b=1}^{n_x} E\big[h_{pa}^{(i)}(k)\, h_{qb}^{(i)}(k)\big]\big(A_k B_k^T\big)_{ab}, \quad p, q = 1, \dots, n_z,$$
where $h_{pq}^{(i)}(k)$ denotes the $(p,q)$-entry of the matrix $H_k^{(i)}$.

(P2) The noises $\{v_k;\ k \ge 1\}$ and $\{w_k;\ k \ge 1\}$ are zero-mean sequences with known second-order
moments given by the matrices $R_k \equiv \big(R_k^{(ij)}\big)_{i,j=1,\dots,m}$ and $Q_k \equiv \big(Q_k^{(ij)}\big)_{i,j=1,\dots,m}$.

(P3) $\{\Gamma_{d,k};\ k > d\}$, $d = 0, 1, \dots, D$, are sequences of independent random matrices with known means,
$\bar{\Gamma}_{d,k} \equiv E[\Gamma_{d,k}] = \mathrm{Diag}\big(\bar{\gamma}_{d,k}^{(1)}, \dots, \bar{\gamma}_{d,k}^{(m)}\big)\otimes I_{n_z}$, and if we denote $\gamma_{d,k} = \big(\gamma_{d,k}^{(1)}, \dots, \gamma_{d,k}^{(m)}\big)^T \otimes 1_{n_z}$ and
$\bar{\gamma}_{d,k} = E[\gamma_{d,k}]$, the covariance matrices $\Sigma^{\gamma}_{d,d',k} \equiv E[(\gamma_{d,k} - \bar{\gamma}_{d,k})(\gamma_{d',k} - \bar{\gamma}_{d',k})^T]$, for $d, d' = 0, 1, \dots, D$,
are also known matrices. Specifically,
$$\Sigma^{\gamma}_{d,d',k} = \mathrm{Diag}\Big(\mathrm{Cov}\big(\gamma_{d,k}^{(1)}, \gamma_{d',k}^{(1)}\big), \dots, \mathrm{Cov}\big(\gamma_{d,k}^{(m)}, \gamma_{d',k}^{(m)}\big)\Big)\otimes 1_{n_z} 1_{n_z}^T, \qquad (4)$$
with
$$\mathrm{Cov}\big(\gamma_{d,k}^{(i)}, \gamma_{d',k}^{(i)}\big) = \begin{cases} \bar{\gamma}_{d,k}^{(i)}\big(1 - \bar{\gamma}_{d,k}^{(i)}\big), & d' = d, \\ -\bar{\gamma}_{d,k}^{(i)}\,\bar{\gamma}_{d',k}^{(i)}, & d' \ne d. \end{cases}$$
Moreover, for any deterministic matrix $S$, the Hadamard product properties guarantee that
$$E\big[(\Gamma_{d,k} - \bar{\Gamma}_{d,k})\, S\, (\Gamma_{d',k} - \bar{\Gamma}_{d',k})\big] = \Sigma^{\gamma}_{d,d',k} \circ S$$
(a numerical sketch of this construction is given after this list).

(P4) For $d = 0, 1, \dots, D$, the signal, $\{x_k;\ k \ge 1\}$, and the processes $\{H_k;\ k \ge 1\}$, $\{v_k;\ k \ge 1\}$,
$\{w_k;\ k \ge 1\}$ and $\{\Gamma_{d,k};\ k > d\}$ are mutually independent.
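The matrices $\bar{\Gamma}_{d,k}$ and $\Sigma^{\gamma}_{d,d',k}$ of property (P3) and Equation (4) can be built as in the following sketch, assuming the per-sensor delay probabilities $\bar{\gamma}_{d,k}^{(i)}$ are stored in plain arrays (helper names are illustrative).

```python
import numpy as np

def gamma_mean_matrix(gamma_bar_d, n_z):
    """Mean matrix of Gamma_{d,k}: Diag(gamma_bar^(1), ..., gamma_bar^(m)) ⊗ I_{n_z}."""
    return np.kron(np.diag(gamma_bar_d), np.eye(n_z))

def sigma_gamma(gamma_bar_d, gamma_bar_dp, same_delay, n_z):
    """Covariance matrix Sigma^gamma_{d,d',k} of Equation (4).

    gamma_bar_d, gamma_bar_dp: length-m arrays with the probabilities of the
    delays d and d' for each sensor; same_delay is True when d = d'.
    """
    if same_delay:
        cov = gamma_bar_d * (1.0 - gamma_bar_d)
    else:
        cov = -gamma_bar_d * gamma_bar_dp
    return np.kron(np.diag(cov), np.ones((n_z, n_z)))   # Diag(Cov) ⊗ 1_{n_z} 1_{n_z}^T
```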

Remark 4 (on the observation covariance matrices). From the previous properties, it is clear that the
observation process $\{z_k;\ k \ge 1\}$ is a zero-mean sequence whose covariance function, $\Sigma^z_{k,s} \equiv E[z_k z_s^T]$, is obtained
by the following expression:
$$\Sigma^z_{k,s} = E\big[H_k A_k B_s^T H_s^T\big] + R_k\delta_{k,s}, \quad s \le k, \qquad (5)$$
where $E\big[H_k A_k B_s^T H_s^T\big]$ and $R_k$ are calculated as indicated in properties (P1) and (P2), respectively.
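The entrywise formula of (P1), which is the only place where second-order moments of the entries of $H_k^{(i)}$ are required, can be evaluated as in the following sketch; the small missing-measurement example at the end is hypothetical and only illustrates the formula.

```python
import numpy as np

def second_moment_block(entry_moments, AkBkT):
    """Entrywise formula of (P1) for E[H_k^(i) A_k B_k^T H_k^(i)T].

    entry_moments[p, a, q, b] = E[h_pa(k) h_qb(k)]  (shape n_z x n_x x n_z x n_x)
    AkBkT: the n_x x n_x matrix A_k B_k^T.
    """
    return np.einsum('paqb,ab->pq', entry_moments, AkBkT)

# Hypothetical example: H_k^(i) = theta_k * C with a scalar random theta
# (missing-measurement model), so E[h_pa h_qb] = E[theta^2] C_pa C_qb.
C = np.array([[0.8, 0.5]])                                 # n_z = 1, n_x = 2
E_theta2 = 0.7
entry_moments = E_theta2 * np.einsum('pa,qb->paqb', C, C)
AkBkT = np.array([[1.0, 0.2], [0.2, 0.5]])
block = second_moment_block(entry_moments, AkBkT)          # = E_theta2 * C @ AkBkT @ C.T
```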

3.2. Innovation Approach to the LS Linear Estimation Problem


The proposed covariance-based recursive algorithms for the LS linear prediction, filtering and
fixed-point smoothing estimators will be derived by an innovation approach. This approach consists of
transforming the observation process $\{y_k;\ k \ge 1\}$ into an equivalent one of orthogonal vectors called an
innovation process, which will be denoted $\{\mu_k;\ k \ge 1\}$ and defined by $\mu_k = y_k - \hat{y}_{k/k-1}$, where $\hat{y}_{k/k-1}$
is the orthogonal projection of $y_k$ onto the linear space spanned by $\{\mu_1, \dots, \mu_{k-1}\}$. Since both
processes span the same linear subspace, the LS linear estimator of any random vector $\alpha_k$ based on the
observations $\{y_1, \dots, y_N\}$, denoted by $\hat{\alpha}_{k/N}$, is equal to that based on the innovations $\{\mu_1, \dots, \mu_N\}$,
and, denoting $\Pi_h = E[\mu_h\mu_h^T]$, the following general expression for the LS linear estimators of $\alpha_k$
is obtained:
$$\hat{\alpha}_{k/N} = \sum_{h=1}^{N} E\big[\alpha_k\mu_h^T\big]\Pi_h^{-1}\mu_h. \qquad (6)$$

Hence, to obtain the signal estimators, it is necessary to find an explicit formula beforehand for
the innovations and their covariance matrices.

Innovation $\mu_L$ and Covariance Matrix $\Pi_L$. Applying orthogonal projections in Equation (3), it is clear
that the innovation $\mu_L$ is given by
$$\mu_L = y_L - \sum_{d=0}^{(L-1)\wedge D} \bar{\Gamma}_{d,L}\,\hat{z}_{L-d/L-1}, \quad L \ge 2; \qquad \mu_1 = y_1, \qquad (7)$$
so it is necessary to obtain the one-stage predictor $\hat{z}_{L/L-1}$ and the estimators $\hat{z}_{L-d/L-1}$,
for $d = 1, \dots, (L-1)\wedge D$, of the observation process.

In order to obtain the covariance matrix $\Pi_L$, we use Equation (3) to express the innovations as
$$\mu_L = \sum_{d=0}^{(L-1)\wedge D} \Big[(\Gamma_{d,L} - \bar{\Gamma}_{d,L})\, z_{L-d} + \bar{\Gamma}_{d,L}\big(z_{L-d} - \hat{z}_{L-d/L-1}\big)\Big] + w_L, \quad L \ge 2, \qquad (8)$$
and, taking into account that
$$E\Big[(\Gamma_{d,L} - \bar{\Gamma}_{d,L})\, z_{L-d}\big(z_{L-d'} - \hat{z}_{L-d'/L-1}\big)^T\bar{\Gamma}_{d',L}\Big] = 0, \quad \forall d, d',$$
we have
$$\Pi_L = \sum_{d,d'=0}^{(L-1)\wedge D}\Big[\Sigma^{\gamma}_{d,d',L}\circ\Sigma^z_{L-d,L-d'} + \bar{\Gamma}_{d,L}\, P^z_{L-d,L-d'/L-1}\,\bar{\Gamma}_{d',L}\Big] + Q_L, \quad L \ge 2; \qquad (9)$$
$$\Pi_1 = \big(\Sigma^{\gamma}_{0,0,1} + \bar{\gamma}_{0,1}\bar{\gamma}_{0,1}^T\big)\circ\Sigma^z_{1,1} + Q_1,$$
where the matrices $\Sigma^{\gamma}_{d,d',L}$ and $\Sigma^z_{L-d,L-d'}$ are given in Equations (4) and (5), respectively, and
$P^z_{L-d,L-d'/L-1} \equiv E\big[\big(z_{L-d} - \hat{z}_{L-d/L-1}\big)\big(z_{L-d'} - \hat{z}_{L-d'/L-1}\big)^T\big]$.
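Equation (9) translates directly into code once the matrices appearing in it are available; the following fragment is a sketch under the assumption that those quantities have already been computed and stored per pair of delays $(d, d')$.

```python
import numpy as np

def innovation_covariance(Sigma_gamma, Sigma_z, Gamma_bar, P_z, Q_L):
    """Innovation covariance Pi_L of Equation (9).

    For d, d' = 0, ..., (L-1) ^ D:
      Sigma_gamma[d][dp]: matrix Sigma^gamma_{d,d',L} of Equation (4)
      Sigma_z[d][dp]:     observation covariance Sigma^z_{L-d,L-d'} of Equation (5)
      Gamma_bar[d]:       mean matrix of Gamma_{d,L}
      P_z[d][dp]:         error covariance P^z_{L-d,L-d'/L-1}
      Q_L:                transmission noise covariance (ndarray)
    """
    Pi_L = Q_L.copy()
    for d in range(len(Gamma_bar)):
        for dp in range(len(Gamma_bar)):
            Pi_L = Pi_L + Sigma_gamma[d][dp] * Sigma_z[d][dp]        # Hadamard product
            Pi_L = Pi_L + Gamma_bar[d] @ P_z[d][dp] @ Gamma_bar[dp]
    return Pi_L
```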

4. Least-Squares Linear Signal Estimators


In this section, we derive recursive algorithms for the LS linear estimators, $\hat{x}_{k/L}$, $k \ge 1$, of the
signal $x_k$ based on the observations $\{y_1, \dots, y_L\}$ given in Equation (3); namely, a prediction and filtering
algorithm ($L \le k$) and a fixed-point smoothing algorithm ($k$ fixed and $L > k$) are designed.

4.1. Signal Predictor and Filter $\hat{x}_{k/L}$, $L \le k$

From the general expression given in Equation (6), to obtain the LS linear estimators
$\hat{x}_{k/L} = \sum_{h=1}^{L} E[x_k\mu_h^T]\Pi_h^{-1}\mu_h$, $L \le k$, it is necessary to calculate the coefficients
$$X_{k,h} \equiv E[x_k\mu_h^T] = E[x_k y_h^T] - E[x_k\hat{y}_{h/h-1}^T], \quad h \le k.$$

• On the one hand, using Equation (3) together with the independence hypotheses and
assumption (A1) on the signal covariance factorization, it is clear that
$$E[x_k y_h^T] = \sum_{d=0}^{(h-1)\wedge D} E\big[x_k(H_{h-d}x_{h-d} + v_{h-d})^T\big]\bar{\Gamma}_{d,h} = A_k\sum_{d=0}^{(h-1)\wedge D} B_{h-d}^T\bar{H}_{h-d}^T\bar{\Gamma}_{d,h}, \quad h \le k.$$

• On the other hand, since $\hat{y}_{h/h-1} = \sum_{d=0}^{(h-1)\wedge D}\bar{\Gamma}_{d,h}\,\hat{z}_{h-d/h-1}$, $h \ge 2$, and taking into account
that, from Equation (6), $\hat{z}_{h-d/h-1} = \sum_{j=1}^{h-1} Z_{h-d,j}\Pi_j^{-1}\mu_j$ with $Z_{h-d,j} = E[z_{h-d}\mu_j^T]$, the following
identity holds:
$$E\big[x_k\hat{y}_{h/h-1}^T\big] = \sum_{d=0}^{(h-1)\wedge D}\Big(\sum_{j=1}^{h-1} X_{k,j}\Pi_j^{-1} Z_{h-d,j}^T\Big)\bar{\Gamma}_{d,h}.$$

Therefore, it is easy to check that $X_{k,h} = A_k E_h$, $1 \le h \le k$, where $E_h$ is a function satisfying
$$E_h = \sum_{d=0}^{(h-1)\wedge D} B_{h-d}^T\bar{H}_{h-d}^T\bar{\Gamma}_{d,h} - \sum_{d=0}^{(h-1)\wedge D}\Big(\sum_{j=1}^{h-1} E_j\Pi_j^{-1} Z_{h-d,j}^T\Big)\bar{\Gamma}_{d,h}, \quad h \ge 2; \qquad E_1 = B_1^T\bar{H}_1^T\bar{\Gamma}_{0,1}. \qquad (10)$$

Hence, it is clear that the signal prediction and filtering estimators can be expressed as
$$\hat{x}_{k/L} = A_k e_L, \quad L \le k, \ k \ge 1, \qquad (11)$$
where the vectors $e_L$ are defined by $e_L = \sum_{h=1}^{L} E_h\Pi_h^{-1}\mu_h$, for $L \ge 1$, with $e_0 = 0$, thus obeying the
recursive relation
$$e_L = e_{L-1} + E_L\Pi_L^{-1}\mu_L, \quad L \ge 1; \qquad e_0 = 0. \qquad (12)$$

Matrices $E_L$. Taking into account the above relation, an expression for $E_L$, $L \ge 1$, must be derived.
For this purpose, Equation (10) is rewritten for $h = L$ as
$$E_L = \sum_{d=0}^{(L-1)\wedge D} B_{L-d}^T\bar{H}_{L-d}^T\bar{\Gamma}_{d,L} - \sum_{d=0}^{(L-1)\wedge D}\Big(\sum_{j=1}^{L-1} E_j\Pi_j^{-1} Z_{L-d,j}^T\Big)\bar{\Gamma}_{d,L}, \quad L \ge 2,$$
and we examine the cases $d = 0$ and $d \ge 1$ separately:

− For $d = 0$, using Equation (3), it holds that $Z_{L,j} = \bar{H}_L X_{L,j} = \bar{H}_L A_L E_j$, for $j < L$, and, by denoting
$\Sigma^e_L = \sum_{h=1}^{L} E_h\Pi_h^{-1}E_h^T$, $L \ge 1$, we obtain that
$$\Big(\sum_{j=1}^{L-1} E_j\Pi_j^{-1} Z_{L,j}^T\Big)\bar{\Gamma}_{0,L} = \Big(\sum_{j=1}^{L-1} E_j\Pi_j^{-1} E_j^T\Big)A_L^T\bar{H}_L^T\bar{\Gamma}_{0,L} = \Sigma^e_{L-1} A_L^T\bar{H}_L^T\bar{\Gamma}_{0,L}.$$

− For $d \ge 1$, since $Z_{L-d,j} = \bar{H}_{L-d} A_{L-d} E_j$, for $j < L-d$, we can see that
$$\Big(\sum_{j=1}^{L-1} E_j\Pi_j^{-1} Z_{L-d,j}^T\Big)\bar{\Gamma}_{d,L} = \Sigma^e_{L-d-1} A_{L-d}^T\bar{H}_{L-d}^T\bar{\Gamma}_{d,L} + \Big(\sum_{j=L-d}^{L-1} E_j\Pi_j^{-1} Z_{L-d,j}^T\Big)\bar{\Gamma}_{d,L}.$$

By substituting the above sums in $E_L$, it is deduced that
$$E_L = \sum_{d=0}^{(L-1)\wedge D}\big(B_{L-d} - A_{L-d}\Sigma^e_{L-d-1}\big)^T\bar{H}_{L-d}^T\bar{\Gamma}_{d,L} - \sum_{d=1}^{(L-1)\wedge D}\Big(\sum_{j=L-d}^{L-1} E_j\Pi_j^{-1} Z_{L-d,j}^T\Big)\bar{\Gamma}_{d,L}, \quad L \ge 2; \qquad E_1 = B_1^T\bar{H}_1^T\bar{\Gamma}_{0,1}, \qquad (13)$$

where the matrices $Z_{L-d,j}$, $j \ge L-d$, will be obtained in the next subsection, as they correspond to the
observation smoothing estimators, and the matrices $\Sigma^e_L$ are recursively obtained by
$$\Sigma^e_L = \Sigma^e_{L-1} + E_L\Pi_L^{-1}E_L^T, \quad L \ge 1; \qquad \Sigma^e_0 = 0. \qquad (14)$$
Finally, from assumption (A1) and since the estimation errors are orthogonal to the estimators, we
have that the error covariance matrices, $P^x_{k/L} \equiv E[(x_k - \hat{x}_{k/L})(x_k - \hat{x}_{k/L})^T]$, are given by
$$P^x_{k/L} = A_k\big(B_k - A_k\Sigma^e_L\big)^T, \quad L \le k, \ k \ge 1. \qquad (15)$$
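Equations (11), (12), (14) and (15) form the core of the prediction and filtering recursion; one step of it is sketched below, assuming $E_L$, $\Pi_L$ and the innovation $\mu_L$ have already been obtained from Equations (13), (9) and (7). The function name and argument layout are illustrative.

```python
import numpy as np

def prediction_filtering_step(e_prev, Sigma_e_prev, E_L, Pi_L, mu_L, A_k, B_k):
    """One step of the signal estimation recursion (Equations (11), (12), (14), (15))."""
    gain = E_L @ np.linalg.inv(Pi_L)
    e_L = e_prev + gain @ mu_L                       # Equation (12)
    Sigma_e_L = Sigma_e_prev + gain @ E_L.T          # Equation (14)
    x_hat = A_k @ e_L                                # Equation (11): filter for k = L
    P_x = A_k @ (B_k - A_k @ Sigma_e_L).T            # Equation (15)
    return e_L, Sigma_e_L, x_hat, P_x
```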

4.2. Estimators of the Observations $\hat{z}_{k/L}$, $k \ge 1$

As already indicated, Equation (7) requires obtaining the observation estimators
(predictor, filter and smoother). From the general expression for the estimators given in Equation (6),
we have that $\hat{z}_{k/L} = \sum_{j=1}^{L} Z_{k,j}\Pi_j^{-1}\mu_j$, with $Z_{k,j} = E[z_k\mu_j^T]$. Next, recursive expressions will be derived
separately for $L < k$ (predictors) and $L \ge k$ (filter and smoothers).

Observation Prediction Estimators. Since $Z_{k,j} = \bar{H}_k A_k E_j$, for $j < k$, we have that the prediction estimators
of the observations are given by
$$\hat{z}_{k/L} = \bar{H}_k A_k e_L, \quad L < k, \ k \ge 1. \qquad (16)$$

Observation Filtering and Fixed-Point Smoothing Estimators. Clearly, the filter and fixed-point smoothers
of the observations are obtained by the following recursive expression:
$$\hat{z}_{k/L} = \hat{z}_{k/L-1} + Z_{k,L}\Pi_L^{-1}\mu_L, \quad L \ge k, \ k \ge 1, \qquad (17)$$
with initial condition given by the one-stage predictor $\hat{z}_{k/k-1} = \bar{H}_k A_k e_{k-1}$.

Hence, the matrices $Z_{k,L}$ must be calculated for $L \ge k$. Since the innovation is a white process,
$E[\hat{z}_{k/L-1}\mu_L^T] = 0$ and hence $Z_{k,L} = E[z_k\mu_L^T] = E[(z_k - \hat{z}_{k/L-1})\mu_L^T]$. Now, using Equation (8) for $\mu_L$
and taking into account that $E[(z_k - \hat{z}_{k/L-1})z_{L-d}^T(\Gamma_{d,L} - \bar{\Gamma}_{d,L})] = 0$, $\forall d$, we have
$$Z_{k,L} = \sum_{d=0}^{(L-1)\wedge D} P^z_{k,L-d/L-1}\,\bar{\Gamma}_{d,L}, \quad L \ge k, \qquad (18)$$
where $P^z_{k,L-d/L-1} \equiv E\big[(z_k - \hat{z}_{k/L-1})(z_{L-d} - \hat{z}_{L-d/L-1})^T\big]$.

Consequently, the error covariance matrices $P^z_{k,h/m} = E[(z_k - \hat{z}_{k/m})(z_h - \hat{z}_{h/m})^T]$ must be derived,
for which the following two cases are analyzed separately:

∗ For $m \ge k\wedge h$, using Equation (17) and taking into account that $Z_{k,m} = E[(z_k - \hat{z}_{k/m-1})\mu_m^T]$, it is
easy to see that
$$P^z_{k,h/m} = P^z_{k,h/m-1} - Z_{k,m}\Pi_m^{-1}Z_{h,m}^T, \quad m \ge k\wedge h. \qquad (19)$$
∗ For $m < h \le k$, using Equation (16), assumption (A1) and the orthogonality between the
estimation errors and the estimators, we obtain
$$P^z_{k,h/m} = \bar{H}_k A_k\big(B_h - A_h\Sigma^e_m\big)^T\bar{H}_h^T, \quad m < h \le k. \qquad (20)$$
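Equations (17) and (18) reduce to a short update once the error covariance matrices of the previous stage are available; the fragment below is a sketch under that assumption, with illustrative names.

```python
import numpy as np

def observation_update(z_hat_prev, P_z_row, Gamma_bar, Pi_L, mu_L):
    """Compute Z_{k,L} by Equation (18) and update z_hat_{k/L} by Equation (17).

    P_z_row[d] = P^z_{k, L-d / L-1} for d = 0, ..., (L-1) ^ D.
    """
    Z_kL = sum(P_z_row[d] @ Gamma_bar[d] for d in range(len(Gamma_bar)))   # Eq. (18)
    z_hat = z_hat_prev + Z_kL @ np.linalg.inv(Pi_L) @ mu_L                 # Eq. (17)
    return Z_kL, z_hat
```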
4.3. Signal Fixed-Point Smoother $\hat{x}_{k/L}$, $L > k$

Starting with the filter, $\hat{x}_{k/k}$, and the filtering error covariance matrix, $P^x_{k/k}$, it is clear that
the signal fixed-point smoother $\hat{x}_{k/L}$, $L > k$, and the corresponding error covariance matrix,
$P^x_{k/L} \equiv E[(x_k - \hat{x}_{k/L})(x_k - \hat{x}_{k/L})^T]$, are obtained by
$$\hat{x}_{k/L} = \hat{x}_{k/L-1} + X_{k,L}\Pi_L^{-1}\mu_L, \quad L > k, \ k \ge 1,$$
$$P^x_{k/L} = P^x_{k/L-1} - X_{k,L}\Pi_L^{-1}X_{k,L}^T, \quad L > k, \ k \ge 1. \qquad (21)$$

An analogous reasoning to that of Equation (18) leads to the following expression for the
matrices $X_{k,L}$:
$$X_{k,L} = \sum_{d=0}^{(L-1)\wedge D} P^{xz}_{k,L-d/L-1}\,\bar{\Gamma}_{d,L}, \quad L > k, \qquad (22)$$
where $P^{xz}_{k,L-d/L-1} \equiv E\big[(x_k - \hat{x}_{k/L-1})(z_{L-d} - \hat{z}_{L-d/L-1})^T\big]$.

The derivation of the error cross-covariance matrices $P^{xz}_{k,h/m} = E[(x_k - \hat{x}_{k/m})(z_h - \hat{z}_{h/m})^T]$ is
similar to that of the matrices $P^z_{k,h/m}$, and they are given by
$$P^{xz}_{k,h/m} = P^{xz}_{k,h/m-1} - X_{k,m}\Pi_m^{-1}Z_{h,m}^T, \quad m \ge k\wedge h,$$
$$P^{xz}_{k,h/m} = A_k\big(B_h - A_h\Sigma^e_m\big)^T\bar{H}_h^T, \quad m < h \le k, \qquad (23)$$
$$P^{xz}_{k,h/m} = \big(B_k - A_k\Sigma^e_m\big)A_h^T\bar{H}_h^T, \quad m < k \le h,$$
where $X_{k,m} = A_k E_m$, for $m \le k$, and $Z_{h,m} = \bar{H}_h A_h E_m$, for $m < h$; otherwise, these matrices are
given by Equations (22) and (18), respectively.
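One fixed-point smoothing step, Equations (21) and (22), then reads as in the sketch below, assuming the cross-covariance matrices of Equation (23) are available (names are illustrative).

```python
import numpy as np

def fixed_point_smoothing_step(x_hat_prev, P_x_prev, P_xz_row, Gamma_bar, Pi_L, mu_L):
    """Update the smoother x_hat_{k/L} and its error covariance P^x_{k/L}.

    P_xz_row[d] = P^{xz}_{k, L-d / L-1} for d = 0, ..., (L-1) ^ D.
    """
    X_kL = sum(P_xz_row[d] @ Gamma_bar[d] for d in range(len(Gamma_bar)))  # Eq. (22)
    gain = X_kL @ np.linalg.inv(Pi_L)
    x_hat = x_hat_prev + gain @ mu_L                                       # Eq. (21)
    P_x = P_x_prev - gain @ X_kL.T                                         # Eq. (21)
    return x_hat, P_x
```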

4.4. Recursive Algorithms: Computational Procedure


The computational procedure of the proposed prediction, filtering and fixed-point smoothing
algorithms can be summarized as follows:

(1) Covariance Matrices. The covariance matrices $\Sigma^{\gamma}_{d,d',k}$ and $\Sigma^z_{k,s}$ are obtained by Equations (4) and (5),
respectively; these matrices only depend on the system model information, so they can be
calculated offline before the observed data packets are available.

(2) LS Linear Prediction and Filtering Recursive Algorithm. At the sampling time $k$, once the $(k-1)$-th
iteration is finished and $E_{k-1}$, $\Pi_{k-1}$, $\Sigma^e_{k-1}$, $\mu_{k-1}$ and $e_{k-1}$ are known, the proposed prediction and
filtering algorithm operates as follows:

(2a) Compute $Z_{k,k-1} = \bar{H}_k A_k E_{k-1}$ and $Z_{k-d,k-1}$, for $d = 1, \dots, (k-1)\wedge D$, by Equation (18).
From these matrices, we obtain the observation estimators $\hat{z}_{k-d/k-1}$, for $d = 0, 1, \dots, (k-1)\wedge D$,
by Equations (16) and (17), and the observation error covariance matrices $P^z_{k-d,k-d'/k-1}$,
for $d, d' = 0, 1, \dots, (k-1)\wedge D$, by Equations (19) and (20).

(2b) Compute $E_k$ by Equation (13) and use $P^z_{k-d,k-d'/k-1}$ to obtain the innovation covariance
matrix $\Pi_k$ by Equation (9). Then, $\Sigma^e_k$ is obtained by Equation (14) and, from it, the
prediction and filtering error covariance matrices, $P^x_{k/k-s}$ and $P^x_{k/k}$, respectively, are
obtained by Equation (15).

(2c) When the new measurement $y_k$ is available, the innovation $\mu_k$ is computed by Equation (7)
using $\hat{z}_{k-d/k-1}$, for $d = 0, 1, \dots, (k-1)\wedge D$, and, from the innovation, $e_k$ is obtained
by Equation (12). Then, the predictors, $\hat{x}_{k/k-s}$, and the filter, $\hat{x}_{k/k}$, are computed by
Equation (11).

(3) LS Linear Fixed-Point Smoothing Recursive Algorithm. Once the filter, $\hat{x}_{k/k}$, and the filtering error
covariance matrix, $P^x_{k/k}$, are available, the proposed smoothing estimators and the corresponding
error covariance matrix are obtained as follows: for $L = k+1, k+2, \dots$, compute the error
cross-covariance matrices $P^{xz}_{k,L-d/L-1}$, for $d = 0, 1, \dots, (k-1)\wedge D$, using Equation (23) and,
from these matrices, $X_{k,L}$ is derived by Equation (22); then, the smoothers $\hat{x}_{k/L}$ and their error
covariance matrices $P^x_{k/L}$ are obtained from Equation (21).

5. Computer Simulation Results


In this section, a numerical example is presented with the following purposes: (a) to show that,
although the current covariance-based estimation algorithms do not require the evolution model
generating the signal process, they are also applicable to the conventional formulation using the
state-space model, even in the presence of state-dependent multiplicative noise; (b) to illustrate some
kinds of uncertainties which can be covered by the current model with random measurement matrices;
and (c) to analyze how the estimation accuracy of the proposed algorithms is influenced by the sensor
uncertainties and the random delays in the transmissions.

Signal Evolution Model with State-Dependent Multiplicative Noise. Consider a scalar signal $\{x_k;\ k \ge 0\}$
whose evolution is given by the following model with multiplicative and additive noises:
$$x_k = (0.9 + 0.01 e_{k-1})\,x_{k-1} + \xi_{k-1}, \quad k \ge 1,$$
where $x_0$ is a standard Gaussian variable and $\{e_k;\ k \ge 0\}$, $\{\xi_k;\ k \ge 0\}$ are zero-mean Gaussian white
processes with unit variance. Assuming that $x_0$, $\{e_k;\ k \ge 0\}$ and $\{\xi_k;\ k \ge 0\}$ are mutually independent,
the signal covariance function is given by $E[x_k x_s] = 0.9^{k-s} D_s$, $s \le k$, where $D_s = E[x_s^2]$ is recursively
obtained by $D_s = 0.8101 D_{s-1} + 1$, for $s \ge 1$, with $D_0 = 1$; hence, the signal process satisfies assumption
(A1) taking, for example, $A_k = 0.9^k$ and $B_s = 0.9^{-s} D_s$.
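For reference, the factorization used in this example can be generated as in the following sketch; the experiments in the paper were implemented in MATLAB, and the NumPy fragment here is only an illustration of the same recursion.

```python
import numpy as np

def covariance_factors(n_steps):
    """A_k = 0.9**k and B_s = 0.9**(-s) * D_s, with D_s = 0.8101*D_{s-1} + 1, D_0 = 1."""
    D = np.empty(n_steps + 1)
    D[0] = 1.0
    for s in range(1, n_steps + 1):
        D[s] = 0.8101 * D[s - 1] + 1.0
    k = np.arange(n_steps + 1)
    A = 0.9 ** k
    B = 0.9 ** (-k) * D
    return A, B, D

# Check of the factorization E[x_k x_s] = A_k * B_s = 0.9**(k-s) * D_s for s <= k:
A, B, D = covariance_factors(10)
assert np.isclose(A[7] * B[3], 0.9 ** 4 * D[3])
```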

Sensor Measured Outputs. As in [22], let us consider scalar measurements provided by four sensors with
different types of uncertainty: continuous and discrete gain degradation in sensors 1 and 2, respectively,
missing measurements in sensor 3, and both missing measurements and multiplicative noise in sensor 4.
These uncertainties can be described in a unified way by the current model with random measurement
matrices; specifically, the measured outputs are described according to Equation (1):
$$z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \quad k \ge 1, \ i = 1, 2, 3, 4,$$
with the following characteristics:

• For $i = 1, 2, 3$, $H_k^{(i)} = C^{(i)}\theta_k^{(i)}$ and $H_k^{(4)} = \big(C^{(4)} + C^{(4')}\rho_k\big)\theta_k^{(4)}$, where $C^{(1)} = C^{(3)} = 0.8$,
$C^{(2)} = C^{(4)} = 0.75$, $C^{(4')} = 0.95$, and $\{\rho_k;\ k \ge 1\}$ is a zero-mean Gaussian white process with
unit variance. The sequences $\{\rho_k;\ k \ge 1\}$ and $\{\theta_k^{(i)};\ k \ge 1\}$, $i = 1, 2, 3, 4$, are mutually
independent, and $\{\theta_k^{(i)};\ k \ge 1\}$, $i = 1, 2, 3, 4$, are white processes with the following time-invariant
probability distributions:
– $\theta_k^{(1)}$ is uniformly distributed over the interval $[0.1, 0.9]$;
– $P[\theta_k^{(2)} = 0] = 0.3$, $P[\theta_k^{(2)} = 0.5] = 0.3$, $P[\theta_k^{(2)} = 1] = 0.4$;
– For $i = 3, 4$, $\theta_k^{(i)}$ are Bernoulli random variables with the same time-invariant probabilities in
both sensors, $P[\theta_k^{(i)} = 1] = p$.
• The additive noises are defined by $v_k^{(i)} = c_i\eta_k^v$, $i = 1, 2, 3, 4$, where $c_1 = 0.5$, $c_2 = c_3 = 0.75$, $c_4 = 1$,
and $\{\eta_k^v;\ k \ge 1\}$ is a zero-mean Gaussian white process with unit variance. Clearly, the additive
noises $\{v_k^{(i)};\ k \ge 1\}$, $i = 1, 2, 3, 4$, are correlated at any time, with $R_k^{(ij)} = c_i c_j$, $i, j = 1, 2, 3, 4$.
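A possible way to simulate the four sensor outputs described above is sketched below; the seed and helper names are arbitrary and only illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sensor_measurements(x_k, p):
    """Draw z_k^(1), ..., z_k^(4) for a scalar signal value x_k at one time k."""
    theta1 = rng.uniform(0.1, 0.9)                              # sensor 1: continuous gain degradation
    theta2 = rng.choice([0.0, 0.5, 1.0], p=[0.3, 0.3, 0.4])     # sensor 2: discrete gain degradation
    theta3 = rng.binomial(1, p)                                 # sensor 3: missing measurements
    theta4 = rng.binomial(1, p)                                 # sensor 4: missing meas. + mult. noise
    rho = rng.standard_normal()
    H = np.array([0.8 * theta1,
                  0.75 * theta2,
                  0.8 * theta3,
                  (0.75 + 0.95 * rho) * theta4])                # random measurement "matrices"
    c = np.array([0.5, 0.75, 0.75, 1.0])
    v = c * rng.standard_normal()                               # correlated noises: R^(ij) = c_i c_j
    return H * x_k + v
```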

Observations with Bounded Random Delays and Packet Dropouts. Next, according to the theoretical
observation model, let us suppose that bounded random measurement delays and packet dropouts,
with different delay probabilities, exist in the data transmission. Specifically, assuming that the largest
delay is $D = 3$, let us consider the observation Equation (2):
$$y_k^{(i)} = \sum_{d=0}^{(k-1)\wedge 3}\gamma_{d,k}^{(i)}\, z_{k-d}^{(i)} + w_k^{(i)}, \quad k \ge 1,$$
where, for $i = 1, 2, 3, 4$ and $d = 0, 1, 2, 3$, $\{\gamma_{d,k}^{(i)};\ k > d\}$ are sequences of independent Bernoulli variables
with the same time-invariant delay probabilities for the four sensors, $\bar{\gamma}_{d,k}^{(i)} = \gamma_d$, where $\sum_{d=0}^{3}\gamma_d \le 1$.
Hence, the packet dropout probability is $1 - \sum_{d=0}^{3}\gamma_d$. The transmission noise is defined by $w_k^{(i)} = c_i\eta_k^w$,
$i = 1, 2, 3, 4$, where $\{\eta_k^w;\ k \ge 1\}$ is a zero-mean Gaussian white process with unit variance.

Finally, in order to apply the proposed algorithms, and according to (A6), we will assume that all
of the processes involved in the observation equations are mutually independent.

To illustrate the feasibility and effectiveness of the proposed algorithms, they were implemented
in MATLAB (R2011b 7.13.0.564, The Mathworks, Natick, MA, USA) and one hundred iterations of the
prediction, filtering and fixed-point smoothing algorithms have been performed. In order to analyze
the effect of the network-induced uncertainties on the estimation accuracy, different values of the
probabilities p of the Bernoulli random variables that model the uncertainties of the third and fourth
sensors, and several values of the delay probabilities γd , d = 0, 1, 2, 3, have been considered.

Performance of the Prediction, Filtering and Fixed-Point Smoothing Estimators. Considering the values
$p = 0.5$, $\gamma_0 = 0.6$ (packet arrival probability) and $\gamma_d = (1-\gamma_0)/4$, $d = 1, 2, 3$ (delay probabilities),
Figure 1 displays a simulated trajectory along with the prediction, filtering and smoothing estimations,
showing a satisfactory and efficient tracking performance of the proposed estimators. Figure 2 shows
the error variances of the predictors $\hat{x}_{k/k-2}$ and $\hat{x}_{k/k-1}$, the filter $\hat{x}_{k/k}$ and the smoothers $\hat{x}_{k/k+1}$ and
$\hat{x}_{k/k+2}$. Analogously to what happens for non-delayed observations, the performance of the estimators
zk/k+2 . Analogously to what happens for non-delayed observations, the performance of the estimators
becomes better as more observations are used; that is, the error variances of the smoothing estimators
are less than the filtering ones which, in turn, are less than those of the predictors. Hence, the estimation
accuracy of the smoothers is superior to that of the filter and predictors and improves as the number of
iterations in the fixed-point smoothing algorithm increases. The performance of the proposed filter has
also been evaluated in comparison with the standard Kalman filter; for this purpose, the filtering mean
square error (MSE) at each sampling time was calculated by considering one thousand independent
simulations and one hundred iterations of each filter. The results of this comparison are displayed in
Figure 3, which shows that the proposed filter performs better than the Kalman filter, a fact that was
expected since the latter does not take into account either the uncertainties in the measured outputs or
the delays and losses during transmission.

Influence of the Missing Measurements. To analyze the sensitivity of the estimation performance to the
missing measurements phenomenon in the third and fourth sensors, the error variances
are calculated for different values of the probability $p$. Specifically, considering again $\gamma_0 = 0.6$ and
$\gamma_d = 0.1$, $d = 1, 2, 3$, Figure 4 displays the prediction, filtering and fixed-point smoothing error
variances for the values $p = 0.5$ to $p = 0.9$. This figure shows that, as $p$ increases, the estimation error
variances become smaller and, hence, as expected, the performance of the estimators improves
as the probability of missing measurements, $1 - p$, decreases.

Figure 1. Simulated signal and proposed prediction, filtering and smoothing estimates when $p = 0.5$
and $\gamma_0 = 0.6$, $\gamma_d = 0.1$, $d = 1, 2, 3$.

Figure 2. Prediction, filtering and smoothing error variances when $p = 0.5$ and $\gamma_0 = 0.6$, $\gamma_d = 0.1$,
$d = 1, 2, 3$.

Figure 3. MSE of the Kalman filter and the proposed filter when $p = 0.5$ and $\gamma_0 = 0.6$, $\gamma_d = 0.1$, $d = 1, 2, 3$.

Figure 4. Prediction, filtering and fixed-point smoothing error variances for different values of the
probability $p$, when $\gamma_0 = 0.6$ and $\gamma_d = 0.1$, $d = 1, 2, 3$.

Influence of the Transmission Random Delays and Packet Dropouts. Considering a fixed value of the
probability $p$, namely, $p = 0.5$, in order to show how the estimation accuracy is influenced by
the transmission random delays and packet dropouts, the prediction, filtering and smoothing
error variances are displayed in Figure 5 when $\gamma_0$ is varied from 0.1 to 0.9, considering again
$\gamma_d = (1-\gamma_0)/4$, $d = 1, 2, 3$. Since the behaviour of the error variances is analogous from a certain
iteration on, only the results at the iteration $k = 50$ are shown in Figure 5. From this figure, we see
that the performance of the estimators (predictor, filter and smoother) is indeed influenced by the
transmission delay and packet dropout probabilities and, as expected, it is confirmed that the
error variances become smaller, and hence the performance of the estimators improves, as the packet
arrival probability γ0 increases. Moreover, for the filter and the smoothers, this improvement is more
significant than for the predictors. In addition, as it was deduced from Figure 2, it is observed that, for
all the values of γ0 , the performance of the estimators is better as more observations are used, and this
improvement is more significant as γ0 increases.

Figure 5. Estimation error variances $P^x_{50/48}$, $P^x_{50/49}$, $P^x_{50/50}$, $P^x_{50/51}$ and $P^x_{50/52}$ for different values of the
probability $\gamma_0$, when $p = 0.5$.

Finally, it must be remarked that analogous conclusions are deduced for other values of
the probabilities p and γd , d = 0, 1, 2, 3, and also when such probabilities are different at the
different sensors.

6. Conclusions
This paper makes valuable contributions to the optimal fusion estimation problem in networked
stochastic systems with random parameter matrices, when multi-step delays or even packet dropouts
occur randomly during the data transmission. By an innovation approach, recursive prediction,
filtering and fixed-point smoothing algorithms have been designed, which are easily implementable
and do not require the signal evolution model, but only the mean and covariance functions of the
system processes.
Unlike other estimation algorithms proposed in the literature, where the estimators are restricted
to obey a particular structure, in this paper, recursive optimal estimation algorithms are designed
without requiring a particular structure on the estimators, but just using the LS optimality criterion.

Another advantage is that the current approach does not resort to the augmentation technique and,
consequently, the dimension of the designed estimators is the same as that of the original signal,
thus reducing the computational burden in the processing centre.

Acknowledgments: This research is supported by the “Ministerio de Economía y Competitividad” and “Fondo
Europeo de Desarrollo Regional” FEDER (Grant No. MTM2014-52291-P).
Author Contributions: All of the authors contributed equally to this work. Raquel Caballero-Águila,
Aurora Hermoso-Carazo and Josefa Linares-Pérez provided original ideas for the proposed model and collaborated
in the derivation of the estimation algorithms; they participated equally in the design and analysis of the simulation
results; and the paper was also written and reviewed cooperatively.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Ran, C.; Deng, Z. Self-tuning weighted measurement fusion Kalman filtering algorithm. Comput. Stat.
Data Anal. 2012, 56, 2112–2128.
2. Feng, J.; Zeng, M. Optimal distributed Kalman filtering fusion for a linear dynamic system with
cross-correlated noises. Int. J. Syst. Sci. 2012, 43, 385–398.
3. Yan, L.; Li, X.; Xia, Y.; Fu, M. Optimal sequential and distributed fusion for state estimation in cross-correlated
noise. Automatica 2013, 49, 3607–3612.
4. Ma, J.; Sun, S. Centralized fusion estimators for multisensor systems with random sensor delays, multiple
packet dropouts and uncertain observations. IEEE Sens. J. 2013, 13, 1228–1235.
5. Gao, S.; Chen, P. Suboptimal filtering of networked discrete-time systems with random observation losses.
Math. Probl. Eng. 2014, 2014, 151836.
6. Liu, Y.; He, X.; Wang, Z.; Zhou, D. Optimal filtering for networked systems with stochastic sensor gain
degradation. Automatica 2014, 50, 1521–1525.
7. Chen, B.; Zhang, W.; Yu, L. Distributed fusion estimation with missing measurements, random transmission
delays and packet dropouts. IEEE Trans. Autom. Control 2014, 59, 1961–1967.
8. Li, N.; Sun, S.; Ma, J. Multi-sensor distributed fusion filtering for networked systems with different delay
and loss rates. Digit. Signal Process. 2014, 34, 29–38.
9. Wang, S.; Fang, H.; Tian, X. Recursive estimation for nonlinear stochastic systems with multi-step
transmission delays, multiple packet dropouts and correlated noises. Signal Process. 2015, 115, 164–175.
10. Chen, D.; Yu, Y.; Xu, L.; Liu, X. Kalman filtering for discrete stochastic systems with multiplicative noises
and random two-step sensor delays. Discret. Dyn. Nat. Soc. 2015, 2015, 809734.
11. García-Ligero, M.J.; Hermoso-Carazo, A.; Linares-Pérez, J. Distributed fusion estimation in networked
systems with uncertain observations and markovian random delays. Signal Process. 2015, 106, 114–122.
12. Gao, S.; Chen, P.; Huang, D.; Niu, Q. Stability analysis of multi-sensor Kalman filtering over lossy networks.
Sensors 2016, 16, 566.
13. Lin, H.; Sun, S. State estimation for a class of non-uniform sampling systems with missing measurements.
Sensors 2016, 16, 1155.
14. Hu, J.; Wang, Z.; Chen, D.; Alsaadi, F.E. Estimation, filtering and fusion for networked systems with
network-induced phenomena: New progress and prospects. Inf. Fusion 2016, 31, 65–75.
15. Sun, S.; Lin, H.; Ma, J.; Li, X. Multi-sensor distributed fusion estimation with applications in networked
systems: A review paper. Inf. Fusion 2017, 38, 122–134.
16. Li, W.; Jia, Y.; Du, J. Distributed filtering for discrete-time linear systems with fading measurements and
time-correlated noise. Digit. Signal Process. 2017, 60, 211–219.
17. Luo, Y.; Zhu, Y.; Luo, D.; Zhou, J.; Song, E.; Wang, D. Globally optimal multisensor distributed random
parameter matrices Kalman filtering fusion with applications. Sensors 2008, 8, 8086–8103.
18. Shen, X.J.; Luo, Y.T.; Zhu, Y.M.; Song, E.B. Globally optimal distributed Kalman filtering fusion. Sci. China
Inf. Sci. 2012, 55, 512–529.
19. Hu, J.; Wang, Z.; Gao, H. Recursive filtering with random parameter matrices, multiple fading measurements
and correlated noises. Automatica 2013, 49, 3440–3448.

20. Linares-Pérez, J.; Caballero-Águila, R.; García-Garrido, I. Optimal linear filter design for systems with
correlation in the measurement matrices and noises: Recursive algorithm and applications. Int. J. Syst. Sci.
2014, 45, 1548–1562.
21. Yang, Y.; Liang, Y.; Pan, Q.; Qin, Y.; Yang, F. Distributed fusion estimation with square-root array
implementation for Markovian jump linear systems with random parameter matrices and cross-correlated
noises. Inf. Sci. 2016, 370–371, 446–462.
22. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Networked fusion filtering from outputs with
stochastic uncertainties and correlated random transmission delays. Sensors 2016, 16, 847.
23. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Fusion estimation using measured outputs with
random parameter matrices subject to random delays and packet dropouts. Signal Process. 2016, 127, 12–23.
24. Feng, J.; Zeng, M. Descriptor recursive estimation for multiple sensors with different delay rates. Int. J. Control
2011, 84, 584–596.
25. Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J. Least-squares linear-estimators using
measurements transmitted by different sensors with packet dropouts. Digit. Signal Process. 2012,
22, 1118–1125.
26. Ma, J.; Sun, S. Distributed fusion filter for networked stochastic uncertain systems with transmission delays
and packet dropouts. Signal Process. 2017, 130, 268–278.
27. Wang, S.; Fang, H.; Liu, X. Distributed state estimation for stochastic non-linear systems with random delays
and packet dropouts. IET Control Theory Appl. 2015, 9, 2657–2665.
28. Sun, S. Linear minimum variance estimators for systems with bounded random measurement delays and
packet dropouts. Signal Process. 2009, 89, 1457–1466.
29. Sun, S. Optimal linear filters for discrete-time systems with randomly delayed and lost measurements
with/without time stamps. IEEE Trans. Autom. Control 2013, 58, 1551–1556.
30. Sun, S.; Ma, J. Linear estimation for networked control systems with random transmission delays and packet
dropouts. Inf. Sci. 2014, 269, 349–365.
31. Wang, S.; Fang, H.; Tian, X. Minimum variance estimation for linear uncertain systems with one-step
correlated noises and incomplete measurements. Digit. Signal Process. 2016, 49, 126–136.
32. Tian, T.; Sun, S.; Li, N. Multi-sensor information fusion estimators for stochastic uncertain systems with
correlated noises. Inf. Fusion 2016, 27, 126–137.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
