Article

A Two-Filter Approach for State Estimation Utilizing Quantized Output Data

1 Departamento Electrónica, Universidad Técnica Federico Santa María (UTFSM), Av. España 1680, Valparaíso 2390123, Chile
2 Advanced Center for Electrical and Electronic Engineering, AC3E, Gral. Bari 699, Valparaíso 2390136, Chile
3 Escuela de Ingeniería Eléctrica, Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso 2374631, Chile
4 Department of Mechanical Engineering, Boston University, Boston, MA 02215, USA
* Author to whom correspondence should be addressed.
Sensors 2021, 21(22), 7675; https://doi.org/10.3390/s21227675
Submission received: 27 September 2021 / Revised: 16 November 2021 / Accepted: 16 November 2021 / Published: 18 November 2021
(This article belongs to the Section Sensor Networks)
Figure 1. Diagram of a real dynamic system with quantized data.
Figure 2. State-space model with quantized output.
Figure 3. Representation of the (left) uniform infinite- and (right) finite-level quantizers, defined in terms of the quantized values $\beta_i$ and the intervals $\mathcal{J}_i$.
Figure 4. The shaded area represents the probability of $y_t$ taking a value $\beta_i$, which is equal to the probability of $z_t$ belonging to the set $\mathcal{J}_i$. Here, $\mathbb{P}\{\cdot\}$ denotes probability.
Figure 5. Example 1. Filtering PDFs at some instants of time for $\Delta_q = 7$, $K = 10$, and 100 particles. GT stands for ground truth. QKF, KF, PF, and GSF stand for quantized Kalman filter, Kalman filter, particle filter, and Gaussian sum filter, respectively.
Figure 6. MSE between the true and estimated state for $\Delta_q = \{3, 5, 7\}$. KF, QKF, GSF, and PF stand for the Kalman filter, quantized Kalman filter, Gaussian sum filter, and particle filter, respectively.
Figure 7. Smoothing PDF at time $t = 100$, for $K = \{6, 8, 10\}$, where $K$ increases to the right, and for $\Delta_q = \{3, 5, 7\}$, where $\Delta_q$ increases upwards. GT stands for the ground truth. GSS, KS, QKS, and PS stand for the Gaussian sum smoother, Kalman smoother, quantized Kalman smoother, and particle smoother, respectively.
Figure 8. Two-tank system. $h_1$ and $h_2$ denote the liquid level in Tank 1 and Tank 2, respectively. The liquid flows into Tank 1 at a rate $f_1$ and out of Tank 2 at a rate $f_2$. The quantizer has minimum and maximum values $\beta_1 = 2$ and $\beta_L = 10$.
Figure 9. (Left) The quantized and nonquantized measurements; (right) the MSE between the true and estimated filtered state. KF, QKF, GSF, and PF stand for the Kalman filter, quantized Kalman filter, Gaussian sum filter, and particle filter, respectively.

Abstract

Filtering and smoothing algorithms are key tools to develop decision-making strategies and parameter identification techniques in different areas of research, such as economics, financial data analysis, communications, and control systems. These algorithms are used to obtain an estimation of the system state based on the sequentially available noisy measurements of the system output. In a real-world system, the noisy measurements can suffer a significant loss of information due to (among others): (i) a reduced resolution of cost-effective sensors typically used in practice or (ii) a digitalization process for storing or transmitting the measurements through a communication channel using a minimum amount of resources. Thus, obtaining suitable state estimates in this context is essential. In this paper, Gaussian sum filtering and smoothing algorithms are developed in order to deal with noisy measurements that are also subject to quantization. In this approach, the probability mass function of the quantized output given the state is characterized by an integral equation. This integral was approximated by using a Gauss–Legendre quadrature; hence, a model with a Gaussian mixture structure was obtained. This model was used to develop filtering and smoothing algorithms. The benefits of this proposal, in terms of accuracy of the estimation and computational cost, are illustrated via numerical simulations.

1. Introduction

It is well known that discrete-time dynamical systems can be described as first-order difference equations relating internal variables called states [1]. State estimation is a scientific discipline that studies methodologies and algorithms for estimating the state of dynamical systems from input–output measurements [2,3]. There are a variety of applications that use state estimation, such as control [4,5,6,7], parameter identification [8,9,10], power systems [11,12], fault detection [13,14,15,16,17], prognosis [18,19], cyber–physical systems [20], hydrologic and geophysical data assimilation [21,22], maritime tracking [23], consensus-based state estimation using wireless sensor networks [24,25,26], navigation systems [27], and transportation and highway traffic management [28,29,30], to mention a few. Depending on the measurements that are used, two classes of state estimation algorithms can be distinguished: filtering and smoothing. Filtering algorithms estimate the current state using measurements up to the current instant, and smoothing algorithms estimate the state at some time in the past using measurements up to the current instant [23,31].
In general, the experimental noisy data can suffer a significant loss of information introduced by low-resolution, cost-effective sensors [32] and by the digitalization process [33]. Typically, the digitalization process encompasses a step known as quantization. Quantization is a nonlinear map that partitions the whole signal space and represents all of the values in each subspace by a single one [32]. In spite of the loss of information it introduces, quantization arises in a number of applications. This occurs due to fundamental limitations on measuring equipment and bandwidth resources [34], digital and analog converters [35], and experimental designs where it is necessary to quantize the data in order to store them or to minimize communication resource utilization [36]. In particular, estimation problems utilizing quantized measurements arise in networked control over limited-/finite-capacity communication channels, where, usually, encoder–decoder state estimation schemes are used [37,38,39,40]. In addition, in order to deal with uncertain dynamic systems, robust estimation algorithms have also been developed; see, e.g., [37,38,41].
Currently, state estimation from quantized data has gained significant attention in a growing number of applications such as fault detection [42,43], networked control [42,44,45], and system identification [35,46,47,48]. For instance, in [49,50], Kalman-based state estimation algorithms were developed using multiple sensors for distributed systems. In [24], a Kalman smoothing algorithm was developed for any-time minimum-mean-squared error optimal-consensus-based state estimation using wireless sensor networks. In [27], an online smoothing algorithm was developed to estimate the positional and orientation parameters of integrated navigation systems utilizing a low-cost microelectromechanical system inertial sensor in near-real time.
In Figure 1, a usual representation of a process is shown, which is defined by the interconnection of three blocks: (1) an actuator, (2) a process, and (3) a sensor. This representation includes a link that can be a communication channel and a base station. The actuator input usually comes from a control system, and the output of the process is measured with noise by a sensor. The sensor introduces quantization to the noisy measurements. This representation has been used for state estimation and control in a microgrid with multiple distributed energy resources, where a dequantizer is used to reconstruct the received signal and then perform the standard Kalman filter [51]. In [42], a fault isolation filter for a discrete-time networked control system with multiple faults was developed using the Kalman filter, where the sensor measurements were transmitted only when the output signal was greater than a threshold. The authors in [52] dealt with a similar structure to the one in Figure 1 to estimate the vehicle sideslip angle of an in-vehicle networked system with sensor failure, dynamic quantization, and data dropouts. For more examples, see, e.g., [15,53].
For linear systems and Gaussian noises (without quantization), the optimal estimator of the system state is the celebrated Kalman filter [1] and smoother [22]. However, in the most general case, i.e., nonlinear systems and non-Gaussian noises, it is not possible to obtain an optimal estimator because the computation of some integrals in the filtering and smoothing equations is difficult or the integrals are just intractable. As mentioned above, the quantization process is a nonlinear map that results in a significant loss of information on the system dynamics, which produces a biased state estimation and an incorrect characterization of the filtering and smoothing probability density functions (PDFs). In this context, several suboptimal filtering and smoothing algorithms for state-space systems with quantized data have been developed, for instance, standard- [49,54], unscented- [55], and extended- [36] Kalman filters for quantized data, in which some structural elements of the state-space models and the quantizer are exploited. Sequential Monte Carlo methods have also been used for filtering and smoothing with quantized data, where complex integrals are approximated by a set of weighted samples called particles [31], which define (approximately) a desired PDF. However, the most common difficulty in these methods is dealing with the quantizer model. Some approaches have been proposed for this purpose, such as a gradient-based approximation of the quantizer [56] or modeling the quantizer as uniformly distributed additive noise [32,57]. In [58], an approximation of the integral in the non-Gaussian probability mass function (PMF) of the quantized output given the state was proposed by using Gaussian quadrature rules in order to deal with binary data. This approximation naturally yields an explicit Gaussian mixture model (GMM) form for the PMF of the quantized output.
In this paper, the approximation of this integral was extended by considering a more general (finite- and infinite-level) quantizer, and it was used to develop Gaussian sum filtering and smoothing algorithms.

Main Contributions

The main contributions of this work are:
  • Developing an explicit model (of the GMM form) for the PMF of the quantized output considering a finite- and infinite-level quantizer, to solve in closed-form filtering and smoothing recursions in a Bayesian framework;
  • Designing Gaussian sum filtering and smoothing algorithms to deal with quantized data, providing closed expressions for the state estimates and for the filtering and smoothing PDFs.
The filtering algorithm for quantized data presented in this paper includes, as a particular case, the filtering algorithm presented in [58], where only the case of binary data was considered. Additionally, the smoothing algorithm presented here, based on the approximation of the PMF of the quantized output given the state, is completely novel. The remainder of the paper is organized as follows: In Section 2, the problem of interest is defined. In Section 3, the characterization and approximation of the PMF of the quantized data given the state are presented, and using this explicit model, the Gaussian sum filtering and smoothing algorithms are developed. In Section 4, some examples are presented to show the benefits of this proposal in terms of accuracy and computational cost. Finally, in Section 5, conclusions are presented.

2. Statement of the Problem

2.1. System Model

Consider the state-space model for a discrete-time linear-time-invariant system with quantized output (see Figure 2):
(1) $x_{t+1} = A x_t + B u_t + w_t$,
(2) $z_t = C x_t + D u_t + v_t$,
(3) $y_t = q(z_t)$,
where $x_t \in \mathbb{R}^n$ is the state vector, $z_t \in \mathbb{R}$ is the nonquantized output, $y_t \in \mathbb{R}$ is the quantized output, and $u_t \in \mathbb{R}^m$ is the input of the system. The matrices are $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{1 \times n}$, and $D \in \mathbb{R}^{1 \times m}$. The nonlinear map $q(\cdot)$ is the quantizer. The state noise $w_t \in \mathbb{R}^n$ and the output noise $v_t \in \mathbb{R}$ are zero-mean white Gaussian noises with covariance matrices $Q$ and $R$, respectively. The system in (1) and (2) can be described using the state transition and conditional nonquantized output probability distributions:
(4) $p(x_1) = \mathcal{N}_{x_1}(\mu_1, P_1)$,
(5) $p(x_{t+1} \mid x_t) = \mathcal{N}_{x_{t+1}}(A x_t + B u_t, Q)$,
(6) $p(z_t \mid x_t) = \mathcal{N}_{z_t}(C x_t + D u_t, R)$,
where $\mathcal{N}_x(\mu, P)$ represents a PDF corresponding to a Gaussian distribution with mean $\mu$ and covariance matrix $P$ for the variable $x$. The initial condition $x_1$, the model noise $w_t$, and the measurement noise $v_t$ are statistically independent random variables.
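To make the setup concrete, the following Python sketch simulates a scalar instance of the model (1)–(3) with a uniform (infinite-level) quantizer. All numerical values (A, B, C, D, Q, R, and the quantization step) are illustrative assumptions, not parameters taken from the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar model parameters (assumed values, not from the paper).
A, B, C, D = 0.9, 1.0, 1.0, 0.0
Q, R = 0.1, 0.5            # state and output noise variances
delta_q = 2.0              # quantization step of the uniform quantizer

def quantize(z, step):
    """Uniform infinite-level quantizer: maps z to the midpoint of the
    interval [q_{i-1}, q_i) it falls into."""
    return step * np.floor(z / step) + step / 2

N = 5
x = 0.0
u = np.ones(N)
y = np.empty(N)
for t in range(N):
    z = C * x + D * u[t] + rng.normal(0.0, np.sqrt(R))  # noisy output (2)
    y[t] = quantize(z, delta_q)                         # quantized output (3)
    x = A * x + B * u[t] + rng.normal(0.0, np.sqrt(Q))  # state update (1)
```

By construction, every recorded measurement lies on the quantizer grid (a midpoint of one of the intervals), which is the loss of information the estimators in this paper must handle.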

2.2. Quantizer Model

Let the quantizer $q(\cdot): \mathbb{R} \to \mathcal{V}$ be a map from the real line, defined by the set of intervals $\{\mathcal{J}_i \subset \mathbb{R} : i \in \mathcal{I}\}$, to the finite or countably infinite set $\mathcal{V} = \{\beta_i \in \mathbb{R} : i \in \mathcal{I}\}$ [59], where $\mathcal{I}$ is a set of indices defined by the quantizer type. In this work, two quantizers were considered. The first is an infinite-level quantizer, in which the output of the quantizer has infinitely (countably) many quantization levels, corresponding to the index set:
(7) $\mathcal{I} = \{\dots, 1, 2, \dots, L, \dots\}$.
The definition of the infinite-level quantizer is as follows (see Figure 3 (left)):
(8) $q(z_t) = \beta_i \quad \text{if } z_t \in \mathcal{J}_i, \; i \in \mathcal{I}$,
where the sets $\mathcal{J}_i = \{z_t : q_{i-1} \le z_t < q_i\}$ are disjoint intervals and each $\beta_i$ is the value that the quantizer takes in the region $\mathcal{J}_i$. The second is a finite-level quantizer, in which the output of the quantizer is limited to minimum and maximum values (saturated quantizer), corresponding to the index set:
(9) $\mathcal{I} = \{1, 2, \dots, L-1, L\}$.
The definition of the finite-level quantizer (see Figure 3 (right)) is similar to (8), where $\mathcal{I}$ is defined in (9), $\mathcal{J}_1 = \{z_t : z_t < q_1\}$, $\mathcal{J}_L = \{z_t : q_{L-1} \le z_t\}$, and $\mathcal{J}_i = \{z_t : q_{i-1} \le z_t < q_i\}$ with $i = 2, \dots, L-1$.
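A minimal sketch of the finite-level (saturated) quantizer just defined, mapping each region $\mathcal{J}_i$ to its level $\beta_i$; the thresholds $q_i$ and levels $\beta_i$ below are made-up illustrative values:

```python
import numpy as np

def finite_level_quantizer(z, thresholds, values):
    """Saturated quantizer of Section 2.2: thresholds q_1 < ... < q_{L-1}
    split the real line into L regions J_1, ..., J_L, with region J_i
    mapped to the level beta_i."""
    thresholds = np.asarray(thresholds)
    values = np.asarray(values)
    assert len(values) == len(thresholds) + 1
    # Index i such that q_{i-1} <= z < q_i, with q_0 = -inf and q_L = +inf;
    # side="right" makes a point exactly at a threshold fall in the upper region.
    i = np.searchsorted(thresholds, z, side="right")
    return values[i]

q = [3.0, 5.0, 7.0]            # q_1, q_2, q_3 (illustrative)
beta = [2.0, 4.0, 6.0, 8.0]    # beta_1, ..., beta_L with L = 4 (illustrative)
```

For example, any $z_t < q_1$ saturates to $\beta_1$ and any $z_t \ge q_{L-1}$ saturates to $\beta_L$, matching the sets $\mathcal{J}_1$ and $\mathcal{J}_L$ above.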

2.3. Problem Definition

The problem of interest can be defined as follows: Given the available data $u_{1:N} = \{u_1, u_2, \dots, u_N\}$ and $y_{1:N} = \{y_1, y_2, \dots, y_N\}$, where $N$ is the data length, obtain the filtering and smoothing PDFs of the state given the quantized measurements, $p(x_t \mid y_{1:t})$ and $p(x_t \mid y_{1:N})$, respectively; the state estimators:
(10) $\hat{x}_{t|t} = \mathbb{E}\{x_t \mid y_{1:t}\}$,
(11) $\hat{x}_{t|N} = \mathbb{E}\{x_t \mid y_{1:N}\}$,
and the corresponding covariance matrices of the estimation error:
(12) $\Sigma_{t|t} = \mathbb{E}\{(x_t - \hat{x}_{t|t})(x_t - \hat{x}_{t|t})^T \mid y_{1:t}\}$,
(13) $\Sigma_{t|N} = \mathbb{E}\{(x_t - \hat{x}_{t|N})(x_t - \hat{x}_{t|N})^T \mid y_{1:N}\}$,
where $t \le N$ and $\mathbb{E}\{x \mid y\}$ denotes the conditional expectation of $x$ given $y$.

3. Gaussian Sum Filtering and Smoothing for Quantized Data

Here, the Gaussian sum filtering and smoothing algorithms for quantized output data are explained in detail.

3.1. Gaussian Mixture Models

Gaussian mixture models refer to a convex combination of Gaussian densities corresponding to a random variable $\zeta \in \mathbb{R}^n$. The PDF can then be written as [60]: $p(\zeta) = \sum_{i=1}^{K} \varphi_i\, \mathcal{N}_{\zeta}(\upsilon_i, \Gamma_i)$, subject to $\varphi_i > 0$ and $\sum_{i=1}^{K} \varphi_i = 1$, where $\varphi_i$ is the $i$th mixing weight, $\upsilon_i \in \mathbb{R}^n$ is the $i$th mean, and $\Gamma_i \in \mathbb{R}^{n \times n}$ is the $i$th covariance matrix. GMMs are used in a variety of applications, such as approximating non-Gaussian densities [61,62,63] and filtering and smoothing [64,65], to mention a few.
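As a small illustration of this definition, a scalar GMM density can be evaluated directly from its mixing weights, means, and variances; the helper below is a sketch, not the paper's implementation:

```python
import math

def gmm_pdf(x, weights, means, variances):
    """Evaluate a scalar Gaussian mixture p(x) = sum_i phi_i N(x; v_i, G_i).
    The mixing weights must be positive and sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-12
    return sum(
        w * math.exp(-0.5 * (x - m) ** 2 / s) / math.sqrt(2 * math.pi * s)
        for w, m, s in zip(weights, means, variances)
    )

# A symmetric two-component mixture evaluated at the origin:
value = gmm_pdf(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

With a single component, `gmm_pdf` reduces to an ordinary Gaussian density, which is a convenient sanity check.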

3.2. General Bayesian Framework

The well-known equations for Bayesian filtering (see, e.g., [31]) are given by:
(14) $p(x_t \mid y_{1:t}) = \dfrac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}{p(y_t \mid y_{1:t-1})}$,
(15) $p(x_{t+1} \mid y_{1:t}) = \displaystyle\int p(x_{t+1} \mid x_t)\, p(x_t \mid y_{1:t})\, dx_t$,
where (14) and (15) are the measurement-update and time-update equations, respectively. The PDF $p(x_{t+1} \mid x_t)$ is obtained from the model in (5), and $p(y_t \mid y_{1:t-1})$ is a normalization constant. On the other hand, the well-known formula for Bayesian smoothing (see, e.g., [31]) is defined by:
(16) $p(x_t \mid y_{1:N}) = p(x_t \mid y_{1:t}) \displaystyle\int \dfrac{p(x_{t+1} \mid y_{1:N})\, p(x_{t+1} \mid x_t)}{p(x_{t+1} \mid y_{1:t})}\, dx_{t+1}$.
However, the PDF in (16) is difficult to compute because of the division by a non-Gaussian density. This difficulty was overcome in [64], where the smoothing equations were split into a two-filter formulation: (i) the first part, called the backward filter, is defined by the following recursion:
(17) $p(y_{t+1:N} \mid x_t) = \displaystyle\int p(y_{t+1:N} \mid x_{t+1})\, p(x_{t+1} \mid x_t)\, dx_{t+1}$,
(18) $p(y_{t:N} \mid x_t) = p(y_t \mid x_t)\, p(y_{t+1:N} \mid x_t)$,
where (17) and (18) are the backward-prediction and backward-measurement-update equations, respectively; and (ii) the second part is given by:
(19) $p(x_t \mid y_{1:N}) = \dfrac{p(x_t \mid y_{1:t-1})\, p(y_{t:N} \mid x_t)}{p(y_{t:N} \mid y_{1:t-1})}$,
where $p(x_t \mid y_{1:t-1})$ is the time-update equation from the filtering step, $p(y_{t:N} \mid y_{1:t-1})$ is a normalization constant, and $p(y_{t:N} \mid x_t)$ is obtained using the backward recursion given in (17) and (18).
Due to the non-Gaussianity of $p(y_t \mid x_t)$, the integrals in the filtering and backward-filtering recursions in (15) and (17), respectively, are difficult to compute or intractable. Widely used methods to deal with these integrals are Monte-Carlo-based algorithms, such as particle filtering and smoothing; see, e.g., [31,66]. These methods represent the posterior distributions $p(x_t \mid y_{1:t})$ and $p(x_t \mid y_{1:N})$ by a set of weighted random samples. However, in general, they exhibit a growing computational complexity as the model dimension increases. Here, an alternative method to compute the PMF $p(y_t \mid x_t)$ using the Gauss–Legendre quadrature is proposed. This procedure results in a GMM form. Hence, the integrals in the filtering and backward-filtering recursions in (15) and (17), respectively, can be computed in closed form.
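The two-filter identity $p(x_t \mid y_{1:N}) \propto p(x_t \mid y_{1:t-1})\, p(y_{t:N} \mid x_t)$ can be checked numerically on a toy discrete-state example, where all densities are finite vectors and the exact smoothing posterior is available by enumeration. The two-state chain below is an illustrative assumption, unrelated to the paper's linear-Gaussian setting:

```python
import itertools
import numpy as np

# Toy two-state hidden Markov chain (illustrative values, not from the paper).
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # P[i, j] = p(x_{t+1}=j | x_t=i)
E = np.array([[0.7, 0.3], [0.4, 0.6]])   # E[i, y] = p(y_t=y | x_t=i)
pi = np.array([0.5, 0.5])                # prior p(x_1)
y = [0, 1, 1, 0]
N = len(y)

# Forward recursion (14)-(15): predictive densities p(x_t | y_{1:t-1}).
pred = np.zeros((N, 2))
cur = pi.copy()
for t in range(N):
    pred[t] = cur
    f = cur * E[:, y[t]]          # measurement update
    cur = (f / f.sum()) @ P       # time update

# Backward filter (17)-(18): likelihoods beta[t][i] = p(y_{t:N} | x_t=i).
beta = np.zeros((N, 2))
beta[N - 1] = E[:, y[N - 1]]
for t in range(N - 2, -1, -1):
    beta[t] = E[:, y[t]] * (P @ beta[t + 1])

# Combination step: product of forward prediction and backward likelihood,
# normalized over the states.
sm = pred * beta
sm /= sm.sum(axis=1, keepdims=True)

# Brute-force smoothing posterior by enumerating all state paths.
post = np.zeros((N, 2))
for xs in itertools.product([0, 1], repeat=N):
    p = pi[xs[0]] * np.prod([P[xs[t - 1], xs[t]] for t in range(1, N)])
    p *= np.prod([E[xs[t], y[t]] for t in range(N)])
    for t in range(N):
        post[t, xs[t]] += p
post /= post.sum(axis=1, keepdims=True)
```

The two-filter result `sm` coincides with the enumerated posterior `post`, mirroring the combination of the forward time update with the backward recursion described above.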
Remark 1.
Notice that since $y_t$ is a discrete random variable, the measurement-update equation in (14) and the backward-measurement-update equation in (18) comprise PDFs and a PMF. Hence, generalized probability density functions are used here; see, e.g., [67].

3.3. Computing an Explicit Model of $p(y_t \mid x_t)$

From (8), it is observed that the output $z_t \in \mathcal{J}_i$ is mapped to a single value $\beta_i$. Then, the probability that $y_t$ takes the value $\beta_i$ is the same as the probability of $z_t$ belonging to the set $\mathcal{J}_i$, as shown in Figure 4. Notice that the quantizer regions $\mathcal{J}_i$ include finite and semi-infinite intervals.
In the following theorem, the characterization of $p(y_t \mid x_t)$ is formalized via the computation of the probability $\mathbb{P}\{z_t \in \mathcal{J}_i\}$ shown in Figure 4, through the integral definition of probabilities [68]. Then, the approximation of this integral by the Gauss–Legendre quadrature rule is presented (see, e.g., [69]).
Theorem 1.
Consider the system (1)–(3) and the infinite- and finite-level quantizers defined in (8). Then, the PMF of the discrete random variable $y_t$ given the state $x_t$ is given by:
(20) $p(y_t \mid x_t) = \displaystyle\int_{a_t}^{b_t} \mathcal{N}_{v_t}(0, R)\, dv_t$,
where $a_t$ and $b_t$ are functions of the boundary values of each region of the quantizers and are defined in Table 1. In addition, the integral in (20) can be approximated using the Gauss–Legendre quadrature rule, yielding:
(21) $p(y_t \mid x_t) \approx \displaystyle\sum_{\tau=1}^{K} \varsigma_t^{\tau}\, \mathcal{N}_{\eta_t^{\tau}}(C x_t + D u_t + \mu_t^{\tau}, R)$,
where $K$ is the number of points of the Gauss–Legendre quadrature rule (defined by the user); $\varsigma_t^{\tau}$, $\eta_t^{\tau}$, and $\mu_t^{\tau}$ are defined in Table 1; and $\omega_{\tau}$ and $\psi_{\tau}$ are the weights and points defined by the quadrature rule, given in, e.g., [69].
Proof. 
From the infinite- and finite-level quantizers, it is observed that the random variable $y_t$ can only take the values $\beta_i$ with $i$ in the index sets given in (7) and (9). Then, the probability of $y_t \in \{\dots, \beta_1, \beta_2, \dots, \beta_L, \dots\}$ or $y_t \in \{\beta_1, \beta_2, \dots, \beta_{L-1}, \beta_L\}$ is the same as the probability that the random variable $z_t$ belongs to the corresponding set $\mathcal{J}_i$. This probability can be obtained from the distribution function as follows:
(22) $\mathbb{P}\{y_t = \beta_i \mid x_t\} = \mathbb{P}\{z_t \in \mathcal{J}_i \mid x_t\}$,
where $\mathbb{P}\{\cdot\}$ denotes probability. Considering the infinite-level quantizer and the output equation in (2), the following expressions are obtained for $y_t = \beta_i$ with $i = \dots, 1, 2, \dots, L, \dots$:
(23) $\mathbb{P}\{y_t = \beta_i \mid x_t\} = \mathbb{P}\{q_{i-1} \le z_t < q_i \mid x_t\} = \mathbb{P}\{a_t \le v_t < b_t \mid x_t\}$,
where $a_t = q_{i-1} - C x_t - D u_t$ and $b_t = q_i - C x_t - D u_t$. Additionally, for the finite-level quantizer, (23) holds for $y_t = \beta_i$ with $i = 2, \dots, L-1$, and for $y_t = \beta_1$ and $y_t = \beta_L$, the following holds:
(24) $\mathbb{P}\{v_t < b_t \mid x_t\} \quad \text{if } y_t = \beta_1$,
(25) $\mathbb{P}\{v_t \ge a_t \mid x_t\} \quad \text{if } y_t = \beta_L$,
where $b_t = q_1 - C x_t - D u_t$ and $a_t = q_{L-1} - C x_t - D u_t$. Then, using the fact that $p(v_t) = \mathcal{N}_{v_t}(0, R)$ and using (23)–(25), the integral in (20) is obtained, where the integral limits are given in Table 1.
On the other hand, the Gauss–Legendre quadrature for approximating a definite integral over the interval $[-1, 1]$ is given by:
(26) $\displaystyle\int_{-1}^{1} f(r)\, dr \approx \sum_{\tau=1}^{K} \omega_{\tau}\, f(\psi_{\tau})$,
where $\omega_{\tau}$ and $\psi_{\tau}$ are the quadrature weights and the roots of the order-$K$ Legendre polynomial, respectively; see, e.g., [69]. Notice that the integral over $[a_t, b_t]$ can be mapped onto the interval $[-1, 1]$ using:
(27) $\displaystyle\int_{a_t}^{b_t} f(r)\, dr = \frac{b_t - a_t}{2} \int_{-1}^{1} f\!\left(\frac{b_t - a_t}{2}\, r + \frac{b_t + a_t}{2}\right) dr$,
and using the definition in (26), this integral is approximated by:
(28) $\displaystyle\int_{a_t}^{b_t} f(r)\, dr \approx \frac{b_t - a_t}{2} \sum_{\tau=1}^{K} \omega_{\tau}\, f\!\left(\frac{b_t - a_t}{2}\, \psi_{\tau} + \frac{b_t + a_t}{2}\right)$.
Defining $a_t = q_{i-1} - C x_t - D u_t$, $b_t = q_i - C x_t - D u_t$, and $f(v_t) = \mathcal{N}_{v_t}(0, R)$ with $i = \dots, 1, 2, \dots, L, \dots$, the approximation of $p(y_t \mid x_t)$ given in (21) for the infinite-level quantizer is derived.
For the finite-level quantizer, (28) holds for $y_t = \beta_i$ with $i = 2, \dots, L-1$. Then, the approximation of $p(y_t \mid x_t)$ for $y_t = \beta_1$ and $y_t = \beta_L$ is defined. First, it is observed that the integral over the semi-infinite interval $[a_t, \infty)$ can be mapped onto the interval $(0, 1]$ using $r = a_t + (1 - \xi)/\xi$, so that:
(29) $\displaystyle\int_{a_t}^{\infty} f(r)\, dr = \int_{0}^{1} f\!\left(a_t + \frac{1 - \xi}{\xi}\right) \frac{d\xi}{\xi^2}$.
Then, using an appropriate change of variable, it can be mapped onto the interval $(-1, 1]$, yielding:
(30) $\displaystyle\int_{a_t}^{\infty} f(r)\, dr = \int_{-1}^{1} f\!\left(a_t + \frac{1 - s}{1 + s}\right) \frac{2\, ds}{(1 + s)^2}$.
Using the Gauss–Legendre approximation given in (26), the approximation of $p(y_t \mid x_t)$ for $y_t = \beta_L$ is obtained as follows:
(31) $\displaystyle\int_{a_t}^{\infty} f(r)\, dr \approx \sum_{\tau=1}^{K} \omega_{\tau}\, f\!\left(a_t + \frac{1 - \psi_{\tau}}{1 + \psi_{\tau}}\right) \frac{2}{(1 + \psi_{\tau})^2}$.
Defining $a_t = q_{L-1} - C x_t - D u_t$, $b_t = \infty$, and $f(v_t) = \mathcal{N}_{v_t}(0, R)$, (21) is obtained. A similar procedure for the integral over the semi-infinite interval $(-\infty, b_t]$ can be applied using $r = b_t - (1 - \xi)/\xi$ to find the approximation of $p(y_t \mid x_t)$ for $y_t = \beta_1$, as follows:
(32) $\displaystyle\int_{-\infty}^{b_t} f(r)\, dr \approx \sum_{\tau=1}^{K} \omega_{\tau}\, f\!\left(b_t - \frac{1 - \psi_{\tau}}{1 + \psi_{\tau}}\right) \frac{2}{(1 + \psi_{\tau})^2}$.
Defining $a_t = -\infty$, $b_t = q_1 - C x_t - D u_t$, and $f(v_t) = \mathcal{N}_{v_t}(0, R)$, (21) is obtained. This completes the proof. □
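The two quadrature constructions used in the proof can be verified numerically for scalar noise with $R = 1$: the finite-interval probability via the affine map onto $[-1, 1]$, and the tail probability via the substitution $r = a_t + (1 - s)/(1 + s)$. The Python sketch below (an illustration under assumed values, not the paper's code) compares both against the exact Gaussian probabilities obtained from the error function:

```python
import math
import numpy as np

def gauss_density(r, R):
    """Zero-mean Gaussian density N(r; 0, R)."""
    return np.exp(-0.5 * r**2 / R) / math.sqrt(2 * math.pi * R)

def prob_interval(a, b, R, K):
    """P{a <= v < b} via the affine map of [a, b] onto [-1, 1] and the
    K-point Gauss-Legendre rule (26)."""
    psi, omega = np.polynomial.legendre.leggauss(K)
    half, mid = 0.5 * (b - a), 0.5 * (b + a)
    return half * float(np.sum(omega * gauss_density(half * psi + mid, R)))

def prob_tail(a, R, K):
    """P{v >= a} via the substitution r = a + (1 - s)/(1 + s), which maps
    [a, inf) onto (-1, 1] with Jacobian 2/(1 + s)^2."""
    psi, omega = np.polynomial.legendre.leggauss(K)
    r = a + (1.0 - psi) / (1.0 + psi)
    jac = 2.0 / (1.0 + psi) ** 2
    return float(np.sum(omega * gauss_density(r, R) * jac))

# Exact reference probabilities for R = 1 from the error function.
exact_interval = 0.5 * (math.erf(2.0 / math.sqrt(2)) - math.erf(1.0 / math.sqrt(2)))
exact_tail = 0.5 * math.erfc(1.0 / math.sqrt(2))
```

The finite-interval rule converges very quickly since the Gaussian is smooth on $[a_t, b_t]$; the mapped tail integral converges more slowly because of the change of variable, which is consistent with the role of the user-chosen $K$ in (21).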
Remark 2.
Notice that any quadrature rule, such as Newton–Cotes, Gauss–Laguerre, or Gauss–Hermite (see, e.g., [69,70]), used to approximate the integral in (20) yields a weighted sum of Gaussian distributions evaluated at a linear function of the values $q_i$ and the state $x_t$. Furthermore, it is possible to interpret $p(y_t \mid x_t)$ as a weighted sum of Gaussian distributions in the random variable $x_t$. This weighted sum is denoted as the GMM structure. Thus, this structure is considered for developing the Gaussian sum filtering and smoothing algorithms that deal with quantized data.
Remark 3.
Notice that in [70], a suboptimal filtering algorithm called the quadrature Kalman filter was developed, in which a linear regression is used to linearize the nonlinear process and measurement functions via the Gauss–Hermite quadrature rule. This approach is not directly applicable to the problem of interest in this paper, since the quantizer is a nondifferentiable nonlinearity.
On the other hand, the cubature Kalman filter [70] and smoother [71] are approaches that use the spherical–radial cubature rule to approximate the n-dimensional integrals when computing the expected values in (10)–(13) in a nonlinear state-space model. These integral approximations are obtained under the assumption that p ( x t | y 1 : t ) and p ( x t | y 1 : N ) are Gaussian distributed. The difference between these approaches and the proposed method in this paper is that the Gauss quadrature rule was used to approximate the integral in the probabilistic model of p ( y t | x t ) . It is clear that in this paper, the Gaussian assumption in the filtering and smoothing densities was not used. In fact, it is shown that p ( x t | y 1 : t ) and p ( x t | y 1 : N ) are non-Gaussian PDFs.

3.4. Understanding the Gaussian Sum Filtering and Smoothing Algorithms

The general Bayesian framework for filtering leads to the product of $p(y_t \mid x_t)$ and $p(x_t \mid y_{1:t-1})$, as shown in (14). From the definition of $p(y_t \mid x_t)$ in (21), it is observed that $p(x_t \mid y_{1:t})$ results from the product of two GMMs. This, in turn, results in a GMM with more components than the individual factors $p(y_t \mid x_t)$ and $p(x_t \mid y_{1:t-1})$. This implies that the time-update equation $p(x_{t+1} \mid y_{1:t})$ in (15) is also a GMM distribution. A similar situation occurs in the backward-measurement-update equation $p(y_{t:N} \mid x_t)$ in (18), which is the product of the two GMM structures $p(y_t \mid x_t)$ and $p(y_{t+1:N} \mid x_t)$. This yields another GMM structure with more components than the individual factors, which implies that the backward-prediction equation in (17) is also a GMM structure. For clarity of presentation, reducing the product of two GMM structures into a single summation is necessary.
In order to understand the transformation of the product of two summations into another summation, consider the sums $g = \sum_{\tau=1}^{K} g_{\tau}$ and $f = \sum_{\ell=1}^{M} f_{\ell}$. Then, for each two-tuple $(\tau, \ell)$, where $\tau = 1, \dots, K$ and $\ell = 1, \dots, M$, the product $h = f g$ is another summation with $K M$ terms indexed by $k = (\ell - 1) K + \tau$. Then, reordering these terms, the following sum is obtained: $h = \sum_{k=1}^{K M} h_k$, where $h_k = f_{\ell}\, g_{\tau}$.
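This reindexing can be sketched directly in code; the term values below are arbitrary placeholders:

```python
# Flattening the product of two sums g = sum_tau g_tau (K terms) and
# f = sum_ell f_ell (M terms) into one sum h = sum_k h_k with K*M terms,
# using the flattened index k = (ell - 1)*K + tau.
K, M = 3, 4
g = [2.0 * tau for tau in range(1, K + 1)]   # arbitrary example terms
f = [0.5 * ell for ell in range(1, M + 1)]

h = [0.0] * (K * M)
for ell in range(1, M + 1):
    for tau in range(1, K + 1):
        k = (ell - 1) * K + tau              # flattened index, 1 <= k <= K*M
        h[k - 1] = f[ell - 1] * g[tau - 1]

# The flattened sum equals the product of the two original sums.
assert abs(sum(h) - sum(f) * sum(g)) < 1e-12
```

The same bookkeeping is what turns the double summation arising from the product of two GMM structures into a single GMM with $K M$ components.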

3.5. Gaussian Sum Filtering for Quantized Data

Using the approximation of the PMF $p(y_t \mid x_t)$ defined in Theorem 1, the Gaussian sum filtering algorithm can be derived using (14) and (15) as follows:
Theorem 2 (Gaussian sum filter).
Consider the system in (4)–(6) and the approximation of $p(y_t \mid x_t)$ in (21). The filtering equations for state-space systems with quantized output data are defined as follows:
Initialization: For $t = 1$, the predicted distribution is $p(x_1) = \mathcal{N}_{x_1}(\mu_1, P_1)$, where $p(x_1)$ is the prior distribution of the initial state. Then, for $t = 1, \dots, N$, the measurement-update and time-update equations for the Gaussian sum filtering are defined as follows:
Measurement update: The PDF of the current state $x_t$ given the current and previous measurements, that is $y_1, \dots, y_t$, is the GMM given by:
(33) $p(x_t \mid y_{1:t}) = \sum_{k=1}^{M_{t|t}} \gamma_{t|t}^{k}\, \mathcal{N}_{x_t}(\hat{x}_{t|t}^{k}, \Sigma_{t|t}^{k})$,
where $\gamma_{t|t}^{k}$, $\hat{x}_{t|t}^{k}$, and $\Sigma_{t|t}^{k}$ are given in Appendix B, $M_{t|t} = K M_{t|t-1}$, and $M_{t|t-1}$ is the number of Gaussian components in the time-update step. The initial values satisfy $M_{1|0} = 1$, $\gamma_{1|0} = 1$, $\hat{x}_{1|0} = \mu_1$, and $\Sigma_{1|0} = P_1$.
Time update: The PDF of the future state $x_{t+1}$, i.e., the one-step-ahead prediction given the measurements $y_1, \dots, y_t$, is the GMM given by:
(34) $p(x_{t+1} \mid y_{1:t}) = \sum_{k=1}^{M_{t+1|t}} \gamma_{t+1|t}^{k}\, \mathcal{N}_{x_{t+1}}(\hat{x}_{t+1|t}^{k}, \Sigma_{t+1|t}^{k})$,
where $M_{t+1|t} = M_{t|t}$, and $\gamma_{t+1|t}^{k}$, $\hat{x}_{t+1|t}^{k}$, and $\Sigma_{t+1|t}^{k}$ are given in Appendix B.
Proof. 
Consider the recursion in (14) and (15). The required PDF $p(x_{t+1} \mid x_t)$ and PMF $p(y_t \mid x_t)$ are obtained from (5) and (21), respectively. Then, using the measurement-update equation in (14) and Lemma 2 in [65] (p. 86), the following expression is obtained:
(35) $p(x_t \mid y_{1:t}) \propto \sum_{\tau=1}^{K} \sum_{\ell=1}^{M_{t|t-1}} \varsigma_t^{\tau}\, \gamma_{t|t-1}^{\ell}\, \mathcal{N}_{\eta_t^{\tau}}(\kappa_t^{\tau\ell}, V_t^{\tau\ell})\, \mathcal{N}_{x_t}(\hat{x}_{t|t}^{k}, \Sigma_{t|t}^{k})$,
where $\kappa_t^{\tau\ell}$, $V_t^{\tau\ell}$, $\hat{x}_{t|t}^{k}$, and $\Sigma_{t|t}^{k}$ are defined in (A11), (A12), (A7), and (A8), respectively. Notice that $\mathcal{N}_{\eta_t^{\tau}}(\kappa_t^{\tau\ell}, V_t^{\tau\ell})$ is a coefficient since the measurement $y_t$ is available. Then, rewriting the double summation in (35) as a single one with the new index $k = (\ell - 1) K + \tau$, the measurement-update equation in (33) is derived by defining $M_{t|t} = K M_{t|t-1}$ and the weights $\bar{\gamma}_{t|t}^{k}$ as in (A9), and computing the corresponding normalization constant. On the other hand, using the time-update equation in (15) and Lemma 2 in [65] (p. 86), the following equation is obtained:
(36) $p(x_{t+1} \mid y_{1:t}) = \sum_{k=1}^{M_{t|t}} \gamma_{t|t}^{k} \int \mathcal{N}_{x_t}(m_t^{k}, S_t^{k})\, \mathcal{N}_{x_{t+1}}(\hat{x}_{t+1|t}^{k}, \Sigma_{t+1|t}^{k})\, dx_t$,
where $\hat{x}_{t+1|t}^{k}$ and $\Sigma_{t+1|t}^{k}$ are defined in (A14) and (A15), respectively. Then, solving this integral, the time-update equation in (34) is derived by defining $M_{t+1|t} = M_{t|t}$ and the weights $\gamma_{t+1|t}^{k}$ as in (A13), and computing the corresponding normalization constant. This completes the proof. □

3.6. Computing the State Estimator x ^ t | t from a GMM

Provided p ( x t | y 1 : t ) in (33) as a GMM, the state estimator given in (10) and the covariance matrix of the estimation error in (12) can be computed as follows:
(37) x ^ t | t = k = 1 M t | t γ t | t k x ^ t | t k , (38) Σ t | t = k = 1 M t | t γ t | t k Σ t | t k + ( x ^ t | t k x ^ t | t ) ( x ^ t | t k x ^ t | t ) T .
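Since (37) and (38) are simply the first two moments of a Gaussian mixture, they reduce to a short weighted-sum computation. The following is a minimal one-dimensional sketch in Python (the paper's experiments use MATLAB); the function name gmm_moments is illustrative only:

```python
def gmm_moments(weights, means, variances):
    """First two moments of a scalar Gaussian mixture, i.e., Equations (37)
    and (38) specialized to one dimension (normalized weights assumed)."""
    # Mixture mean: weighted average of the component means, Equation (37).
    mean = sum(g * m for g, m in zip(weights, means))
    # Mixture variance: within-component variance plus the spread of the
    # component means around the mixture mean, Equation (38).
    var = sum(g * (s + (m - mean) ** 2)
              for g, m, s in zip(weights, means, variances))
    return mean, var
```

For instance, an equal-weight mixture of N(0, 1) and N(2, 1) has mean 1 and variance 2, since the between-component term contributes 1 on top of the common unit variance.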

3.7. Backward Filtering for Quantized Data

Using the approximation of the PMF p ( y t | x t ) defined in Theorem 1, the backward filter recursion can be derived as follows:
Theorem 3 (Backward filtering).
Consider the system in (4)–(6) and the approximation of p ( y t | x t ) in (21). Then, the backward filter for state-space systems with quantized output data is:
Initialization: For t = N , the backward measurement update is given by:
p ( y N | x N ) = k = 1 S N | N ϵ N | N k λ N | N k exp 1 2 x N T F N | N k x N 2 G N | N k T x N + H N | N k ,
where S N | N = K , and: 
ϵ N | N k = ς N k , F N | N k = C T R 1 C , λ N | N k = det 2 π R 1 / 2 G N | N k T = θ N k T R 1 C , θ N k = η N k D u N μ N k , H N | N k = θ N k T R 1 θ N k ,
where ς N k , η N k , and μ N k are defined in Table 1. Then, for  t = N 1 , , 1 , the backward prediction and the backward-measurement-update equations are defined as follows:
Backward predictions: The backward-prediction equation is given by:
p ( y t + 1 : N | x t ) = k = 1 S t | t + 1 ϵ t | t + 1 k λ t | t + 1 k exp 1 2 x t T F t | t + 1 k x t 2 G t | t + 1 k T x t + H t | t + 1 k ,
where S t | t + 1 = S t + 1 | t + 1 , S t + 1 | t + 1 is the number of components in the backward-measurement-update step, and ϵ t | t + 1 k , λ t | t + 1 k , F t | t + 1 k , G t | t + 1 k T , H t | t + 1 k are given in Appendix C.
Backward-measurement update: The distribution of y t : N | x t evaluated at y t , , y N is:
p ( y t : N | x t ) = k = 1 S t | t ϵ t | t k λ t | t k exp 1 2 x t T F t | t k x t 2 G t | t k T x t + H t | t k ,
where S t | t = K S t | t + 1 , S t | t + 1 is the number of components in the backward-prediction step, and  ϵ t | t k , λ t | t k , F t | t k , G t | t k T , H t | t k are given in Appendix C.
Proof. 
Consider the recursion in (17) and (18). The required PDF p ( x t + 1 | x t ) and PMF p ( y t | x t ) are given by Equations (5) and (21), respectively. The proof is carried out by induction in reverse time. First, it is verified that it holds for t = N 1 , then it is assumed to be true for t = s + 1 , and finally, it is verified that it holds for t = s . Notice that the recursion starts in t = N with p ( y N : N | x N ) = p ( y N | x N ) , which is defined in (39). From (17), at time t = N 1 , the backward-prediction step is defined as:
p ( y N : N | x N 1 ) = p ( y N : N | x N ) p ( x N | x N 1 ) d x N ,
where p ( y N : N | x N ) is given by (39) and p ( x N | x N 1 ) is defined by the system model in (5). From the definition given in (43), the equation below is obtained:
p ( y N : N | x N 1 ) = k = 1 S N | N ϵ N | N k λ N | N k det 2 π Q exp 1 2 x N T P ¯ N k x N 2 V ¯ N k T x N + S ¯ N k d x N ,
where P ¯ N k = F N | N k + Q 1 , V ¯ N k = G N | N k + J N 1 , S ¯ N k = H N | N k + L N 1 , J N 1 T = ( A x N 1 + B u N 1 ) T Q 1 , and L N 1 = ( A x N 1 + B u N 1 ) T Q 1 ( A x N 1 + B u N 1 ) . Then, completing the square and solving the integral in (44), it is obtained:
p ( y N : N | x N 1 ) = k = 1 S N 1 | N ϵ N 1 | N k λ N 1 | N k × exp 1 2 x N 1 T F N 1 | N k x N 1 2 G N 1 | N k T x N 1 + H N 1 | N k ,
where S N 1 | N = S N | N and the remaining terms in (45) are defined in (A22)–(A26), but evaluated at t = N 1 . The backward-measurement-update step in (18) is as follows:
p ( y N 1 : N | x N 1 ) = p ( y N 1 | x N 1 ) p ( y N : N | x N 1 ) .
Thus, using (45) in the definition given in (46), it is obtained:
p ( y N 1 : N | x N 1 ) = τ = 1 K = 1 S N 1 | N ς N 1 τ ϵ N 1 | N λ N 1 | N det 2 π R × exp 1 2 x N 1 T F ˜ N 1 x N 1 2 G ˜ N 1 τ T x N 1 + H ˜ N 1 τ × exp 1 2 x N 1 T F N 1 | N x N 1 2 G N 1 | N T x N 1 + H N 1 | N ,
where F ˜ N 1 = C T R 1 C , G ˜ N 1 τ T = θ N 1 τ T R 1 C , and H ˜ N 1 τ = θ N 1 τ T R 1 θ N 1 τ with θ N 1 τ = η N 1 τ D u N 1 μ N 1 τ . Finally, rewriting the double summation in the last equation into a single one with a new index k = ( 1 ) K + τ results in:
p ( y N 1 : N | x N 1 ) = k = 1 S N 1 | N 1 ϵ N 1 | N 1 k λ N 1 | N 1 k × exp 1 2 x N 1 T F N 1 | N 1 k x N 1 2 G N 1 | N 1 k T x N 1 + H N 1 | N 1 k ,
where S N − 1 | N − 1 = K S N − 1 | N and the remaining terms in (48) are the same as the ones shown in (A16)–(A21), but evaluated at t = N − 1 . Thus, it is concluded that the result holds for t = N − 1 . A similar procedure shows that both the backward-prediction and backward-measurement-update steps in the backward filter yield the same expressions in (A22)–(A26) and (A16)–(A21), but evaluated at t = s . Thus, it is concluded that Theorem 3 holds for all t.    □
From Theorems 2 and 3, it is clear that the number of elements in the mixture grows exponentially with every iteration, making the algorithm computationally intractable after a few iterations. In addition, it would be necessary to save and manage a large amount of information in every iteration of the Gaussian sum filtering and in the backward recursion. Therefore, an algorithm that reduces the number of GMM components should be implemented in every iteration of these two algorithms in order to keep the number of components bounded. Different methods have been proposed to perform this kind of procedure, termed Gaussian sum reduction, such as pruning, joining, and integral-squared-error-based methods; see [70]. In this work, the Kullback–Leibler approach for Gaussian sum reduction proposed in [64,72] was used. The idea behind the Gaussian sum reduction is to transform the GMM φ i , υ i , Γ i i = 1 J into a GMM φ i , υ i , Γ i i = 1 S , where 1 ≤ S ≤ J . In [64], it was suggested to use a measure of dissimilarity between two components and to merge the pair of components that minimizes this measure. Then, based on this idea, in [72], the Kullback–Leibler information number was used as the measure of dissimilarity. The author in [72] provided an algorithm to merge two components so that the merged component preserves the first- and second-order moments of the original two components, which is given by:
φ i j , υ i j , Γ i j = M φ i , υ i , Γ i , φ j , υ j , Γ j ,
where M · , · is a merging function such that:
(50) φ i j = φ i + φ j , (51) υ i j = φ ¯ i υ i + φ ¯ j υ j , (52) Γ i j = φ ¯ i Γ i + φ ¯ j Γ j + φ ¯ i φ ¯ j υ i υ j υ i υ j T ,
where φ ¯ i = φ i / ( φ i + φ j ) and φ ¯ j = φ j / ( φ i + φ j ) . The merging function applied to two components i and j ( i j ) minimizes the dissimilarity measure D ( i , j ) , defined as:
D ( i , j ) = 1 2 [ φ i j log det Γ i j − φ i log det Γ i − φ j log det Γ j ] .
The function D ( i , j ) satisfies D ( i , j ) = D ( j , i ) and D ( i , i ) = 0 . This implies that the total number of combinations to merge is reduced to 0.5 J ( J 1 ) . The authors in [73] used Runnalls’ algorithm in a Bayesian filtering environment, and they modified it to include a user-defined threshold for the number of components after reduction and a user-defined threshold ε that satisfies D ( i , j ) < ε . In Algorithm 1, the steps for implementing the Gaussian sum filtering are summarized.
Algorithm 1 Gaussian sum filter algorithm for quantized output data
  1 Input: The PDF of the initial state p ( x 1 ) , e.g.,  M 1 | 0 = 1 , γ 1 | 0 = 1 , x ^ 1 | 0 = μ 1 ,
      Σ 1 | 0 = P 1 . The points of the Gauss–Legendre quadrature ω τ , ψ τ τ = 1 K .
(Lines 2–19 of the algorithm are rendered as an image in the original publication.)
20 Output: The state estimation in (10), the covariance matrix of the estimation error
   in (12), the filtering PDFs p ( x t | y 1 : t ) , the predictive PDFs p ( x t + 1 | y 1 : t ) , and the set
    ς t τ , η t τ , μ t τ , for  t = 1 , , N .
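The merging rule (50)–(52) and the dissimilarity measure (53) admit a compact implementation. The sketch below, in Python rather than the MATLAB used in the paper, treats the scalar case (so det Γ is just the variance) and greedily merges the closest pair until S components remain; the function names are illustrative:

```python
import math

def merge(w_i, m_i, v_i, w_j, m_j, v_j):
    # Moment-preserving merge of two scalar components, Equations (50)-(52).
    w = w_i + w_j
    wi, wj = w_i / w, w_j / w
    m = wi * m_i + wj * m_j
    v = wi * v_i + wj * v_j + wi * wj * (m_i - m_j) ** 2
    return w, m, v

def dissimilarity(w_i, m_i, v_i, w_j, m_j, v_j):
    # Merging cost, Equation (53); for scalars, det(.) is the variance itself.
    _, _, v = merge(w_i, m_i, v_i, w_j, m_j, v_j)
    return 0.5 * ((w_i + w_j) * math.log(v)
                  - w_i * math.log(v_i) - w_j * math.log(v_j))

def reduce_gmm(components, s_red):
    # Greedy reduction: repeatedly merge the pair with the smallest D(i, j).
    # Only the 0.5 * J * (J - 1) unordered pairs are scored, since D is symmetric.
    comps = list(components)  # each entry: (weight, mean, variance)
    while len(comps) > s_red:
        _, a, b = min((dissimilarity(*comps[i], *comps[j]), i, j)
                      for i in range(len(comps))
                      for j in range(i + 1, len(comps)))
        merged = merge(*comps[a], *comps[b])
        comps = [c for k, c in enumerate(comps) if k not in (a, b)] + [merged]
    return comps
```

With two nearly identical components and one distant component, the greedy step pairs the nearly identical ones first; the total weight and the first two moments of the mixture are preserved by every merge.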
The backward recursion in Theorem 3 is used to obtain the smoothing PDF in (19). For this purpose, p ( y t : N | x t ) is converted into a GMM structure of the random variable x t . Then, the Gaussian sum reduction algorithm is applied to the GMM structure of p ( y t : N | x t ) to obtain:
p ( y t : N | x t ) = k = 1 S red δ t | t k N x t z t | t k , U t | t k ,
where S red is the number of Gaussian components kept after the Gaussian reduction procedure and δ t | t k , z t | t k , and  U t | t k are the corresponding weight, mean, and covariance matrix. This reduced GMM structure is used to obtain the smoothing PDFs. However, for the next recursion in the backward filter, reconverting the reduced GMM structure into the backward filter form to obtain the reduced version of the backward-measurement-update step in (42) is required. This conversion process between the backward filter and GMM structure is summarized in Lemma A3 in Appendix A. In Algorithm 2, the steps for implementing the backward filter are summarized.
Algorithm 2 Backward-filtering algorithm for quantized output data
  1  Input: The initial backward measurement p ( y N | x N ) given in (39), e.g.,  S N | N , ϵ N | N k ,
      λ N | N k , F N | N k , G N | N k T , and  H N | N k . The set ς t τ , η t τ , μ t τ for t = 1 , , N computed in Algorithm 1.
(Lines 2–18 of the algorithm are rendered as an image in the original publication.)
19  Output: The backward prediction p ( y t + 1 : N | x t ) and the backward measurement
    update p ( y t : N | x t ) for t = N , , 1 .

3.8. Smoothing Algorithm with Quantized Data

In order to obtain the smoothing PDF in (19), the GMM structure p ( y t : N | x t ) in the random variable x t given in (54) is used. This GMM structure is multiplied by the time-update equation p ( x t | y 1 : t 1 ) of the filtering algorithm given in (34). Then, the smoothing PDF is obtained from the following:
Theorem 4 (Gaussian sum smoothing).
Consider the system in (4)–(6). Given p ( x t | y 1 : t 1 ) , p ( x N | y 1 : N ) and p ( y t : N | x t ) , then the smoothing PDF at time t = N is given by p ( x N | y 1 : N ) , and for t = N 1 , , 1 , the PDF p ( x t | y 1 : N ) is a GMM given by:
p ( x t | y 1 : N ) = k = 1 S t | N ϵ t | N k N x t ( x ^ t | N k , Σ t | N k ) ,
where S t | N = M t | t 1 S r e d , M t | t 1 and S r e d are given in (34) and (54), respectively, and  ϵ t | N k , x ^ t | N k , and  Σ t | N k are given in Appendix D.
Proof. 
Consider the definition of Bayesian smoothing given in (19), the time-update equation p ( x t | y 1 : t 1 ) obtained from (34), and the reduced version of the measurement-update step in the backward filter p ( y t : N | x t ) defined in (54). For  t = N 1 , N 2 , , 1 , it is obtained:
p ( x t | y 1 : N ) τ = 1 M t | t 1 = 1 S red γ t | t 1 τ δ t | t N x t z t | t , U t | t N x t ( x ^ t | t 1 τ , Σ t | t 1 τ ) .
Defining g ( x t ) = N x t ( z t | t , U t | t ) N x t ( x ^ t | t 1 τ , Σ t | t 1 τ ) , the following equation is derived:
g ( x t ) = exp 1 2 ϕ 1 t + ϕ 2 t τ ϕ 3 t τ N x t L t τ 1 ρ t τ , L t τ 1 ( 2 π ) n 2 det L t τ det U t | t det Σ t | t 1 τ ,
where L t τ , ρ t τ , ϕ 1 t , ϕ 2 t τ , and  ϕ 3 t τ are defined in (A32), (A31), (A34), (A35), and (A33), respectively. Next, expressing the double summation in (56) as a single one with a new index k = ( 1 ) M t | t 1 + τ , it is obtained:
p ( x t | y 1 : N ) k = 1 S t | N ϵ ¯ t | N k N x t ( x ^ t | N k , Σ t | N k ) ,
where S t | N = M t | t − 1 S red and  ϵ ¯ t | N k , Σ t | N k , and x ^ t | N k are given in (A30), (A28), and (A29), respectively. Finally, the smoothing PDF in (55) is obtained by computing the normalized weights as in (A27), and this completes the proof.   □
Provided p ( x t | y 1 : N ) in (55) as a GMM, the state estimator given in (11) and the covariance matrix of the estimation error in (13) can be computed as follows:
(59) x ^ t | N = k = 1 S t | N ϵ t | N k x ^ t | N k , (60) Σ t | N = k = 1 S t | N ϵ t | N k Σ t | N k + ( x ^ t | N k x ^ t | N ) ( x ^ t | N k x ^ t | N ) T .
In Algorithm 3, the steps to implement the Gaussian sum smoothing are summarized.
Algorithm 3 Gaussian sum smoothing algorithm for quantized output data
  1  Input: The PDFs p ( x t | y 1 : t 1 ) and p ( x N | y 1 : N ) obtained from Algorithm 1 and
      p ( y t : N | x t ) obtained from Algorithm 2.
  2  Save the PDF p ( x N | y 1 : N )
(Lines 3–12 of the algorithm are rendered as an image in the original publication.)
13  Output: The state estimation in (11), the covariance matrix of the estimation error
     in (13), and the smoothing PDFs p ( x t | y 1 : N ) , for  t = 1 , , N .

3.9. Computing the Smoothing Joint PDF p ( x t + 1 , x t | y 1 : N )

In Theorems 2–4, the filtering and smoothing problems for quantized data are solved. However, many strategies for the identification of state-state space systems [8,9] require the joint PDF p ( x t + 1 , x t | y 1 : N ) , which for the quantized output data case is given by the following:
Theorem 5.
Consider the system in (4)–(6), the PDF p ( x t | y 1 : t ) and the backward prediction equation p ( y t + 1 : N | x t + 1 ) given in Theorems 2 and 3, respectively, and the PDF p ( x t + 1 | x t ) given in Equation (5). Then, for  t = N 1 , , 1 , the joint PDF p ( x t + 1 , x t | y 1 : N ) is the GMM given by:
p ( x t + 1 , x t | y 1 : N ) = k = 1 S t , t + 1 α k N χ t χ ^ t | N k , E t | N k ,
where S t , t + 1 = M t | t S t + 1 | t + 1 , M t | t and S t + 1 | t + 1 are given in (33) and (42), respectively, α k , χ ^ t | N k , and  E t | N k are given in Appendix E, and  χ t is the extended vector given by:
χ t T = [ x t + 1 T x t T ] T .
At time t = N , the joint PDF p ( x N + 1 , x N | y 1 : N ) is given by (61) with:
S N + 1 | N + 1 = 1 , ϵ N + 1 | N + 1 = 1 , λ N + 1 | N + 1 = 1 , F N + 1 | N + 1 = 0 , G N + 1 | N + 1 T = 0 , H N + 1 | N + 1 = 0 .
Proof. 
Using Bayes’ theorem, the joint PDF p ( x t + 1 , x t | y 1 : N ) can be obtained as:
p ( x t + 1 , x t | y 1 : N ) = p ( y t + 1 : N | x t + 1 ) p ( x t + 1 | x t ) p ( x t | y 1 : t ) p ( y t + 1 : N | y 1 : t ) ,
where p ( y t + 1 : N | y 1 : t ) is a normalization constant. The required PDFs p ( x t | y 1 : t ) and p ( x t + 1 | x t ) and backward-prediction equation p ( y t + 1 : N | x t + 1 ) are given in (33), (5) and (42), respectively. Using Lemma A2, p ( x t | y 1 : t ) in (33) can be rewritten as:
p ( x t | y 1 : t ) = τ = 1 M t | t γ t | t τ det 2 π Σ t | t τ exp 1 2 x t T Σ t | t τ 1 x t 2 J t τ T x t + L t τ ,
where J t τ T and L t τ are defined in (A44) and (A45), respectively. Considering p ( y t + 1 : N | x t + 1 ) , p ( x t | y 1 : t ) , p ( x t + 1 | x t ) , and the extended vector of the state given in (62), the argument of these three functions can be written as follows:
A 1 = χ t T 0 0 0 Σ t | t τ 1 χ t 2 0 J t τ T χ t + L t τ ,
A 2 = χ t T F t + 1 | t + 1 0 0 0 χ t 2 G t + 1 | t + 1 T 0 χ t + H t + 1 | t + 1 ,
A 3 = χ t T Q 1 Q 1 A A T Q 1 A T Q 1 A χ t 2 u t T B T Q 1 u t T B T Q 1 A χ t + u t T B T Q 1 B u t ,
Then, adding these expressions, the following equation is derived:
p ( x t + 1 , x t | y 1 : N ) τ = 1 M t | t = 1 S t + 1 | t + 1 γ t | t τ ϵ t + 1 | t + 1 λ t + 1 | t + 1 ( 2 π ) n det Q det Σ t | t τ × exp 1 2 χ t T F t τ χ t 2 G t τ T χ t + H t τ ,
where F t τ , G t τ T , and H t τ are defined in (A41), (A42), and (A43), respectively. Completing the square and expressing the double summation as a single one with a new index k = ( 1 ) M t | t + τ , it is obtained:
p ( x t + 1 , x t | y 1 : N ) k = 1 S t , t + 1 α ¯ k N χ t χ ^ t | N k , E t | N k ,
where S t , t + 1 = M t | t S t + 1 | t + 1 , α ¯ k is defined in (A39), and  χ ^ t | N k and E t | N k are defined in (A37) and (A38), respectively. Finally, the smoothing joint distribution in (64) for  t = N − 1 , … , 1 is derived by computing the normalized weights as in (A36). Notice that, for  t = N , the smoothing joint PDF p ( x N + 1 , x N | y 1 : N ) = p ( x N + 1 | x N ) p ( x N | y 1 : N ) is obtained as (61) considering the expressions given in (63). Then, the proof is finished.   □
In Algorithm 4, the steps to implement the Gaussian sum smoothing for computing the joint PDF p ( x t + 1 , x t | y 1 : N ) are summarized.
Algorithm 4 Gaussian sum smoother to compute p ( x t + 1 , x t | y 1 : N ) for quantized output data
  1  Input: The PDF p ( x t | y 1 : t ) obtained in Algorithm 1, p ( y t + 1 : N | x t + 1 ) computed in
     Algorithm 2, and the PDF p ( x t + 1 | x t ) given in Equation (5).
(Lines 2–10 of the algorithm are rendered as an image in the original publication.)
11  Compute and store p ( x N + 1 , x N | y 1 : N ) according to Theorem 5 with S N + 1 | N + 1 ,
     ϵ N + 1 | N + 1 , λ N + 1 | N + 1 , F N + 1 | N + 1 , G N + 1 | N + 1 T , and H N + 1 | N + 1 given in (63).
12  Output: The smoothing PDFs p ( x t + 1 , x t | y 1 : N ) , for  t = 1 , , N .

4. Numerical Example

In this section, a numerical example is presented to illustrate the benefits of this paper's proposal. Furthermore, a practical simulation example is studied: the problem of estimating the tank liquid level in a two-tank system. Typically, a numerical simulation approach is used for testing new algorithms and designs in applications of state estimation [74], control [75], and system identification [63], among others. This approach is used to evaluate the performance of the estimation algorithms in order to avoid the safety problems that can occur in real-world processes.
To evaluate the performance of the proposed filtering and smoothing methods for quantized data, a comparison with three classical techniques is presented: standard Kalman filtering and smoothing [76], quantized Kalman filtering and smoothing [49,77], and particle filtering and smoothing [31]. Notice that it is also possible to implement a version of particle filtering and smoothing algorithms using the approximation of p ( y t | x t ) given in (21). However, in the numerical examples run for filtering, the computation time was high. In order to validate the approximation in Theorem 1, the standard particle filtering and smoothing algorithms with a large number of particles were used, where p ( y t | x t ) was computed using the integral in (20) from the MATLAB function mvncdf, which computes the multivariate normal cumulative distribution function. The true filtering and smoothing PDFs were considered to be provided by the particle filter and smoother with 20,000 particles, which was defined as the ground truth.
In the following examples, the discrete-time system in state-space form given in (1)–(3) is used with:
y t = Δ q round z t / Δ q ,
where the quantizer is defined in terms of the round function in MATLAB and the quantization step Δ q . The infimum and supremum values of the sets J i defined in (8), q i − 1 and q i , can be calculated for the infinite-level quantizer as q i − 1 = y t − 0.5 Δ q and q i = y t + 0.5 Δ q for every quantization index i .
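The quantizer above and its cell bounds can be reproduced with a few lines. The sketch below is in Python rather than MATLAB; note that Python's built-in round() rounds halves to even, while MATLAB's round() rounds halves away from zero, so a small helper is needed to match the MATLAB behavior exactly:

```python
import math

def matlab_round(x):
    # MATLAB-style rounding: halves are rounded away from zero.
    return math.copysign(math.floor(abs(x) + 0.5), x)

def quantize(z, dq):
    # Infinite-level uniform quantizer: y = dq * round(z / dq).
    return dq * matlab_round(z / dq)

def cell_bounds(y, dq):
    # Bounds (q_{i-1}, q_i] of the set J_i that the unquantized output z
    # must lie in, given the observed quantized value y.
    return y - 0.5 * dq, y + 0.5 * dq
```

For example, with Δ q = 7, any z in (3.5, 10.5] is mapped to y = 7, and cell_bounds recovers exactly that interval from the measurement.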
The experiments were carried out on a computer with the following specifications: processor: Intel(R) Core(TM) i5-8300H CPU @ 2.30GHz, RAM memory: 8.00 GB, operating system: Windows 10 with MATLAB 2020b.

4.1. Example 1: First-Order System

In this example, the following state-space system was considered:
(72) x t + 1 = 0.9 x t + 1.0 u t + w t , (73) z t = 2.0 x t + 0.5 u t + v t ,
where w t N w t 0 , 1 and v t N v t 0 , 0.5 . Furthermore, the input was considered to be drawn from N u t ( 0 , 2 ) and x 1 N x 1 ( 1 , 1 ) . In Figure 5, the filtering PDFs for some time instants are shown. The quantization step used in this example and the number of Gaussian components to approximate p ( y t | x t ) were considered to be Δ q = 7 and K = 10 , respectively. Furthermore, S red = K .
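For reference, data from the system (72) and (73) with the uniform quantizer above can be generated as follows. This is a Python sketch of the simulation setup only (the paper uses MATLAB); the seed and the helper names are arbitrary:

```python
import math
import random

def simulate(n, dq=7.0, seed=0):
    """Generate n samples of the unquantized output z_t and the quantized
    output y_t for the first-order system in Equations (72)-(73)."""
    rng = random.Random(seed)
    mround = lambda x: math.copysign(math.floor(abs(x) + 0.5), x)  # MATLAB round
    x = rng.gauss(1.0, 1.0)                     # x_1 ~ N(1, 1)
    zs, ys = [], []
    for _ in range(n):
        u = rng.gauss(0.0, math.sqrt(2.0))      # u_t ~ N(0, 2)
        z = 2.0 * x + 0.5 * u + rng.gauss(0.0, math.sqrt(0.5))   # Eq. (73)
        zs.append(z)
        ys.append(dq * mround(z / dq))          # quantized measurement
        x = 0.9 * x + 1.0 * u + rng.gauss(0.0, 1.0)              # Eq. (72)
    return zs, ys
```

Every y_t lands on an integer multiple of Δ q; the filtering algorithms only see the quantized record ys, while zs is what a nonquantized sensor would deliver.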
Figure 5 shows that the filtering PDFs obtained with our proposal, the Gaussian sum filter, were the ones that best fit the ground truth. In contrast, the PDFs obtained with the Kalman filter, quantized Kalman filter, and particle filter differed from the ground truth. Notice that the results obtained using the particle filter with 100 particles were good at t = 3 , 9 , 40 , whilst the resulting PDFs for t = 25 , 32 , 59 , 72 , 82 , 99 differed slightly more from the ground truth. However, the performance of particle filtering can be improved by increasing the number of particles, at the cost of an increment in the execution time.
To compare the accuracy of the state estimation, 100 Monte Carlo trials were run, and the mean-squared error between the true state and the estimation obtained by the Kalman filter, quantized Kalman filter, Gaussian sum filter, and particle filter was computed as follows:
MSE = 1 R k = 1 R x t x ^ t | t 2 2 ,
where x t is the true state (which in a real-world system is not available, but in simulations can be used to analyze the performance of the estimation techniques), x ^ t | t is the estimate of the system state, and · 2 2 denotes the squared Euclidean norm. In Figure 6, the box plots corresponding to 100 Monte Carlo runs for Δ q = 3 , 5 , 7 are shown.
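The Monte Carlo MSE above is computed per time step, averaging the squared error over the R runs. A minimal Python sketch (hypothetical helper name, scalar state):

```python
def mse_per_time(true_runs, est_runs):
    """Average squared estimation error over R Monte Carlo runs, reported
    for each time step. Both arguments are lists of R equally long scalar
    trajectories (true state and filtered estimate, respectively)."""
    r = len(true_runs)
    horizon = len(true_runs[0])
    return [sum((true_runs[k][t] - est_runs[k][t]) ** 2 for k in range(r)) / r
            for t in range(horizon)]
```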
Figure 6 shows that the MSE between the true state and the estimation obtained with the Kalman filter and quantized Kalman filter increased rapidly as Δ q increased. In contrast, the MSE increased only slowly for the state estimation obtained with the Gaussian sum filter and particle filter.
In Figure 7, the smoothing PDFs at time t = 100 are shown. The quantization step was chosen from Δ q = 3 , 5 , 7 , and the number of Gaussian components to approximate p ( y t | x t ) was chosen from K = 6 , 8 , 10 . Furthermore, the number of Gaussian components kept after the reduction procedure was S red = K . In order to obtain an adequate number of particles to compare the particle smoother with the Gaussian sum smoother, 100 Monte Carlo simulations were carried out to obtain the number of particles that yielded smoothing PDFs as close to the true PDFs as those of the Gaussian sum smoother using K = 6 , 8 , 10 . For comparison purposes, Par(K) is defined as the execution time required by the particle smoother when using the number of particles that produces a result similar to the Gaussian sum smoother with K components. The L 2 -norm of the difference between the true and the estimated PDFs was used as the measure of similarity:
q q ^ 2 = k = 1 M | q q ^ | 2 1 / 2 ,
where q represents the true PDF and q ^ represents the estimated PDF; the number of particles was chosen so that q q ^ 2 < 1 × 10 − 6 . The approximate number of particles (labeled in each PDF in Figure 7) was used to compare the execution time of both algorithms, the Gaussian sum smoother and the particle smoother, and the results are shown in Table 2. Figure 7 shows that both the smoothing PDFs obtained with our proposal using a small number of Gaussian components and the PDFs obtained using the particle smoother with a large number of particles fit the ground truth. In contrast, the PDFs obtained with the Kalman smoother and quantized Kalman smoother were different from the ground truth. Furthermore, the execution times in Table 2 show that the time required by the Gaussian sum smoother was less than that of the particle smoother, which needs a large number of particles to produce a result similar to the Gaussian sum smoother.
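The similarity measure above is the Euclidean norm of the difference between the two PDFs evaluated on a common grid of M points; a one-line Python sketch:

```python
def pdf_l2_distance(q, q_hat):
    # Discrete L2-norm of the difference between the true PDF q and the
    # estimated PDF q_hat, both sampled on the same M grid points.
    return sum((a - b) ** 2 for a, b in zip(q, q_hat)) ** 0.5
```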
From the results shown in Figure 5, Figure 6 and Figure 7 and Table 2, it can be concluded that:
  • The filtering and smoothing PDFs are non-Gaussian, although the process and output noises in (1) and (2) are Gaussian distributed;
  • The accuracy of the standard and quantized Kalman filtering and smoothing decreased as the quantization step increased;
  • The state estimates obtained with the particle filter and smoother were similar to the results obtained using the Gaussian sum filter and smoother. However, the characterization of the filtering and smoothing PDFs provided by the Gaussian sum filter and smoother was better than that obtained by the particle filter and smoother. Notice that a correct characterization of a PDF is important when high-order moments need to be computed, especially in system identification tasks;
  • In order to implement the Gaussian sum filter and smoother, the parameters K (the number that defines the quality of the p ( y t | x t ) approximation) and S red (the Gaussian components kept after the Gaussian sum reduction algorithm) need to be chosen by the user. These parameters can be found in a simulation study by the trial and error approach and should be set by a trade-off between the time complexity and the accuracy of the estimation. A large value of K produces an accurate estimate, but a high computational load;
  • The larger the quantization step is, the larger the number of Gaussian components, K, needed to approximate p ( y t | x t ) in order to obtain an accurate approximation. However, for a large quantization step, the number K needed to obtain a good approximation of the filtering and smoothing PDFs is relatively small compared to the number of particles required to obtain similar results using the particle filter and smoother;
  • The maximum number of Gaussian components kept after the Gaussian reduction procedure is important for the accuracy of the approximation. In the simulations, S red = K was used. Furthermore, it was noticed that once an adequate S red was defined, increasing this value did not produce a significant improvement in the estimation. However, such an increment in S red was critical for the resulting numerical complexity of the algorithm (and hence, the execution time), since the Gaussian sum reduction procedure (e.g., Kullback–Leibler reduction) requires more time to reduce a large number of Gaussian components;
  • The Gaussian sum smoother execution time was small for all values of Δ q . This occurred because, in each case, a relatively small number of Gaussian components was used to approximate p ( y t | x t ) . However, the particle smoother execution time varied for different values of Δ q . As Δ q decreased, the L 2 -norm between the Gaussian sum smoother and the ground truth decreased, and a larger number of particles was required to obtain a comparable L 2 -norm between the particle smoother and the ground truth.

4.2. Real-World Application: Tank Liquid Level

In this example, the problem of estimating the tank liquid level in a two-tank system by using the measurements taken by a low-cost sensor based on a variable resistor that is attached to an arm with a floater is considered; see Figure 8.
A linearized model of this system can be found in [78]. Here, it was assumed that h 2 can be measured and h 1 cannot. The discrete-time version of the model in [78] with sample time 0.1 s was considered:
(76) x t + 1 =   0.9959 0.0041 0.0041 0.9959 x t + 0.0998 0.0002 0.0002 0.0998 u t + w t , (77) z t =   0 1.0 x t + v t ,
where x t = [ h 1 h 2 ] T and u t = [ f 1 − k f 2 + k ] T with k = 0.4111 . The sensor measuring h 2 varies its resistance in discrete steps, with minimum and maximum values β 1 = 2 and β L = 10 . To simulate this system, it was considered that: w t N w t 0 , 0.001 I 2 , v t N v t 0 , 0.0001 ; the input [ f 1 f 2 ] T was drawn from N [ 10 2 ] T , 10 I 2 ; the initial condition x 1 N x 1 ( [ 10 5 ] T , 0.01 I 2 ) . For this example, 100 Monte Carlo runs were simulated. In Figure 9 (left), the output z t , which corresponds to the nonquantized values of h 2 , and the output y t , which corresponds to the measurements given by the sensor (for one of the Monte Carlo runs), are shown. In this figure, the level of quantization in the measurements can be observed. In Figure 9 (right), the MSE between the true and estimated state is shown. It was observed that the proposal presented in this paper, the Gaussian sum filter, yielded the most accurate estimation of h 1 , followed by the particle filter, Kalman filter, and quantized Kalman filter. In this example, K = 50 Gaussian components and 1000 particles were used to implement the Gaussian sum filter and particle filter, respectively.
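A data-generation sketch for the two-tank model (76) and (77) is given below in Python. The sensor's quantization step is not stated in the paper, so dq = 0.5 is a placeholder assumption; the saturation limits β 1 = 2 and β L = 10 and the noise levels are taken from the text:

```python
import math
import random

def simulate_tanks(n, dq=0.5, seed=0):
    """Quantized level measurements for the two-tank system (76)-(77).
    dq is an ASSUMED sensor resolution, not a value from the paper."""
    rng = random.Random(seed)
    k = 0.4111
    # x_1 ~ N([10 5]^T, 0.01 I): levels h_1 and h_2 near the operating point.
    x = [10.0 + rng.gauss(0.0, 0.1), 5.0 + rng.gauss(0.0, 0.1)]
    ys = []
    for _ in range(n):
        f1 = rng.gauss(10.0, math.sqrt(10.0))
        f2 = rng.gauss(2.0, math.sqrt(10.0))
        u = [f1 - k, f2 + k]
        z = x[1] + rng.gauss(0.0, math.sqrt(1e-4))      # Eq. (77): z_t = h_2 + v_t
        y = dq * math.copysign(math.floor(abs(z / dq) + 0.5), z)
        ys.append(min(max(y, 2.0), 10.0))               # finite-level: clamp to [beta_1, beta_L]
        w = [rng.gauss(0.0, math.sqrt(1e-3)) for _ in range(2)]
        x = [0.9959 * x[0] + 0.0041 * x[1] + 0.0998 * u[0] + 0.0002 * u[1] + w[0],
             0.0041 * x[0] + 0.9959 * x[1] + 0.0002 * u[0] + 0.0998 * u[1] + w[1]]
    return ys
```

The clamping step is what distinguishes this finite-level sensor from the infinite-level quantizer of Example 1: measurements outside [2, 10] are saturated at the boundary levels.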
In this example, a relatively high number of Gaussian components ( K = 50 ) was required to obtain a good estimation of the filtering and smoothing distributions, and hence of the state. This increased the execution time, since the Gaussian sum reduction algorithm needed more time to deal with a high number of Gaussian components in every iteration. This resulted in similar execution times for our proposed algorithm and the traditional particle filter. However, the execution time of the Gaussian sum filter and smoother was smaller than the execution time of the ground truth (the particle filter and smoother with 20,000 particles).

5. Conclusions

In this paper, Gaussian sum filtering and smoothing algorithms for linear time-invariant state-space systems with quantized output data were developed. An approximation of the integral equation that defines the probability mass function of the quantized data given the current state, p ( y t | x t ) , using Gaussian quadrature was considered. This approximation naturally yielded an explicit mathematical model with a GMM structure for this probability function. Using the approximation of p ( y t | x t ) summarized in Theorem 1, it was possible to solve in closed form the general filtering and smoothing equations for quantized data. This allowed closed-form expressions for the state estimators x ^ t | t and x ^ t | N given the quantized data. Via numerical simulations, it was shown that approximating p ( y t | x t ) with a small number of Gaussian components was adequate, yielding an approximation comparable to the true filtering and smoothing PDFs given by the particle approach (using a large number of particles, namely 20,000). This reduced number of Gaussian components allowed for a low computational load, especially when the system order increased. In addition, our results showed an overall lower computational load for our proposed techniques, since the number of Gaussian components was considerably smaller than the number of particles used in particle filtering and smoothing.
The proposed Gaussian sum filtering and smoothing algorithms can be utilized, in principle, to develop system identification algorithms and control strategies having quantized output measurements.

Author Contributions

Conceptualization, A.L.C., R.A. and J.C.A.; methodology, A.L.C. and J.C.A.; software, A.L.C.; validation, R.C. and B.I.G.; formal analysis, A.L.C., R.A., R.C., B.I.G. and J.C.A.; investigation, A.L.C., R.A. and J.C.A.; resources, J.C.A.; writing—original draft preparation, A.L.C., R.A., R.C., B.I.G. and J.C.A.; writing—review and editing, A.L.C., R.C., B.I.G. and J.C.A.; visualization, R.C. and B.I.G.; supervision, J.C.A. All authors have read and agreed to the published version of the manuscript.

Funding

PIIC program of DGP at Universidad Técnica Federico Santa María No. 062/2018 and 035/2021. Grants ANID-Fondecyt 1211630 and 11201187, ANID-Basal Project FB0008 (AC3E). Chilean National Agency for Research and Development (ANID) Scholarship Program/Doctorado Nacional/2020-21202410.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PDF/PMF:Probability density/mass function
GMM:Gaussian mixture model
GSF/GSS:Gaussian sum filter/smoother
PF/PS:Particle filter/smoother
KF/KS:Kalman filter/smoother
QKF/QKS:Quantized Kalman filter/smoother
FLQ/ILQ:Finite-/infinite-level quantizer
MSE:Mean squared error
S red :Number of Gaussian components kept after the reduction procedure
GT:Ground truth obtained by using the particle filter/smoother with a large number of particles
K:Order of the polynomials in the Gauss–Legendre quadrature

Appendix A. Technical Lemmata

Lemma A1.
The PDF $\mathcal{N}\left(y;\, Cx + \mu,\, R\right)$ of the random variable $Y \in \mathbb{R}^{p}$ can be rewritten as follows:
$$\mathcal{N}\left(y;\, Cx + \mu,\, R\right) = \frac{1}{\sqrt{\det\left(2\pi R\right)}} \exp\left\{-\frac{1}{2}\left(x^{T} F x - 2 G^{T} x + H\right)\right\}, \tag{A1}$$
where $F = C^{T} R^{-1} C$, $G^{T} = (y-\mu)^{T} R^{-1} C$, and $H = (y-\mu)^{T} R^{-1} (y-\mu)$.
Proof. 
Directly expand the exponential argument and reorder the terms in the variable $x$. □
Lemma A2.
The PDF $\mathcal{N}\left(x;\, Aw + \nu,\, Q\right)$ of the random variable $X \in \mathbb{R}^{n}$ can be rewritten as follows:
$$\mathcal{N}\left(x;\, Aw + \nu,\, Q\right) = \frac{1}{\sqrt{\det\left(2\pi Q\right)}} \exp\left\{-\frac{1}{2}\left(x^{T} Q^{-1} x - 2 J^{T} x + L\right)\right\}, \tag{A2}$$
where $J^{T} = (Aw+\nu)^{T} Q^{-1}$ and $L = (Aw+\nu)^{T} Q^{-1} (Aw+\nu)$.
Proof. 
Directly expand the exponential argument and reorder the terms in the variable $x$. □
Lemma A3.
Consider the backward filter function $p(y_{t:N} \mid x_t)$ given in (42). Its GMM structure is given by:
$$p(y_{t:N} \mid x_t) = \sum_{k=1}^{S_{t|t}} \sigma_{t|t}^{k}\, \mathcal{N}\left(x_t;\, \nu_{t|t}^{k}, \Omega_{t|t}^{k}\right), \tag{A3}$$
where $\sigma_{t|t}^{k} = \bar{\sigma}_{t|t}^{k} / \sum_{s=1}^{S_{t|t}} \bar{\sigma}_{t|t}^{s}$ is the normalized mixing weight, and $\nu_{t|t}^{k} = (F_{t|t}^{k})^{-1} G_{t|t}^{k}$ and $\Omega_{t|t}^{k} = (F_{t|t}^{k})^{-1}$ are the mean vector and the covariance matrix, respectively, with:
$$\bar{\sigma}_{t|t}^{k} = \epsilon_{t|t}^{k}\, \lambda_{t|t}^{k}\, \exp\left\{-\frac{1}{2}\left(H_{t|t}^{k} - \left(G_{t|t}^{k}\right)^{T} \left(F_{t|t}^{k}\right)^{-1} G_{t|t}^{k}\right)\right\} \frac{(2\pi)^{n/2}}{\sqrt{\det F_{t|t}^{k}}}. \tag{A4}$$
On the other hand, consider the GMM structure given by (54). Then, the backward filter form in (42) is obtained using the following:
$$S_{t|t} = S_{\mathrm{red}}, \quad F_{t|t}^{k} = \left(U_{t|t}^{k}\right)^{-1}, \quad \epsilon_{t|t}^{k} = \delta_{t|t}^{k}, \quad \left(G_{t|t}^{k}\right)^{T} = \left(z_{t|t}^{k}\right)^{T} \left(U_{t|t}^{k}\right)^{-1}, \quad \lambda_{t|t}^{k} = \det\left(2\pi U_{t|t}^{k}\right)^{-1/2}, \quad H_{t|t}^{k} = \left(z_{t|t}^{k}\right)^{T} \left(U_{t|t}^{k}\right)^{-1} z_{t|t}^{k}. \tag{A5}$$
Proof. 
Directly by using Lemma A1. □
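As a numerical sanity check of the quadratic-form rewriting in Lemma A1 (not part of the original proof), the identity can be verified directly; all matrices and vectors below are arbitrary test values:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 2, 3
C = rng.standard_normal((p, n))
mu = rng.standard_normal(p)
M = rng.standard_normal((p, p))
R = M @ M.T + p * np.eye(p)        # symmetric positive definite covariance
x = rng.standard_normal(n)
y = rng.standard_normal(p)

Rinv = np.linalg.inv(R)
# left-hand side: Gaussian density N(y; Cx + mu, R)
d = y - (C @ x + mu)
lhs = np.exp(-0.5 * d @ Rinv @ d) / np.sqrt(np.linalg.det(2 * np.pi * R))

# right-hand side: quadratic form in x with F, G, H as in Lemma A1
F = C.T @ Rinv @ C
G = C.T @ Rinv @ (y - mu)          # so that G^T = (y - mu)^T R^{-1} C
H = (y - mu) @ Rinv @ (y - mu)
rhs = np.exp(-0.5 * (x @ F @ x - 2 * G @ x + H)) / np.sqrt(np.linalg.det(2 * np.pi * R))

assert np.isclose(lhs, rhs)
```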

Appendix B. Quantities to Perform the Gaussian Sum Filter

For each two-tuple $(\tau, \ell)$, where $\tau = 1, \dots, K$ and $\ell = 1, \dots, M_{t|t-1}$, let $k$ be a new index, so that $k = (\ell - 1)K + \tau$. Then:
$$\gamma_{t|t}^{k} = \frac{\bar{\gamma}_{t|t}^{k}}{\sum_{s=1}^{M_{t|t}} \bar{\gamma}_{t|t}^{s}}, \tag{A6}$$
$$\hat{x}_{t|t}^{k} = \hat{x}_{t|t-1}^{\ell} + K_t\left(\eta_t^{\tau} - \kappa_t^{\tau}\right), \tag{A7}$$
$$\Sigma_{t|t}^{k} = \left(I - K_t C\right)\Sigma_{t|t-1}^{\ell}, \tag{A8}$$
are the weights, means, and covariance matrices of the measurement-update equation in the Gaussian sum filter, with the following definitions:
$$\bar{\gamma}_{t|t}^{k} = \varsigma_t^{\tau}\, \gamma_{t|t-1}^{\ell}\, \mathcal{N}\left(\eta_t^{\tau};\, \kappa_t^{\tau}, V_t\right), \tag{A9}$$
$$K_t = \Sigma_{t|t-1}^{\ell} C^{T} V_t^{-1}, \tag{A10}$$
$$\kappa_t^{\tau} = C \hat{x}_{t|t-1}^{\ell} + D u_t + \mu_t^{\tau}, \tag{A11}$$
$$V_t = R + C \Sigma_{t|t-1}^{\ell} C^{T}, \tag{A12}$$
where $\varsigma_t^{\tau}$, $\eta_t^{\tau}$, and $\mu_t^{\tau}$ are defined in Table 1, and:
$$\gamma_{t+1|t}^{k} = \gamma_{t|t}^{k}, \tag{A13}$$
$$\hat{x}_{t+1|t}^{k} = A \hat{x}_{t|t}^{k} + B u_t, \tag{A14}$$
$$\Sigma_{t+1|t}^{k} = Q + A \Sigma_{t|t}^{k} A^{T}, \tag{A15}$$
are the weights, means, and covariance matrices of the time-update equation in the Gaussian sum filter.
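For illustration, the measurement and time updates (A6)-(A15) can be sketched for a single $(\tau, \ell)$ branch as follows. This is a hedged sketch, not the authors' code: the function name `gsf_branch` and the toy system values are our own, and the inputs `varsigma`, `eta`, `mu` stand in for the Table 1 quadrature quantities; the returned weight is the unnormalized $\bar{\gamma}_{t|t}^{k}$, to be normalized over all branches as in (A6):

```python
import numpy as np

def gsf_branch(xhat, Sigma, gamma, A, B, C, D, Q, R, u, varsigma, eta, mu):
    """One (tau, l) branch of the Gaussian sum filter updates (A6)-(A15)."""
    kappa = C @ xhat + D @ u + mu               # (A11)
    V = R + C @ Sigma @ C.T                     # (A12)
    Vinv = np.linalg.inv(V)
    d = eta - kappa
    # (A9): unnormalized weight varsigma * gamma * N(eta; kappa, V)
    gamma_bar = varsigma * gamma * float(
        np.exp(-0.5 * d @ Vinv @ d) / np.sqrt(np.linalg.det(2 * np.pi * V)))
    K = Sigma @ C.T @ Vinv                      # (A10)
    x_f = xhat + K @ d                          # (A7), measurement update
    P_f = (np.eye(len(xhat)) - K @ C) @ Sigma   # (A8)
    x_p = A @ x_f + B @ u                       # (A14), time update
    P_p = Q + A @ P_f @ A.T                     # (A15)
    return gamma_bar, x_f, P_f, x_p, P_p

# toy 2-state, 1-output system (arbitrary test values)
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[0.5], [0.2]])
C = np.array([[1.0, 0.0]]);             D = np.array([[0.0]])
Q = 0.1 * np.eye(2);                    R = np.array([[0.2]])
gb, xf, Pf, xp, Pp = gsf_branch(np.array([0.3, -0.1]), 0.5 * np.eye(2), 1.0,
                                A, B, C, D, Q, R, u=np.array([1.0]),
                                varsigma=0.3, eta=np.array([0.4]),
                                mu=np.array([-0.25]))
assert gb > 0 and np.allclose(Pf, Pf.T)
```

The updated covariances inherit symmetry and positive definiteness from the standard Kalman recursions, so each branch remains a valid Gaussian component.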

Appendix C. Quantities to Perform the Backward Filter

For each two-tuple $(\tau, \ell)$, where $\tau = 1, \dots, K$ and $\ell = 1, \dots, S_{t|t+1}$, let $k$ be a new index, so that $k = (\ell - 1)K + \tau$. Then:
$$\epsilon_{t|t}^{k} = \varsigma_t^{\tau}\, \epsilon_{t|t+1}^{\ell}, \tag{A16}$$
$$\lambda_{t|t}^{k} = \det\left(2\pi R\right)^{-1/2} \lambda_{t|t+1}^{\ell}, \tag{A17}$$
$$\theta_t^{\tau} = \eta_t^{\tau} - D u_t - \mu_t^{\tau}, \tag{A18}$$
$$F_{t|t}^{k} = F_{t|t+1}^{\ell} + C^{T} R^{-1} C, \tag{A19}$$
$$\left(G_{t|t}^{k}\right)^{T} = \left(G_{t|t+1}^{\ell}\right)^{T} + \left(\theta_t^{\tau}\right)^{T} R^{-1} C, \tag{A20}$$
$$H_{t|t}^{k} = H_{t|t+1}^{\ell} + \left(\theta_t^{\tau}\right)^{T} R^{-1} \theta_t^{\tau}, \tag{A21}$$
are the quantities needed to compute the backward-measurement-update equation in the backward filter, where $\varsigma_t^{\tau}$, $\eta_t^{\tau}$, and $\mu_t^{\tau}$ are defined in Table 1, and:
$$\epsilon_{t|t+1}^{k} = \epsilon_{t+1|t+1}^{k}, \tag{A22}$$
$$\lambda_{t|t+1}^{k} = \left(\det Q\, \det F_q^{k}\right)^{-1/2} \lambda_{t+1|t+1}^{k}, \tag{A23}$$
$$F_{t|t+1}^{k} = A^{T} M_q^{k} A, \tag{A24}$$
$$\left(G_{t|t+1}^{k}\right)^{T} = \left(G_{t+1|t+1}^{k}\right)^{T} \left(F_q^{k}\right)^{-1} Q^{-1} A - u_t^{T} B^{T} M_q^{k} A, \tag{A25}$$
$$H_{t|t+1}^{k} = H_{t+1|t+1}^{k} - \left(G_{t+1|t+1}^{k}\right)^{T} \left(F_q^{k}\right)^{-1} G_{t+1|t+1}^{k} + u_t^{T} B^{T} M_q^{k} B u_t - 2 u_t^{T} B^{T} Q^{-1} \left(F_q^{k}\right)^{-1} G_{t+1|t+1}^{k}, \tag{A26}$$
are the quantities needed to compute the backward prediction equation in the backward filter, where $F_q^{k} = F_{t+1|t+1}^{k} + Q^{-1}$ and $M_q^{k} = Q^{-1} - Q^{-1}\left(F_q^{k}\right)^{-1} Q^{-1}$.
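The backward prediction equations (A22)-(A26) marginalize $x_{t+1}$ against the state transition density. In the scalar case this can be checked against direct numerical integration; below, `a`, `b`, `q`, `u` play the roles of $A$, $B$, $Q$, $u_t$, and `(F, G, H, lam, eps)` is one backward component at $t+1$. All numbers are arbitrary test values:

```python
import numpy as np

a, b, q, u = 0.9, 0.5, 0.3, 1.2
F, G, H, lam, eps = 0.7, 0.4, 0.2, 0.8, 0.6

Fq = F + 1.0 / q                                  # F_q^k
Mq = 1.0 / q - 1.0 / (q**2 * Fq)                  # M_q^k
eps_p = eps                                       # (A22)
lam_p = lam / np.sqrt(q * Fq)                     # (A23), with det Q = q
F_p = a * Mq * a                                  # (A24)
G_p = G / (Fq * q) * a - u * b * Mq * a           # (A25)
H_p = H - G**2 / Fq + u * b * Mq * b * u - 2 * u * b / (q * Fq) * G   # (A26)

def closed_form(x):
    # predicted backward component evaluated at x_t = x
    return eps_p * lam_p * np.exp(-0.5 * (F_p * x**2 - 2 * G_p * x + H_p))

def numeric(x):
    # direct marginalization over x_{t+1} on a fine grid
    z = np.linspace(-30.0, 30.0, 200001)
    transition = np.exp(-0.5 * (z - (a * x + b * u))**2 / q) / np.sqrt(2 * np.pi * q)
    backward = eps * lam * np.exp(-0.5 * (F * z**2 - 2 * G * z + H))
    return float(np.sum(transition * backward) * (z[1] - z[0]))

for x in (-1.0, 0.0, 1.5):
    assert np.isclose(closed_form(x), numeric(x), rtol=1e-5)
```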

Appendix D. Quantities to Perform the Gaussian Sum Smoother

For each two-tuple $(\tau, \ell)$, where $\tau = 1, \dots, M_{t|t-1}$ and $\ell = 1, \dots, S_{\mathrm{red}}$, let $k$ be a new index, so that $k = (\ell - 1)M_{t|t-1} + \tau$. Then:
$$\epsilon_{t|N}^{k} = \frac{\bar{\epsilon}_{t|N}^{k}}{\sum_{s=1}^{S_{t|N}} \bar{\epsilon}_{t|N}^{s}}, \tag{A27}$$
$$\hat{x}_{t|N}^{k} = \left(L_t^{\tau\ell}\right)^{-1} \rho_t^{\tau\ell}, \tag{A28}$$
$$\Sigma_{t|N}^{k} = \left(L_t^{\tau\ell}\right)^{-1}, \tag{A29}$$
are the weights, means, and covariance matrices of the smoothing PDF $p(x_t \mid y_{1:N})$, where:
$$\bar{\epsilon}_{t|N}^{k} = \frac{\gamma_{t|t-1}^{\tau}\, \delta_{t|t}^{\ell}\, \exp\left\{-\frac{1}{2}\left(\phi_{1t}^{\ell} + \phi_{2t}^{\tau} - \phi_{3t}^{\tau\ell}\right)\right\}}{(2\pi)^{n/2} \sqrt{\det L_t^{\tau\ell}\, \det U_{t|t}^{\ell}\, \det \Sigma_{t|t-1}^{\tau}}}, \tag{A30}$$
$$\rho_t^{\tau\ell} = \left(U_{t|t}^{\ell}\right)^{-1} z_{t|t}^{\ell} + \left(\Sigma_{t|t-1}^{\tau}\right)^{-1} \hat{x}_{t|t-1}^{\tau}, \tag{A31}$$
$$L_t^{\tau\ell} = \left(U_{t|t}^{\ell}\right)^{-1} + \left(\Sigma_{t|t-1}^{\tau}\right)^{-1}, \tag{A32}$$
$$\phi_{3t}^{\tau\ell} = \left(\rho_t^{\tau\ell}\right)^{T} \left(L_t^{\tau\ell}\right)^{-1} \rho_t^{\tau\ell}, \tag{A33}$$
$$\phi_{1t}^{\ell} = \left(z_{t|t}^{\ell}\right)^{T} \left(U_{t|t}^{\ell}\right)^{-1} z_{t|t}^{\ell}, \tag{A34}$$
$$\phi_{2t}^{\tau} = \left(\hat{x}_{t|t-1}^{\tau}\right)^{T} \left(\Sigma_{t|t-1}^{\tau}\right)^{-1} \hat{x}_{t|t-1}^{\tau}, \tag{A35}$$
and $\gamma_{t|t-1}^{\tau}$, $\hat{x}_{t|t-1}^{\tau}$, and $\Sigma_{t|t-1}^{\tau}$ are obtained from the time-update step of Theorem 2 (34), while $\delta_{t|t}^{\ell}$, $z_{t|t}^{\ell}$, and $U_{t|t}^{\ell}$ are obtained from the reduced measurement-update step of the backward-filtering algorithm in (54).
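Equations (A28)-(A32) are the information-form fusion of one forward component $(\hat{x}, \Sigma)$ with one backward component $(z, U)$, i.e., the normalized product of two Gaussians. A scalar illustration (all values are arbitrary test numbers) compares the closed form with moments computed numerically from the product density:

```python
import numpy as np

xhat, Sigma = 1.0, 0.5      # forward (time-update) component
z, U = -0.4, 0.8            # backward (reduced) component

L = 1.0 / U + 1.0 / Sigma           # (A32)
rho = z / U + xhat / Sigma          # (A31)
mean_s = rho / L                    # smoothed mean, (A28)
var_s = 1.0 / L                     # smoothed variance, (A29)

# numeric moments of the (unnormalized) product density on a fine grid
x = np.linspace(-10.0, 10.0, 200001)
prod = np.exp(-0.5 * (x - xhat)**2 / Sigma) * np.exp(-0.5 * (x - z)**2 / U)
w = prod / prod.sum()
m_num = float((w * x).sum())
v_num = float((w * (x - m_num)**2).sum())

assert np.isclose(mean_s, m_num, atol=1e-6)
assert np.isclose(var_s, v_num, atol=1e-6)
```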

Appendix E. Quantities to Compute $p(x_{t+1}, x_t \mid y_{1:N})$

For each two-tuple $(\tau, \ell)$, where $\tau = 1, \dots, M_{t|t}$ and $\ell = 1, \dots, S_{t+1|t+1}$, let $k$ be a new index, so that $k = (\ell - 1)M_{t|t} + \tau$. Then:
$$\alpha^{k} = \frac{\bar{\alpha}^{k}}{\sum_{s=1}^{S_{t,t+1}} \bar{\alpha}^{s}}, \tag{A36}$$
$$\hat{\chi}_{t|N}^{k} = \left(F_t^{\tau\ell}\right)^{-1} G_t^{\tau\ell}, \tag{A37}$$
$$E_{t|N}^{k} = \left(F_t^{\tau\ell}\right)^{-1}, \tag{A38}$$
are the weights, means, and covariance matrices of the joint PDF $p(x_{t+1}, x_t \mid y_{1:N})$, where:
$$\bar{\alpha}^{k} = \frac{\gamma_{t|t}^{\tau}\, \epsilon_{t+1|t+1}^{\ell}\, \lambda_{t+1|t+1}^{\ell}\, S_t^{\tau\ell}}{\sqrt{\det Q\, \det F_t^{\tau\ell}\, \det \Sigma_{t|t}^{\tau}}}, \tag{A39}$$
$$S_t^{\tau\ell} = \exp\left\{-\frac{1}{2}\left(H_t^{\tau\ell} - \left(G_t^{\tau\ell}\right)^{T} \left(F_t^{\tau\ell}\right)^{-1} G_t^{\tau\ell}\right)\right\}, \tag{A40}$$
$$F_t^{\tau\ell} = \begin{bmatrix} Q^{-1} + F_{t+1|t+1}^{\ell} & -Q^{-1} A \\ -A^{T} Q^{-1} & \left(\Sigma_{t|t}^{\tau}\right)^{-1} + A^{T} Q^{-1} A \end{bmatrix}, \tag{A41}$$
$$\left(G_t^{\tau\ell}\right)^{T} = \begin{bmatrix} \left(G_{t+1|t+1}^{\ell}\right)^{T} + u_t^{T} B^{T} Q^{-1} & \left(J_t^{\tau}\right)^{T} - u_t^{T} B^{T} Q^{-1} A \end{bmatrix}, \tag{A42}$$
$$H_t^{\tau\ell} = H_{t+1|t+1}^{\ell} + L_t^{\tau} + u_t^{T} B^{T} Q^{-1} B u_t, \tag{A43}$$
with:
$$\left(J_t^{\tau}\right)^{T} = \left(\hat{x}_{t|t}^{\tau}\right)^{T} \left(\Sigma_{t|t}^{\tau}\right)^{-1}, \tag{A44}$$
$$L_t^{\tau} = \left(\hat{x}_{t|t}^{\tau}\right)^{T} \left(\Sigma_{t|t}^{\tau}\right)^{-1} \hat{x}_{t|t}^{\tau}, \tag{A45}$$
and $\gamma_{t|t}^{\tau}$, $\hat{x}_{t|t}^{\tau}$, and $\Sigma_{t|t}^{\tau}$ are obtained from the measurement-update step in Theorem 2, while $\epsilon_{t+1|t+1}^{\ell}$, $\lambda_{t+1|t+1}^{\ell}$, $F_{t+1|t+1}^{\ell}$, $\left(G_{t+1|t+1}^{\ell}\right)^{T}$, and $H_{t+1|t+1}^{\ell}$ are obtained from the reduced backward-measurement-update step in Theorem 3.
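In the scalar case, the block quantities (A41)-(A45) can be checked by expanding the exponent of the product of the forward component, the state transition, and the backward component, with $\chi = [x_{t+1},\, x_t]^T$. Below, `a`, `b`, `q`, `u` play the roles of $A$, $B$, $Q$, $u_t$; all numbers are arbitrary test values:

```python
import numpy as np

a, b, q, u = 0.9, 0.5, 0.3, 1.2     # A, B, Q, u_t
xhat, Sigma = 0.4, 0.7              # forward measurement-update component
F, G, H = 0.6, 0.2, 0.1             # backward component at t+1

Ft = np.array([[1/q + F,           -a/q],
               [-a/q,     1/Sigma + a**2/q]])          # (A41)
Gt = np.array([G + b*u/q, xhat/Sigma - a*b*u/q])       # (A42), with (A44)
Ht = H + xhat**2/Sigma + (b*u)**2/q                    # (A43), with (A45)

def quad(w, x):
    # joint exponent written with the block quantities
    chi = np.array([w, x])
    return float(chi @ Ft @ chi - 2 * Gt @ chi + Ht)

def direct(w, x):
    # exponent of N(x; xhat, Sigma) * N(w; a x + b u, q) * backward component
    return ((x - xhat)**2 / Sigma + (w - a*x - b*u)**2 / q
            + F * w**2 - 2 * G * w + H)

for w, x in [(-1.0, 0.3), (0.0, 0.0), (2.0, -1.5)]:
    assert np.isclose(quad(w, x), direct(w, x))

# joint mean and covariance, as in (A37)-(A38)
mean = np.linalg.solve(Ft, Gt)
cov = np.linalg.inv(Ft)
```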

Figure 1. Diagram of a real dynamic system with quantized data.
Figure 2. State-space model with quantized output.
Figure 3. Representation of the (left) uniform infinite- and (right) finite-level quantizers defined in terms of the quantized values $\beta_i$ and the intervals $\mathcal{J}_i$.
Figure 4. The shaded area represents the probability of $y_t$ taking the value $\beta_i$, which equals the probability of $z_t$ belonging to the set $\mathcal{J}_i$. Here, $\mathbb{P}\{\cdot\}$ denotes probability.
Figure 5. Example 1. Filtering PDFs at some instants of time for $\Delta q = 7$, $K = 10$, and 100 particles. GT stands for ground truth. QKF, KF, PF, and GSF stand for quantized Kalman filter, Kalman filter, particle filter, and Gaussian sum filter, respectively.
Figure 6. MSE between the true and estimated state for $\Delta q = 3, 5, 7$. KF, QKF, GSF, and PF stand for the Kalman filter, quantized Kalman filter, Gaussian sum filter, and particle filter, respectively.
Figure 7. Smoothing PDF at time $t = 100$, for $K = 6, 8, 10$, where $K$ increases to the right, and for $\Delta q = 3, 5, 7$, where $\Delta q$ increases upwards. GT stands for the ground truth. GSS, KS, QKS, and PS stand for the Gaussian sum smoother, Kalman smoother, quantized Kalman smoother, and particle smoother, respectively.
Figure 8. Two-tank system. $h_1$ and $h_2$ denote the liquid level in Tank 1 and Tank 2, respectively. The liquid flows into Tank 1 at a rate $f_1$ and out of Tank 2 at a rate $f_2$. The quantizer has minimum and maximum values $\beta_1 = 2$ and $\beta_L = 10$.
Figure 9. (Left) The quantized and nonquantized measurements; (right) the MSE between the true and estimated filtered state. KF, QKF, GSF, and PF stand for the Kalman filter, quantized Kalman filter, Gaussian sum filter, and particle filter, respectively.
Table 1. Integral limits and parameters of Theorem 1. ILQ and FLQ mean infinite- and finite-level quantizers, respectively.

| | FLQ: $y_t = \beta_1$ | ILQ: $y_t = \beta_i$ with $i \in \mathcal{I}$ in (7); FLQ: $y_t = \beta_i$ with $i = 2, \dots, L-1$ | FLQ: $y_t = \beta_L$ |
|---|---|---|---|
| $a_t$ | $-\infty$ | $q_{i-1} - C x_t - D u_t$ | $q_{L-1} - C x_t - D u_t$ |
| $b_t$ | $q_1 - C x_t - D u_t$ | $q_i - C x_t - D u_t$ | $+\infty$ |
| $\varsigma_t^{\tau}$ | $2\omega_{\tau}/(1+\psi_{\tau})^2$ | $\omega_{\tau}\,(q_i - q_{i-1})/2$ | $2\omega_{\tau}/(1+\psi_{\tau})^2$ |
| $\eta_t^{\tau}$ | $-(1-\psi_{\tau})/(1+\psi_{\tau})$ | $\psi_{\tau}\,(q_i - q_{i-1})/2$ | $(1-\psi_{\tau})/(1+\psi_{\tau})$ |
| $\mu_t^{\tau}$ | $-q_1$ | $-(q_i + q_{i-1})/2$ | $-q_{L-1}$ |
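As an illustration of how the Table 1 quantities enter the weight update (A9), a Gauss-Legendre rule can approximate $\mathbb{P}\{q_{i-1} < z_t \leq q_i\}$ for Gaussian $z_t$ using the middle-column quantities. This is our own sketch, not the paper's code: `m` plays the role of $C x_t + D u_t$, the node density is evaluated as $\mathcal{N}(\eta_t^{\tau};\, m + \mu_t^{\tau}, v)$ following the convention $\kappa_t^{\tau} = C\hat{x} + D u_t + \mu_t^{\tau}$ of (A11), and all numeric values are arbitrary test choices:

```python
import numpy as np
from math import erf, sqrt

q0, q1 = -0.5, 1.0          # interval limits q_{i-1}, q_i
m, v = 0.2, 0.6             # mean and variance of z_t
K = 10                      # quadrature order

psi, omega = np.polynomial.legendre.leggauss(K)   # nodes and weights on [-1, 1]
varsigma = omega * (q1 - q0) / 2                  # Table 1, middle column
eta = psi * (q1 - q0) / 2                         # Table 1, middle column
mu = -(q1 + q0) / 2                               # Table 1, middle column

# sum of varsigma * N(eta; m + mu, v), cf. the weight update (A9)
approx = float(np.sum(varsigma * np.exp(-0.5 * (eta - (m + mu))**2 / v)
                      / np.sqrt(2 * np.pi * v)))

# reference value from the Gaussian CDF
Phi = lambda x: 0.5 * (1.0 + erf((x - m) / sqrt(2.0 * v)))
exact = Phi(q1) - Phi(q0)
assert abs(approx - exact) < 1e-8
```

Because the integrand is smooth, a modest quadrature order already matches the exact interval probability to high accuracy.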
Table 2. Time in seconds required to perform the smoothing algorithm for the scalar system. Par(K) represents the number of particles (labeled in Figure 7) that produce a similar result to the Gaussian sum smoother with K components. KS, QKS, GSS, and PS stand for the Kalman smoother, quantized Kalman smoother, Gaussian sum smoother, and particle smoother, respectively.

| $\Delta q$ | KS | QKS | GSS, $K=6$ | GSS, $K=8$ | GSS, $K=10$ | PS, Par(6) | PS, Par(8) | PS, Par(10) |
|---|---|---|---|---|---|---|---|---|
| 7 | 0.0379 | 0.1494 | 0.4158 | 0.4221 | 0.4735 | 2.0420 | 3.3538 | 4.0462 |
| 5 | 0.0048 | 0.0044 | 0.2695 | 0.3706 | 0.4588 | 3.0074 | 3.5665 | 4.0192 |
| 3 | 0.0033 | 0.0043 | 0.2609 | 0.3491 | 0.4618 | 4.2471 | 5.5501 | 6.3408 |

Share and Cite

MDPI and ACS Style

Cedeño, A.L.; Albornoz, R.; Carvajal, R.; Godoy, B.I.; Agüero, J.C. A Two-Filter Approach for State Estimation Utilizing Quantized Output Data. Sensors 2021, 21, 7675. https://doi.org/10.3390/s21227675
