Article

Autonomous Vehicle State Estimation and Mapping Using Takagi–Sugeno Modeling Approach

Shivam Chaubey and Vicenç Puig *
Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Llorens i Artigas, 4-6, 08028 Barcelona, Spain
* Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3399; https://doi.org/10.3390/s22093399
Submission received: 30 March 2022 / Revised: 25 April 2022 / Accepted: 26 April 2022 / Published: 28 April 2022
(This article belongs to the Special Issue Best Practice in Simultaneous Localization and Mapping (SLAM))
Figure 1
<p>The architecture of the software modules, including information flow, and a conceptual overview of the interconnections. Software framework also includes low-level sensor measurement acquisition units.</p> ">
Figure 2
<p>A representation of the bicycle model in 2D space for deriving the equations of motion. The vehicle schematic shows the inertial frame (O), the center of gravity (CoG) frame (C) attached to the center of gravity of the vehicle. The forces <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>r</mi> <mi>x</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>r</mi> <mi>y</mi> </mrow> </msub> </semantics></math> on the rear wheel are the longitudinal and lateral force, respectively. The front wheel forces <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>f</mi> <mi>l</mi> <mi>o</mi> <mi>n</mi> <mi>g</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>f</mi> <mi>l</mi> <mi>a</mi> <mi>t</mi> </mrow> </msub> </semantics></math> are the longitudinal and lateral forces, respectively, the forces <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>f</mi> <mi>x</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>f</mi> <mi>y</mi> </mrow> </msub> </semantics></math> are the result of these forces in frame C. The lengths <math display="inline"><semantics> <msub> <mi>l</mi> <mi>f</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>l</mi> <mi>r</mi> </msub> </semantics></math> are the distance from CoG to the front wheel and rear wheel, respectively.</p> ">
Figure 3
<p>Calculation of laser end-point probability at sub-grid level by using occupancy grid probability map.</p> ">
Figure 4
<p>Translation of scan information <math display="inline"><semantics> <msup> <mrow> <mo>[</mo> <mi>r</mi> <mo>,</mo> <mi>α</mi> <mo>]</mo> </mrow> <mi>T</mi> </msup> </semantics></math> to scan end-points <math display="inline"><semantics> <msup> <mrow> <mo>[</mo> <msub> <mi>s</mi> <mrow> <mi>i</mi> <mi>X</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>s</mi> <mrow> <mi>i</mi> <mi>Y</mi> </mrow> </msub> <mo>]</mo> </mrow> <mi>T</mi> </msup> </semantics></math> in LIDAR frame of reference.</p> ">
Figure 5
<p>A defined sector that is designed to capture kinematic system matrices to obtain offline gain. Later on, using the same sector approach, online interpolated gains are obtained.</p> ">
Figure 6
<p>Estimation comparison between wrapped yaw measurement and unwrap measurement. Panels (<b>a</b>,<b>b</b>) represent the position of vehicle and yaw measurement in the global frame, respectively, both of these states are estimated using yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mi>π</mi> <mo>,</mo> <mi>π</mi> <mo>]</mo> </mrow> </semantics></math>. Panels (<b>c</b>,<b>d</b>) represent the position of vehicle and yaw measurement in the global frame, respectively, both of these states are estimated using continuous yaw measurement. (<b>a</b>) X–Y global position on yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mi>π</mi> <mo>,</mo> <mi>π</mi> <mo>]</mo> </mrow> </semantics></math>. (<b>b</b>) Yaw sensor measurement. (<b>c</b>) X–Y global position on yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mo>∞</mo> <mo>,</mo> <mo>∞</mo> <mo>]</mo> </mrow> </semantics></math>. (<b>d</b>) Yaw transformed to continuous measurement.</p> ">
Figure 6 Cont.
<p>Estimation comparison between wrapped yaw measurement and unwrap measurement. Panels (<b>a</b>,<b>b</b>) represent the position of vehicle and yaw measurement in the global frame, respectively, both of these states are estimated using yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mi>π</mi> <mo>,</mo> <mi>π</mi> <mo>]</mo> </mrow> </semantics></math>. Panels (<b>c</b>,<b>d</b>) represent the position of vehicle and yaw measurement in the global frame, respectively, both of these states are estimated using continuous yaw measurement. (<b>a</b>) X–Y global position on yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mi>π</mi> <mo>,</mo> <mi>π</mi> <mo>]</mo> </mrow> </semantics></math>. (<b>b</b>) Yaw sensor measurement. (<b>c</b>) X–Y global position on yaw measurement between <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mo>∞</mo> <mo>,</mo> <mo>∞</mo> <mo>]</mo> </mrow> </semantics></math>. (<b>d</b>) Yaw transformed to continuous measurement.</p> ">
Figure 7
<p>Vehicle estimated trajectory in the presence of sensor disturbance during the simulation. As per the controller policy, the vehicle tried to follow two laps of the center-line track.</p> ">
Figure 8
<p>Estimated states on vehicle simulator for two laps.</p> ">
Figure 9
<p>Vehicle estimated trajectory in the presence of sensor disturbance during the real experiment. As per the controller policy, the vehicle tried to follow two laps of the center-line track.</p> ">
Figure 10
<p>Snapshots of the vehicle following the center line of the oval track. The vehicle motion is represented by a sequence of images clockwise from the upper-left to the bottom-left.</p> ">
Figure 11
<p>Estimated states on real vehicle for two laps.</p> ">
Figure 11 Cont.
<p>Estimated states on real vehicle for two laps.</p> ">
Figure 12
<p>Comparison of ground truth and final map obtained. (<b>a</b>) The ground truth image represents the environment where the testing is performed. (<b>b</b>) The final map formed after completion of two laps.</p> ">
Abstract
This paper proposes an optimal approach for state estimation based on a Takagi–Sugeno (TS) Kalman filter using measurement sensors and a rough pose obtained from LIDAR scan end-point matching. To obtain a stable and optimal TS Kalman gain for the estimator design, a linear matrix inequality (LMI), constructed from the Lyapunov stability criterion and the dual linear quadratic regulator (LQR), is optimized. The technique utilizes a TS representation of the system, which allows modeling the complex nonlinear dynamics in such a way that linearization is not required for the estimator or controller design. In addition, the TS fuzzy representation is exploited to obtain a real-time Kalman gain, avoiding the expensive optimization of LMIs at every step. The estimation scheme is integrated with a nonlinear model predictive controller (NMPC) that is in charge of controlling the vehicle. The approach is demonstrated in simulation, and its practical validity is shown using a small-scale autonomous car.

1. Introduction

To address the issues of traffic accidents and congestion, autonomous vehicles provide a promising solution [1]. Research on this technology has progressed immensely, from perception algorithms to vehicle control algorithms. The perception stack exploits the sensor measurements to provide the vehicle state as well as environmental information. The correction of these measurements is crucial to the safety of the vehicle when navigating within the environment. On the other hand, the planner and the controller use this information to plan the motion and achieve the desired trajectory while maintaining objectives such as speed, time to destination, and comfort. For these reasons, filtering the measurements and recovering the unmeasured states are paramount (e.g., the lateral velocity often cannot be measured directly, yet it is important for lateral control). Advanced controllers, such as NMPC, require full state observation, which can be achieved using a state estimator in the loop for full-body control.
The state estimation of an autonomous vehicle involves an algorithm that fuses the raw information provided by the sensors, which is affected by noise, with the system model, which is affected by some uncertainty represented as a disturbance. The estimation algorithm considers the measurement model, the vehicle model, the sensor uncertainty, and the model perturbation to estimate the correct states. The standard approach is based on some form of Kalman filter, originally developed by R. E. Kalman [2]. A Kalman filter provides an optimal estimate in the least-squares sense, assuming that the model perfectly matches the real system and that the noises/disturbances are Gaussian with known covariances. The first assumption is not always true, especially in nonlinear systems with complex dynamics. A variant for nonlinear systems, the extended Kalman filter (EKF), has been developed; it provides a solution by linearizing the system and measurement models around the current state. The linearization is based on a Taylor series expansion, discarding higher-order terms. The linearization process does not preserve the Gaussian distributions of the states and the measurements, which means that optimality is not guaranteed. For systems with a complex nonlinear model, such as a vehicle, the assumption that a linear model obtained by linearization is valid over a small range of operation does not hold. For all these reasons, new methods for extending Kalman filters to nonlinear systems are required.
The TS framework offers a systematic approach to obtain a multi-model system from the original mathematical model of the system; see [3] for more detail. The main feature of this approach is that it allows representing the nonlinear system model as a set of linear system models, the overall TS model being achieved by a polytopic combination of these linear models. This allows extending the LMI design procedures for control and estimation to nonlinear systems represented in polytopic TS form. Moreover, there exists a systematic procedure for generating the TS model from the nonlinear model and approximating it in a polytopic way [4].
In this paper, the state estimation of an autonomous vehicle is developed using an approach alternative to the EKF. Moreover, this approach is combined with a LIDAR scan matching algorithm that provides a rough pose estimate. For LIDAR scan matching, a real-time correlative scan matching algorithm is exploited instead of the iterative closest point (ICP) method [5], which has the drawback of an expensive search for point correspondences in every iteration. As LIDAR scan matching is not within the scope of this paper, more details can be found in [6,7]. Methods such as Gmapping [8], which generally require a large number of particles to obtain good results, lead to higher computational complexity than our approach. In comparison with methods such as LOAM [9], our method fuses the LIDAR and other sensor information with an accurate vehicle model prediction for precise localization. The proposed method can be extended to any degree of freedom (DoF) system if the model is known and additional sensors can be integrated. In our approach, dynamic states, such as the longitudinal velocity and angular velocity, are directly measured from the sensors. Then, the rough pose obtained from the LIDAR scan matching and the direct sensor measurements are corrected using the TS Kalman filter design. A similar work [10] improved the approximation error by utilizing the Takagi–Sugeno model; however, that approach considers a limited and known number of landmarks and does not ensure optimality. Another work [11] utilized a linear parameter varying (LPV) system, which is analogous to the TS form and has shown good performance in simulation across different applications.
To summarize, the contributions of our work are as follows:
  • A TS Kalman filter is developed that does not need linearization at each step, as in the case of the EKF. In addition, the estimator provides a stable and optimal solution.
  • The design of the TS Kalman filter is solved offline using LMIs, while online (in real time) only the interpolation of the TS subsystem gains is required.
  • A case study of a small-scale autonomous vehicle is presented, in which the full state is obtained by correcting the raw measurements from the sensors and from LIDAR scan end-point matching.
  • The whole framework, including the NMPC, is validated on a small computation board (Odroid XU4). The low computational requirements of the framework, especially the SLAM, make it easy to deploy on small robots.
The structure of the paper is the following: Section 2 describes the proposed approach and presents the autonomous vehicle considered as a case study. Section 3 details the SLAM algorithm used for rough pose estimation and mapping. Section 4 presents the proposed TS Kalman filter for the state estimation. Section 5 presents the simulation and experimental results using the considered case study. Finally, Section 6 draws the main conclusions of the paper and proposes future research paths.

2. Proposed Approach

Figure 1 presents the overall architecture and the integration of the different modules of the autonomous system. The proposed pipeline also includes an NMPC controller. The state vector $x = [v_x, v_y, \omega, X, Y, \theta]^T$ and the control input $u = [D, \delta]^T$ are described in Table 1. The NMPC controller uses the error model of the system, so the estimated states $[\hat{X}, \hat{Y}, \hat{\theta}]^T$ are converted to the error states $[\hat{\theta}_e, \hat{s}, \hat{y}_e]^T$ using the offline planner; details can be found in [12]. For the state measurements, an inertial measurement unit (IMU), a motor encoder, and the scan end-point matching module are utilized. The mapping module is dedicated to mapping the environment using the scanned points and the estimated pose of the vehicle. The previous map (occupancy grid) is queried by the scan-matching module to roughly estimate the relative position of the current scan. Finally, the estimator module is responsible for correcting the uncertainty in the measurements and in the scan matching, as well as for estimating the hidden state $v_y$.
To keep the SLAM algorithm general, i.e., independent of the number of landmarks and of the environment, the rough pose estimation is based on LIDAR scan end-point matching by exploiting a Gauss–Newton optimization process. This matching approach allows estimating the pose without any prior information about the environment or landmarks. The scan end-point matching is similar to the work carried out in [6] and is presented in Section 3. In this development, the LIDAR's scan end-points are exploited for observing the environment. Alternatively, a camera could be used for the rough localization, using either a feature matching technique [13] or a deep learning technique [14]. Then, to correct the rough pose from the scan matching and the acquired sensor measurements, a TS modeling approach is considered to design an optimal and stable Kalman estimator. The whole design procedure consists of the following steps:
1. Derive a TS model from the nonlinear model, embedding the nonlinear terms inside the system matrices.
2. Obtain the polytopic systems and derive the fuzzy representation of the TS model [4].
3. Formulate LMIs for Lyapunov stability and the optimal dual LQR for all the obtained polytopes (system vertices) of the fuzzy model to obtain the offline Kalman gains.
4. Exploit the fuzzy interpolation technique to find the online gain for the TS Kalman filter.
5. Apply the online TS Kalman gain for the state estimation.
Steps 3 to 5 are presented thoroughly in Section 4 of this paper, while the remaining steps are detailed in this section.
Formalizing the TS model for the LMIs and the fuzzy gain interpolation technique is based on fuzzy theory [4,15], which offers a systematic approach to obtain a multi-model system from the original mathematical model of the system. The main feature of this model is to express the local dynamics of each fuzzy implication by a set of linear system models, the overall fuzzy model of the system being achieved by the fuzzy blending of these linear models. It has been proved that TS fuzzy models are universal approximators, able to approximate any smooth nonlinear system to a high degree of precision [16].

2.1. Considered Autonomous Vehicle

To validate the proposed approach, a case based on a small-scale autonomous car is used. The states of the car have to be estimated using available sensors, and, further, this information will be exploited by the full-body controller.
The dynamic states, such as the longitudinal velocity, can be roughly estimated using the radius and RPM of the rear wheel measured by the motor encoder. The angular velocity can be measured by an IMU sensor. The rough position of the vehicle can be obtained using the scan end-point matching algorithm, and the orientation from the IMU. Figure 1 shows the high-level scheme of the proposed pipeline, which includes a model predictive controller (MPC) in the loop. We do not provide further details about the MPC algorithm, as it is not the focus of this work; we assume that the control of the car and the track plan are available.
As the estimator is based on a system model, the dynamics of the system are derived using the bicycle model [17] representation shown in Figure 2. The vehicle model includes kinematic as well as dynamic equations, represented by:
$$
\begin{aligned}
\dot{v}_x &= \tfrac{1}{m}\left(F_{rx} - F_{flat}\sin(\delta) + m\,v_y\,\omega\right)\\
\dot{v}_y &= \tfrac{1}{m}\left(F_{flat}\cos(\delta) + F_{ry} - m\,v_x\,\omega\right)\\
\dot{\omega} &= \tfrac{1}{I_z}\left(l_f F_{flat}\cos(\delta) - l_r F_{ry}\right)\\
\dot{X} &= v_x\cos(\theta) - v_y\sin(\theta)\\
\dot{Y} &= v_x\sin(\theta) + v_y\cos(\theta)\\
\dot{\theta} &= \omega
\end{aligned}
\tag{1}
$$
where $v_x$, $v_y$, and $\omega$ are the longitudinal, lateral, and angular velocities of the vehicle, respectively, and $X$, $Y$, $\theta$ are the global pose of the vehicle in a fixed inertial frame. The available sensors are able to measure, directly or indirectly, the states $[v_x, \omega, X, Y, \theta]$ with some error. The lateral velocity $v_y$ cannot be measured and will be estimated. $F_{rx}$ is the longitudinal force on the rear wheel, which consists of the motor force, the driveline resistance, and the drag force. $F_{flat}$ and $F_{ry}$ are the lateral tire forces on the front and rear wheels, respectively, obtained by simplifying the Pacejka tire model [18]. The longitudinal force on the front wheel is considered negligible, since no brake or drive torque is applied to it. The forces are given by
$$F_{rx} = (C_{m0} - C_{m1} v_x)\,D - C_0 v_x - C_1 - \tfrac{1}{2} C_D A\, \rho\, v_x^2 \tag{2}$$

$$F_{flat} = 2 C_{af}\left(\delta - \arctan\!\left(\frac{v_y + l_f \dot{\theta}}{v_x}\right)\right) \tag{3}$$

$$F_{ry} = -2 C_{ar} \arctan\!\left(\frac{v_y - l_r \dot{\theta}}{v_x}\right) \tag{4}$$
where $\delta$ and $D$ are the two control inputs: the steering angle and the duty cycle, respectively. Some of the system parameters are obtained by physical measurement, and the remaining ones by least-squares optimization. The obtained values and descriptions of the parameters are listed in Table 2.
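For concreteness, the nonlinear model (1)–(4) can be written as a short function. The following is a minimal numpy sketch (function and variable names are ours), using the parameter values of Table 2 and substituting $\dot{\theta} = \omega$ into the tire-force arguments:

```python
import numpy as np

# Vehicle parameters from Table 2.
m, lf, lr, Iz = 2.424, 0.1377, 0.1203, 0.02
Cm0, Cm1, C0, C1 = 9.4685, 0.6672, 2.6104, 0.00213
CDA, rho, Caf, Car = 0.466, 1.225, 1.2354, 1.4532

def dynamics(x, u):
    """Continuous-time bicycle model (1)-(4); x = [vx, vy, w, X, Y, theta]."""
    vx, vy, w, X, Y, theta = x
    D, delta = u
    # Drive and tire forces, Equations (2)-(4); vx must be nonzero
    # (see the bias added in the observability analysis below).
    Frx = (Cm0 - Cm1 * vx) * D - C0 * vx - C1 - 0.5 * CDA * rho * vx ** 2
    Fflat = 2.0 * Caf * (delta - np.arctan((vy + lf * w) / vx))
    Fry = -2.0 * Car * np.arctan((vy - lr * w) / vx)
    return np.array([
        (Frx - Fflat * np.sin(delta) + m * vy * w) / m,  # vx_dot
        (Fflat * np.cos(delta) + Fry - m * vx * w) / m,  # vy_dot
        (lf * Fflat * np.cos(delta) - lr * Fry) / Iz,    # w_dot
        vx * np.cos(theta) - vy * np.sin(theta),         # X_dot
        vx * np.sin(theta) + vy * np.cos(theta),         # Y_dot
        w,                                               # theta_dot
    ])
```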

2.1.1. Construction of TS Model

The goal is to derive a TS model from the nonlinear vehicle dynamics (1) such that the response of the TS model exactly matches that of the nonlinear system within a specified domain for the same input $u$. To form the TS model, the varying nonlinear terms in the equations are set as fuzzy, or scheduling, variables ($\vartheta_i(t)$).
The set of fuzzy (or scheduling) variables is represented in Table 3. These are the state and control variables on which the nonlinear dynamics are dependent. These variables represent the physical limit of the vehicle, for example, the maximum and minimum longitudinal velocities or maximum and minimum steering angle that a vehicle can achieve.
From the nonlinear model (1), the TS model (5) is obtained by embedding the scheduling variables inside the matrices:
$$
\underbrace{\begin{bmatrix}\dot{v}_x\\\dot{v}_y\\\dot{\omega}\\\dot{X}\\\dot{Y}\\\dot{\theta}\end{bmatrix}}_{\dot{x}}
=
\underbrace{\begin{bmatrix}
A_{11} & A_{12} & 0 & 0 & 0 & 0\\
A_{21} & A_{22} & 0 & 0 & 0 & 0\\
A_{31} & 0 & 0 & 0 & 0 & 0\\
A_{41} & A_{42} & 0 & 0 & 0 & 0\\
A_{51} & A_{52} & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}}_{A(\vartheta)}
\underbrace{\begin{bmatrix}v_x\\v_y\\\omega\\X\\Y\\\theta\end{bmatrix}}_{x}
+
\underbrace{\begin{bmatrix}
B_{11} & B_{12}\\
0 & B_{22}\\
0 & B_{32}\\
0 & 0\\
0 & 0\\
0 & 0
\end{bmatrix}}_{B(\vartheta)}
\underbrace{\begin{bmatrix}D\\\delta\end{bmatrix}}_{u}
\tag{5}
$$

where $A_{i,j}$ and $B_{i,j}$ are the varying sub-elements of the matrices $A(\vartheta)$ and $B(\vartheta)$, respectively, and can be expressed as

$$
\begin{aligned}
A_{11} &= -\tfrac{1}{m}\left(C_0 + \tfrac{C_1}{v_x} + \tfrac{C_D A \rho v_x}{2}\right), & A_{12} &= \omega, \qquad A_{21} = -\omega,\\
A_{22} &= -\tfrac{2 C_{ar}}{m\,v_y}\arctan\!\left(\tfrac{v_y - l_r\dot{\theta}}{v_x}\right), & A_{31} &= \tfrac{2 C_{ar} l_r}{v_x I_z}\arctan\!\left(\tfrac{v_y - l_r\dot{\theta}}{v_x}\right),\\
A_{41} &= \cos(\theta), \qquad A_{42} = -\sin(\theta), & A_{51} &= \sin(\theta), \qquad A_{52} = \cos(\theta),\\
B_{11} &= \tfrac{1}{m}\left(C_{m0} - C_{m1} v_x\right), & B_{12} &= -\tfrac{2 C_{af}}{m\,\delta}\left(\delta - \arctan\!\left(\tfrac{v_y + l_f\dot{\theta}}{v_x}\right)\right)\sin(\delta),\\
B_{22} &= \tfrac{2 C_{af}}{m\,\delta}\left(\delta - \arctan\!\left(\tfrac{v_y + l_f\dot{\theta}}{v_x}\right)\right)\cos(\delta), & B_{32} &= \tfrac{2 C_{af} l_f}{I_z\,\delta}\left(\delta - \arctan\!\left(\tfrac{v_y + l_f\dot{\theta}}{v_x}\right)\right)\cos(\delta)
\end{aligned}
\tag{6}
$$
Each scheduling variable can be further represented through a pair of membership functions, as follows:

$$\vartheta_i(t) = \mu_i^1(\vartheta_i(t))\cdot\underline{\vartheta}_i + \mu_i^2(\vartheta_i(t))\cdot\overline{\vartheta}_i, \quad i \in \{1, 2, \dots, 5\} \tag{7}$$

$$\mu_i^1(\vartheta_i(t)) + \mu_i^2(\vartheta_i(t)) = 1 \tag{8}$$

where $\underline{\vartheta}_i$ and $\overline{\vartheta}_i$ are the minimum and maximum values of the scheduling variable, respectively, and the $\mu_i^j(\vartheta_i(t))$ are the fuzzy sets, with two membership functions per scheduling variable, giving $2^5 = 32$ combinations in total. For each combination, a system vertex (polytopic system) represented by $A_i x(t) + B_i u(t)$ can be obtained, and by applying the fuzzy rules, the nonlinear system can be modeled as:
$$\dot{x}(t) = \sum_{i=1}^{32} h_i(\vartheta(t))\left\{A_i x(t) + B_i u(t)\right\} \tag{9}$$
where

$$\vartheta(t) = \left[\vartheta_1(t), \vartheta_2(t), \vartheta_3(t), \vartheta_4(t), \vartheta_5(t)\right] \tag{10}$$

and

$$h_i(\vartheta(t)) = \mu_1^{j_1}(\vartheta_1(t))\cdot \mu_2^{j_2}(\vartheta_2(t))\cdot \mu_3^{j_3}(\vartheta_3(t))\cdot \mu_4^{j_4}(\vartheta_4(t))\cdot \mu_5^{j_5}(\vartheta_5(t)), \quad j_k \in \{1, 2\} \tag{11}$$

with each index $i$ corresponding to one of the 32 combinations $(j_1, \dots, j_5)$.
The polytopic system representation will be used to formulate the LMIs to obtain offline gain in Section 4.1. Additionally, the fuzzy representation (9) of the TS model will also allow the computation of online gain using the interpolation technique without optimizing the LMIs at each time step.
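To make the blending concrete, a minimal sketch of Equations (7), (8), and (11) follows, using the bounds of Table 3; the vertex enumeration order is our own choice and must simply be kept consistent with the order in which the vertex systems are generated:

```python
import itertools
import numpy as np

# Scheduling-variable bounds from Table 3: [vx, vy, w, theta, delta].
lo = np.array([-5.0, -3.0, -1.5, -np.pi, -0.35])
hi = np.array([5.0, 3.0, 1.5, np.pi, 0.35])

def memberships(theta_sched):
    """Membership grades of Equations (7)-(8) for each scheduling variable."""
    mu2 = (np.asarray(theta_sched) - lo) / (hi - lo)  # weight on upper vertex
    return 1.0 - mu2, mu2

def blending_weights(theta_sched):
    """The 32 weights h_i of Equation (11), one per vertex combination."""
    mu1, mu2 = memberships(theta_sched)
    h = []
    for combo in itertools.product((0, 1), repeat=5):  # 2^5 = 32 vertices
        h.append(np.prod([mu1[k] if j == 0 else mu2[k]
                          for k, j in enumerate(combo)]))
    return np.array(h)  # non-negative weights that sum to 1
```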

2.1.2. Measurement Model

The rough position of the vehicle is obtained from the scan matching algorithm, and other information is directly obtained from various sensors listed in Table 4.
The discretized state-space measurement model for the system is defined by $y_k = C x_k + D u_k + E_v v_k$, where $v_k \in \mathbb{R}^{n_y}$ is the measurement noise with $n_y = 5$, and $E_v$ is the noise distribution matrix of appropriate dimension. $D = 0_{5 \times 2}$ is taken, as there is no direct feed-through from the control input. The matrix $C$ relates the system state to the output measurement $y_k$:

$$E_v = I_5, \qquad C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \tag{12}$$

The noise vector $v_k$ collects the noise of each sensor, with covariances $[\sigma_{v_x}^2, \sigma_{\omega}^2, \sigma_X^2, \sigma_Y^2, \sigma_\theta^2]^T$ as listed in Table 4.

2.1.3. Observability Analysis

Before designing the estimator, the observability of Equation (5) needs to be analyzed for singularities. The model has $n_x = 6$ state variables, and the observability matrix is defined by:

$$\mathcal{O} = \begin{bmatrix} C^T & (C A_d)^T & (C A_d^2)^T & \cdots & (C A_d^{\,n_x - 1})^T \end{bmatrix}^T \tag{13}$$

If the rank of the observability matrix equals $n_x$, the system is observable. To check the observability, Equation (5) is discretized using the Euler approximation $A_d = I + A \cdot \Delta t_s$. During the analysis, the observability matrix $\mathcal{O}$ loses rank at certain states, precisely when any of $v_x$, $v_y$, or $\delta$ equals zero. To resolve this issue, whenever one of these variables reaches zero, a small bias $\epsilon = 0.0001$ is added to it.
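A numerical rank test of Equation (13) can be sketched as follows; `observability_rank` and `debias` are illustrative names of ours, and the bias handling mirrors the $\epsilon$ described above:

```python
import numpy as np

def observability_rank(Ad, C):
    """Rank of the observability matrix (13) for the discretized system."""
    nx = Ad.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(Ad, k) for k in range(nx)])
    return np.linalg.matrix_rank(O)  # observable iff this equals nx

def debias(v, eps=1e-4):
    """Add the small bias used above when vx, vy, or delta is exactly zero."""
    return v + eps if v == 0.0 else v
```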

3. SLAM Algorithm

The LIDAR measurements provide environment information through an occupancy grid probability map built using Bayesian inference, which is later converted to occupancy grids for detecting obstacles and free zones within the map. Using the occupancy probability map at time $\{t\}$ and the previous probability maps $\{t-1:0\}$, a robust matching is performed via Gauss–Newton optimization to obtain the relative pose. This approach allows mapping and pose estimation without any prior information about the environment or landmarks. Later on, this pose and map are corrected using the TS estimator, since the matching is corrupted by the LIDAR measurement error. For robust matching between occupancy probability maps, the probability is accessed at sub-grid level for sub-grid-level accuracy [6]. The algorithm has two main steps: (a) mapping the obtained map at sub-grid-level accuracy, and (b) localization using the optimization process.

3.1. Mapping

The map used for matching is developed using an interpolation scheme to overcome the discrete nature of the map through bilinear filtering, which allows sub-grid-level cell accuracy for estimating occupancy probabilities and derivatives. The occupancy value $M(P_m)$ can be approximated by linear interpolation of a scan end-point $P_m$, as in Figure 3, in continuous map coordinates:

$$
\begin{aligned}
M(P_m) \approx\ & \frac{Y - Y_0}{Y_1 - Y_0}\left(\frac{X - X_0}{X_1 - X_0} M(P_{11}) + \frac{X_1 - X}{X_1 - X_0} M(P_{01})\right)\\
&+ \frac{Y_1 - Y}{Y_1 - Y_0}\left(\frac{X - X_0}{X_1 - X_0} M(P_{10}) + \frac{X_1 - X}{X_1 - X_0} M(P_{00})\right)
\end{aligned}
\tag{14}
$$

where $P_{00}$, $P_{01}$, $P_{10}$, and $P_{11}$ are the closest integer point coordinates in the grid map. The partial derivatives along the X and Y axes can be approximated by:

$$
\begin{aligned}
\frac{\partial M}{\partial X}(P_m) &\approx \frac{Y - Y_0}{Y_1 - Y_0}\left(M(P_{11}) - M(P_{01})\right) + \frac{Y_1 - Y}{Y_1 - Y_0}\left(M(P_{10}) - M(P_{00})\right)\\
\frac{\partial M}{\partial Y}(P_m) &\approx \frac{X - X_0}{X_1 - X_0}\left(M(P_{11}) - M(P_{10})\right) + \frac{X_1 - X}{X_1 - X_0}\left(M(P_{01}) - M(P_{00})\right)
\end{aligned}
\tag{15}
$$
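On an integer grid ($X_1 - X_0 = Y_1 - Y_0 = 1$), Equations (14) and (15) reduce to the following sketch; the map indexing convention `M[x, y]` is an assumption of this illustration:

```python
import numpy as np

def map_value_and_gradient(M, p):
    """Bilinear occupancy interpolation, Equations (14)-(15).
    M: 2D array of occupancy probabilities indexed as M[x, y];
    p: continuous map coordinate (X, Y) in grid units."""
    x0, y0 = int(np.floor(p[0])), int(np.floor(p[1]))
    dx, dy = p[0] - x0, p[1] - y0
    m00, m01 = M[x0, y0], M[x0, y0 + 1]          # P00, P01
    m10, m11 = M[x0 + 1, y0], M[x0 + 1, y0 + 1]  # P10, P11
    val = (dy * (dx * m11 + (1 - dx) * m01)
           + (1 - dy) * (dx * m10 + (1 - dx) * m00))
    dM_dX = dy * (m11 - m01) + (1 - dy) * (m10 - m00)
    dM_dY = dx * (m11 - m10) + (1 - dx) * (m01 - m00)
    return val, np.array([dM_dX, dM_dY])
```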

3.2. Localization

The localization algorithm is based on the matching of the scan end-points shown in Figure 4 with the existing maps to find the rigid transformation $\xi = (X, Y, \theta)^T$ that minimizes

$$\xi^* = \arg\min_{\xi} \sum_{i=1}^{n} \left[1 - M(S_i(\xi))\right]^2 \tag{16}$$
where S i ( ξ ) is the location of scan end-points in the world coordinate frame, given by:
$$S_i(\xi) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix}\begin{bmatrix}s_{i,x}\\ s_{i,y}\end{bmatrix} + \begin{bmatrix}X\\ Y\end{bmatrix} \tag{17}$$
where $s_i = (s_{i,x}, s_{i,y})^T$ is the location of the end-points in the LIDAR coordinate frame, and $X$, $Y$, and $\theta$ are the pose of the LIDAR in the world coordinate frame. The function $M(S_i(\xi))$ returns the map value at the coordinate given by $S_i(\xi)$. Suppose the robot starts at an initial pose $\xi$ and a control input moves it by $\Delta\xi$. At pose $\xi$, the map is built from the measurement, and at $\xi + \Delta\xi$, a new map is registered. To estimate $\Delta\xi$, the optimizer minimizes the error according to:

$$\sum_{i=1}^{n}\left[1 - M(S_i(\xi + \Delta\xi))\right]^2 \rightarrow 0 \tag{18}$$
Solving for $\Delta\xi$ by using a first-order Taylor expansion of $M(S_i(\xi + \Delta\xi))$ and setting the partial derivative with respect to $\Delta\xi$ to zero yields the Gauss–Newton minimization equation:

$$\Delta\xi = H^{-1} \sum_{i=1}^{n} \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right]^T \left[1 - M(S_i(\xi))\right] \tag{19}$$

where

$$H = \sum_{i=1}^{n}\left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right]^T \left[\nabla M(S_i(\xi)) \frac{\partial S_i(\xi)}{\partial \xi}\right] \tag{20}$$

and, from Equation (17), the gradient $\frac{\partial S_i(\xi)}{\partial \xi}$ can be represented as

$$\frac{\partial S_i(\xi)}{\partial \xi} = \begin{bmatrix}1 & 0 & -\sin(\theta)\, s_{i,x} - \cos(\theta)\, s_{i,y}\\ 0 & 1 & \cos(\theta)\, s_{i,x} - \sin(\theta)\, s_{i,y}\end{bmatrix} \tag{21}$$
The map gradient $\nabla M(S_i(\xi))$ can be approximated from Equation (15). The obtained pose might be inaccurate due to sensor noise and errors, which need to be corrected before building the map. The estimator discussed in the next section will be used to correct this pose. The corrected pose is applied to Equation (17) to relocate the scan end-points $S_i$, which eventually updates the occupancy probability grid map.
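Putting Equations (17) and (19)–(21) together, one Gauss–Newton iteration can be sketched as below; `M_lookup` stands for the interpolation routine of the mapping sketch, and a damping or regularization of H may be needed in practice:

```python
import numpy as np

def gauss_newton_step(xi, scan, M_lookup):
    """One pose update, Equations (19)-(20); xi = (X, Y, theta),
    scan = (n, 2) array of end-points s_i in the LIDAR frame."""
    X, Y, th = xi
    c, s = np.cos(th), np.sin(th)
    H = np.zeros((3, 3))
    g = np.zeros(3)
    for sx, sy in scan:
        Si = np.array([c * sx - s * sy + X, s * sx + c * sy + Y])  # Eq. (17)
        m, dM = M_lookup(Si)
        J = np.array([[1.0, 0.0, -s * sx - c * sy],   # dSi/dxi, Eq. (21)
                      [0.0, 1.0, c * sx - s * sy]])
        row = dM @ J                 # grad M times dSi/dxi, a 1x3 row
        H += np.outer(row, row)      # Equation (20)
        g += row * (1.0 - m)         # right-hand side of Equation (19)
    return xi + np.linalg.solve(H, g)  # xi + Delta xi
```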

4. Takagi–Sugeno Kalman Filter Design

For the development of the proposed estimator design, first, in Section 4.1, the LMIs for the TS Kalman filter design are formulated. Second, in Section 4.2, the measurement noise and system perturbation matrices are provided. Finally, the implementation of the proposed approach and its improvement are discussed in Section 4.3.

4.1. LMI Design Procedure

The following Kalman filter is used to obtain the estimate $\hat{x}$:

$$\dot{\hat{x}}(t) = \left(A(\vartheta) - L(\vartheta)\, C\right)\hat{x} + \left(B(\vartheta) - L(\vartheta)\, D\right)u + L(\vartheta)\, y$$

where $A(\vartheta)$ and $B(\vartheta)$ are obtained from the TS model (5), $L(\vartheta)$ is the online Kalman gain, and $D = 0_{5 \times 2}$. The above continuous Kalman estimator is discretized for implementation on a real-time system. The aim is now to find the optimal $L(\vartheta)$ whose estimate converges to the ground truth in the presence of sensor noise and system disturbance. To obtain the optimal Kalman gain $L(\vartheta)$, first, LMIs are optimized offline to obtain the gains $L^{off}$. Second, the obtained $L^{off}$ are interpolated in real time to obtain $L(\vartheta)$ by exploiting relation (9).
To formulate the LMIs for the discretized system, the continuous fuzzy model, Equation (9), is discretized using the Euler approximation with sampling time $\Delta t_s$:

$$x(k+1) = x(k) + \sum_{i=1}^{32} h_i(\vartheta(t))\left\{A_i x(k) + B_i u(k)\right\}\Delta t_s = \sum_{i=1}^{32} h_i(\vartheta(t))\Big\{\underbrace{(I + A_i \Delta t_s)}_{A_{di}}\, x(k) + \underbrace{(B_i \Delta t_s)}_{B_{di}}\, u(k)\Big\} \tag{24}$$
The LQR optimal control objective is to find a state feedback control $u(k) = -K x(k)$, where $K$ is the feedback gain, that minimizes the following objective function:

$$J = \sum_{k=0}^{\infty}\left[x^T(k)\, Q\, x(k) + u^T(k)\, R\, u(k)\right] \tag{25}$$
Introducing a Lyapunov function for each vertex of the polytopic model:

$$V(x(k)) = x^T(k)\, P\, x(k) \tag{26}$$

that satisfies the following conditions:

$$V(x_0) < \gamma \tag{27}$$

$$V(x(k+1)) - V(x(k)) + x^T(k)\, Q\, x(k) + u^T(k)\, R\, u(k) < 0 \tag{28}$$
Summing Equation (28) over $k$ and substituting $u(k) = -K x(k)$ yields

$$\sum_{k=0}^{\infty}\left[x^T(k)\, Q\, x(k) + u^T(k)\, R\, u(k)\right] < V(x_0) = x_0^T P x_0 < \gamma \tag{29}$$
The above equation ensures that the LQR objective function is bounded by the optimal value $\gamma$ for all the polytopes. The increment of the Lyapunov function (26) is:

$$\Delta V(k+1) = x^T(k)\, A_d(\vartheta)^T P A_d(\vartheta)\, x(k) - x^T(k)\, P\, x(k) \tag{30}$$
For the closed-loop system, the Lyapunov increment becomes

$$\Delta V(k+1) = x^T(k)\, (A_d - B_d K)^T P (A_d - B_d K)\, x(k) - x^T(k)\, P\, x(k) \tag{31}$$
The previous inequalities, Equations (29) and (31), can be rewritten as follows:

$$x_0^T P x_0 < \gamma \tag{32}$$

$$(A_d - B_d K)^T P (A_d - B_d K) - P + Q + K^T R K < 0 \tag{33}$$
Applying the duality principle $A_d \rightarrow A_d^T$, $B_d \rightarrow C_d^T$, $K \rightarrow L^T$ [19] to the LQR inequality, Equation (33), results in:

$$(A_d^T - C_d^T L^T)^T P (A_d^T - C_d^T L^T) - P + Q + L R L^T < 0 \tag{34}$$
Applying some algebraic manipulations to Equations (32) and (34), the changes of variables ($Q = H^T H$, $Y = P^{-1}$, $W = L^T Y$), and the Schur complement leads to the following LMI optimization:

$$
\begin{bmatrix}\gamma I & I\\ I & Y\end{bmatrix} > 0, \qquad
\begin{bmatrix}
-Y & Y A_d - W^T C_d & Y H^T & W^T\\
A_d^T Y - C_d^T W & -Y & 0 & 0\\
H Y & 0 & -I & 0\\
W & 0 & 0 & -R^{-1}
\end{bmatrix} < 0
\tag{35}
$$
The optimal gain can be found as $L = (W Y^{-1})^T$. Solving this optimization at every time step, due to the varying system matrices ($A_d$, $B_d$), would be computationally expensive and slow. Instead, we exploit the method developed in Section 2.1.1: any nonlinear system can be represented in the form (24), which means the offline gains can be found at the system vertices (polytopes) and later interpolated online using the fuzzy membership grade function (11) for the TS model (5). Therefore, for the 5 scheduling variables, 32 system matrices are obtained at the polytopes, resulting in 32 LMIs and 32 offline gains. The gains obtained from solving the LMIs are denoted $L_i^{off}$, $i \in [1, \dots, 32]$. All the steps for obtaining the LMIs are given in Algorithm 1.
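As an illustration, the offline step for a single vertex can be posed in cvxpy as below. This is a sketch under our reconstruction of (35): the solver choice, the strictness margin `eps`, and the treatment of each vertex as an independent problem (rather than sharing a common Y across vertices) are assumptions of this example.

```python
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

def offline_gain(Ad, Cd, Q, R, eps=1e-6):
    """Solve the LMIs (35) for one polytopic vertex (Ad, Cd)."""
    nx, ny = Ad.shape[0], Cd.shape[0]
    H = np.real(sqrtm(Q))                 # Q = H^T H
    Rinv = np.linalg.inv(R)
    Y = cp.Variable((nx, nx), symmetric=True)
    W = cp.Variable((ny, nx))             # W = L^T Y
    gam = cp.Variable(nonneg=True)
    M12 = Y @ Ad - W.T @ Cd
    lmi1 = cp.bmat([[gam * np.eye(nx), np.eye(nx)],
                    [np.eye(nx), Y]])
    lmi2 = cp.bmat([
        [-Y, M12, Y @ H.T, W.T],
        [M12.T, -Y, np.zeros((nx, nx)), np.zeros((nx, ny))],
        [H @ Y, np.zeros((nx, nx)), -np.eye(nx), np.zeros((nx, ny))],
        [W, np.zeros((ny, nx)), np.zeros((ny, nx)), -Rinv],
    ])
    prob = cp.Problem(cp.Minimize(gam),
                      [lmi1 >> eps * np.eye(2 * nx),
                       lmi2 << -eps * np.eye(3 * nx + ny)])
    prob.solve(solver=cp.SCS)
    return (W.value @ np.linalg.inv(Y.value)).T  # L = (W Y^{-1})^T
```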

4.2. Disturbance and Noise Matrix

The disturbance of the vehicle model and the sensor noise are modeled by zero-mean Gaussian distributions whose covariances are obtained experimentally. The model disturbance is identified experimentally, while some of the sensor noise values are taken from the manufacturers' data sheets. The scan matching covariance is obtained via a sampling-based covariance estimate; alternatively, a method based on the Hessian matrix can be used [20]. The disturbance of the model is considered to be higher than the measurement noise to compensate for dynamics left uncaptured during parameter estimation. The following covariances were set for the experiment, for the disturbance ($Q$) and noise ($R$) matrices, respectively:

$$Q = \mathrm{diag}(0.15,\ 0.05,\ 0.15,\ 0.25,\ 0.25,\ 0.1)$$

$$R = \mathrm{diag}(0.04,\ 0.0187,\ 0.0225,\ 0.0225,\ 0.01)$$

4.3. Switching Estimator Design

During the experimental phase, the scheduling variable $\theta$ over $[-\pi, 0]$ does not yield a good approximation of the nonlinear function. This can be seen by substituting values into the elements $A_{41}$, $A_{42}$, $A_{51}$, and $A_{52}$ of Equation (5), taking only these sub-elements:

$$A_{4:5,1:2}(\theta) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix}$$

$$A_{4:5,1:2}(-\pi) = \begin{bmatrix}-1 & 0\\ 0 & -1\end{bmatrix}, \qquad A_{4:5,1:2}(0) = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$$

These polytopic vertices capture the minimum and maximum of the $\cos$ terms, but not of the $\sin$ terms, which vanish at both vertices. To model this effect accurately, the $\theta$ scheduling variable is defined per quadrant, $\vartheta_4^j \in \{0, \tfrac{\pi}{2}, \pi, -\tfrac{\pi}{2}\}$, $j \in [1, \dots, 4]$. This improvement allows the system vertices to reach all the possible values:

$$A_{4:5,1:2}(0) = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}, \qquad A_{4:5,1:2}\!\left(\tfrac{\pi}{2}\right) = \begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}$$

$$A_{4:5,1:2}(\pi) = \begin{bmatrix}-1 & 0\\ 0 & -1\end{bmatrix}, \qquad A_{4:5,1:2}\!\left(-\tfrac{\pi}{2}\right) = \begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}$$
For the implementation of this system, the four quadrant sectors are considered, and the LMIs for all the quadrants are computed offline, following the steps in Algorithm 1.
Algorithm 1: Offline Gain Optimization Algorithm
1. Define the maximum and minimum scheduling variables for the four sectors:
   (a) $[[\underline{\vartheta}_1, \overline{\vartheta}_1], [\underline{\vartheta}_2, \overline{\vartheta}_2], [\underline{\vartheta}_3, \overline{\vartheta}_3], [\underline{\vartheta}_{41}, \overline{\vartheta}_{41}], [\underline{\vartheta}_5, \overline{\vartheta}_5]]$
   (b) $[[\underline{\vartheta}_1, \overline{\vartheta}_1], [\underline{\vartheta}_2, \overline{\vartheta}_2], [\underline{\vartheta}_3, \overline{\vartheta}_3], [\underline{\vartheta}_{42}, \overline{\vartheta}_{42}], [\underline{\vartheta}_5, \overline{\vartheta}_5]]$
   (c) $[[\underline{\vartheta}_1, \overline{\vartheta}_1], [\underline{\vartheta}_2, \overline{\vartheta}_2], [\underline{\vartheta}_3, \overline{\vartheta}_3], [\underline{\vartheta}_{43}, \overline{\vartheta}_{43}], [\underline{\vartheta}_5, \overline{\vartheta}_5]]$
   (d) $[[\underline{\vartheta}_1, \overline{\vartheta}_1], [\underline{\vartheta}_2, \overline{\vartheta}_2], [\underline{\vartheta}_3, \overline{\vartheta}_3], [\underline{\vartheta}_{44}, \overline{\vartheta}_{44}], [\underline{\vartheta}_5, \overline{\vartheta}_5]]$
2. $A_{ij}, B_{ij}$, $i \in [1, 2, \dots, 32]$, $j \in [1, 2, \dots, 4]$ ← using Equation (5), obtain the 32 polytopic systems for each scheduling-variable set (a)–(d);
3. $A_{dij}, B_{dij}$ ← discretize using the sampling time ($\Delta t$);
4. $L_{ij}^{off}$ ← optimize the formulated LMIs, Equation (35).
The estimator gains $L_{ij}^{off}$, $j \in [1, \dots, 4]$, are obtained for each quadrant, and the particular gains are applied according to the region in which the previous yaw estimate ($\hat{\theta}_{wrap}$) lies. For example, in Figure 5, $\theta$ lies in the first quadrant, so the $L_{i1}^{off}$ gains are weighted using the TS fuzzy interpolation function, depending on the scheduling variable over $[0, \tfrac{\pi}{2}]$, to obtain the current online gain. Likewise, for each quadrant, the $L_{ij}^{off}$ are interpolated for the current states using the membership functions. Algorithm 2 presents the fuzzy gain interpolation scheme; a code sketch follows after it.
Algorithm 2: Gain Interpolation Algorithm
  • L_interpolation($\vartheta(k)$, $L_{ij}^{off}$);
  • Input: scheduling variables $\vartheta(k)$
  • Output: interpolated gain $L$
  • $\overline{\vartheta}_i, \underline{\vartheta}_i$ ← load the scheduling-variable bounds;
  • $\mu_i^1(k), \mu_i^2(k)$ ← compute the fuzzy sets for all the $\vartheta(k)$ using Equations (7) and (8);
  • $h_i(\vartheta(k))$ ← compute the fuzzy interpolation function using Equation (11);
  • $L \leftarrow \sum_{i=1}^{32} h_i(\vartheta(k))\, L_{ij}^{off}$.
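A compact sketch of Algorithm 2 follows; the vertex enumeration must match the order in which the 32 offline gains were generated, and `lo`/`hi` are the scheduling-variable bounds of the active quadrant (both assumptions of this illustration):

```python
import itertools
import numpy as np

def interpolate_gain(theta_sched, L_off_quadrant, lo, hi):
    """Algorithm 2: blend the 32 offline gains of the active quadrant
    with the TS weights h_i of Equation (11)."""
    mu2 = (np.asarray(theta_sched) - lo) / (hi - lo)  # Equations (7)-(8)
    mu1 = 1.0 - mu2
    L = np.zeros_like(L_off_quadrant[0])
    for i, combo in enumerate(itertools.product((0, 1), repeat=len(lo))):
        h_i = np.prod([mu1[k] if j == 0 else mu2[k]
                       for k, j in enumerate(combo)])
        L += h_i * L_off_quadrant[i]
    return L
```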
The overall state estimation procedure is given in Algorithm 3. The wrap() function wraps the $\theta$ value to $[-\pi, \pi]$, and the unwrap() function converts $\theta$ to a continuous value. This is required because the yaw measurement from the sensor lies in $[-\pi, \pi]$ and is therefore discontinuous. The measurement discontinuity results in wrong estimates due to the lagging or leading effect of the measurement, as depicted in Figure 6. Due to the abrupt change in yaw measurement in Figure 6b, the robot position is wrongly estimated in Figure 6a. It can be seen in Figure 6b that, at around 12.5 s, the estimated yaw slightly leads the measurement; the estimator then tries to track the measurement data, which results in the full turn of the robot at location [1.8, 4]. The same phenomenon occurs at 19 s, when the estimated yaw lags behind the measurement data. Removing the discontinuity with the unwrap() function provides an accurate estimation of all the states, as shown in Figure 6c,d.
Algorithm 3: State Estimation Algorithm
(The pseudocode of Algorithm 3 is presented as an image in the original article.)
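Since the pseudocode itself is only available as an image, the following is a hedged reconstruction of one estimation step from the description above and the Kalman filter of Section 4.1; the arguments A, B (the TS matrices at the current scheduling variables), the interpolated gain L, and the measurement matrix C are assumed to be supplied by the preceding sketches.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def estimate_step(x_hat, u, y, L, A, B, C, dt):
    """One discretized TS Kalman step; x_hat = [vx, vy, w, X, Y, theta]."""
    # Unwrap the yaw measurement (5th output) against the current estimate,
    # so the innovation never jumps by 2*pi at the +/-pi boundary.
    y = np.asarray(y, dtype=float).copy()
    y[4] = x_hat[5] + wrap(y[4] - x_hat[5])
    Ad = np.eye(len(x_hat)) + A * dt      # Euler discretization
    Bd = B * dt
    x_pred = Ad @ x_hat + Bd @ u          # model prediction
    return x_pred + L @ (y - C @ x_pred)  # measurement correction
```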

5. Results

To validate the estimator performance, experiments were performed in the simulator as well as in a real environment. First, the estimator is tested using manual control inputs; once it is proved to be working, an NMPC controller is used to follow a track. The vehicle is 41 cm long and 21 cm wide, and the track is 50 cm wide. The controller is tuned to complete two laps while achieving a longitudinal velocity of 0.8 m/s, keeping the vehicle inside the track and close to the center line, shown as a dotted line in Figure 7. The objectives of the validation are to (i) estimate the full state correctly, including the unmeasured state $v_y$, in the simulator as well as on the real vehicle, and (ii) validate real-time performance. For the simulator, the NRMSE metric is used to quantify the error between the simulated and estimated states:

$$\mathrm{NRMSE} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x - \hat{x}\right)^2}}{x_{\max} - x_{\min}}$$
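In code, the metric reduces to a one-line function; the 1/N averaging inside the square root is our reading of the (extraction-damaged) formula above, matching the usual definition of a range-normalized RMSE:

```python
import numpy as np

def nrmse(x, x_hat):
    """Range-normalized root-mean-square error between two state traces."""
    x, x_hat = np.asarray(x), np.asarray(x_hat)
    return np.sqrt(np.mean((x - x_hat) ** 2)) / (x.max() - x.min())
```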

5.1. Simulation

The vehicle simulator is developed using the full-state vehicle dynamics obtained in Section 2.1. Perturbations and noises are injected into the vehicle model and sensor model, respectively, to simulate the real world. The perturbations and noises are kept slightly higher than the actual ones to ensure the estimator works even in the worst cases. As Figure 7 shows, the estimated position is very close to the simulated state of the vehicle. The NRMSE for all the estimated states is presented in Table 5.
The remaining estimated states are compared in Figure 8. The lateral velocity ($v_y$), which cannot be measured directly, has an NRMSE of 0.0323 (Figure 8b). The errors of all the estimated dynamic states are within a reasonable range, which indicates that the estimator is fully working and ready to be tested on the real vehicle.

5.2. Real Experiment

Note that the NRMSE metric is not used here due to the lack of ground truth measurements; the validation for this experiment is performed visually.
The estimated X–Y position for the real experiment is shown in Figure 9. Some snapshots for visual validation are shown in Figure 10; comparing them with Figure 9, it can be seen that the estimated position of the vehicle matches its real position in the snapshots. For a properly substantiated illustration, the attached video (https://youtu.be/Oey2ZxsxlnY (accessed on 29 March 2022)) shows the estimated states and the NMPC controller performance, validating the estimator design.
Additional estimated states are shown in Figure 11. The velocity obtained from the motor encoder is very noisy and inaccurate (see Figure 11a), but the TS Kalman filter provides quite a clean estimate; the same holds for the angular velocity. The lateral velocity, which is not measured, is successfully estimated.
The final corrected map after completing the laps is shown in Figure 12b. Figure 12a represents the environment, which includes fixtures, obstacles, and free space. Some cells remain unobserved, particularly where the LIDAR scan is hindered by the fixtures present in the environment. For this reason, the error between the ground truth and the map is only calculated for properly scanned areas, i.e., the fixture occlusion region is excluded (see Figure 12a). An intersection-over-union (IoU) score of 0.9388 is obtained between the ground truth and the corrected map.

5.3. Simulated vs. Real Experiment

It is worth noting that the times taken to complete two laps in the simulator and in the real experiment are 38 s and 32 s, respectively.
The real vehicle completed the laps faster than the simulated one because it maintained the 0.8 m/s longitudinal velocity more strictly. There are discrepancies between some estimated states, which are due to the different control inputs in the simulation and real experiment phases, or to mismatch between the simulator model and the real vehicle. Table 6 shows the minimum and maximum absolute differences between the estimated and measured states, as well as their standard deviations, for both the simulator and the real platform. The minimum deviations in the real experiment are similar to those in the simulator, while the maximum deviations are about 10 cm smaller in the real experiment; this is because the standard deviation of the simulated measurements is set higher than the real one, which can also be seen from the standard deviation values of the two experiments. Even though the performance of the estimator in the real experiment cannot be verified due to the lack of ground truth, the values in the real experiment are similar to those in the simulator, suggesting that the estimator performs similarly.

6. Conclusions

This paper has presented an approach for full-state estimation of an autonomous vehicle using a TS Kalman filter fed by imprecise scan matching and measurements. Furthermore, this technique can also correct the map obtained from the noisy LIDAR scan end-point matching. The proposed approach provides a stable and optimal solution in the presence of model disturbance and measurement noise, at a high update rate of 100 Hz for state estimation, with simultaneous correction of the obtained map. The update rate could be even higher than 100 Hz; the only limitation is the update rate of the sensor measurements. The computational cost of the proposed approach is fairly low compared with EKF SLAM, which depends on the number of landmarks. The results motivate the use of TS-based state estimation and mapping in the field of autonomous vehicles and robotics.
The proposed estimation technique depends on the system model and the measurement model, with a certain disturbance and noise, respectively. This requirement is sometimes hard to fulfill; moreover, for some systems the parameters change within a certain range, for example, the tire coefficients of the vehicle. For such systems, online model learning techniques need to be incorporated [21,22]. The proposed method can also be scaled to a higher-DoF system if the system model is known and extra sensors can be installed; for example, a height sensor such as a barometer, or a range sensor, can be used to estimate the z-position. The proposed scan matching technique can be applied to an unknown environment, but the accuracy may degrade in a highly dynamic environment. For such cases, robust scan matching techniques need to be applied, such as in [23].
As future work, the proposed approach will be benchmarked against competing approaches.

Author Contributions

Conceptualization, S.C. and V.P.; methodology, S.C. and V.P.; software, S.C.; writing—original draft preparation, S.C.; writing—review and editing, V.P.; supervision, V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been co-financed by the Spanish State Research Agency (AEI) and the European Regional Development Fund (ERDF) through the project SaCoAV (ref. MINECO PID2020-114244RB-I00), by the European Regional Development Fund of the European Union in the framework of the ERDF Operational Program of Catalonia 2014-2020 (ref. 001-P-001643 Looming Factory), and by the DGR of Generalitat de Catalunya (SAC group ref. 2017/SGR/482). The author is also supported by an Industrial PhD AGAUR grant (Ref. 2019 DI 040).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the required data is included in the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Litman, T. Autonomous Vehicle Implementation Predictions: Implications for Transport Planning. 2020. Available online: https://www.vtpi.org/avip.pdf (accessed on 29 March 2022).
  2. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45.
  3. Rotondo, D. Advances in Gain-Scheduling and Fault Tolerant Control Techniques; Springer: Berlin/Heidelberg, Germany, 2017.
  4. Takagi, T.; Sugeno, M. Fuzzy identification of systems and its applications to modeling and control. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 116–132.
  5. Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152.
  6. Kohlbrecher, S.; Meyer, J.; von Stryk, O.; Klingauf, U. A Flexible and Scalable SLAM System with Full 3D Motion Estimation. In Proceedings of the IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), Tokyo, Japan, 9–12 November 2011.
  7. Olson, E.B. Real-time correlative scan matching. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 4387–4393.
  8. Grisetti, G.; Stachniss, C.; Burgard, W. Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters. IEEE Trans. Robot. 2007, 23, 34–46.
  9. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the 2014 Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014.
  10. Pathirana, C.D. Fuzzy Kalman Filtering of the Slam Problem Using Pseudo-Linear Models with Two-Sensor Data Association. Int. J. Syst. Signal Control. Eng. Appl. 2008, 1, 89–100.
  11. Rotondo, D.; Reppa, V.; Puig, V.; Nejjari, F. Adaptive Observer for Switching Linear Parameter-Varying (LPV) Systems. IFAC Proc. Vol. 2014, 47, 1471–1476.
  12. Alcalá, E.; Puig, V.; Quevedo, J.; Rosolia, U. Autonomous racing using Linear Parameter Varying-Model Predictive Control (LPV-MPC). Control. Eng. Pract. 2020, 95, 104270.
  13. Hartmann, J.; Klüssendorff, J.H.; Maehle, E. A comparison of feature descriptors for visual SLAM. In Proceedings of the 2013 European Conference on Mobile Robots, Barcelona, Spain, 25–27 September 2013; pp. 56–61.
  14. Chen, C.; Wang, B.; Lu, C.X.; Trigoni, N.; Markham, A. A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence. arXiv 2020, arXiv:2006.12567.
  15. Zadeh, L.; Klir, G.; Yuan, B. Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems—Selected Papers by Lotfi A Zadeh. In Advances in Fuzzy Systems—Applications and Theory; World Scientific Publishing: Singapore, 1996.
  16. Fantuzzi, C.; Rovatti, R. On the approximation capabilities of the homogeneous Takagi–Sugeno model. In Proceedings of the IEEE 5th International Fuzzy Systems, New Orleans, LA, USA, 11 September 1996; Volume 2, pp. 1067–1072.
  17. Rajamani, R. Vehicle Dynamics and Control; Springer: New York, NY, USA, 2012; Volume 2.
  18. Pacejka, H.B.; Bakker, E. The magic formula tyre model. Veh. Syst. Dyn. 1992, 21, 1–18.
  19. Ostertag, E. Mono- and Multivariable Control and Estimation: Linear, Quadratic and LMI Methods; Springer: Berlin/Heidelberg, Germany, 2011; Volume 2, pp. 239–241.
  20. Burguera, A.B.; Cid, Y.G.; Oliver, G. On the use of likelihood fields to perform sonar scan matching localization. Auton. Robot. 2009, 26, 203–222.
  21. Masti, D.; Bemporad, A. Learning nonlinear state–space models using autoencoders. Automatica 2021, 129, 109666.
  22. Yu, H.S.A.; Yao, D.; Zimmer, C.; Toussaint, M.; Nguyen-Tuong, D. Active Learning in Gaussian Process State Space Model. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases; Springer: Cham, Switzerland, 2021; pp. 346–361.
  23. Li, X.; Du, S.; Li, G.; Li, H. Integrate Point-Cloud Segmentation with 3D LiDAR Scan-Matching for Mobile Robot Localization and Mapping. Sensors 2020, 20, 237.
Figure 1. The architecture of the software modules, including information flow, and a conceptual overview of the interconnections. The software framework also includes low-level sensor measurement acquisition units.
Figure 2. A representation of the bicycle model in 2D space for deriving the equations of motion. The vehicle schematic shows the inertial frame (O) and the center of gravity (CoG) frame (C) attached to the center of gravity of the vehicle. The forces $F_{rx}$ and $F_{ry}$ on the rear wheel are the longitudinal and lateral force, respectively. The front wheel forces $F_{flong}$ and $F_{flat}$ are the longitudinal and lateral forces, respectively; the forces $F_{fx}$ and $F_{fy}$ are the result of these forces in frame C. The lengths $l_f$ and $l_r$ are the distances from the CoG to the front wheel and rear wheel, respectively.
Figure 3. Calculation of laser end-point probability at sub-grid level by using the occupancy grid probability map.
Figure 4. Translation of scan information $[r, \alpha]^T$ to scan end-points $[s_{iX}, s_{iY}]^T$ in the LIDAR frame of reference.
Figure 5. A defined sector that is designed to capture kinematic system matrices to obtain the offline gain. Later on, using the same sector approach, online interpolated gains are obtained.
Figure 6. Estimation comparison between wrapped and unwrapped yaw measurement. Panels (a,b) represent the position of the vehicle and the yaw measurement in the global frame, respectively, both estimated using yaw measurement between $[-\pi, \pi]$. Panels (c,d) represent the position of the vehicle and the yaw measurement in the global frame, respectively, both estimated using continuous yaw measurement. (a) X–Y global position with yaw measurement between $[-\pi, \pi]$. (b) Yaw sensor measurement. (c) X–Y global position with yaw measurement between $[-\infty, \infty]$. (d) Yaw transformed to continuous measurement.
Figure 7. Vehicle estimated trajectory in the presence of sensor disturbance during the simulation. As per the controller policy, the vehicle tried to follow two laps of the center-line track.
Figure 8. Estimated states on the vehicle simulator for two laps.
Figure 9. Vehicle estimated trajectory in the presence of sensor disturbance during the real experiment. As per the controller policy, the vehicle tried to follow two laps of the center-line track.
Figure 10. Snapshots of the vehicle following the center line of the oval track. The vehicle motion is represented by a sequence of images clockwise from the upper-left to the bottom-left.
Figure 11. Estimated states on the real vehicle for two laps.
Figure 12. Comparison of ground truth and the final map obtained. (a) The ground truth image represents the environment where the testing is performed. (b) The final map formed after completion of two laps.
Table 1. The list of symbols used in this article.
Symbol | Description
$v_x$ | Longitudinal velocity of the vehicle in the center of gravity (CoG) frame (C), see Figure 2.
$v_y$ | Lateral velocity of the vehicle in the CoG frame (C).
$\omega$ | Angular velocity of the vehicle in the CoG frame (C).
$X$ | Global position of the vehicle along the X-axis of frame (O).
$Y$ | Global position of the vehicle along the Y-axis of frame (O).
$\theta$ | Orientation of the vehicle with respect to the x-axis of frame (O).
$D$ | Duty cycle of the motor, normalized between [0, 1].
$\delta$ | Steering angle.
Table 2. Vehicle model parameters.
Parameter | Value | Description
$m$ | 2.424 kg | Mass of the vehicle
$l_f$ | 0.1377 m | Distance from CoG to front wheel
$l_r$ | 0.1203 m | Distance from CoG to rear wheel
$\rho$ | 1.225 kg/m³ | Air density
$C_{m0}$ | 9.4685 N | Motor parameter 1
$C_{m1}$ | 0.6672 kg/s | Motor parameter 2
$C_0$ | 2.6104 kg/s | Resistive driveline parameter
$C_1$ | 0.00213 N | Static friction force
$C_D A$ | 0.466 m² | Drag coefficient multiplied by area
$C_{af}$ | 1.2354 N/rad | Front wheel cornering stiffness
$C_{ar}$ | 1.4532 N/rad | Rear wheel cornering stiffness
$I_z$ | 0.02 kg·m² | Moment of inertia
Table 3. Scheduling variables, i.e., states or inputs on which the system matrices $A(\vartheta)$ and $B(\vartheta)$ depend. The maximum and minimum values of these variables define any polytopic system.
Variable ($\vartheta_i(t)$) | min($\vartheta_i(t)$) = $\underline{\vartheta}_i$ | max($\vartheta_i(t)$) = $\overline{\vartheta}_i$
$v_x$ | −5.0 | 5.0
$v_y$ | −3.0 | 3.0
$\omega$ | −1.5 | 1.5
$\theta$ | −π | π
$\delta$ | −0.35 | 0.35
Table 4. Noise of the dedicated sensors or algorithms used in the experimental platform to measure or estimate the rough information.
State | Sensor | Covariance ($\sigma^2$) [m²]
$v_x$ | Motor encoder | 0.04
$\omega$ | IMU | 0.0187
$X$ | Scan matching | 0.0225
$Y$ | Scan matching | 0.0225
$\theta$ | IMU | 0.01
Table 5. NRMSE evaluation of estimated states.
State | $v_x$ | $v_y$ | $\omega$ | $X$ | $Y$ | $\theta$
NRMSE | 0.0781 | 0.0323 | 0.0913 | 0.0157 | 0.0175 | 0.0198
Table 6. Evaluation between estimated and measured states for the experiment in the simulator and on the real platform. The min and max headers give the minimum and maximum absolute differences between estimated and measured states, respectively; the std header gives the standard deviation between estimated and measured states.
State | min [m] (Sim) | min [m] (Real) | max [m] (Sim) | max [m] (Real) | std [m] (Sim) | std [m] (Real)
X | 0.124 | 0.094 | 0.405 | 0.317 | 0.159 | 0.116
Y | 0.137 | 0.119 | 0.344 | 0.241 | 0.138 | 0.126

