Article

Traversability Assessment and Trajectory Planning of Unmanned Ground Vehicles with Suspension Systems on Rough Terrain

1 School of Automation, Beijing Institute of Technology, Beijing 100081, China
2 School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4372; https://doi.org/10.3390/s19204372
Submission received: 8 August 2019 / Revised: 24 September 2019 / Accepted: 27 September 2019 / Published: 10 October 2019
(This article belongs to the Special Issue Mobile Robot Navigation)
Figure 1. The architecture of the proposed traversability assessor and trajectory planner.
Figure 2. The overview of the traversability assessment approach.
Figure 3. The 3D light detection and ranging (LiDAR) sensors used in this work.
Figure 4. The models of vehicle suspension systems.
Figure 5. How the terrain roughness and the height difference are computed.
Figure 6. Overview of the trajectory planning approach.
Figure 7. A car-like vehicle and its kinematic model.
Figure 8. The differences between the conventional A* and the non-holonomic A*.
Figure 9. The trapezoidal profile of the longitudinal velocity.
Figure 10. The spline profile of the angular velocity.
Figure 11. The virtual terrain surfaces used to test the proposed pose estimation method.
Figure 12. A virtual terrain surface and its traversability map.
Figure 13. The initial paths (red), the trajectories (green) after optimization, and the real trajectories (blue) of the vehicle on the virtual terrain surfaces. For clarity, the trajectories are drawn on the grayscale traversability maps.
Figure 14. The z-coordinate, the roll, and the pitch of the vehicle along the paths and the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 13a; (d–f) the vehicle state along the trajectories shown in Figure 13b.
Figure 15. The trajectories generated by NoTS (red), TS-NoSus (green), and TS-Sus (blue).
Figure 16. The z-coordinate, the roll, and the pitch of the vehicle along the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 15a; (d–f) the vehicle state along the trajectories shown in Figure 15b; (g–i) the vehicle state along the trajectories shown in Figure 15c.
Figure 17. The vehicle used in experiments on rough terrain.
Figure 18. Inertial Measurement Unit (IMU).
Figure 19. Non-planar environments in the real world.
Figure 20. The statistical evaluation of the computational complexities.
Figure 21. The trajectories generated by NoTS (red), TS-NoSus (green), and TS-Sus (blue).
Figure 22. The z-coordinate, the roll, and the pitch of the vehicle along the paths and the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 21a; (d–f) the vehicle state along the trajectories shown in Figure 21b.
Figure 23. The trajectories (green) after optimization and the real trajectories (blue) of the vehicle in the real world.

Abstract

This paper presents a traversability assessment method and a trajectory planning method. They are key capabilities for the navigation of an unmanned ground vehicle (UGV) in a non-planar environment. In this work, a 3D light detection and ranging (LiDAR) sensor is used to obtain the geometric information about a rough terrain surface. For a given SE(2) pose of the vehicle and a specific vehicle model, the SE(3) pose of the vehicle is estimated based on the LiDAR points, and a traversability is then computed. The traversability tells the vehicle the effects of its interaction with the rough terrain. Note that the traversability is computed on demand during trajectory planning, so no explicit terrain discretization is required. The proposed trajectory planner finds an initial path through the non-holonomic A*, which is a modified form of the conventional A* planner. A path is a sequence of poses without timestamps. Then, the initial path is optimized in terms of the traversability, using the method of Lagrange multipliers. The optimization accounts for the model of the vehicle’s suspension system. Therefore, the optimized trajectory is dynamically feasible, and the trajectory tracking error is small. The proposed methods were tested in both simulation and real-world experiments. The simulation experiments were conducted in a simulator called Gazebo, which uses a physics engine to compute the vehicle motion, and the real-world experiments were conducted in various non-planar environments. The results indicate that the proposed methods could accurately estimate the SE(3) pose of the vehicle. Moreover, the trajectory cost of the proposed planner was lower than those of other state-of-the-art trajectory planners.

1. Introduction

Recently, ground mobile robots with different functions have started to play essential roles in people’s daily life. In particular, autonomous navigation in a non-planar environment is an important capability. It enables competent operation of a ground mobile robot in many challenging applications, such as surveillance, rescue, and planet exploration. In this research field, two basic and critical problems need to be addressed: traversability assessment and trajectory planning. On the one hand, the traversability tells a robot the effects of its interaction with a rough terrain surface. The terrain information is obtained using a 3D light detection and ranging (LiDAR) sensor and/or a visual sensor. On the other hand, a trajectory planner utilizes a cost function that incorporates the traversability to determine a dynamically feasible motion trajectory. This trajectory permits a robot to move from an initial pose to a goal pose. A pose is made up of the position (x, y, z) and the orientation (roll, pitch, yaw) of the robot.
On an uneven terrain surface, the motion of a ground robot is constrained by the terrain shape. Conventional trajectory planners that assume the terrain to be flat are not applicable, because the trajectories they generate are difficult to track or even non-traversable. Such trajectories can cause wheel slip, high roll/pitch angles, and so on. Therefore, it is necessary to quantitatively assess the utility of passing through an uneven terrain surface, and then generate a trajectory based on this utility. This utility is called traversability. Planning based on traversability reduces the trajectory tracking error in a predictive way at planning time, whereas otherwise it can only be reduced by a feedback-based trajectory tracker at control time.
Among different types of mobile robots, car-like unmanned ground vehicles (UGVs) are rapidly gaining popularity from researchers. A car-like vehicle is often equipped with a suspension system, which is the system of tires, tire air, springs, shock absorbers, and linkages that connects the vehicle to its wheels and allows relative motion between the two. The suspension system has a significant effect on the traversability of the vehicle. However, the suspension system was often neglected in previous work. Therefore, to generate a dynamically feasible trajectory, the proposed approach incorporates the model of the vehicle’s suspension system.
The purpose of this work is to navigate a car-like UGV safely and efficiently, with a focus on rough and unstructured terrain. The proposed methods can help an autonomous vehicle operate in many challenging applications. For example, the car-like vehicle may be required to dig a hole or unload something on a complex terrain surface. Besides, in teleoperation applications, the proposed planner can be used to autonomously drive the vehicle to an operator-designated pose. This can reduce both the workload of the operator and the telemetry bandwidth.
In summary, the contributions of this paper and the characteristics of the proposed methods are as follows:
  • The suspension system of the vehicle is used to reduce the pose estimation error and optimize the trajectory. The optimized trajectory is easy for the vehicle to track in non-planar environments.
  • The traversability is assessed on demand based on original LiDAR points during trajectory planning, without any kind of explicit terrain surface reconstruction or discretization. This feature makes the proposed method efficient in terms of computation and storage.
  • The cost function and the node-expansion rule of the conventional A* are modified to obtain a path satisfying non-holonomic constraints. This path is then optimized by a constraint-aware optimizer based on a custom cost function. The final trajectory is smoother and more traversable than those generated by other state-of-the-art methods.
  • The proposed traversability assessor (or trajectory planner) is general and can be used with any other motion planning method (or traversability assessment method).
The rest of this paper is organized as follows. First, previous research on traversability assessment and trajectory planning in non-planar environments is reviewed. Then, Section 2 briefly introduces the architecture of the proposed methods. Section 3 describes how to assess the traversability of a ground vehicle based on its suspension system using a 3D LiDAR. Section 4 introduces how to generate and optimize a vehicle trajectory on rough terrain using the assessed traversabilities. Section 5 presents and analyzes the results of the simulation and real-world experiments. Finally, the paper is concluded and a direction for future work is suggested.

1.1. Related Work

The safe and efficient navigation of an unmanned ground vehicle requires terrain information, which can be exploited to predict the future pose of the vehicle and assess the traversability. The sensors used to obtain terrain information include visual sensors [1,2,3,4,5], LiDARs [6,7,8,9,10,11,12,13], and so on. Visual sensors can provide various kinds of terrain information, but processing visual data often requires a long computation time. What is more, the performance of visual sensors may vary with the intensity of sunlight [14]. The point clouds from LiDARs usually occupy a large storage space, but the performance of a LiDAR-based terrain mapping approach is often relatively stable [15]. The point clouds or the visual data can be converted into traversabilities directly, or be converted into a digital elevation map (DEM) [16] or a 3D grid map [17,18]. A DEM is a 2.5D grid map in which each grid value represents the height of the terrain surface. It is easy to implement but cannot model overhanging obstacles, such as bridges. Besides, accurately computing the traversability requires a high-resolution DEM, but building and maintaining a high-resolution DEM is time-consuming. A 3D grid map is able to reconstruct an environment and is suitable for navigation in a multi-level building [19], but it is inefficient in terms of computation and storage. Compared with the existing methods, the proposed method assesses the vehicle traversability on demand using the original point cloud from a 3D LiDAR. It does not involve a complex terrain-mapping process, so it is more efficient in terms of computation and storage.
A traversability assessor provides a mapping from the terrain maps or the sensor data to the traversabilities. It evaluates the mobility of a ground vehicle. The traversability depends on the pose of the vehicle and the terrain shape. The approaches that assess the traversability based on the terrain maps can be classified into appearance-based methods [20], geometry-based methods [21,22], and learning-based methods [23,24,25]. The approaches that assess the traversability based on original sensor data are often free of artificial discretization, which is inherent to DEMs, 3D grid maps, or other classical terrain models. For instance, Krüsi et al. propose an approach to determine the traversability of a specific pose without any discretization [26]. This approach can plan a trajectory directly on LiDAR points. Santamaria-Navarro et al. present a method for the large-scale traversability classification of point clouds [27]. In this method, the model for the classification of points is learned from training data using Gaussian processes. However, the existing methods usually ignore the vehicle’s suspension system.
Once the traversability is known, a trajectory should be generated. In the context of trajectory planning, researchers have paid much attention to the generation of a collision-free trajectory under the assumption of flat terrain. Common trajectory planning techniques include artificial potential field (APF) methods [28,29], optimization-based methods [30,31], search-based methods [32,33], sampling-based methods [34,35], and so on. However, in a non-planar environment, more complicated cost functions and vehicle models must be used. For example, Amar et al. adapt the trajectory to the rough terrain using the kinematic model of the vehicle [36], and Howard et al. propose to optimize a trajectory by the model-based simulation of a vehicle on the rough terrain [37]. Recently, trajectory planning methods based on machine learning have been rapidly gaining popularity, especially end-to-end learning methods, whose main idea is to directly learn a mapping from the original sensor data to a trajectory. Learning-based planners have enabled long-range autonomous navigation using a single stereo camera [38] or a LiDAR [39]. In terms of application, much of the work on trajectory planning in non-planar environments has focused on rovers for planetary exploration [40,41] or military vehicles [42].
The existing traversability assessors and trajectory planners seldom consider the suspension model of the vehicle. They usually take the whole vehicle as a rigid body. However, in a non-planar environment, the traversability of the vehicle largely depends on its suspension system. Therefore, the proposed methods incorporate the vehicle’s suspension model in traversability assessment and trajectory planning. This is essential for safe and efficient navigation in a non-planar environment.

2. System Architecture

Figure 1 shows the architecture of the proposed traversability assessor and trajectory planner. The inputs of the system are a start pose and a goal pose. A non-holonomic A* planner generates an initial path, which is then optimized by a trajectory optimizer. Formally, a path is defined as a mapping [0, 1] → X, and a trajectory is defined as a mapping [0, T] → X prescribing the evolution of the state of the vehicle in time, where T is the time instant at which the vehicle reaches the end of the trajectory, and X is the state space of the vehicle. The state of the vehicle can be defined as the pose (position and orientation) of the vehicle. The final output of the system is a solution trajectory. During the generation and the optimization of the trajectory, the traversability assessor is invoked on demand by both the planner and the optimizer to calculate the traversabilities at the required poses. This calculation is based on a suspension model and the point clouds from a 3D LiDAR. Note that the traversability assessment is not treated as a separate and upstream process; it is an integral part of trajectory planning.

3. Traversability Assessment Using LiDAR

In this section, a method will be introduced to assess the traversability of a vehicle using a 3D LiDAR. The traversability depends on the pose of the vehicle, the terrain roughness, and the height difference. The vehicle pose is estimated based on the point cloud and the suspension system. The overview of this section is shown in Figure 2.

3.1. Light Detection and Ranging

In this work, the 3D LiDAR used to acquire the geometric information of rough terrain is an HDL-64E S2 or an HDL-32E developed by Velodyne LiDAR, as shown in Figure 3. The manufacturer is located in Silicon Valley. The specifications of the LiDARs are summarized in Table 1. The output of a 3D LiDAR is a set of points, called a point cloud. The point cloud is a continuous terrain representation that is free of any artificial discretization. Hence, it is suitable for accurately computing the traversabilities at specific vehicle poses. Besides, it is a by-product of the SLAM module, so no additional computation time is spent on building and maintaining terrain maps. Next, we will compute the vehicle pose and the traversability based on the point cloud.

3.2. Pose Estimation

A ground vehicle is always constrained to move on the surface of rough terrain, so the height, roll, and pitch of the vehicle are controlled by the local geometry of the terrain surface and need to be estimated. Formally, pose estimation can be defined as the following function:
$$f_{\mathrm{pe}}: (\mathcal{P}, \bar{s}) \mapsto \tilde{s} \quad (1)$$
where $\mathcal{P}$ is a point cloud that is constructed before trajectory planning, $\bar{s} = [\bar{x}\ \bar{y}\ \bar{\theta}_z]^T \in$ SE(2) is a query pose, and $\tilde{s} = [\tilde{x}\ \tilde{y}\ \tilde{z}\ \tilde{\theta}_x\ \tilde{\theta}_y\ \tilde{\theta}_z]^T \in$ SE(3) is the estimated pose on the terrain surface. Note that SE(2) is a state space whose elements are composed of the x-coordinate, y-coordinate, and yaw, while SE(3) is a state space whose elements are composed of the x-coordinate, y-coordinate, z-coordinate, roll, pitch, and yaw. In the following, a pose estimation method is developed under the assumption that the vehicle is static. This approach first calculates a roll and a pitch according to the wheel-terrain interaction, and then estimates the SE(3) pose of the vehicle based on its suspension system. The proposed pose estimation method is based on Point Clouds and vehicle Suspension models, so it is called PC-Sus.

3.2.1. Euler Angle Estimation Based on Wheel-Terrain Interaction

First, the positions of the vehicle’s wheels on the terrain surface are calculated. For a given query pose $\bar{s} = [\bar{x}\ \bar{y}\ \bar{\theta}_z]^T$, the planar coordinates of the right front wheel are calculated as:
$$\begin{bmatrix} x_{rf} \\ y_{rf} \end{bmatrix} = \begin{bmatrix} \cos\bar{\theta}_z & -\sin\bar{\theta}_z \\ \sin\bar{\theta}_z & \cos\bar{\theta}_z \end{bmatrix} \begin{bmatrix} W/2 \\ L \end{bmatrix} + \begin{bmatrix} \bar{x} \\ \bar{y} \end{bmatrix}, \quad (2)$$
where W and L are the length of the vehicle’s axle and the wheelbase, respectively. Let $\mathcal{P}_{rf}$ denote the following 3D point set:
$$\mathcal{P}_{rf} = \left\{ p = [x\ y\ z]^T \;\middle|\; (x - x_{rf})^2 + (y - y_{rf})^2 < \left( \frac{W_{tire}}{2} \right)^2,\; p \in \mathcal{P} \right\}, \quad (3)$$
where $W_{tire}$ is the width of the vehicle’s tire. $\mathcal{P}_{rf}$ is the set of the LiDAR points that are near the contact area between the vehicle’s tire and the terrain surface. Then, the center of gravity of the points in $\mathcal{P}_{rf}$ is computed as:
$$p_{average} = \frac{1}{n} \sum_{p \in \mathcal{P}_{rf}} p \quad (4)$$
where n is the number of points in $\mathcal{P}_{rf}$. Let $z_{rf}$ be the z-coordinate of $p_{average}$. Finally, the position of the right front wheel on the terrain surface can be represented by $p_{rf} = [x_{rf}\ y_{rf}\ z_{rf}]^T$. The positions of the right back wheel, the left front wheel, and the left back wheel are represented by $p_{rb}$, $p_{lf}$, and $p_{lb}$, respectively, and are all computed in a similar way.
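To make the wheel-terrain interaction concrete, the following minimal Python sketch computes one wheel contact point from a raw point cloud according to Equations (2)–(4). It is an illustrative reconstruction (all function and variable names are ours, not from the implementation used in the experiments):

```python
import numpy as np

def wheel_contact(points, x_bar, y_bar, yaw, W, L, W_tire):
    """Contact point of the right front wheel (Equations (2)-(4)).

    points: (N, 3) array of LiDAR points; the other three wheels only
    differ in the signs of the body-frame offsets W/2 and L.
    """
    # Rotate the body-frame wheel offset (W/2, L) by the yaw and
    # translate by the query position (Equation (2)).
    c, s = np.cos(yaw), np.sin(yaw)
    x_rf = c * (W / 2) - s * L + x_bar
    y_rf = s * (W / 2) + c * L + y_bar
    # Keep the LiDAR points near the tire contact area (Equation (3)).
    d2 = (points[:, 0] - x_rf) ** 2 + (points[:, 1] - y_rf) ** 2
    near = points[d2 < (W_tire / 2) ** 2]
    # The wheel height is the z-coordinate of their centroid (Equation (4)).
    z_rf = near[:, 2].mean()
    return np.array([x_rf, y_rf, z_rf])
```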
In this paper, every three wheel positions are defined as a triple. For a given triple $\alpha = [p_1\ p_2\ p_3]^T$ ($p_i = [x_i\ y_i\ z_i]^T$), the normal vector of the plane passing through these three points is:
$$\mathbf{n}(\alpha) = (p_2 - p_1) \times (p_3 - p_1) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = a\mathbf{i} + b\mathbf{j} + c\mathbf{k} = [a\ b\ c]^T, \quad (5)$$
where $\mathbf{i} = [1\ 0\ 0]^T$, $\mathbf{j} = [0\ 1\ 0]^T$, $\mathbf{k} = [0\ 0\ 1]^T$, $a = (y_2 - y_1)(z_3 - z_1) - (y_3 - y_1)(z_2 - z_1)$, $b = (z_2 - z_1)(x_3 - x_1) - (z_3 - z_1)(x_2 - x_1)$, and $c = (x_2 - x_1)(y_3 - y_1) - (x_3 - x_1)(y_2 - y_1)$. Then, the roll ($\theta_x^\alpha$) and the pitch ($\theta_y^\alpha$) of the plane that passes through the three points are calculated as:
$$\theta_x(\mathbf{n}(\alpha)) = \operatorname{sgn}(b) \arccos \frac{c}{\sqrt{b^2 + c^2}}, \qquad \theta_y(\mathbf{n}(\alpha)) = \operatorname{sgn}(a) \arccos \frac{\sqrt{b^2 + c^2}}{\sqrt{a^2 + b^2 + c^2}} \quad (6)$$
where sgn is the sign function.
Recall that the wheel positions $p_{rf}$, $p_{rb}$, $p_{lf}$, and $p_{lb}$ have been computed previously. Let A be the following set of triples: $\{ [p_{rf}\ p_{lf}\ p_{lb}]^T, [p_{rf}\ p_{lf}\ p_{rb}]^T, [p_{rf}\ p_{lb}\ p_{rb}]^T, [p_{lf}\ p_{lb}\ p_{rb}]^T \}$. Then, a roll ($\theta_x^*$) and a pitch ($\theta_y^*$) can be calculated as:
$$\alpha_x^* = \arg\max_{\alpha \in A} |\theta_x(\mathbf{n}(\alpha))|, \quad \alpha_y^* = \arg\max_{\alpha \in A} |\theta_y(\mathbf{n}(\alpha))|, \quad \theta_x^* = \theta_x(\mathbf{n}(\alpha_x^*)), \quad \theta_y^* = \theta_y(\mathbf{n}(\alpha_y^*)) \quad (7)$$
where $\alpha_x^*$ and $\alpha_y^*$ are the triples whose planes attain the maximal absolute roll and pitch, respectively, and $\theta_x^*$ and $\theta_y^*$ are the corresponding signed maximal roll and pitch. If the vehicle were assumed to be a rigid body without a suspension system, $\theta_x^*$ and $\theta_y^*$ would be the roll and the pitch of the vehicle.
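A compact Python sketch of Equations (5)–(7) follows; it is an illustrative reconstruction in which np.cross evaluates the determinant in Equation (5):

```python
import numpy as np

def triple_roll_pitch(p1, p2, p3):
    """Roll and pitch of the plane through three wheel contact points
    (Equations (5) and (6))."""
    a, b, c = np.cross(p2 - p1, p3 - p1)   # normal vector, Equation (5)
    roll = np.sign(b) * np.arccos(c / np.hypot(b, c))
    pitch = np.sign(a) * np.arccos(np.hypot(b, c) / np.sqrt(a*a + b*b + c*c))
    return roll, pitch

def rigid_body_roll_pitch(p_rf, p_rb, p_lf, p_lb):
    """Signed worst-case roll and pitch over the four wheel triples
    (Equation (7)); inputs are the 3D wheel positions as numpy arrays."""
    triples = [(p_rf, p_lf, p_lb), (p_rf, p_lf, p_rb),
               (p_rf, p_lb, p_rb), (p_lf, p_lb, p_rb)]
    angles = [triple_roll_pitch(*t) for t in triples]
    theta_x = max((r for r, _ in angles), key=abs)   # roll of alpha_x*
    theta_y = max((p for _, p in angles), key=abs)   # pitch of alpha_y*
    return theta_x, theta_y
```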

3.2.2. Pose Estimation Based on Suspension System

Note that $\theta_x^*$ and $\theta_y^*$ cannot be used directly as the roll and the pitch of the vehicle, because the above calculations do not consider the suspension model of the vehicle. The suspension model provides the theoretical basis for computing the roll and the pitch; without it, the whole vehicle can only be treated as a rigid body, but a real vehicle is never a rigid body. Next, the roll and the pitch will be computed based on the suspension model. During the computation, an appropriate suspension model will be chosen based on $\theta_x^*$ to estimate the SE(3) pose ($\tilde{s}$) of the vehicle.
There are two well-known passive suspension models: the half vehicle model and the full vehicle model, as shown in Figure 4a,b. Formally, the dynamic model of a half vehicle contains four linear differential equations [43]:
$$\begin{aligned} m_s \ddot{\tilde{z}} / 2 &= -\mu_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) - \mu_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) - k_f (z_s^{rf} - z_u^{rf}) - k_b (z_s^{rb} - z_u^{rb}) \\ I_y \ddot{\tilde{\theta}}_y / 2 &= -\mu_f L_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) + \mu_b L_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) - k_f L_f (z_s^{rf} - z_u^{rf}) + k_b L_b (z_s^{rb} - z_u^{rb}) \\ m_u^f \ddot{z}_u^{rf} &= \mu_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) + k_f (z_s^{rf} - z_u^{rf}) + k_t^f (z^{rf} - z_u^{rf}) \\ m_u^b \ddot{z}_u^{rb} &= \mu_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) + k_b (z_s^{rb} - z_u^{rb}) + k_t^b (z^{rb} - z_u^{rb}) \end{aligned} \quad (8)$$
Note that the half vehicle model assumes that the roll of the vehicle is zero. The dynamic model of a full vehicle contains seven linear differential equations [44]:
$$\begin{aligned} m_s \ddot{\tilde{z}} &= -\mu_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) - \mu_f (\dot{z}_s^{lf} - \dot{z}_u^{lf}) - \mu_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) - \mu_b (\dot{z}_s^{lb} - \dot{z}_u^{lb}) \\ &\quad - k_f (z_s^{rf} - z_u^{rf}) - k_f (z_s^{lf} - z_u^{lf}) - k_b (z_s^{rb} - z_u^{rb}) - k_b (z_s^{lb} - z_u^{lb}) \\ I_x \ddot{\tilde{\theta}}_x &= -\mu_f W_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) + \mu_f W_f (\dot{z}_s^{lf} - \dot{z}_u^{lf}) - \mu_b W_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) + \mu_b W_b (\dot{z}_s^{lb} - \dot{z}_u^{lb}) \\ &\quad - k_f W_f (z_s^{rf} - z_u^{rf}) + k_f W_f (z_s^{lf} - z_u^{lf}) - k_b W_b (z_s^{rb} - z_u^{rb}) + k_b W_b (z_s^{lb} - z_u^{lb}) \\ I_y \ddot{\tilde{\theta}}_y &= -\mu_f L_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) - \mu_f L_f (\dot{z}_s^{lf} - \dot{z}_u^{lf}) + \mu_b L_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) + \mu_b L_b (\dot{z}_s^{lb} - \dot{z}_u^{lb}) \\ &\quad - k_f L_f (z_s^{rf} - z_u^{rf}) - k_f L_f (z_s^{lf} - z_u^{lf}) + k_b L_b (z_s^{rb} - z_u^{rb}) + k_b L_b (z_s^{lb} - z_u^{lb}) \\ m_u^f \ddot{z}_u^{rf} &= \mu_f (\dot{z}_s^{rf} - \dot{z}_u^{rf}) + k_f (z_s^{rf} - z_u^{rf}) + k_t^f (z^{rf} - z_u^{rf}) \\ m_u^f \ddot{z}_u^{lf} &= \mu_f (\dot{z}_s^{lf} - \dot{z}_u^{lf}) + k_f (z_s^{lf} - z_u^{lf}) + k_t^f (z^{lf} - z_u^{lf}) \\ m_u^b \ddot{z}_u^{rb} &= \mu_b (\dot{z}_s^{rb} - \dot{z}_u^{rb}) + k_b (z_s^{rb} - z_u^{rb}) + k_t^b (z^{rb} - z_u^{rb}) \\ m_u^b \ddot{z}_u^{lb} &= \mu_b (\dot{z}_s^{lb} - \dot{z}_u^{lb}) + k_b (z_s^{lb} - z_u^{lb}) + k_t^b (z^{lb} - z_u^{lb}) \end{aligned} \quad (9)$$
where $z_s^{rf} = -W_f \tilde{\theta}_x + L_f \tilde{\theta}_y + \tilde{z}$, $z_s^{lf} = W_f \tilde{\theta}_x + L_f \tilde{\theta}_y + \tilde{z}$, $z_s^{rb} = -W_b \tilde{\theta}_x - L_b \tilde{\theta}_y + \tilde{z}$, $z_s^{lb} = W_b \tilde{\theta}_x - L_b \tilde{\theta}_y + \tilde{z}$, and $z^{rf}$, $z^{rb}$, $z^{lf}$, $z^{lb}$ are the heights of the vehicle’s wheels on the terrain surface. The definitions of the constants in Equations (8) and (9) are given in Table 2. The values of these constants are obtained from the manufacturer’s specifications of the vehicle.
Note that during the generation of the initial path, the vehicle is assumed to be static, so all the first-order and second-order derivatives in Equations (8) and (9) are equal to zero. Let $\mathbf{x} = [\tilde{z}\ \tilde{\theta}_x\ \tilde{\theta}_y\ z_u^{rf}\ z_u^{lf}\ z_u^{rb}\ z_u^{lb}]^T$ denote the state vector and $\mathbf{w} = [z^{rf}\ z^{rb}\ z^{lf}\ z^{lb}]^T$ denote the disturbance vector. Then, Equation (9) can be written as the following state-transition equation: $\mathbf{0} = A\mathbf{x} + E\mathbf{w}$, where A and E are fixed matrices called the state-transition matrix and the disturbance matrix, respectively. The solution of this equation is $\mathbf{x} = -A^{-1} E \mathbf{w}$, which can be considered as a mapping $f_{\mathrm{full}}: [z^{rf}\ z^{rb}\ z^{lf}\ z^{lb}]^T \mapsto [\tilde{z}\ \tilde{\theta}_x\ \tilde{\theta}_y]^T$. For the half vehicle model, the mapping $f_{\mathrm{half}}: [z^{rf}\ z^{rb}]^T \mapsto [\tilde{z}\ \tilde{\theta}_y]^T$ can be obtained in a similar way. Finally, the SE(3) pose ($\tilde{s}$) of the vehicle is calculated as:
$$\tilde{s} = [\tilde{x}\ \tilde{y}\ \tilde{z}\ \tilde{\theta}_x\ \tilde{\theta}_y\ \tilde{\theta}_z]^T, \quad \text{where } [\tilde{x}\ \tilde{y}\ \tilde{\theta}_z]^T = \bar{s} = [\bar{x}\ \bar{y}\ \bar{\theta}_z]^T \text{ and } \begin{cases} [\tilde{z}\ \tilde{\theta}_y]^T = f_{\mathrm{half}}([z^{rf}\ z^{rb}]^T),\ \tilde{\theta}_x = \theta_x^*, & \text{if } |\theta_x^*| \le \theta_x^+ \\ [\tilde{z}\ \tilde{\theta}_x\ \tilde{\theta}_y]^T = f_{\mathrm{full}}([z^{rf}\ z^{rb}\ z^{lf}\ z^{lb}]^T), & \text{otherwise} \end{cases} \quad (10)$$
If the roll is nearly zero ($|\theta_x^*| \le \theta_x^+$), the pitch and the z-coordinate are computed using the half vehicle model. Otherwise, the SE(3) pose of the vehicle is computed using the full vehicle model. In summary, given the z-coordinates of the vehicle wheels, the half or full vehicle suspension model becomes a linear system of equations with 4 or 7 unknown variables, and the roll and the pitch of the vehicle can be calculated by solving this linear system.
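To illustrate, the following Python sketch solves the static form of the half vehicle model (Equation (8) with all derivatives set to zero) as the linear system 0 = Ax + Ew. The matrix assembly is our illustrative reconstruction, not the code used in the experiments:

```python
import numpy as np

def f_half_static(z_rf, z_rb, k_f, k_b, kt_f, kt_b, L_f, L_b):
    """Static solution of the half vehicle model: Equation (8) with all
    time derivatives set to zero, written as 0 = A x + E w and solved
    for x = [z, theta_y, z_u_front, z_u_back]."""
    A = np.array([
        # vertical force balance on the sprung mass
        [-k_f - k_b,         -k_f*L_f + k_b*L_b,        k_f,         k_b],
        # pitch moment balance on the sprung mass
        [-k_f*L_f + k_b*L_b, -k_f*L_f**2 - k_b*L_b**2,  k_f*L_f,    -k_b*L_b],
        # force balance on the front unsprung mass
        [ k_f,                k_f*L_f,                 -k_f - kt_f,  0.0],
        # force balance on the back unsprung mass
        [ k_b,               -k_b*L_b,                  0.0,        -k_b - kt_b],
    ])
    E = np.array([[0.0, 0.0], [0.0, 0.0], [kt_f, 0.0], [0.0, kt_b]])
    w = np.array([z_rf, z_rb])
    x = np.linalg.solve(A, -E @ w)   # x = -A^{-1} E w
    return x[0], x[1]                # height z and pitch theta_y
```

In the static case the springs carry no net load, so the solution reduces to the plane through the wheel contact heights; the full vehicle model of Equation (9) is handled in the same way with a 7 × 7 system.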

3.3. Traversability Computation

In addition to the roll and the pitch calculated by Equation (10), the terrain roughness is also needed in the traversability computation. Let $\mathcal{P}_{foot}$ denote the set of all LiDAR points that are located in the footprint of the vehicle ($\mathcal{P}_{foot} \subset \mathcal{P}$). Then, the center of gravity ($p_{foot}$) of the points in $\mathcal{P}_{foot}$ and the associated covariance matrix are calculated as:
$$p_{foot} = \frac{1}{n} \sum_{p \in \mathcal{P}_{foot}} p \quad (11)$$
$$\operatorname{cov}(p_{foot}, p_{foot}) = \frac{1}{n} \sum_{p \in \mathcal{P}_{foot}} (p - p_{foot})(p - p_{foot})^T, \quad (12)$$
where n is the number of points in $\mathcal{P}_{foot}$. Figure 5 explains how the terrain roughness is computed. The terrain roughness ($\rho$) is defined through the residual of the fitting plane, $\sum_{j=1}^{n} d_j^2$, where $d_j$ represents the distance between the jth LiDAR point in $\mathcal{P}_{foot}$ and the fitting plane. Let $\lambda_{\min}$ be the minimum of the eigenvalues of $\operatorname{cov}(p_{foot}, p_{foot})$. Then, the residual of the fitting plane gives:
$$\rho = \sqrt{\lambda_{\min}} \quad (13)$$
Besides, the height difference needs to be computed. As shown in Figure 5, let g denote the grid in which the point $[\tilde{x}\ \tilde{y}]^T$ is located, and let $\mathcal{P}_g$ denote the set of all LiDAR points located in g ($\mathcal{P}_g \subset \mathcal{P}$). Then, the height of g can be calculated as:
$$h_g = \frac{1}{n} \sum_{p \in \mathcal{P}_g} p.z \quad (14)$$
where p.z is the z-coordinate of p. Let $g_0, g_1, \ldots, g_7$ denote the eight-connected grids of g. Then, the height difference ($h_d$) is defined as $|h_g - h_{g_i}|$, where $i = \lfloor [(\tilde{\theta}_z + 22.5°) \bmod 360°] / 45° \rfloor$ ($\tilde{\theta}_z \in [0°, 360°)$ and $i \in [0, 7]$). The symbols $\lfloor \cdot \rfloor$ and mod represent the round-down operator and the modulo operator, respectively. In fact, $g_i$ is the grid that is nearest to g in the heading direction of the vehicle.
Finally, the traversability is calculated as:
$$\tau = \begin{cases} 0, & \text{if } |\tilde{\theta}_x| > \tilde{\theta}_x^{\max} \text{ or } \tilde{\theta}_y < \tilde{\theta}_y^{\min} \text{ or } \tilde{\theta}_y > \tilde{\theta}_y^{\max} \text{ or } \rho > \rho_{\max} \text{ or } h_d > h_{d\max} \\ 1 - \left[ w_{\tau 1} \max\left( \dfrac{\tilde{\theta}_y}{\tilde{\theta}_y^{\min}}, \dfrac{\tilde{\theta}_y}{\tilde{\theta}_y^{\max}} \right) + w_{\tau 2} \dfrac{|\tilde{\theta}_x|}{\tilde{\theta}_x^{\max}} + w_{\tau 3} \dfrac{\rho}{\rho_{\max}} + w_{\tau 4} \dfrac{h_d}{h_{d\max}} \right], & \text{otherwise} \end{cases} \quad (15)$$
Note that the traversability (Equation (15)) is defined artificially. It is a relative value ranging from 0 to 1. In this work, the definition is that if one of the roll ($\tilde{\theta}_x$), pitch ($\tilde{\theta}_y$), terrain roughness ($\rho$), or height difference ($h_d$) exceeds its respective limit value, the traversability is 0. Otherwise, the traversability is one minus the weighted sum of these quantities, each normalized by its respective limit value. Note that a vehicle is usually not front-back symmetrical, so there are two different limit values ($\tilde{\theta}_y^{\min}$ and $\tilde{\theta}_y^{\max}$) for the pitch.
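For illustration, the following Python sketch assembles Equations (11)–(15) into a single traversability query; the names and argument layout are illustrative, not taken from the experimental code:

```python
import numpy as np

def neighbor_index(yaw_deg):
    """Index of the eight-connected grid nearest to the heading
    direction: i = floor(((yaw + 22.5) mod 360) / 45)."""
    return int(((yaw_deg + 22.5) % 360.0) // 45.0)

def traversability(roll, pitch, points_foot, h_d, limits, weights):
    """Traversability of one pose (Equations (11)-(15)).

    points_foot: (N, 3) LiDAR points under the vehicle footprint;
    h_d: precomputed height difference of the heading-direction grid.
    """
    roll_max, pitch_min, pitch_max, rho_max, hd_max = limits
    w1, w2, w3, w4 = weights
    # Roughness = sqrt of the smallest eigenvalue of the footprint
    # covariance, i.e., the plane-fitting residual (Equations (12)-(13)).
    # (np.cov normalizes by n-1 instead of n; the difference is minor.)
    rho = np.sqrt(np.linalg.eigvalsh(np.cov(points_foot.T))[0])
    if (abs(roll) > roll_max or pitch < pitch_min or pitch > pitch_max
            or rho > rho_max or h_d > hd_max):
        return 0.0
    return 1.0 - (w1 * max(pitch / pitch_min, pitch / pitch_max)
                  + w2 * abs(roll) / roll_max
                  + w3 * rho / rho_max
                  + w4 * h_d / hd_max)
```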

4. Trajectory Planning

In this section, an initial path will first be generated, subject to the non-holonomic constraints of a car-like vehicle. Then, this path will be converted into a trajectory, which will be optimized in terms of safety, traversability, time consumption, trajectory smoothness, and so on. Note that the proposed planner and optimizer are called on demand, so the frequency of planning and optimization is not fixed. For example, if the solution trajectory is blocked by a newly sensed obstacle, the planner and optimizer will be called to generate a new trajectory. The overview of this section is shown in Figure 6. The proposed trajectory planner is based on Terrain Shapes and vehicle Suspension models, so it is called TS-Sus.

4.1. Non-Holonomic A*

For a given planning query (a start pose and a goal pose in the SE(2) space), an initial path is first found by a non-holonomic A* planner, which takes into account the non-holonomic constraints, the proximity to an obstacle, and the traversability. Next, the non-holonomic constraints of a car-like vehicle will be introduced, and then the node expansion and the cost function of the non-holonomic A* will be described.

4.1.1. Non-Holonomic Constraints

For a mechanical system, kinematic constraints are described by relations between the position and the velocity of the system. The kinematic constraints that cannot be integrated into a form containing only the position are called non-holonomic constraints, and a system subject to them is called a non-holonomic system. A car-like vehicle is a non-holonomic system: it cannot move sideways, and its turning radius is lower bounded. Formally, the non-holonomic constraints that a car-like vehicle (shown in Figure 7) satisfies are written as:
$$\begin{bmatrix} \dot{\bar{x}} \\ \dot{\bar{y}} \\ \dot{\bar{\theta}}_z \end{bmatrix} = \begin{bmatrix} v \sin\bar{\theta}_z \\ v \cos\bar{\theta}_z \\ (v \tan\psi) / L \end{bmatrix}, \quad (16)$$
where $[\bar{x}\ \bar{y}\ \bar{\theta}_z]^T$ is the SE(2) pose of the vehicle, v is the longitudinal velocity of the vehicle, $\psi$ is the steering angle, and L is the wheelbase.

4.1.2. Node Expansion

In this section, a non-holonomic A* search is performed to find a path satisfying the non-holonomic constraints. Figure 8 shows the differences between the conventional A* and the non-holonomic A*. On the one hand, the conventional A* treats a car-like vehicle as a point without orientation and only visits grid centers, as shown in Figure 8a. As a result, the path generated by the conventional A* is piecewise-linear, i.e., a sequence of vehicle positions $[\bar{x}_i\ \bar{y}_i]^T$, $i = 1, 2, \ldots, n$ (as shown in Figure 8b). On the other hand, the non-holonomic A* considers a car-like vehicle as a vector. The child poses are generated by assuming several different steering angles and a fixed forward/reverse velocity in Equation (16). This method associates vehicle poses with grids, so any continuous pose of the vehicle can be visited. Therefore, the resultant path satisfies the non-holonomic constraints, as shown in Figure 8b. Mathematically, the path generated by the non-holonomic A* is a sequence of SE(2) poses $\bar{s}_i = [\bar{x}_i\ \bar{y}_i\ \bar{\theta}_{z_i}]^T$, $i = 1, 2, \ldots, n$, which can be mapped to SE(3) poses using the pose estimation method introduced in Section 3.2. A sketch of this node-expansion rule is given at the end of this subsection.
The shapes of the grids in the conventional A* and the non-holonomic A* are also different. The conventional A* planner and the non-holonomic A* planner search square-shaped grids and sector-shaped grids, respectively. The sector-shaped grids can well represent the characteristics of point cloud data (high resolution for the LiDAR points near the vehicle, and low resolution for the points that are far from the vehicle).
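The following minimal Python sketch illustrates the node-expansion rule described above: one forward-Euler step of Equation (16) per candidate steering angle. The names and the candidate steering set are illustrative assumptions:

```python
import numpy as np

def expand(pose, steering_angles, v, L, dt):
    """Child poses of the non-holonomic A* via one forward-Euler step of
    Equation (16) per steering angle; call with v < 0 for reverse motion."""
    x, y, yaw = pose
    children = []
    for psi in steering_angles:        # e.g., [-0.4, -0.2, 0.0, 0.2, 0.4] rad
        children.append((x + v * np.sin(yaw) * dt,
                         y + v * np.cos(yaw) * dt,
                         yaw + v * np.tan(psi) / L * dt))
    return children
```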

4.1.3. Cost Function

The order of the node expansions in the non-holonomic A* is partly determined by movement costs. Different types of movements incur different costs. For example, moving on uneven ground leads to a greater cost than moving on flat ground, and the cost of moving in reverse is greater than that of moving forward. In this work, the cost ($f_m$) of moving from a parent pose ($\tilde{s}_i \in$ SE(3)) to a child pose ($\tilde{s}_{i+1} \in$ SE(3)) is computed as:
$$f_m(\tilde{s}_{i+1}, \tilde{s}_i) = w_{m1} \epsilon_{\mathrm{reverse}} \frac{\operatorname{length}(\tilde{s}_{i+1}, \tilde{s}_i)}{N_{m1}} + w_{m2} \frac{\epsilon_{\mathrm{switch}}}{N_{m2}} + w_{m3} \frac{\max(0, d_{o\max} - d_o^{i+1})}{N_{m3}} + w_{m4} \frac{1 - \tau_{i+1}}{N_{m4}}, \quad d_o^{i+1} \neq 0,\ \tau_{i+1} \neq 0,\ \|\tilde{s}_{i+1} - \tilde{s}_i\| \le \xi \quad (17)$$
where $w_{mi}$ ($i = 1, 2, 3, 4$) is a weight factor (determined empirically) that indicates the importance of the respective cost, and $N_{mi}$ ($i = 1, 2, 3, 4$) is a normalization factor that is set to the largest value of the respective cost. $d_o^{i+1}$ is the distance between $\tilde{s}_{i+1}$ and the obstacle nearest to $\tilde{s}_{i+1}$. $d_{o\max}$, called the safe distance, is determined according to the vehicle size and the environment. $\tau_{i+1}$ is the traversability of $\tilde{s}_{i+1}$. $\xi$ is the resolution of the path generated by the non-holonomic A* planner. Besides, $\epsilon_{\mathrm{reverse}}$ and $\epsilon_{\mathrm{switch}}$ are computed as:
$$\epsilon_{\mathrm{reverse}} = 1 + \delta(1 + \operatorname{sgn}(v_{i+1})) P_{\mathrm{reverse}}, \qquad \epsilon_{\mathrm{switch}} = \delta(2 - |\operatorname{sgn}(v_{i+1}) - \operatorname{sgn}(v_i)|) P_{\mathrm{switch}} \quad (18)$$
where
$$\delta(u) = \begin{cases} 0, & u \neq 0 \\ 1, & u = 0 \end{cases} \qquad \operatorname{sgn}(v) = \begin{cases} -1, & v < 0 \\ 0, & v = 0 \\ 1, & v > 0 \end{cases} \quad (19)$$
and $P_{\mathrm{reverse}}$ and $P_{\mathrm{switch}}$ are the multiplicative penalty applied to driving in reverse and the additive penalty applied to switching direction, respectively. Finally, $\operatorname{length}(\tilde{s}_{i+1}, \tilde{s}_i)$ is the length of the path segment connecting the parent pose ($\tilde{s}_i$) and the child pose ($\tilde{s}_{i+1}$). It can be calculated as:
$$\operatorname{length}(\tilde{s}_{i+1}, \tilde{s}_i) = \begin{cases} |\Delta\tilde{\theta}_z| \, r_{\mathrm{turn}}, & \Delta\tilde{\theta}_z \neq 0 \\ (\Delta\tilde{x}^2 + \Delta\tilde{y}^2 + \Delta\tilde{z}^2)^{1/2}, & \Delta\tilde{\theta}_z = 0 \end{cases} \quad (20)$$
where $r_{\mathrm{turn}}$ is the maximal turning radius of the vehicle, $\Delta\tilde{x}$ is computed as $\tilde{x}_{i+1} - \tilde{x}_i$, and $\Delta\tilde{y}$, $\Delta\tilde{z}$, and $\Delta\tilde{\theta}_z$ are all computed in a similar way. During the non-holonomic A* search, the cost ($f_g$, called the cost-to-come) of moving from the start pose ($\tilde{s}_0$) to the current pose ($\tilde{s}_i$) can be calculated as:
$$f_g(\tilde{s}_i) = f_m(\tilde{s}_i, \tilde{s}_{i-1}) + f_g(\tilde{s}_{i-1}) = f_m(\tilde{s}_i, \tilde{s}_{i-1}) + f_m(\tilde{s}_{i-1}, \tilde{s}_{i-2}) + \cdots + f_m(\tilde{s}_1, \tilde{s}_0). \quad (21)$$
Note that a pose whose $d_o$ or $\tau$ equals 0 is not considered in Equation (17), because such a non-traversable pose is not expanded by the A* search. The computational complexity of A* is well known: $O(b^d)$, where b is the branching factor of the A* search tree, and d is the depth of the goal node.
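A compact sketch of the movement cost follows; it assumes the reading of Equations (18) and (19) given above (the reverse penalty fires when $\delta(1 + \operatorname{sgn}(v)) = 1$, i.e., for reverse motion, and the switch penalty on a true forward/reverse change), with illustrative parameter keys:

```python
def sgn(v):
    """Sign function of Equation (19)."""
    return (v > 0) - (v < 0)

def movement_cost(seg_len, v_i, v_ip1, d_o, tau, p):
    """Movement cost f_m (Equations (17)-(19)) for one expansion from a
    parent to a child pose; p bundles the weights (w1..w4), normalizers
    (N1..N4), penalties, and the safe distance."""
    # delta(1 + sgn(v)) is 1 only when the child is reached in reverse.
    eps_reverse = 1.0 + (sgn(v_ip1) == -1) * p["P_reverse"]
    # The switch penalty fires only on a true forward/reverse change.
    eps_switch = (abs(sgn(v_ip1) - sgn(v_i)) == 2) * p["P_switch"]
    return (p["w1"] * eps_reverse * seg_len / p["N1"]
            + p["w2"] * eps_switch / p["N2"]
            + p["w3"] * max(0.0, p["d_omax"] - d_o) / p["N3"]
            + p["w4"] * (1.0 - tau) / p["N4"])
```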

4.2. Trajectory Optimization

The non-holonomic A* only accounts for the kinematic model of the ground vehicle. However, in a non-planar environment, the pose of the vehicle is also largely affected by its suspension model. Besides, the initial path produced by the non-holonomic A* is usually not smooth enough and is worth further improvement. Next, the features of the initial path will first be extracted, and the trajectory of the vehicle will be predicted based on the extracted features and the suspension model of the vehicle. Then, the predicted trajectory corresponding to the features will be optimized in terms of smoothness, traversability, and so on.

4.2.1. Feature Extraction

The initial path is a sequence of SE(3) poses, which is a high-dimensional vector that is hard to optimize directly. Therefore, the features of this path are first extracted to reduce its dimension. The feature vector ($\mathbf{u}$) is defined as:
$$\mathbf{u} = [\mathbf{u}_v^T\ \mathbf{u}_{\omega_z}^T]^T, \quad \mathbf{u}_v = [v_0\ a_0\ v_{\max}\ a_f\ v_f\ t_f]^T, \quad \mathbf{u}_{\omega_z} = [\omega_z(0)\ \omega_z(\Delta t)\ \omega_z(2\Delta t)\ \cdots\ \omega_z(t_f)]^T \quad (22)$$
where $\mathbf{u}_v$ determines the parameters of the trapezoid shown in Figure 9, and $\mathbf{u}_{\omega_z}$ determines the knot-points of the spline curve shown in Figure 10. $v_0$, $a_0$, $v_{\max}$, $a_f$, and $v_f$ are the start velocity, the start acceleration, the maximal velocity (also called the traverse velocity), the terminal acceleration, and the terminal velocity, respectively. $\omega_z(t)$ is the angular velocity of the vehicle at time instant t.
The initial path produced by the non-holonomic A* contains no information about the velocity and the acceleration of the vehicle, so $v_0$, $a_0$, $v_{\max}$, $a_f$, and $v_f$ are set manually according to the limitations of the vehicle. $t_f$ is the time spent on moving from the start pose to the goal pose, and it can be calculated as:
$$t_f = \left[ l_f + \frac{(v_{\max} - v_0)^2}{2 a_0} - \frac{(v_{\max} - v_f)^2}{2 a_f} \right] / v_{\max} \quad (23)$$
where $l_f$ is the length of the initial path. Once $\mathbf{u}_v$ is known, the mapping from the time instant (t) to the traveled distance of the vehicle (l) can be obtained via integration. Besides, for a given initial path, the mapping from l to the yaw of the vehicle ($\tilde{\theta}_z$) can be obtained via interpolation. Finally, the mapping from t to $\tilde{\theta}_z$ ($t \in [0, t_f]$) can be obtained. Let K be the dimension of $\mathbf{u}_{\omega_z}$ and $\Delta t = t_f / (K - 1)$. Then, $\mathbf{u}_{\omega_z}$ can be calculated approximately using forward differences:
$$\omega_z(t) = \frac{d\tilde{\theta}_z}{dt} \approx \frac{\tilde{\theta}_z(t + e) - \tilde{\theta}_z(t)}{e} \quad (24)$$
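For concreteness, the following sketch computes $t_f$ from Equation (23) and samples the yaw-rate knots of Equation (24). The sign convention for $a_f$ and the clamping of t + e at $t_f$ are our illustrative assumptions:

```python
import numpy as np

def total_time(l_f, v0, vmax, vf, a0, af):
    """Trajectory duration t_f from Equation (23), assuming the profile
    reaches the cruise velocity vmax; a0 > 0 is the start acceleration
    and af < 0 the terminal (braking) acceleration."""
    return (l_f + (vmax - v0)**2 / (2.0 * a0)
                - (vmax - vf)**2 / (2.0 * af)) / vmax

def omega_knots(theta_z_of_t, t_f, K, e=1e-3):
    """Knot points of the yaw-rate spline u_omega_z (Equations (22) and
    (24)) by forward differences of the yaw-vs-time mapping; t + e is
    clamped at t_f so the last knot does not step past the path end."""
    ts = np.linspace(0.0, t_f, K)
    return np.array([(theta_z_of_t(min(t + e, t_f)) - theta_z_of_t(t)) / e
                     for t in ts])
```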

4.2.2. Motion Prediction

Before the feature vector is optimized, it should first be mapped to a vehicle trajectory by a motion prediction method, which contains two steps: motion simulation and pose estimation. Note that in the motion prediction the vehicle is not assumed to be static. This means that the derivatives in Equation (9) can be non-zero, so the pose estimation in the motion prediction is different from that in Section 3.2.
The motion simulation works as follows. Given the SE(3) pose of the vehicle at the current time instant $t_i$ ($t_i \in [0, t_f]$) and a feature vector ($\mathbf{u}$), Euler’s method is used to calculate the planar position and the yaw at the next time instant $t_{i+1}$. This calculation is based on the following vehicle motion model:
$$\begin{bmatrix} \dot{\tilde{x}} \\ \dot{\tilde{y}} \\ \dot{\tilde{\theta}}_z \end{bmatrix} = \begin{bmatrix} \cos\tilde{\theta}_z \cos\tilde{\theta}_y & \cos\tilde{\theta}_z \sin\tilde{\theta}_y \sin\tilde{\theta}_x - \sin\tilde{\theta}_z \cos\tilde{\theta}_x & 0 \\ \sin\tilde{\theta}_z \cos\tilde{\theta}_y & \sin\tilde{\theta}_z \sin\tilde{\theta}_y \sin\tilde{\theta}_x + \cos\tilde{\theta}_z \cos\tilde{\theta}_x & 0 \\ 0 & 0 & \cos\tilde{\theta}_x / \cos\tilde{\theta}_y \end{bmatrix} \begin{bmatrix} v \\ 0 \\ \omega_z \end{bmatrix}. \quad (25)$$
The pose estimation works as follows. Given the planar position and the yaw at $t_{i+1}$, Equations (2)–(4) are used to compute the heights of the four vehicle wheels ($z^{rf}$, $z^{rb}$, $z^{lf}$, $z^{lb}$), which are the disturbance inputs of the vehicle’s suspension model (Equation (9)). Then, Euler’s method is used again to compute the height ($\tilde{z}$), the roll ($\tilde{\theta}_x$), and the pitch ($\tilde{\theta}_y$) of the vehicle at $t_{i+1}$ based on Equation (9). A vehicle trajectory, which is a sequence of SE(3) vehicle poses, is generated when $t_i$ reaches $t_f$. Note that this vehicle trajectory is different from the initial path produced by the non-holonomic A*: the poses along the trajectory are all time-indexed, and the trajectory is generated without assuming that the vehicle is static.
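A minimal sketch of one simulation step is given below; only the planar part of Equation (25) is shown, and the suspension update of Equation (9) is assumed to be applied afterwards (names are illustrative):

```python
import numpy as np

def simulate_step(pose, v, omega_z, dt):
    """One Euler step of the planar part of Equation (25); pose is
    (x, y, z, roll, pitch, yaw).  z, roll, and pitch are subsequently
    updated from the suspension dynamics of Equation (9)."""
    x, y, z, rx, ry, rz = pose
    x += np.cos(rz) * np.cos(ry) * v * dt
    y += np.sin(rz) * np.cos(ry) * v * dt
    rz += np.cos(rx) / np.cos(ry) * omega_z * dt
    return (x, y, z, rx, ry, rz)
```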

4.2.3. Numerical Optimization

To generate an optimal trajectory, the optimizer must adjust the feature vector ($\mathbf{u}$) so as to satisfy the constraint $\Delta\tilde{s}_f(\mathbf{u}) = \tilde{s}_f - \tilde{s}(t_f) = \mathbf{0}$ while minimizing a cost function ($f_t(\mathbf{u})$), where $\tilde{s}_f$ is the goal pose and $\tilde{s}(t_f)$ is the terminal pose of the trajectory generated by the motion prediction. This is a constrained optimization problem, which can be solved using the method of Lagrange multipliers. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can be applied. The Lagrange function ($\mathcal{L}$) is written as:
$$\mathcal{L}(\mathbf{u}, \boldsymbol{\lambda}) = f_t(\mathbf{u}) + \boldsymbol{\lambda}^T \Delta\tilde{s}_f(\mathbf{u}), \quad (26)$$
where $\boldsymbol{\lambda}$ is the Lagrange multiplier vector. The necessary conditions for optimality are:
$$\nabla_{\mathbf{u}, \boldsymbol{\lambda}} \mathcal{L}(\mathbf{u}, \boldsymbol{\lambda}) = \mathbf{0} \iff \begin{cases} \dfrac{\partial \mathcal{L}(\mathbf{u}, \boldsymbol{\lambda})}{\partial \mathbf{u}} = \dfrac{\partial f_t(\mathbf{u})}{\partial \mathbf{u}} + \boldsymbol{\lambda}^T \dfrac{\partial \Delta\tilde{s}_f(\mathbf{u})}{\partial \mathbf{u}} = \mathbf{0}^T \\[2mm] \dfrac{\partial \mathcal{L}(\mathbf{u}, \boldsymbol{\lambda})}{\partial \boldsymbol{\lambda}} = \Delta\tilde{s}_f(\mathbf{u}) = \mathbf{0} \end{cases} \quad (27)$$
Equation (27) can be solved using Newton’s method: the feature vector and the Lagrange multiplier vector are adjusted iteratively until a sufficiently precise value is reached. According to Newton’s method, the correction vector for $\mathbf{u}$ and $\boldsymbol{\lambda}$ at each iteration of the optimization is computed as:
$$\begin{bmatrix} \Delta\mathbf{u} \\ \Delta\boldsymbol{\lambda} \end{bmatrix} = -\beta H^{-1} J = -\beta \begin{bmatrix} \dfrac{\partial^2 \mathcal{L}(\mathbf{u}, \boldsymbol{\lambda})}{\partial \mathbf{u}^2} & \left( \dfrac{\partial \Delta\tilde{s}_f(\mathbf{u})}{\partial \mathbf{u}} \right)^T \\[2mm] \dfrac{\partial \Delta\tilde{s}_f(\mathbf{u})}{\partial \mathbf{u}} & \mathbf{0} \end{bmatrix}^{-1} \begin{bmatrix} \left( \dfrac{\partial \mathcal{L}(\mathbf{u}, \boldsymbol{\lambda})}{\partial \mathbf{u}} \right)^T \\[1mm] \Delta\tilde{s}_f(\mathbf{u}) \end{bmatrix}, \quad (28)$$
where H and J are the Hessian matrix and the Jacobian matrix of the Lagrange function, respectively, and a step-size scaling factor ($\beta$) is used to improve numerical stability. In the implementation, forward differences are used to estimate the partial derivatives:
$$\frac{\partial \Delta\tilde{s}_i(\mathbf{u})}{\partial u_j} \approx \frac{\Delta\tilde{s}_i(u_1, u_2, \ldots, u_j + e, \ldots, u_n) - \Delta\tilde{s}_i(\mathbf{u})}{e} \quad (29)$$
$$\frac{\partial^2 \mathcal{L}(\mathbf{u})}{\partial u_k \partial u_l} \approx \frac{\Delta^2 \mathcal{L}}{e^2}, \quad \Delta^2 \mathcal{L} = \mathcal{L}(u_1, \ldots, u_k + e, \ldots, u_l + e, \ldots, u_n) - \mathcal{L}(u_1, \ldots, u_k + e, \ldots, u_n) - \mathcal{L}(u_1, \ldots, u_l + e, \ldots, u_n) + \mathcal{L}(\mathbf{u}), \quad (30)$$
where $\Delta\tilde{s}_i$ is the ith element of $\Delta\tilde{s}_f$, and $u_j$ is the jth element of $\mathbf{u}$.
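The following Python sketch combines Equations (26)–(30) into one damped Newton step; it is an illustrative reconstruction (the function names, the finite-difference step e, and the convergence loop are left to the caller), not the implementation used in the experiments:

```python
import numpy as np

def newton_step(u, lam, f_t, ds_f, beta, e=1e-4):
    """One damped Newton step on the optimality conditions of Equation
    (27), following Equations (28)-(30).

    f_t : feature vector -> scalar trajectory cost
    ds_f: feature vector -> terminal pose error (length-m array)
    """
    n, m = len(u), len(ds_f(u))
    L = lambda uu, ll: f_t(uu) + ll @ ds_f(uu)   # Lagrange function (26)
    I = np.eye(n)
    # Gradient of L and constraint Jacobian by forward differences (29).
    gL = np.array([(L(u + e * I[j], lam) - L(u, lam)) / e for j in range(n)])
    Jc = np.array([(ds_f(u + e * I[j]) - ds_f(u)) / e for j in range(n)]).T
    # Hessian of L by double forward differences (Equation (30)).
    H = np.array([[(L(u + e * (I[k] + I[l]), lam) - L(u + e * I[k], lam)
                    - L(u + e * I[l], lam) + L(u, lam)) / e**2
                   for l in range(n)] for k in range(n)])
    KKT = np.block([[H, Jc.T], [Jc, np.zeros((m, m))]])
    step = -beta * np.linalg.solve(KKT, np.concatenate([gL, ds_f(u)]))
    return u + step[:n], lam + step[n:]
```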
Note that the cost function ($f_t(\mathbf{u})$) used in the trajectory optimization is different from that used in the non-holonomic A*. The former is used to compute the cost of a trajectory corresponding to $\mathbf{u}$. The poses along a trajectory are time-indexed, so $f_t(\mathbf{u})$ accounts for more optimization criteria, such as velocity, acceleration, and time consumption. Next, the optimization criteria considered by $f_t(\mathbf{u})$ will be introduced.
Firstly, the proximity ($d_o$) of the trajectory to an obstacle and the traversability ($\tau$) of the trajectory are included. The corresponding cost functions are written as:
$$f_{d_o}(\mathbf{u}) \triangleq f_{d_o}(f_{\mathrm{mp}}(\mathbf{u})) = f_{d_o}(\mathbf{t}) = \frac{1}{n} \sum_{i=1}^{n} \max(0, d_{o\max} - d_o^i), \qquad f_{\tau}(\mathbf{u}) = \frac{1}{n} \sum_{i=1}^{n} (1 - \tau_i) \quad (31)$$
where $f_{\mathrm{mp}}: \mathbf{u} \mapsto \mathbf{t}$ is the motion prediction, $\mathbf{t}$ is a vehicle trajectory, n is the number of poses along $\mathbf{t}$, and the other notations are the same as in Equation (17). For clarity, $\mathbf{t}$ will always be replaced by $\mathbf{u}$ in the following definitions of cost functions. That is to say, if a cost is calculated based on the properties of a vehicle trajectory, the feature vector ($\mathbf{u}$) is always first mapped to a trajectory ($\mathbf{t}$) through $f_{\mathrm{mp}}$.
Secondly, the velocity and the acceleration should be bounded from above and below according to the manufacturer’s specifications of the vehicle. For example, the cost function ($f_v$) corresponding to the longitudinal velocity is defined as:
$$f_v(\mathbf{u}) = \frac{1}{n} \sum_{i=1}^{n} \max(0,\ v_{\lim}^- - v_i,\ v_i - v_{\lim}^+), \quad v_{\lim}^- < 0 \text{ and } v_{\lim}^+ > 0, \quad (32)$$
where $v_i$ is the longitudinal velocity of the vehicle when it is at the ith pose along the trajectory, and $v_{\lim}^-$ and $v_{\lim}^+$ are the reversing velocity limit and the forward velocity limit of the vehicle, respectively. In a similar way, the cost functions corresponding to the angular velocity, the longitudinal acceleration, and the angular acceleration of the vehicle are defined as $f_{\omega_z}(\mathbf{u})$, $f_a(\mathbf{u})$, and $f_{\alpha_z}(\mathbf{u})$, respectively.
Thirdly, the time consumption ($t_f$) and the length ($l_f$) of the trajectory should also be included. The corresponding cost functions are $f_{t_f}(\mathbf{u}) = t_f$ and $f_{l_f}(\mathbf{u}) = l_f$, where $l_f$ is computed from Equation (23) when $t_f$ is known. Fourthly, the trajectory should be optimized in terms of smoothness to reduce unnatural swerves. The cost function ($f_{\sigma}$) corresponding to the trajectory smoothness is defined as:
$$f_{\sigma}(\mathbf{u}) = \frac{1}{n - 2} \sum_{i=2}^{n-1} \left\| [\tilde{x}_{i+1}\ \tilde{y}_{i+1}\ \tilde{z}_{i+1}]^T - 2 [\tilde{x}_i\ \tilde{y}_i\ \tilde{z}_i]^T + [\tilde{x}_{i-1}\ \tilde{y}_{i-1}\ \tilde{z}_{i-1}]^T \right\|^2. \quad (33)$$
Note that the z-coordinate is also considered in the calculation of the smoothness cost. In this way, a trajectory that prevents the vehicle from rising and falling frequently can be obtained. Finally, the total cost function ($f_t(\mathbf{u})$) is computed as a weighted sum of all the above-mentioned cost functions.
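As an illustration, the smoothness term of Equation (33) reduces to a few lines of numpy, assuming the predicted trajectory positions are given as an (n, 3) array:

```python
import numpy as np

def smoothness_cost(xyz):
    """Smoothness term f_sigma of Equation (33): mean squared norm of
    the second difference of the (x, y, z) positions along a predicted
    trajectory, given as an (n, 3) array."""
    d2 = xyz[2:] - 2.0 * xyz[1:-1] + xyz[:-2]   # discrete second difference
    return float(np.sum(d2 ** 2)) / (len(xyz) - 2)
```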
The computational complexity of the trajectory optimization is O(mn), where m is the number of optimization iterations and n is the number of poses along the trajectory being optimized. Recall that the computational complexity of A* is $O(b^d)$, so the computational complexity of the proposed approach is $O(b^d + mn)$.

5. Experimental Results and Discussion

This section describes the experimental results in detail and discusses them. Moreover, the proposed traversability assessor and trajectory planner are analyzed according to these results. The experiments were conducted in both simulation and the real world, with the same set of parameter values. The half and full vehicle suspension models used in the experiments are shown in Figure 4 and Equations (8) and (9). The programs of the proposed approaches were executed on a single core of a 3.2 GHz Intel Core i5-3470 processor.

5.1. Simulation Experiments

In the simulation experiments, virtual terrain surfaces were created using a terrain editor called EarthSculptor, and the point clouds of these virtual terrain surfaces were generated by a virtual HDL-64E S2 LiDAR in a simulator called Gazebo, which was run on the Robot Operating System (ROS). Besides, a virtual vehicle with the same specifications as our real vehicle (an all-terrain vehicle) was created in the Gazebo simulator.
Let S denote the following set: $\{ [\bar{x}\ \bar{y}\ \bar{\theta}_z]^T \mid \bar{x}/1 \in \mathbb{Z},\ \bar{y}/1 \in \mathbb{Z},\ \bar{\theta}_z/0.1 \in \mathbb{Z},\ \bar{x} \in [0, W_m],\ \bar{y} \in [0, L_m],\ \bar{\theta}_z \in [0, 2\pi) \}$, where $W_m$ and $L_m$ are the width and the length of a terrain surface (in meters), respectively, and $\mathbb{Z}$ is the set of all integers. In other words, the query poses are sampled every 1 m in position and every 0.1 rad in yaw. For each terrain surface in Figure 11, all the query poses in S were mapped to SE(3) poses using the proposed pose estimation method. Then, to acquire the ground truths of the vehicle’s height, roll, and pitch, the virtual vehicle was placed at each query pose in S, and the corresponding SE(3) pose was acquired via the ROS message published by the Gazebo simulator. Finally, the errors of our pose estimation method (PC-Sus) could be calculated. Similarly, we computed the errors of a pose estimation method [2] (called DEM-Kin) that is based on Digital Elevation Maps and vehicle Kinematic models, and another pose estimation method [45] (called Kin-GP-VE) that is based on Kinematics, Gaussian Processes, and Vehicle Experiences. Table 3 shows these errors and the improvement in pose estimation achieved by the proposed PC-Sus method. The Root Mean Squared Errors (RMSEs) in roll and pitch estimation were reduced by approximately 35% and 47% relative to the Kin-GP-VE method and the DEM-Kin method, respectively.
Let $\Theta$ denote the following set: $\{ \bar{\theta}_z \mid \bar{\theta}_z/0.1 \in \mathbb{Z},\ \bar{\theta}_z \in [0, 2\pi) \}$. Then, the traversability at $[\bar{x}\ \bar{y}]^T$ can be defined as:
$$\tau_{\bar{x}\bar{y}} \triangleq \max_{\bar{\theta}_z \in \Theta} \tau\left( f_{\mathrm{pe}}([\bar{x}\ \bar{y}\ \bar{\theta}_z]^T) \right), \quad (34)$$
where $\tau$ is calculated using Equation (15), and $f_{\mathrm{pe}}$ is defined in Equation (1). Figure 12 shows the heat map colored by $\tau_{\bar{x}\bar{y}}$, which is called the traversability map. According to the results, the proposed method can evaluate the traversabilities of different terrain shapes, such as ramps, pits, and so on. Note that the traversability maps were constructed only for the purpose of validating the proposed traversability assessor. A traversability map would not be built during trajectory planning; during planning, only the traversabilities of the query poses are computed.
The proposed trajectory planner was tested on different terrain surfaces. Figure 13 shows the initial paths (red) produced by the non-holonomic A*, the trajectories (green) after optimization, and the real trajectories (blue) of the virtual vehicle. The blue trajectories were generated by making the virtual vehicle track the green trajectories. For clarity, all the above trajectories were drawn on the grayscale traversability maps of the terrain surfaces. Figure 14 shows the z-coordinate, the roll, and the pitch of the vehicle along these paths and trajectories. Table 4 shows the performance comparison of these paths and trajectories. Note that $d_o$ is $+\infty$ when there is no obstacle in the environment. According to the results, the initial paths had many unnatural swerves (see Figure 13). Although such paths were drivable, they often led to excessive steering of the vehicle. Compared with the initial paths, the trajectories after optimization were shorter, smoother, and had higher traversabilities (see Figure 14 and Table 4). Therefore, the total cost of the trajectory was reduced by the optimization.
Furthermore, Figure 15, Figure 16, and Table 5 compare the proposed trajectory planner (TS-Sus) with two other state-of-the-art trajectory planners. One of them [46] (referred to as NoTS) does Not account for the Terrain Shape. The other [26] (referred to as TS-NoSus) takes the Terrain Shape into consideration but ignores the Suspension system of the vehicle. In the environment shown in Figure 15a, the TS-NoSus method computed a trajectory that avoided the small terrain undulations. This trajectory was highly traversable (according to Figure 16a–c and Table 5) but not smooth (according to Table 5). In fact, the all-terrain vehicle could directly pass through the small terrain undulations without reducing the traversability, thanks to the shock absorption of its suspension system. The proposed TS-Sus method accounted for the vehicle suspension in the pose estimation, so its trajectory only avoided some large undulations and was smoother than that generated by TS-NoSus.
The terrain undulations are larger in the environments shown in Figure 15b,c, and there are even non-traversable areas (also called obstacles) in the latter environment. The NoTS method, which did not consider the terrain shape, calculated two nearly “straight” trajectories in these two environments. However, these trajectories were not actually straight, and they were longer than the trajectories generated by TS-NoSus and TS-Sus, because the z-coordinate along these “straight” trajectories changed dramatically (according to Figure 16d,g). Recall that the trajectory smoothness was computed using Equation (33). Therefore, the trajectories generated by the NoTS method were not smooth in these two environments. Besides, the TS-Sus trajectories were still shorter and smoother than the TS-NoSus trajectories in these two environments (according to Table 5). In summary, the costs of the TS-Sus trajectories were the lowest, and the runtime of TS-Sus was only a little longer than that of NoTS.

5.2. Real-World Experiments

In the real-world experiments, the point clouds of real terrain surfaces were generated by a real HDL-32E LiDAR. An all-terrain vehicle developed by Polaris (shown in Figure 17) was used to test our approaches. Besides, an inertial measurement unit (IMU) was combined with the global positioning system (GPS) to measure the roll and the pitch of the vehicle, which were used to compute the errors of the different pose estimation methods. The IMU and its specifications are shown in Figure 18 and Table 6, respectively. The GPS receiver is equipped with an NT1065 “Nomada” and a bladeRF as the RF front end, which can simultaneously receive various GNSS satellite signals, including GPS (L1, L2, L3, L5). Like an odometer, the IMU accumulates error as time goes on; this accumulated error can be reduced using the GPS. The IMU generates continuous measurements and is more robust in non-planar environments than other odometers. Therefore, the combination of the IMU and the GPS is suitable for obtaining the ground truths of the vehicle’s roll and pitch.
Figure 19a,b show a piece of uneven soil and a piece of rocky ground, respectively; the terrain shape of the latter is more complex than that of the former. The vehicle was driven in these two environments, and the SE(3) pose of the vehicle was recorded at a fixed frequency. Then, the different pose estimation algorithms were run based on the planar positions and the yaws of all the recorded poses. Table 7 shows the errors of the three pose estimation methods. Typically, the errors are larger when the vehicle runs on a more complex terrain surface. The results show that the proposed method (PC-Sus) outperformed Kin-GP-VE and DEM-Kin (by approximately 36% and 49%, respectively).
To analyze the computational complexities of the different stages of the proposed trajectory planning approach, an experiment comprising 3000 different random planning queries (3000 pairs of start and goal poses) was conducted. For this experiment, the point clouds of the non-planar environments shown in Figure 19a,b were used, and each planning query was randomly set in one of these two environments. The results are shown in Figure 20. In all three stages, the runtime and its variance increase approximately proportionally with the length of the planned trajectory. On average, the non-holonomic A* search is the most computationally expensive stage.
Next, three different trajectory planning methods (NoTS, TS-NoSus, and the proposed TS-Sus) were compared in the real-world experiments. Figure 21a shows a piece of soil whose terrain undulations are small. In this environment, the TS-NoSus method computed a trajectory that avoided these small terrain undulations. However, small terrain undulations often have little impact on the traversability of a vehicle that has a suspension system. Therefore, the proposed TS-Sus method generated a trajectory that only avoided some large undulations. As a result, the traversability of the TS-Sus trajectory is nearly the same as that of the TS-NoSus trajectory (according to Figure 22a–c and Table 8), but the TS-Sus trajectory is much smoother than the TS-NoSus trajectory (according to Table 8).
Figure 21b shows a piece of rocky ground that has a ramp and many large terrain undulations. In this environment, the NoTS method computed a trajectory that only avoided the non-traversable areas (also called obstacles). The roll and the pitch along this trajectory were larger than those along the trajectories generated by the other two planners (according to Figure 22e,f). As a result, the traversability of this trajectory was smaller than the traversabilities of the other two trajectories (according to Table 8). In conclusion, the total costs of the trajectories generated by the proposed planner were the lowest in the comparative experiments. The runtime of our planner was a little longer than that of the NoTS planner, because our planner spent more time on processing the terrain shape information.
At last, Figure 23 shows the trajectories (green) after optimization and the real trajectories (blue) of the vehicle in the real world. The blue trajectories were generated by making the vehicle track the green trajectories. Table 9 shows the performance comparison of these trajectories. The results show that there was little difference between the optimized trajectories (green) and the real trajectories (blue). This implies that the optimized trajectories were easy for the vehicle to track on the real rough terrain. The reason is that planning based on the traversability reduces the trajectory tracking error in a predictive way.

6. Conclusions

This paper presents a traversability assessment method and a trajectory planning method, which can be applied to generate a safe and efficient trajectory for a car-like vehicle running on a rough terrain surface. The proposed suspension-based traversability calculation enables the planning of dynamically feasible trajectories on different terrain surfaces. Also, computing the traversability on demand based on the original LiDAR points removes the need for explicit terrain reconstruction or discretization. Besides, the proposed traversability assessment method (or trajectory planning method) is a general approach that can be used in any other navigation system of a ground mobile robot.
The proposed methods have been tested and analyzed in both simulation and real-world experiments. According to the experimental results, the error of our pose estimation method, which accounts for the vehicle suspension model, was smaller than that of the other methods, which ignore the suspension model. In a non-planar environment with various terrain shapes, the proposed traversability assessment method was validated by showing a heat map colored by the traversability. Besides, the different stages of the proposed planner were analyzed in terms of the trajectory costs and the computational complexities. The results indicate that the optimization based on the suspension model makes the trajectory smoother and easier to track. Finally, the proposed planner was compared with other state-of-the-art planners in various non-planar environments. In these comparative experiments, the proposed planner showed a better generalization ability, computing trajectories with lower costs on both simple and complex terrain surfaces.
In the future, the traversability will be calculated from not only the geometric but also the semantic information of rough terrain surfaces. Semantic-based traversability would allow the trajectory planner to navigate a vehicle more intelligently, which requires semantic segmentation of 3D point clouds from laser sensors or of images from visual sensors.

Author Contributions

Conceptualization, K.Z., Y.Y., M.F. and M.W.; Data curation, K.Z.; Formal analysis, K.Z.; Funding acquisition, Y.Y., M.F. and M.W.; Investigation, K.Z.; Methodology, K.Z.; Project administration, K.Z., Y.Y. and M.F.; Resources, Y.Y.; Software, K.Z.; Supervision, Y.Y. and M.F.; Validation, K.Z.; Visualization, K.Z.; Writing—original draft, K.Z.; Writing—review & editing, K.Z., Y.Y., M.F. and M.W.

Funding

The work presented in this paper was supported by the National Natural Science Foundation of China (No. 61473042, No. 61105092, and No. 61173076) and the Program for Changjiang Scholars and Innovative Research Team in University (No. IRT-16R06, No. T2014224). The authors would like to thank Wenjie Song for the suggestions on academic writing.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UGV: Unmanned Ground Vehicle
LiDAR: Light Detection and Ranging
DEM: Digital Elevation Map
ROS: Robot Operating System
GP: Gaussian Process
VE: Vehicle Experience
RMSE: Root Mean Squared Error
TS: Terrain Shape
IMU: Inertial Measurement Unit
GPS: Global Positioning System

Figure 1. The architecture of the proposed traversability assessor and trajectory planner.
Figure 2. Overview of the traversability assessment approach.
Figure 3. The 3D light detection and ranging (LiDAR) sensors used in this work.
Figure 4. The models of the vehicle suspension systems.
Figure 5. How the terrain roughness and the height difference are computed.
Figure 6. Overview of the trajectory planning approach.
Figure 7. A car-like vehicle and its kinematic model.
Figure 8. The differences between the conventional A* and the non-holonomic A*.
Figure 9. The trapezoidal profile of the longitudinal velocity.
Figure 10. The spline profile of the angular velocity.
Figure 11. The virtual terrain surfaces used to test the proposed pose estimation method.
Figure 12. A virtual terrain surface and its traversability map.
Figure 13. The initial paths (red), the trajectories after optimization (green), and the real trajectories (blue) of the vehicle on the virtual terrain surfaces. For clarity, the trajectories are drawn on the grayscale traversability maps.
Figure 14. The z-coordinate, the roll, and the pitch of the vehicle along the paths and the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 13a; (d–f) the vehicle state along the trajectories shown in Figure 13b.
Figure 15. The trajectories generated by NoTS (red), TS-NoSus (green), and TS-Sus (blue).
Figure 16. The z-coordinate, the roll, and the pitch of the vehicle along the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 15a; (d–f) the vehicle state along the trajectories shown in Figure 15b; (g–i) the vehicle state along the trajectories shown in Figure 15c.
Figure 17. The vehicle used in the experiments on rough terrain.
Figure 18. The inertial measurement unit (IMU).
Figure 19. Non-planar environments in the real world.
Figure 20. The statistical evaluation of the computational complexities.
Figure 21. The trajectories generated by NoTS (red), TS-NoSus (green), and TS-Sus (blue).
Figure 22. The z-coordinate, the roll, and the pitch of the vehicle along the paths and the trajectories. (a–c) The vehicle state along the trajectories shown in Figure 21a; (d–f) the vehicle state along the trajectories shown in Figure 21b.
Figure 23. The trajectories after optimization (green) and the real trajectories (blue) of the vehicle in the real world.
Table 1. The specifications of the LiDARs.
Parameter | HDL-64E S2 | HDL-32E
Distance accuracy | <2 cm | <2 cm
Measurement range | 50 m for pavement, 120 m for cars and foliage | 70 m
Vertical field of view | +2.0° to −24.8° | +10.7° to −30.7°
Vertical angular resolution | 0.4° | 1.33°
Horizontal angular resolution | 0.09° | 0.16°
Points per second | 1,333,000 | 700,000
Table 2. The definitions of the constants used in the vehicle suspension models.
Notation | Definition
m_s | Sprung mass
m_uf | Front unsprung mass
m_ub | Back unsprung mass
I_x | Roll-axis moment of inertia
I_y | Pitch-axis moment of inertia
k_tf | Front tire stiffness
k_tb | Back tire stiffness
k_f | Front suspension spring stiffness
k_b | Back suspension spring stiffness
μ_f | Front suspension damping
μ_b | Back suspension damping
W_f | Width of the front sprung mass
W_b | Width of the back sprung mass
L_f | Length between the vehicle front axle and the center of gravity of the sprung mass
L_b | Length between the vehicle back axle and the center of gravity of the sprung mass
Table 3. The errors of the different pose estimation methods.
Terrain Surface | Method | RMSE of Roll (rad) | RMSE of Pitch (rad)
Figure 11a | DEM-Kin | 0.0751 | 0.0806
Figure 11a | Kin-GP-VE | 0.0612 | 0.0654
Figure 11a | PC-Sus | 0.0389 | 0.0405
Figure 11b | DEM-Kin | 0.0921 | 0.0984
Figure 11b | Kin-GP-VE | 0.0763 | 0.0817
Figure 11b | PC-Sus | 0.0502 | 0.0535
Improvement over DEM-Kin | 46.85% | 47.69%
Improvement over Kin-GP-VE | 35.33% | 36.30%
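The improvement rows can be reproduced from the entries above, assuming the convention (consistent with the reported figures) that the relative RMSE reduction of PC-Sus is computed per terrain surface and then averaged over the two surfaces. For the roll improvement over DEM-Kin, for example:

\[
\frac{1}{2}\left[\left(1 - \frac{0.0389}{0.0751}\right) + \left(1 - \frac{0.0502}{0.0921}\right)\right]
= \frac{1}{2}\,(48.20\% + 45.49\%) \approx 46.85\%.
\]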
Table 4. The performance comparison of the paths and the trajectories shown in Figure 13. Note that the real trajectories often differed between runs, so their data were averaged over 50 runs.
Terrain surface shown in Figure 13a:
Path or Trajectory | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost
Initial path | 6.09 | 0.9099 | 207.09 | 617.48 | 0.0111 | 1.0000
Optimized trajectory | 6.53 | 0.9281 | 200.24 | 596.23 | 1.0000 | 0.2079
Real trajectory | 6.45 | 0.9267 | 201.33 | 598.79 | 0.9822 | 0.2190
Terrain surface shown in Figure 13b:
Path or Trajectory | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost
Initial path | 15.44 | 0.9329 | 115.03 | 340.60 | 0.4484 | 1.0000
Optimized trajectory | 18.08 | 0.9466 | 111.71 | 330.62 | 1.0000 | 0.6461
Real trajectory | 18.72 | 0.9465 | 111.98 | 331.88 | 0.9937 | 0.6526
Table 5. The performance comparison of the trajectories shown in Figure 15 (averaged over 50 runs).
Terrain surface shown in Figure 15a:
Method | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost | Runtime (s)
NoTS | +∞ | 0.9318 | 114.53 | 336.08 | 0.7738 | 0.3533 | 3.220
TS-NoSus | +∞ | 0.9495 | 121.22 | 344.15 | 0.2050 | 1.0000 | 4.861
TS-Sus | +∞ | 0.9497 | 112.46 | 332.87 | 1.0000 | 0.3299 | 3.338
Terrain surface shown in Figure 15b:
Method | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost | Runtime (s)
NoTS | +∞ | 0.2765 | 119.81 | 354.92 | 0.2714 | 1.0000 | 3.385
TS-NoSus | +∞ | 0.9708 | 123.82 | 366.95 | 0.5066 | 0.3973 | 4.217
TS-Sus | +∞ | 0.9795 | 117.97 | 351.42 | 1.0000 | 0.2880 | 3.782
Terrain surface shown in Figure 15c:
Method | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost | Runtime (s)
NoTS | 6.08 | 0.3461 | 127.96 | 379.39 | 0.2966 | 1.0000 | 3.617
TS-NoSus | 11.69 | 0.9317 | 152.22 | 452.17 | 0.7318 | 0.4407 | 4.418
TS-Sus | 12.82 | 0.9296 | 124.65 | 378.46 | 1.0000 | 0.3748 | 3.730
Table 6. The specifications of the IMU.
Measurement Range (Roll, Pitch) | Accuracy (Roll, Pitch) | Resolution (Roll, Pitch) | Bandwidth
−180.0° to +180.0° | <0.2° | 0.01° | 300 Hz
Table 7. The errors of the different pose estimation methods.
Non-Planar Environment | Method | RMSE of Roll (rad) | RMSE of Pitch (rad)
Figure 19a | DEM-Kin | 0.0763 | 0.0819
Figure 19a | Kin-GP-VE | 0.0605 | 0.0640
Figure 19a | PC-Sus | 0.0343 | 0.0367
Figure 19b | DEM-Kin | 0.1104 | 0.1196
Figure 19b | Kin-GP-VE | 0.0879 | 0.0963
Figure 19b | PC-Sus | 0.0617 | 0.0662
Improvement over DEM-Kin | 49.58% | 49.92%
Improvement over Kin-GP-VE | 36.56% | 36.96%
Table 8. The performance comparison of the trajectories shown in Figure 21 (averaged over 50 runs).
Non-planar environment shown in Figure 21a:
Method | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost | Runtime (s)
NoTS | +∞ | 0.9046 | 60.47 | 178.57 | 0.9970 | 0.6944 | 1.489
TS-NoSus | +∞ | 0.9185 | 61.81 | 183.18 | 0.4490 | 1.0000 | 2.334
TS-Sus | +∞ | 0.9174 | 60.04 | 177.68 | 1.0000 | 0.5176 | 1.519
Non-planar environment shown in Figure 21b:
Method | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost | Runtime (s)
NoTS | 13.45 | 0.5165 | 69.09 | 196.83 | 0.9982 | 1.0000 | 3.103
TS-NoSus | 14.33 | 0.9155 | 78.05 | 222.62 | 0.8409 | 0.7358 | 4.840
TS-Sus | 14.51 | 0.9165 | 70.54 | 199.08 | 1.0000 | 0.6435 | 3.174
Table 9. The performance comparison of the trajectories shown in Figure 23. Note that the real trajectories often differed between runs, so their data were averaged over 50 runs.
Terrain surface shown in Figure 23a:
Trajectory | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost
Optimized trajectory | +∞ | 0.9165 | 60.93 | 180.28 | 1.0000 | 0.9797
Real trajectory | +∞ | 0.9128 | 61.02 | 180.65 | 0.9791 | 1.0000
Terrain surface shown in Figure 23b:
Trajectory | d_o (m) | τ | t_f (s) | l_f (m) | Trajectory Smoothness | Total Cost
Optimized trajectory | 6.44 | 0.9143 | 68.04 | 194.59 | 1.0000 | 0.9816
Real trajectory | 6.29 | 0.9085 | 67.92 | 194.02 | 0.9804 | 1.0000
