System, Design and Experimental Validation of Autonomous Vehicle in an Unconstrained Environment
Figure 1: Our KIA Soul EV is equipped with two FLIR cameras, a 32-channel 3D laser scanner, and a GPS/IMU inertial navigation system [24] (Best viewed in color).
Figure 2: The communication lines between the CAN Gateway Module and the OBD connector.
Figure 3: In the wiring diagram, black denotes the connection between the car's main battery and the inverter. Red wires show power being distributed to the different sensors. Green denotes sensors that are connected to the computer and powered through it. Violet is for devices connected through a network protocol. Brown is for the GPS antenna of the GNSS system. Pink is for receiving commands from and sending commands to the car hardware (Best viewed in color).
Figure 4: Overall architecture of our autonomous vehicle system, comprising the sensor, perception, planning, and control modules (Best viewed in color).
Figure 5: Flow diagram of the localization process. First, a 3D map is built from the Lidar data using NDT mapping. The point cloud is downsampled with a voxel grid filter to refine the 3D map. NDT matching then takes the filtered Lidar scans, GPS, IMU, odometry, and the 3D map to estimate the vehicle pose.
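As a concrete illustration of the map-refinement step in Figure 5, the following is a minimal sketch of voxel-grid downsampling of an accumulated map, using Open3D purely for illustration; the file names and the 0.5 m voxel size are assumptions, not the authors' exact configuration, and NDT mapping and matching themselves run as separate ROS nodes in the paper's stack.

```python
# Minimal sketch: voxel grid filtering of the accumulated 3D map before NDT matching.
# Assumes Open3D is installed and the map was exported to "ndt_map.pcd" (hypothetical name).
import open3d as o3d

VOXEL_SIZE = 0.5  # metres; illustrative value, larger values give a coarser, lighter map

def refine_map(map_path: str, out_path: str) -> None:
    pcd = o3d.io.read_point_cloud(map_path)                # accumulated NDT map
    down = pcd.voxel_down_sample(voxel_size=VOXEL_SIZE)    # voxel grid filter
    o3d.io.write_point_cloud(out_path, down)               # refined map used for localization

if __name__ == "__main__":
    refine_map("ndt_map.pcd", "ndt_map_filtered.pcd")
```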
Figure 6: (a) The 3D map of the environment built using NDT mapping. (b) Localization of the autonomous vehicle via NDT matching, i.e., matching the current Lidar scan against the 3D map. (c) The current camera view of the environment. (d) The current location marked on the Google map. (e) Qualitative comparison between the GNSS pose (red) and the NDT pose (green) (Best viewed in color).
Figure 7: Overall architecture of the calibration and re-projection process, divided into four steps: (i) object detection produces detections in the image; (ii) the calibration module estimates the camera-Lidar parameters; (iii) the mapping module uses the calibration parameters to generate the point image; and (iv) the re-projection module performs range fusion, computing object distances from the point image and the image detection proposals, and finally produces the labelled bounding boxes in the Lidar frame (Best viewed in color).
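A minimal sketch of the mapping and range-fusion steps described in Figure 7 is given below, assuming a pinhole camera model with intrinsics K and camera-Lidar extrinsics (R, t); the function names, matrix shapes, and the use of the median depth inside a detection box are illustrative choices, not the paper's exact implementation.

```python
# Hedged sketch (not the authors' code): generate a "point image" by projecting Lidar
# points with the calibrated extrinsics and intrinsics, then estimate the range of an
# image detection as the median depth of the projected points inside its bounding box.
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """points: (N,3) Lidar XYZ; K: (3,3); R: (3,3); t: (3,) -> (M,2) pixels, (M,) depths."""
    cam = points @ R.T + t                 # Lidar frame -> camera frame
    front = cam[:, 2] > 0.1                # keep points in front of the camera
    cam = cam[front]
    uv = (K @ cam.T).T                     # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, cam[:, 2]

def box_range(uv, depth, box):
    """box = (x1, y1, x2, y2) from the image detector; returns median depth inside it."""
    x1, y1, x2, y2 = box
    mask = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return float(np.median(depth[mask])) if mask.any() else None
```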
Figure 8: (a-d) Object detection results using YOLOv3. (e) The Lidar point cloud, (f) the result of ground plane removal, and (g) the result of PointPillars (Best viewed in color).
Figure 9: (a,b) The image data and the Lidar point cloud data, with (a) showing corner detection in the image. (c) The point image, i.e., the projection of the Lidar data onto the image. (d) A person detected in the image data and (e) the re-projection of that detection into the Lidar data (Best viewed in color).
Figure 10: Architecture of the planning module. Its core components are mission planning and motion planning. Mission planning generates the lane route for the autonomous vehicle using the lane planner; motion planning plans the path and keeps the autonomous vehicle on it. The flow of information is shown above (Best viewed in color).
Figure 11: (a) The waypoints generated by the motion planning module, containing GPS coordinates, velocities, and angles. (b) The state-machine implementation for obstacle avoidance and stopping. The vehicle tracks its target speed as long as no obstacle is encountered. When an obstacle is detected at a distance below the velocity threshold, the planner replans the velocity to slow the vehicle down. Two cases then follow. If the detected obstacle is closer than the stop threshold and lane information with a change flag is available, the vehicle avoids the obstacle and resumes once the avoidance has been completed safely. Otherwise, if the detected obstacle is closer than the stop threshold and the vehicle velocity approaches zero, the vehicle stops until the obstacle moves. Finally, once the obstacle has moved and its distance exceeds the stop threshold, the vehicle returns to tracking its target speed; in the avoidance case, the vehicle likewise returns to tracking speed when the obstacle distance exceeds the stop threshold (Best viewed in color).
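A hedged sketch of the state machine in Figure 11(b) follows; the state names, threshold values, and the lane-change flag are placeholders chosen to mirror the caption rather than the authors' code.

```python
# Sketch of the obstacle handling state machine described in Figure 11(b).
# States and thresholds are illustrative; the stop distance of 10 m matches the
# planner parameter table, the velocity-replanning distance is assumed.
from enum import Enum, auto

class State(Enum):
    TRACK_SPEED = auto()   # follow the planned velocity profile
    DECELERATE = auto()    # velocity replanned because an obstacle is near
    AVOID = auto()         # change waypoints and overtake the obstacle
    STOP = auto()          # wait until the obstacle moves

VELOCITY_THRESHOLD = 20.0  # m, start replanning velocity below this distance (assumed)
STOP_THRESHOLD = 10.0      # m, stop or avoid below this distance

def step(state, obstacle_dist, speed, lane_change_ok):
    """One planner transition; obstacle_dist is None when the route is clear."""
    clear = obstacle_dist is None
    if state == State.TRACK_SPEED:
        if not clear and obstacle_dist < VELOCITY_THRESHOLD:
            return State.DECELERATE
    elif state == State.DECELERATE:
        if clear or obstacle_dist > VELOCITY_THRESHOLD:
            return State.TRACK_SPEED
        if obstacle_dist < STOP_THRESHOLD:
            return State.AVOID if lane_change_ok else State.STOP
    elif state in (State.AVOID, State.STOP):
        if clear or obstacle_dist > STOP_THRESHOLD:
            return State.TRACK_SPEED
    return state
```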
Figure 12: (a) Frames in which obstacle avoidance is performed; an obstacle (a husky mannequin) is placed in front of the autonomous vehicle. (b) Visualization of pure pursuit and the changed waypoint path (Best viewed in color).
Figure 13: (I) (a,c) The autonomous vehicle stopped in front of the obstacle; (b) visualization of the process in RViz, showing the obstacle and the car. (II) (a-c) Results of an obstacle stop using a person as the obstacle (Best viewed in color).
Figure 14: Quantitative evaluation of obstacle detection using the proposed autonomous vehicle. (a,c) Obstacle detection in the camera and Lidar frames using YOLOv3 and PointPillars, respectively. (b) The latency of the speed, brake, and obstacle profiles: when the obstacle is detected, the vehicle slows down and the brake command is promptly activated. The synchronization between the three profiles is clearly visible, with a total execution time of 1650 μs. In (b), at the detection of the second obstacle, the speed, brake, and obstacle profiles remain in their respective states, which indicates that the obstacle stayed on the route of the autonomous vehicle and did not move during that period. Speed is in m/s, the brake command is sent to the vehicle CAN bus through the Drivekit, and obstacle detection is the indication of the obstacle flag.
Figure 15: Quantitative evaluation of obstacle avoidance using the proposed autonomous vehicle. (a) The obstacle is detected in the camera frame using YOLOv3. (b) The obstacle is detected in the Lidar frame using PointPillars; upon detection, the waypoints are changed, the velocity is replanned by the velocity replanning module, and the obstacle is avoided while complying with traffic rules via the state-machine configuration. (c) Quantitative graphs of obstacle avoidance: the spike in the steering angle graph shows the change of angle in the positive direction. The graphs show that the obstacle is first detected, the vehicle speed is lowered and the brake is applied; since in this case the obstacle can be avoided without violating traffic rules, the autonomous vehicle avoids it safely.
Figure 16: The vehicle sender receives information from the planning module and sends commands to the PID controller. The Kiasoul-socket is composed of throttle/brake and torque PIDs that send commands to the Drivekit.
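Below is a minimal sketch of a throttle/brake PID of the kind composing the Kiasoul-socket; the gains are illustrative placeholders (the paper's tuned values appear in the controller-parameters table), and only the output clamp to the range between -1 and 1 is taken from the description of Figure 18.

```python
# Hedged sketch of a speed-tracking PID; positive output is interpreted as a throttle
# command and negative output as a brake command. Gains below are illustrative only.
class PID:
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, current, dt):
        error = target - current
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, u))   # clamp to [-1, 1] pedal range

# usage: command velocity from the twist filter vs. current velocity from the CAN bus
speed_pid = PID(kp=0.1, ki=0.01, kd=0.01)   # placeholder gains
pedal = speed_pid.update(target=5.0, current=3.2, dt=0.02)
```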
Figure 17: The geometric representation of the pure pursuit algorithm.
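The geometry in Figure 17 corresponds to the standard pure pursuit steering law, sketched below; the look-ahead ratio of 2 and the minimum look-ahead of 6 m follow the pure pursuit parameter table, while the wheelbase value and function names are assumptions.

```python
# Sketch of pure pursuit: steer onto the circular arc through the look-ahead waypoint,
# delta = atan2(2 * L * sin(alpha), Ld), with L the wheelbase, alpha the heading error
# to the look-ahead point, and Ld the look-ahead distance.
import math

WHEELBASE = 2.6          # m, approximate KIA Soul EV wheelbase (assumption)
LOOKAHEAD_RATIO = 2.0    # Ld scales with speed (from the pure pursuit parameter table)
MIN_LOOKAHEAD = 6.0      # m, minimum look-ahead distance

def lookahead_distance(speed):
    return max(MIN_LOOKAHEAD, LOOKAHEAD_RATIO * speed)

def pure_pursuit_steering(rear_axle_xy, yaw, target_xy):
    """Steering angle (rad) that drives the rear axle onto the arc through target_xy."""
    dx = target_xy[0] - rear_axle_xy[0]
    dy = target_xy[1] - rear_axle_xy[1]
    alpha = math.atan2(dy, dx) - yaw          # angle to the look-ahead point in the body frame
    ld = math.hypot(dx, dy)                   # distance to the chosen waypoint
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), ld)
```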
Figure 18: (a) The current velocity (from the CAN bus) and the command velocity (from the twist filter), together with the throttle pedal and the PID terms (proportional, integral, and derivative), which lie between -1 and 1. (b) The current and target steering values along with the torque information [24] (Best viewed in color).
Figure 19: Framework for tuning the PID parameters using a genetic algorithm. The parameters are tuned for throttle, brake, and steering, respectively.
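A hedged sketch of genetic-algorithm PID tuning in the spirit of Figure 19 is shown below; the search ranges, population size, and fitness model are placeholders, since in the paper fitness is evaluated on the vehicle's actual response.

```python
# Sketch: evolve (Kp, Ki, Kd) gains against a user-supplied episode runner that
# returns a tracking-error cost (lower is better). Hyper-parameters are illustrative.
import random

BOUNDS = [(0.0, 0.2), (0.0, 0.01), (0.0, 0.1)]   # assumed search ranges for Kp, Ki, Kd

def fitness(gains, run_episode):
    """Cost of one test run with the given gains, e.g. integral of absolute speed error."""
    return run_episode(*gains)

def tune_pid(run_episode, pop_size=20, generations=30, mutation=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda g: fitness(g, run_episode))
        parents = scored[: pop_size // 2]                         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            for i, (lo, hi) in enumerate(BOUNDS):                 # bounded mutation
                if random.random() < mutation:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda g: fitness(g, run_episode))
```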
Figure 20: (a) The effect of the look-ahead distance in pure pursuit. The different look-ahead configurations are listed in the graph legend; for instance, PP-LA-1m corresponds to a 1 m look-ahead distance. A 1 m look-ahead is more prone to noise and produces more vibration in the lateral control of the vehicle, whereas a 20 m look-ahead makes the lateral control deviate from the original track. The optimized look-ahead distance of 6 m gives the best result, with minimal error compared to the other look-ahead distances. The steering angle for lateral control is denoted by Theta (rad). (b) The lateral error difference between MPC and pure pursuit [24] (Best viewed in color).
Figure 21: (a) Qualitative comparison between the pure pursuit and MPC-based path followers. The difference between pure pursuit and the MPC-based path follower is shown in (b,c), respectively [24] (Best viewed in color).
Figure 22: The framework of the cab booking service (Best viewed in color).
Figure 23: The detailed architecture for turning the autonomous vehicle into a cab service (Best viewed in color).
Figure 24: Visualization of the autonomous taxi service as an application of our autonomous vehicle. (a) The 3D map shows the start position and the customer's pick-up position upon receiving the request. (b) Image view of the start position. (c) Front-facing camera view from the start position. (d) The customer waiting for the autonomous taxi service. (e) The map of our institute showing the start, customer, and destination positions (Best viewed in color).
Figure 25: Visualization of the autonomous vehicle approaching to pick up the customer. (a) The 3D map of the environment showing the ego vehicle and the customer's position. (b) The view of the customer from the autonomous vehicle's front-facing camera. (c) The environment, together with the app the customer is using to request the autonomous taxi service (Best viewed in color).
Figure 26: The route of the autonomous vehicle from picking up the customer to the destination is illustrated in (a-c), along with object detection and the autonomous vehicle stopping at an obstacle in (d,e). In this case, obstacle avoidance is not performed because the road is one-way and crossing the yellow line would violate traffic rules (Best viewed in color).
Abstract
1. Introduction
- (1) The proposed autonomous vehicle (car.Mlv.ai) is designed with constrained resources and a minimal sensor suite compared to state-of-the-art vehicles. Its cost-effectiveness is one of the significant contributions of this work.
- (2) The 3D map used for localization of the autonomous vehicle is built from a 32-channel Lidar together with auxiliary information from the Global Navigation Satellite System (GNSS), the Inertial Measurement Unit (IMU), and odometry data from the vehicle Controller Area Network (CAN) bus.
- (3) For the autonomous vehicle's perception, the exteroceptive sensors, Lidar and camera, are fused. In the fusion framework, object detection networks for both sensors are trained and integrated into the Robot Operating System (ROS); the detection networks are optimized with TensorRT and deployed on the system for real-time operation.
- (4) A state machine is designed that handles mission and motion planning for the autonomous vehicle by incorporating obstacle information from the perception stack.
- (5) For longitudinal and lateral motion control, a customized controller named Kiasoul-Socket is designed that drives the throttle, brake, and steering of the autonomous vehicle according to the state-machine conditions for real-world traffic scenarios such as obstacle avoidance and obstacle stopping. KIA does not provide such a controller for the Soul EV because it is proprietary.
- (6) A CAN-bus shield is developed for the KIA Soul EV to acquire CAN messages from the vehicle for lateral and longitudinal control and to provide odometry data for localization.
- (7) The proposed autonomous vehicle has been experimentally validated in an autonomous taxi service application.
2. Autonomous Vehicle Platform
3. Architecture
3.1. Mapping and Localization
Experimental Results
3.2. Perception
3.2.1. Object Detection
Algorithm 1: Ground plane removal based on angle and distance filtering
3.2.2. Camera-Lidar Calibration
3.2.3. Fusion (Mapping and Re-Projection)
3.2.4. Experimental Results
3.3. Planning
3.3.1. Path Generation
3.3.2. Mission Planning
3.3.3. Motion Planning
Path Planning
3.3.4. Experimental Results
3.4. Control
3.4.1. Longitudinal Control
3.4.2. Lateral Control
3.4.3. Experimental Results
4. Experimental Validation: Autonomous Taxi Service
5. Comparison
6. Discussion
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
SOTIF | Safety of the intended functionality
DARPA | Defense Advanced Research Projects Agency
SAE | Society of Automotive Engineers
SLAM | Simultaneous localization and mapping
NDT | Normal Distributions Transform
RTK | Real-time kinematics
IMU | Inertial Measurement Unit
GNSS | Global Navigation Satellite System
CAN | Controller Area Network
GPS | Global Positioning System
ROS | Robot Operating System
SPI | Serial Peripheral Interface
References
- The Milwaukee Sentinel. Phantom Auto Will Tour City. Available online: https://news.google.com/newspapers?id=unBQAAAAIBAJ&sjid=QQ8EAAAAIBAJ&pg=7304,3766749 (accessed on 22 October 2020).
- Wang, C.; Gong, S.; Zhou, A.; Li, T.; Peeta, S. Cooperative adaptive cruise control for connected autonomous vehicles by factoring communication-related constraints. arXiv 2018, arXiv:1807.07232. [Google Scholar]
- EmbeddedMontiArc. Available online: https://embeddedmontiarc.github.io/webspace/ (accessed on 22 October 2020).
- Munir, F.; Jalil, A.; Jeon, M. Real time eye tracking using Kalman extended spatio-temporal context learning. In Proceedings of the Second International Workshop on Pattern Recognition, Singapore, 1–3 May 2017; Volume 10443, p. 104431G. [Google Scholar]
- Munir, F.; Azam, S.; Hussain, M.I.; Sheri, A.M.; Jeon, M. Autonomous vehicle: The architecture aspect of self driving car. In Proceedings of the 2018 International Conference on Sensors, Signal and Image Processing, Prague, Czech Republic, 12–14 October 2018; pp. 1–5. [Google Scholar]
- Thomanek, F.; Dickmanns, E.D. Autonomous road vehicle guidance in normal traffic. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 1995; Volume 3, pp. 499–507. [Google Scholar]
- Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnke, B.; et al. Junior: The stanford entry in the urban challenge. J. Field Rob. 2008, 25, 569–597. [Google Scholar] [CrossRef] [Green Version]
- Buechel, M.; Frtunikj, J.; Becker, K.; Sommer, S.; Buckl, C.; Armbruster, M.; Marek, A.; Zirkler, A.; Klein, C.; Knoll, A. An automated electric vehicle prototype showing new trends in automotive architectures. In Proceedings of the 18th International IEEE Conference on Intelligent Transportation Systems, Las Palmas de Gran Canaria, Spain, 15–18 September 2015; pp. 1274–1279. [Google Scholar]
- Aramrattana, M.; Detournay, J.; Englund, C.; Frimodig, V.; Jansson, O.U.; Larsson, T.; Mostowski, W.; Rodríguez, V.D.; Rosenstatter, T.; Shahanoor, G. Team halmstad approach to cooperative driving in the grand cooperative driving challenge 2016. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1248–1261. [Google Scholar] [CrossRef]
- Apollo. Available online: https://apollo.auto/docs/disclaimer.html (accessed on 22 October 2020).
- Ziegler, J.; Bender, P.; Schreiber, M.; Lategahn, H.; Strauss, T.; Stiller, C.; Dang, T.; Franke, U.; Appenrodt, N.; Keller, C.G.; et al. Making bertha drive—an autonomous journey on a historic route. IEEE Intell. Transp. Syst. Mag. 2014, 6, 8–20. [Google Scholar] [CrossRef]
- Self-Driving Car Technology | Uber ATG. Available online: https://www.uber.com/us/en/atg/technology/ (accessed on 22 October 2020).
- Autopilot. Available online: https://www.tesla.com/autopilot (accessed on 22 October 2020).
- Waymo. Available online: https://waymo.com/ (accessed on 22 October 2020).
- General Motors. Available online: https://www.gm.com/ (accessed on 22 October 2020).
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Rob. 2006, 23, 661–692. [Google Scholar] [CrossRef]
- Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the urban challenge. J. Field Rob. 2008, 25, 425–466. [Google Scholar] [CrossRef] [Green Version]
- Tas, O.S.; Salscheider, N.O.; Poggenhans, F.; Wirges, S.; Bandera, C.; Zofka, M.R.; Strauss, T.; Zöllner, J.M.; Stiller, C. Making bertha cooperate–team annieway’s entry to the 2016 grand cooperative driving challenge. IEEE Trans. Intell. Transp. Syst. 2017, 19, 1262–1276. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The kitti dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
- Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 Year, 1000 km: The Oxford RobotCar Dataset. Int. J. Robot. Res. 2017, 36, 3–15. [Google Scholar] [CrossRef]
- Fridman, L. Human-Centered Autonomous Vehicle Systems: Principles of Effective Shared Autonomy. arXiv 2018, arXiv:1810.01835. [Google Scholar]
- Pandey, G.; McBride, J.R.; Eustice, R.M. Ford campus vision and lidar data set. Int. J. Robot. Res. 2011, 30, 1543–1552. [Google Scholar] [CrossRef]
- Spirit of Berlin: An Autonomous Car for the DARPA Urban Challenge Hardware and Software Architecture. Available online: https://archive.darpa.mil/grandchallenge/TechPapers/Team_Berlin.pdf (accessed on 22 October 2020).
- Azam, S.; Munir, F.; Jeon, M. Dynamic control system Design for autonomous vehicle. In Proceedings of the 6th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS), Prague, Czech Republic, 2–4 May 2020. [Google Scholar]
- Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixão, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2020, 113816. [Google Scholar] [CrossRef]
- Ulaş, C.; Temeltaş, H. 3d multi-layered normal distribution transform for fast and long range scan matching. J. Intell. Robot. Syst. 2013, 71, 85–108. [Google Scholar] [CrossRef]
- Chetverikov, D.; Svirko, D.; Stepanov, D.; Krsek, P. The trimmed iterative closest point algorithm. In Object Recognition Supported by User Interaction for Service Robots; IEEE: Piscataway, NJ, USA, 2002; Volume 3, pp. 545–548. [Google Scholar]
- Dissanayake, M.G.; Newman, P.; Clark, S.; DurrantWhyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (slam) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241. [Google Scholar] [CrossRef] [Green Version]
- Borrmann, D.; Elseberg, J.; Lingemann, K.; Nuchter, A.; Hertzberg, J. Globally consistent 3d mapping with scan matching. Robot. Auton. Syst. 2008, 56, 130–142. [Google Scholar] [CrossRef] [Green Version]
- Kim, P.; Chen, J.; Cho, Y.K. Slam-driven robotic mapping and registration of 3d point clouds. Autom. Constr. 2018, 89, 38–48. [Google Scholar] [CrossRef]
- Besl, P.J.; McKay, N.D. Method for registration of 3-d shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; International Society for Optics and Photonics: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–607. [Google Scholar]
- He, Y.; Liang, B.; Yang, J.; Li, S.; He, J. An iterative closest points algorithm for registration of 3d laser scanner point clouds with geometric features. Sensors 2017, 17, 1862. [Google Scholar] [CrossRef] [Green Version]
- Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003. [Google Scholar]
- Prieto, P.G.; Martín, F.; Moreno, L.; Carballeira, J. DENDT: 3D-NDT scan matching with Differential Evolution. In Proceedings of the 25th Mediterranean Conference on Control and Automation (MED), Valletta, Malta, 3–6 July 2017; pp. 719–724. [Google Scholar]
- Zhou, Q.-Y.; Park, J.; Koltun, V. Open3d: A modern library for 3d data processing. arXiv 2018, arXiv:1801.09847. [Google Scholar]
- Takeuchi, E.; Tsubouchi, T. A 3-d scan matching using improved 3-d normal distributions transform for mobile robotic mapping. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 3068–3073. [Google Scholar]
- Munir, F.; Azam, S.; Sheri, A.M.; Ko, Y.; Jeon, M. Where Am I: Localization and 3D Maps for Autonomous Vehicles. In Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems (VEHITS), Crete, Greece, 3–5 May 2019. [Google Scholar]
- Leonard, J.; How, J.; Teller, S.; Berger, M.; Campbell, S.; Fiore, G.; Fletcher, L.; Frazzoli, E.; Huang, A.; Karaman, S.; et al. A perception-driven autonomous urban vehicle. J. Field Rob. 2008, 25, 727–774. [Google Scholar] [CrossRef] [Green Version]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Azam, S.; Munir, F.; Rafique, A.; Ko, Y.; Sheri, A.M.; Jeon, M. Object Modeling from 3D Point Cloud Data for Self-Driving Vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar]
- Maturana, D.; Scherer, S. Voxnet: A 3d convolutional neural network for real-time object recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; p. 4. [Google Scholar]
- Munir, F.; Azam, S.; Rafique, A.; Jeon, M. Automated Labelling of 3D Point Cloud Data. Proc. Korean Inf. Sci. Soc. 2017, 769–771. [Google Scholar]
- Li, B.; Zhang, T.; Xia, T. Vehicle detection from 3d lidar using fully convolutional network. arXiv 2016, arXiv:1608.07916. [Google Scholar]
- Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; Volume 1. [Google Scholar]
- Engelcke, M.; Rao, D.; Wang, D.Z.; Tong, C.H.; Posner, I. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017. [Google Scholar]
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2019; pp. 12697–12705. [Google Scholar]
- Schreier, M. Bayesian environment representation, prediction, and criticality assessment for driver assistance systems. Automatisierungstechnik 2017, 65, 151–152. [Google Scholar] [CrossRef] [Green Version]
- Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012. [Google Scholar]
- Pandey, G.; McBride, J.; Savarese, S.; Eustice, R. Extrinsic calibration of a 3d laser scanner and an omnidirectional camera. IFAC Proc. Vol. 2010, 7, 336–341. [Google Scholar] [CrossRef] [Green Version]
- Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Vel’as, M.; Španěl, M.; Materna, Z.; Herout, A. Calibration of RGB camera with Velodyne lidar. In WSCG 2014: Communication Papers Proceedings of the 22nd International Conference in Central Europe on Computer Graphics, Plzen, Czech Republic, 2014; Václav Skala-UNION Agency: Plzen, Czech Republic; pp. 135–144.
- Frohlich, R.; Kato, Z.; Tremeau, A.; Tamas, L.; Shabo, S.; Waksman, Y. Region based fusion of 3D and 2D visual data for Cultural Heritage objects. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016. [Google Scholar]
- Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information; AAAI: Menlo Park, CA, USA, 2012. [Google Scholar]
- Alismail, H.; Baker, L.D.; Browning, B. Automatic calibration of a range sensor and camera system. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012. [Google Scholar]
- Pusztai, Z.; Hajder, L. Accurate calibration of LiDAR-camera systems using ordinary boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 394–402. [Google Scholar]
- Taylor, Z.; Nieto, J. Automatic calibration of lidar and camera images using normalized mutual information. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013. [Google Scholar]
- Banerjee, K.; Notz, D.; Windelen, J.; Gavarraju, S.; He, M. Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar]
- Debattisti, S.; Mazzei, L.; Panciroli, M. Automated extrinsic laser and camera inter-calibration using triangular targets. In Proceedings of the Intelligent Vehicles Symposium (IV), Gold Coast, QLD, Australia, 23–26 June 2013. [Google Scholar]
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: Efficient perspective-n-point camera pose estimation. Int. J. Comput. Vis. 2009, 81, 155–166. [Google Scholar] [CrossRef] [Green Version]
- Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
- Massera Filho, C.; Wolf, D.F.; Grassi, V.; Osório, F.S. Longitudinal and lateral control for autonomous ground vehicles. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014. [Google Scholar]
- Åström, K.J.; Hägglund, T. PID Controllers: Theory, Design, and Tuning; Instrument Society of America: Research Triangle Park, NC, USA, 1995; Volume 2. [Google Scholar]
- Dantas, A.D.O.S.; Dantas, A.F.O.A.; Campos, J.T.L.S.; de Almeida Neto, D.L.; Dórea, C.E. PID Control for Electric Vehicles Subject to Control and Speed Signal Constraints. J. Control Sci. Eng. 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
- Ang, K.H.; Chong, G.; Li, Y. PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 2005, 13, 559–576. [Google Scholar]
- Mayne, D.Q. Model predictive control: Recent developments and future promise. Automatica 2014, 50, 2967–2986. [Google Scholar] [CrossRef]
- Garcia, C.E.; Prett, D.M.; Morari, M. Model predictive control: Theory and practice—A survey. Automatica 1989, 25, 335–348. [Google Scholar] [CrossRef]
- Falcone, P.; Borrelli, F.; Asgari, J.; Tseng, H.E.; Hrovat, D. Predictive active steering control for autonomous vehicle systems. IEEE Trans. Control Syst. Technol. 2007, 15, 566–580. [Google Scholar] [CrossRef]
- Falcone, P.; Borrelli, F.; Asgari, J.; Tseng, H.E.; Hrovat, D. A model predictive control approach for combined braking and steering in autonomous vehicles. In Proceedings of the 2007 Mediterranean Conference on Control and Automation, Athens, Greece, 27–29 June 2007. [Google Scholar]
- Borrelli, F.; Bemporad, A.; Fodor, M.; Hrovat, D. An MPC/hybrid system approach to traction control. IEEE Trans. Control Syst. Technol. 2006, 14, 541–552. [Google Scholar] [CrossRef]
- Liu, C.; Carvalho, A.; Schildbach, G.; Hedrick, J.K. Stochastic predictive control for lane keeping assistance systems using a linear time-varying model. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015. [Google Scholar]
- Hang, C.C.; Åström, K.J.; Ho, W.K. Refinements of the Ziegler-Nichols tuning formula. In IEE Proceedings D-Control Theory and Applications; No 2; IET Digital Library: London, UK, 1991; Volume 138. [Google Scholar]
- Mitsukura, Y.; Yamamoto, T.; Kaneda, M. A design of self-tuning PID controllers using a genetic algorithm. In Proceedings of the 1999 American Control Conference (Cat. No. 99CH36251), San Diego, CA, USA, 2–4 June 1999; Volume 2. [Google Scholar]
- Thomas, N.; Poongodi, D.P. Position control of DC motor using genetic algorithm based PID controller. In Proceedings of the World Congress on Engineering, London, UK, 2–4 July 2009; Volume 2, pp. 1–3. [Google Scholar]
- Chang, T.Y.; Chang, C.D. Genetic Algorithm Based Parameters Tuning for the Hybrid Intelligent Controller Design for the Manipulation of Mobile Robot. In Proceedings of the 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA), Tokyo, Japan, 12–15 April 2019. [Google Scholar]
- Dandl, F.; Bogenberger, K. Comparing future autonomous electric taxis with an existing free-floating carsharing system. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2037–2047. [Google Scholar] [CrossRef]
- Hyland, M.F. Real-Time Operation of Shared-Use Autonomous Vehicle Mobility Services: Modeling, Optimization, Simulation, and Analysis. Ph.D. Thesis, Northwestern University, Evanston, IL, USA, 2018. [Google Scholar]
- Dandl, F.; Bogenberger, K. Booking Processes in Autonomous Carsharing and Taxi Systems. In Proceedings of the 7th Transport Research Arena, Vienna, Austria, 16–19 April 2018. [Google Scholar]
- Zoox-We Are Inventors, Builders and Doers. Available online: https://zoox.com/ (accessed on 22 October 2020).
- Challenges in Autonomous Vehicle Testing and Validation. In Safety of the Intended Functionality; SAE: Warrendale, PA, USA, 2020; pp. 125–142.
- Butler, R.W.; Finelli, G.B. The infeasibility of experimental quantification of life-critical software reliability. In Proceedings of the Conference on Software for Citical Systems, New Orleans, LA, USA, 4–6 December 1991; pp. 66–76. [Google Scholar]
Parameters | Values |
---|---|
Error Threshold | 1 |
Transformation Epsilon | 0.1 |
Max Iteration | 50 |
Error between GNSS Pose and NDT Pose | RMSE |
---|---|
x-direction | 0.2903 |
y-direction | 0.0296 |
z-direction | 0.0361 |
Overall | 1.7346 |
Parameters | Values |
---|---|
Sensor height | 1.8 |
Distance Threshold | 1.58 |
Angle Threshold | 0.08 |
Size Threshold | 20 |
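Using the parameters above, the following is a hedged sketch of ground plane removal by angle and distance filtering in the spirit of Algorithm 1; the exact filtering criteria of the paper are not reproduced here, so the way the sensor height, distance threshold, and angle threshold are combined is an assumption, and the size threshold (used for clusters) is omitted.

```python
# Sketch: label a Lidar point as ground when it lies close to the expected ground
# height and its elevation angle relative to the ground plane is small, then drop
# the ground points before object detection.
import numpy as np

SENSOR_HEIGHT = 1.8     # m, Lidar mounting height above the ground
DIST_THRESHOLD = 1.58   # m, max deviation from the expected ground height
ANGLE_THRESHOLD = 0.08  # rad, max slope to still count as ground

def remove_ground(points):
    """points: (N,3) Lidar XYZ in the sensor frame -> non-ground points only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                            # horizontal range of each point
    slope = np.arctan2(z + SENSOR_HEIGHT, r)      # angle above the assumed ground plane
    near_ground = np.abs(z + SENSOR_HEIGHT) < DIST_THRESHOLD
    flat = np.abs(slope) < ANGLE_THRESHOLD
    ground = near_ground & flat
    return points[~ground]
```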
Parameters | Values |
---|---|
Stop distance for obstacle Threshold | 10 |
Velocity replanning distance Threshold | 10 |
Detection range | 12 |
Threshold for objects | 20 |
Deceleration for obstacle | 1.3 |
Curve Angle | 0.65 |
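With the planner parameters above, a minimal velocity-replanning sketch might look as follows; the constant-deceleration profile is an assumption, since the table only specifies the thresholds and the deceleration value.

```python
# Sketch: reduce the commanded speed so the vehicle can come to rest at the stop
# distance in front of a detected obstacle, using a constant-deceleration ramp
# v = sqrt(2 * a * d), where d is the braking margin left before the stop point.
import math

STOP_DISTANCE = 10.0      # m, come to a stop this far from the obstacle
REPLAN_DISTANCE = 10.0    # m, margin beyond the stop distance where replanning starts (assumed use)
DECELERATION = 1.3        # m/s^2

def replanned_speed(target_speed, obstacle_dist):
    """Speed command given the distance to the nearest obstacle on the path (None if clear)."""
    if obstacle_dist is None or obstacle_dist > STOP_DISTANCE + REPLAN_DISTANCE:
        return target_speed                            # obstacle far away: keep target speed
    margin = max(0.0, obstacle_dist - STOP_DISTANCE)   # distance left for braking
    return min(target_speed, math.sqrt(2.0 * DECELERATION * margin))
```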
Controller Parameters | Throttle (Cohen-Coon) | Throttle (Genetic Algorithm) | Steering (Cohen-Coon) | Steering (Genetic Algorithm)
---|---|---|---|---
Proportional (Kp) | 0.085 | 0.003 | 0.0009 | 0.0005
Integral (Ki) | 0.0045 | 0.0001 | 0.0001 | 0.0002
Derivative (Kd) | 0.01 | 0.09 | 0.0005 | 0.0008
Pure Pursuit Parameters | Values |
---|---|
Look-ahead ratio | 2 |
Minimum Look-ahead Distance | 6 |
Hardware Level Stack Comparison

S.No | Autonomous Vehicles | Lidar Units | Camera Units | Supplementary Sensors (GNSS, IMU, Radars, Sonars)
---|---|---|---|---
1. | Waymo [14] | 5 | 1 | GNSS+IMU
2. | GM Chevy Bolt Cruise [15] | 5 | 16 | Radars
3. | ZOOX [79] | 8 | 12 | Radar+GNSS+IMU
4. | Uber (ATG) [12] | 7 | 20 | Radars, GNSS+IMU
6. | Ours (Car.Mlv.ai) | 1 | 2 | GNSS+IMU
Sensor suite comparison with representative autonomous driving platforms (1994–2019):

Sensors | VaMP [6] | Junior [7] | Boss [17] | Bertha [11] | Race [8] | Halmstad [9] | Bertha [18] | Apollo [10] | Ours (Car.Mlv.ai)
---|---|---|---|---|---|---|---|---|---
Camera | ● front/rear | ❍ | ● front | ● stereo | ● front | ❍ | ● stereo/360 deg | ● front/side | ● front |
Lidar | ❍ | ● 64 channels | ● | ❍ | ● 4 channels | ❍ | ● 4 channels | ● 64 channels | ● 32 channels |
Radar | ❍ | ● | ● | ● | ● | ● series | ● | ● | ● |
GPS | ❍ | ● | ● | ● | ● | ● rtk | ● | ● rtk | ● rtk |
INS | ● | ● | ● | ? | ● | ● | ● | ● | ● |
PC | ● | ● | ● | ? | ● | ● | ● | ● | ● |
GPU | ❍ | ❍ | ❍ | ? | ❍ | ❍ | ● | ● | ● |
System and software stack comparison for the same platforms (1994–2019):

Attribute | VaMP [6] | Junior [7] | Boss [17] | Bertha [11] | Race [8] | Halmstad [9] | Bertha [18] | Apollo [10] | Ours (Car.Mlv.ai)
---|---|---|---|---|---|---|---|---|---
Application | German Highway | DARPA Urban Challenge | DARPA Urban Challenge | German Rural | Parking | Cooperative Driving Challenge | Cooperative Driving Challenge | Various | Various, with focus on autonomous cab services
Middleware | ? | Publish/Subscribe IPC | Publish/Subscribe IPC | ? | RACE RTE | LCM | ROS | Cyber RT | ROS |
Operating System | ? | Linux | ? | ? | Pike OS | Linux | Linux | Linux | Linux |
Functional Safety | None | Watchdog module | Error Recovery | ? | Supporting ASIL D | Trust System | ? | System Health Monitor | Supporting ASIL D |
Controller | ? | PC | ? | ? | RACE DDC | Micro Autobox | realtime onboard comp. | PC | DriveKit and on board controller |
Licensing | Proprietary | Partly open | Proprietary | Proprietary | Proprietary | Proprietary | Proprietary | Open | Open |
Sensors | Frequency (Hz) |
---|---|
Lidar | 15 |
Camera | 25 |
GPS | 20 |
CAN info | 200
IMU | 20
Odometry | 200
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Azam, S.; Munir, F.; Sheri, A.M.; Kim, J.; Jeon, M. System, Design and Experimental Validation of Autonomous Vehicle in an Unconstrained Environment. Sensors 2020, 20, 5999. https://doi.org/10.3390/s20215999