Research on Mobile Robot Navigation Method Based on Semantic Information
Figure 1. Laser SLAM system.
Figure 2. Robotic laser SLAM software platform.
Figure 3. Schematic of voxel mapping.
Figure 4. Schematic diagram of the PointNet++ network structure.
Figure 5. Outdoor experimental vehicle system.
Figure 6. (a) Location curve for T planning. (b) Velocity curve for T planning. (c) Acceleration curve for T planning.
Figure 7. (a) Original semantic map without dynamic point filtering. (b) Static semantic map after dynamic point filtering.
Figure 8. (a) Dynamic point filtering effect of the A-LOAM algorithm. (b) Dynamic point filtering effect of the proposed algorithm.
Figure 9. (a) Closed-loop trajectory from the horizontal perspective. (b) Closed-loop trajectory from the bird's-eye view.
Figure 10. (a) Global mapping outcome and associated trajectory. (b) Robot traversal trajectories during the mapping process. (c) Robot's motion trajectories during the data acquisition process.
Abstract
1. Introduction
- (a) This study proposes a mobile robot navigation method based on semantic information, combining a deep-learning-based semantic laser SLAM system with a trajectory interpolation algorithm to address navigation challenges in dynamic environments and large-scale scenes.
- (b) The concept of voxels is introduced into the occupancy probability map to better represent dynamic objects. The PointNet++ network performs semantic segmentation of the point cloud to identify dynamic object points and extract semantic information (a minimal illustrative sketch follows this list).
- (c) The semantic information is used to generate a global environment descriptor for loop closure detection and map optimization, improving the quality of map construction.
- (d) A T-trajectory interpolation algorithm is proposed to ensure smooth transitions in robot motion and avoid vibration.
- (e) The method was experimentally verified in the SIASUN campus environment; the SLAM system runs in real time and meets the positioning requirements of mobile robots.
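As a minimal, illustrative sketch of contribution (b), the snippet below shows a log-odds occupancy update over a sparse voxel grid. The voxel size, update weights, clamping limits, and class/function names are assumptions made for illustration only; they are not taken from the paper's implementation.

```python
import numpy as np

class VoxelOccupancyMap:
    """Minimal log-odds occupancy map over a sparse voxel grid (illustrative sketch)."""

    def __init__(self, voxel_size=0.2, l_hit=0.85, l_miss=-0.4, l_min=-2.0, l_max=3.5):
        self.voxel_size = voxel_size           # voxel edge length in metres (assumed)
        self.l_hit, self.l_miss = l_hit, l_miss
        self.l_min, self.l_max = l_min, l_max
        self.log_odds = {}                     # voxel index -> log-odds of occupancy

    def _key(self, point):
        # Quantise a 3D point to its voxel index.
        return tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))

    def update(self, point, hit=True):
        # Standard log-odds update: increase on a hit, decrease on a miss,
        # clamped so that briefly occupied voxels can later be demoted to free space.
        k = self._key(point)
        l = self.log_odds.get(k, 0.0) + (self.l_hit if hit else self.l_miss)
        self.log_odds[k] = float(np.clip(l, self.l_min, self.l_max))

    def occupancy(self, point):
        # Convert log-odds back to an occupancy probability in [0, 1].
        l = self.log_odds.get(self._key(point), 0.0)
        return 1.0 - 1.0 / (1.0 + np.exp(l))
```

Clamping the log-odds keeps the map responsive, so voxels touched only briefly by moving objects do not stay marked as occupied.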
2. Mobile Robotic Systems Overview
2.1. Mobile Robot Navigation Technology Program
- 1. Feature-Based Registration:
- 2. Direct Registration:
2.2. General Framework of the SLAM System
3. Map Organization and Update Strategy
3.1. Laser Odometry—Nonlinear Optimization Algorithm
3.2. Voxel-Based Local Map Construction and Updating
4. Combining Deep Learning for a Semantic Laser SLAM System
4.1. Segmentation Feature Extraction Based on Ground Constraints
4.2. Closed-Loop Detection and Position Optimization Flow
5. T-Trajectory Interpolation Strategy
- Smooth transitions: the robot's transitions between consecutive target points must be smooth, avoiding erratic motion.
- Trajectory optimization: interpolation algorithms can generate T-trajectories optimized for the given motion conditions, ensuring the shortest path, minimum acceleration/deceleration, and minimum mechanical stress.
- Velocity planning: the interpolation algorithm must account for the velocity changes in each part of the T-trajectory, avoiding excessive or insufficient speed to keep the system stable (a minimal sketch of such a profile follows this list).
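Below is a minimal sketch of a trapezoidal velocity profile, assuming that "T-trajectory" refers to the classic accelerate / cruise / decelerate profile over a straight segment. The function name, parameters, and the triangular fallback for short segments are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def t_profile(distance, v_max, a_max, dt=0.01):
    """Sample a trapezoidal ("T") velocity profile over a straight segment (illustrative)."""
    t_acc = v_max / a_max                     # time needed to reach cruise speed
    d_acc = 0.5 * a_max * t_acc ** 2          # distance covered while accelerating
    if 2.0 * d_acc > distance:
        # Segment too short to reach v_max: the profile degenerates to a triangle.
        t_acc = np.sqrt(distance / a_max)
        v_peak, t_cruise = a_max * t_acc, 0.0
    else:
        v_peak, t_cruise = v_max, (distance - 2.0 * d_acc) / v_max
    t_total = 2.0 * t_acc + t_cruise

    ts = np.arange(0.0, t_total + dt, dt)
    pos, vel = [], []
    for t in ts:
        if t < t_acc:                                    # acceleration phase
            v, s = a_max * t, 0.5 * a_max * t ** 2
        elif t < t_acc + t_cruise:                       # constant-velocity phase
            v = v_peak
            s = 0.5 * a_max * t_acc ** 2 + v_peak * (t - t_acc)
        else:                                            # deceleration phase
            td = max(t_total - t, 0.0)
            v, s = a_max * td, distance - 0.5 * a_max * td ** 2
        vel.append(v)
        pos.append(min(s, distance))
    return ts, np.array(pos), np.array(vel)
```

For example, `t_profile(2.0, 0.5, 0.25)` samples a 2 m segment with a 0.5 m/s cruise speed and a 0.25 m/s² acceleration limit; sampling the profile in this way yields position, velocity, and acceleration curves of the kind shown in Figure 6.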
6. Experimental Results and Analysis
6.1. Trajectory Interpolation Test
6.2. Semantic Maps and Closed-Loop Detection Experiments
6.3. Large-Scale Mapping Experiment for a Corporate Campus
7. Conclusions
- (1) This paper proposes a general framework for a SLAM system based on open-source laser SLAM algorithms.
- (2) The NDT algorithm addresses the alignment of 3D point clouds. The feature point method is used for feature extraction and scan-to-map alignment of the point cloud, yielding a high-accuracy robot pose and enhancing the ability of the local voxel map to represent dynamic objects.
- (3) Semantic categories are attached to the point cloud as labels, and dynamic points are segmented using PointNet++. A global environment descriptor is generated from the semantic information, loop closures are detected with it, and the resulting constraints are optimized in a factor graph (a rough illustrative sketch follows this list).
- (4) The SLAM navigation algorithm employs T-trajectory interpolation for global and local planning, ensuring smooth and stable robot motion.
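As a rough, hypothetical illustration of point (3), the sketch below builds a global descriptor by histogramming semantic labels over concentric range rings and tests loop-closure candidates by cosine similarity. The ring count, class count, and similarity threshold are assumptions for illustration and do not reproduce the descriptor actually used in the paper.

```python
import numpy as np

def semantic_descriptor(points, labels, num_rings=20, num_classes=8, max_range=80.0):
    """Histogram of semantic labels per range ring around the robot (illustrative)."""
    desc = np.zeros((num_rings, num_classes))
    ranges = np.linalg.norm(points[:, :2], axis=1)                 # horizontal distance to sensor
    rings = np.clip((ranges / max_range * num_rings).astype(int), 0, num_rings - 1)
    for r, c in zip(rings, labels):
        desc[r, c] += 1.0
    # Normalise each ring so the descriptor is insensitive to point density.
    row_sums = desc.sum(axis=1, keepdims=True)
    return desc / np.maximum(row_sums, 1.0)

def is_loop_candidate(desc_a, desc_b, threshold=0.9):
    """Cosine similarity between flattened descriptors as a simple loop-closure test."""
    a, b = desc_a.ravel(), desc_b.ravel()
    sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return sim > threshold
```

Descriptors flagged by such a test would then be passed to geometric verification before being added as loop-closure factors in the pose graph.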
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Parameter | Specification |
|---|---|
| Horizontal field of view | 360° |
| Vertical field of view | 30° |
| Horizontal angular resolution | 0.1°/0.2°/0.4° |
| Frame rate | 5 Hz/10 Hz/20 Hz |
| Ranging capability | 150 m |
| Accuracy (typical) | ±2 cm |