An Obstacle Detection Method Based on Longitudinal Active Vision
Figure 1. Obstacle detection based on longitudinal active vision.
Figure 2. Obstacle ranging model.
Figure 3. Schematic diagram of static obstacle imaging.
Figure 4. Schematic diagram of dynamic obstacle imaging.
Figure 5. Schematic diagram of the camera rotation.
Figure 6. Architecture of the longitudinal active camera obstacle detection system.
Figure 7. Steering angles corresponding to different radii of rotation.
Figure 8. Distance measurements corresponding to the different steering angles.
Figure 9. Two-frame image acquisition before and after camera rotation: (a) the obstacle image at the initial moment; (b) feature region extraction based on MSER; (c) feature point extraction, where a red * marks the lowest point of each extremal region and a blue + marks the intersection of the obstacle with the road plane; (d) the second frame image, acquired after the camera rotation.
Figure 10. MSER feature region extraction: (a) the obstacle image at the initial moment; (b) the obstacle image at the next moment; (c) region matching between the two moments, where the red regions and + marks are the MSERs and their centers of mass at the initial moment, and the cyan regions and o marks are the MSERs and their centers of mass at the next moment.
Figure 11. Feature point location: (a) feature points located in the image at the initial moment; (b) feature points located in the image at the next moment.
Figure 12. Obstacle area division (the yellow box is the detected obstacle area, and the number above it is the distance from the obstacle to the camera).
Figure 13. Experimental equipment for the real-vehicle test.
Figure 14. Real-vehicle experiment route.
Figure 15. Detection results.
Abstract
1. Introduction
2. Related Work
3. Methods
- (1) Calibration of initial camera parameters.
- (2) Image region matching and obstacle detection:
- ① Acquire the initial frame image. At the moment t = 0, the camera obtains the first frame image I1. The feature points in I1 are extracted, and the lowest of the extracted feature points is taken as the intersection point P of the obstacle and the road plane. The distance from point P to the camera is then calculated, which yields the rotation angle θ the camera must turn through to obtain the next frame image.
- ② Acquire the second frame image. The second frame image I2 is acquired after the camera has rotated by the angle θ. The fast image region-matching method based on MSER (Section 3.1) is applied to I1 and I2, and the centers of mass of the matched regions are taken as feature points.
- ③ Calculate the horizontal displacement of the camera. Based on the rotation angle θ obtained in ① and the camera's rotation radius, calculate the horizontal distance Δs between the camera positions before and after the rotation about the center of rotation.
- ④ Horizontal distance calculation. Assuming that the feature point lies on the horizontal plane, the horizontal distances d1 and d2 from the feature point to the camera are calculated with the monocular ranging model at the moments before and after the rotation, respectively (a minimal sketch of this ranging model and the judgment in ⑤ follows this list).
- ⑤ Obstacle judgment. Using the internal and external parameters of the camera together with the quantities acquired above, compare Δl = (d2 − d1) − Δs against a set threshold ε (ε > 0). If Δl > ε, the feature point is not on the horizontal plane and the region is an obstacle; if Δl ≤ ε, the feature point is on the horizontal plane and the region is not an obstacle.
- (3) Camera reset.
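To make ④ and ⑤ concrete, below is a minimal Python sketch, assuming the standard flat-road pinhole ranging model d = h / tan(pitch + arctan((v − v0)/f)). The camera height, pitch angles, focal length, pixel rows, and threshold in the demo are hypothetical placeholders rather than the paper's calibrated values, and the paper's own ranging model may differ in detail.

```python
import math

def ground_distance(v, h, pitch, fy, v0):
    """Flat-road monocular ranging: horizontal distance to a point assumed to
    lie on the road plane, imaged at pixel row v, for a camera at height h (m)
    with downward pitch (rad), focal length fy (px), and principal-point row v0."""
    ray = pitch + math.atan2(v - v0, fy)  # ray angle below the horizontal
    if ray <= 0:
        return math.inf  # at or above the horizon: the ray never meets the road
    return h / math.tan(ray)

def is_obstacle(v1, v2, delta_s, eps, h, pitch1, pitch2, fy, v0):
    """Steps 4-5: range the same feature point before (row v1) and after
    (row v2) the camera rotation, subtract the horizontal camera displacement
    delta_s caused by the rotation, and threshold the residual delta_l."""
    d1 = ground_distance(v1, h, pitch1, fy, v0)
    d2 = ground_distance(v2, h, pitch2, fy, v0)
    delta_l = (d2 - d1) - delta_s  # ~0 for a point truly on the road plane
    return delta_l > eps

# Hypothetical demo values: a feature point whose re-imaged row after the
# rotation is inconsistent with the road-plane assumption is flagged.
print(is_obstacle(v1=420, v2=380, delta_s=0.002, eps=0.005,
                  h=1.2, pitch1=0.10, pitch2=0.12, fy=1400, v0=360))  # True
```

With v2 ≈ 392 instead (roughly where the road-plane prediction lands after the extra 0.02 rad of pitch in this toy setup), delta_l drops to near zero and the point is accepted as road surface.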
3.1. Fast Image Region-Matching Method Based on MSER
- (1) Extract the maximally stable extremal regions (MSERs) using the MSER algorithm.
- (2) Perform the region range difference calculation. For the two frames captured in the experiment, let the MSER region sets of the previous and the next frame be Q1 and Q2, respectively. For the fth MSER in the previous frame, compute the set of differences between its region range and the ranges of the unmatched regions in the next frame; this set is normalized, and the normalized result is denoted Df.
- (3) Perform the region set spacing calculation. Let the MSER center-of-mass sets of the previous and the next image be C1 and C2, respectively. For the fth MSER in the previous image, compute the set of distances between its center of mass and those of the unmatched regions in the next image; this set is normalized, and the result is denoted Lf.
- (4) Extract the matching region. Let Ms be the set of matching values of the sth MSER, obtained by combining its normalized range differences and spacings, and extract the MSER corresponding to the smallest value in Ms as the matching region (a sketch of the whole procedure follows this list).
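A compact sketch of this matching procedure, assuming OpenCV's MSER detector, min–max normalization, an equal-weight sum of the two normalized cues, and greedy one-to-one assignment; the paper's exact normalization and combination formulas are not reproduced here, so treat the scoring as illustrative.

```python
import cv2
import numpy as np

def mser_regions(gray):
    """Detect MSERs in a grayscale image; return (area, centroid) per region."""
    mser = cv2.MSER_create()
    regions, _boxes = mser.detectRegions(gray)
    return [(len(pts), pts.mean(axis=0)) for pts in regions]

def minmax(x):
    """Min-max normalization to [0, 1] (assumed; the paper's formula may differ)."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def match_regions(gray1, gray2):
    """Match each MSER of frame 1 to the as-yet-unmatched MSER of frame 2 with
    the smallest combined score of normalized area difference (step 2) and
    normalized centroid distance (step 3); the smallest score wins (step 4)."""
    r1, r2 = mser_regions(gray1), mser_regions(gray2)
    unmatched = list(range(len(r2)))
    matches = []  # (index in frame 1, index in frame 2)
    for f, (area, centroid) in enumerate(r1):
        if not unmatched:
            break
        d_area = minmax([abs(area - r2[j][0]) for j in unmatched])
        d_cent = minmax([np.linalg.norm(centroid - r2[j][1]) for j in unmatched])
        score = d_area + d_cent  # equal weighting of the two cues (assumed)
        matches.append((f, unmatched.pop(int(np.argmin(score)))))
    return matches
```

The centroids of the matched regions then serve as the feature points that are ranged before and after the camera rotation.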
3.2. Static Obstacle Detection Model
3.3. Dynamic Obstacle Detection Model
3.4. Camera Rotation Strategy
4. Experiments and Results
4.1. Experimental Equipment
4.2. Obstacle Detection Simulation Experiment
4.3. Obstacle Detection Real Vehicle Experiment
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Feature Point | d1/cm | d2/cm | Δl/cm |
---|---|---|---|
1 | 39.88 | 42.27 | 2.21 |
2 | 39.74 | 42.09 | 2.17 |
3 | 39.78 | 42.12 | 2.16 |
4 | 29.12 | 30.14 | 0.83 |
5 | 21.04 | 23.54 | 2.32 |
6 | 21.22 | 23.78 | 2.37 |
7 | 21.19 | 23.75 | 2.37 |
8 | 29.77 | 32.97 | 3.22 |
9 | 29.43 | 32.76 | 3.15 |
10 | 29.56 | 32.86 | 3.12 |
11 | 29.40 | 32.75 | 3.17 |
12 | 29.37 | 32.75 | 3.20 |
13 | 29.17 | 32.55 | 3.20 |
14 | 29.25 | 32.58 | 3.15 |
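Read against step ⑤, the Δl column is consistent with Δl = (d2 − d1) − Δs for a horizontal camera displacement of roughly Δs ≈ 0.18 cm (a value inferred from these rows, not quoted from the text). For feature point 1, for example:

```latex
\Delta l = (d_2 - d_1) - \Delta s = (42.27 - 39.88) - 0.18 = 2.21\ \text{cm}
```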
Experimental Method | TP | FP | TN | FN |
---|---|---|---|---|
VIDAR | 3710 | 397 | 118 | 254 |
VIDAR + MSER | 3856 | 356 | 41 | 226 |
YOLOv8s | 3527 | 362 | 289 | 301 |
Proposed method | 4033 | 146 | 168 | 132 |
Experimental Method | mAP/% | Recall/% | Accuracy/% | Precision/% | Time/s |
---|---|---|---|---|---|
VIDAR | 89.3 | 93.6 | 85.5 | 90.3 | 0.324 |
VIDAR + MSER | 92.7 | 94.4 | 87.6 | 91.5 | 0.343 |
YOLOv8s | 88.1 | 92.1 | 85.2 | 90.7 | 0.205 |
Proposed method | 96.7 | 96.8 | 93.8 | 96.5 | 0.317 |
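As a consistency check, the proposed method's scores in this table follow from its confusion-matrix row in the previous table (TP = 4033, FP = 146, TN = 168, FN = 132) via the standard definitions:

```latex
\mathrm{Precision} = \frac{TP}{TP+FP} = \frac{4033}{4179} \approx 96.5\%,\quad
\mathrm{Recall} = \frac{TP}{TP+FN} = \frac{4033}{4165} \approx 96.8\%,\quad
\mathrm{Accuracy} = \frac{TP+TN}{TP+FP+TN+FN} = \frac{4201}{4479} \approx 93.8\%
```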
Obstacle | Measuring Distance (m) | Actual Distance (m) | Error (m) |
---|---|---|---|
1 | 4.79 | 4.88 | 0.09 |
2 | 5.13 | 5.24 | 0.11 |
3 | 7.60 | 7.73 | 0.13 |
4 | 8.33 | 8.44 | 0.11 |
5 | 11.09 | 11.24 | 0.15 |
6 | 10.10 | 10.27 | 0.17 |
7 | 12.78 | 12.97 | 0.19 |
8 | 15.72 | 15.92 | 0.20 |
9 | 15.90 | 16.08 | 0.18 |
10 | 18.68 | 18.89 | 0.21 |
11 | 22.53 | 22.76 | 0.23 |
12 | 12.19 | 12.34 | 0.15 |
13 | 13.50 | 13.64 | 0.14 |
14 | 14.58 | 14.66 | 0.18 |
15 | 9.50 | 9.65 | 0.15 |
16 | 12.98 | 12.12 | 0.14 |
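For an aggregate view of the table above, the mean absolute and mean relative ranging errors can be computed directly from the Error and Actual Distance columns (values transcribed as printed):

```python
# Error and actual-distance columns transcribed from the table above (m).
errors = [0.09, 0.11, 0.13, 0.11, 0.15, 0.17, 0.19, 0.20,
          0.18, 0.21, 0.23, 0.15, 0.14, 0.18, 0.15, 0.14]
actual = [4.88, 5.24, 7.73, 8.44, 11.24, 10.27, 12.97, 15.92,
          16.08, 18.89, 22.76, 12.34, 13.64, 14.66, 9.65, 12.12]

mae = sum(errors) / len(errors)
mean_rel = sum(e / a for e, a in zip(errors, actual)) / len(errors)
print(f"mean absolute error: {mae:.3f} m")     # ~0.158 m
print(f"mean relative error: {mean_rel:.1%}")  # ~1.4%
```

This works out to a mean absolute error of about 0.16 m, i.e., a mean relative error of roughly 1.4% over the 16 obstacles.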
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shi, S.; Ni, J.; Kong, X.; Zhu, H.; Zhan, J.; Sun, Q.; Xu, Y. An Obstacle Detection Method Based on Longitudinal Active Vision. Sensors 2024, 24, 4407. https://doi.org/10.3390/s24134407