Enhancing Detection of Pedestrians in Low-Light Conditions by Accentuating Gaussian–Sobel Edge Features from Depth Maps
<p>Figure 1. Block diagram of the proposed multi-sensor-based detection model.</p>
<p>Figure 2. Process for generating a depth map for image registration: (<b>a</b>) RGB image; (<b>b</b>) PCD projected on RGB image; (<b>c</b>) depth map.</p>
<p>Figure 3. Preprocessing of depth maps using the Gaussian–Sobel filter: (<b>a</b>) depth map; (<b>b</b>) depth map after Gaussian filtering; (<b>c</b>) depth map after Gaussian–Sobel filtering; (<b>d</b>) depth map after Canny edge filtering.</p>
<p>Figure 4. Flowchart for non-maximum suppression (NMS).</p>
<p>Figure 5. Comparison of pedestrian detection performance of the proposed model and similar models at 100% brightness: (<b>a</b>) depth map; (<b>b</b>) RGB + depth map; (<b>c</b>) Maragos and Pessoa [<a href="#B12-applsci-14-08326" class="html-bibr">12</a>]; (<b>d</b>) Deng [<a href="#B13-applsci-14-08326" class="html-bibr">13</a>]; (<b>e</b>) Ali and Clausi [<a href="#B14-applsci-14-08326" class="html-bibr">14</a>]; (<b>f</b>) proposed model.</p>
<p>Figure 6. Comparison of pedestrian detection performance of the proposed model and similar models at 40% brightness: (<b>a</b>) depth map; (<b>b</b>) RGB + depth map; (<b>c</b>) Maragos and Pessoa [<a href="#B12-applsci-14-08326" class="html-bibr">12</a>]; (<b>d</b>) Deng [<a href="#B13-applsci-14-08326" class="html-bibr">13</a>]; (<b>e</b>) Ali and Clausi [<a href="#B14-applsci-14-08326" class="html-bibr">14</a>]; (<b>f</b>) proposed model.</p>
<p>Figure 7. Comparison of pedestrian detection performance of the proposed model and similar models at 40% brightness and 0.5% noise: (<b>a</b>) depth map; (<b>b</b>) RGB + depth map; (<b>c</b>) Maragos and Pessoa [<a href="#B12-applsci-14-08326" class="html-bibr">12</a>]; (<b>d</b>) Deng [<a href="#B13-applsci-14-08326" class="html-bibr">13</a>]; (<b>e</b>) Ali and Clausi [<a href="#B14-applsci-14-08326" class="html-bibr">14</a>]; (<b>f</b>) proposed model.</p>
Abstract
1. Introduction
- We improved image resolution and enhanced object–background segmentation by preprocessing LiDAR-derived depth maps using a fusion of Gaussian blurring and the Sobel mask.
- We proposed a versatile object detection model that effectively combines RGB images from cameras with depth maps preprocessed by the Gaussian–Sobel filter. This fusion enables robust object detection across diverse lighting conditions, from bright daylight to low-light environments, by complementing the respective strengths and weaknesses of cameras and LiDAR.
- By applying the Gaussian–Sobel filter, we enhanced the robustness of LiDAR-based detection, improving performance in environments with speckle noise, which commonly arises under adverse weather conditions.
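The Gaussian–Sobel preprocessing described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact code: the function names (`gaussian_sobel`, `convolve2d`) and the kernel size and sigma defaults are assumptions for illustration. It smooths the depth map with a Gaussian kernel to suppress noise, then applies horizontal and vertical Sobel masks and combines them into a gradient magnitude map that accentuates object boundaries.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2D Gaussian kernel (size and sigma are assumed defaults)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 2D convolution with edge-replicate padding (clarity over speed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def gaussian_sobel(depth_map, size=5, sigma=1.0):
    """Gaussian smoothing followed by Sobel gradient magnitude."""
    smoothed = convolve2d(depth_map, gaussian_kernel(size, sigma))
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal Sobel mask
    sy = sx.T                                                          # vertical Sobel mask
    gx = convolve2d(smoothed, sx)
    gy = convolve2d(smoothed, sy)
    return np.hypot(gx, gy)  # edge strength at each pixel
```

Smoothing before differentiation is the key design choice: the Sobel operator amplifies high-frequency noise, so blurring first keeps speckle from masquerading as edges while the true object–background boundaries survive.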
2. Materials and Methods
2.1. Object Detection
2.2. Proposed Multi-Sensor-Based Detection Model
Algorithm 1 Image Processing and Object Detection

1: Input: List of images (camera images and LiDAR depth maps)
2: Output: Performance metrics for each image
3: for each image in image_list do
4:     if image is camera_image then
5:         bboxes ← YOLO(image)
6:         filtered_bboxes ← NMS(bboxes, threshold)
7:     else
8:         filtered_depth_map ← GAUSSIAN_SOBEL_FILTER(image)
9:         bboxes ← YOLO(filtered_depth_map)
10:        filtered_bboxes ← NMS(bboxes, threshold)
11:    end if
12:    performance ← EVALUATE_PERFORMANCE(filtered_bboxes)
13:    PRINT performance
14: end for
2.2.1. Creating a Depth Map for Image Registration
2.2.2. Preprocessing with the Gaussian–Sobel Filter
2.2.3. Object Estimation Using NMS
3. Experimental Results
3.1. Experimental Environment
3.2. Performance Evaluation of Object Detection under Varying Brightness Levels
3.3. Evaluation of Detection Performance under Varying Noises
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Claussmann, L.; Revilloud, M.; Gruyer, D.; Glaser, S. A Review of Motion Planning for Highway Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2020, 21, 1826–1848. [Google Scholar] [CrossRef]
- Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220. [Google Scholar] [CrossRef]
- Jhong, S.; Chen, Y.; Hsia, C.; Wang, Y.; Lai, C. Density-Aware and Semantic-Guided Fusion for 3-D Object Detection Using LiDAR-Camera Sensors. IEEE Sens. J. 2023, 23, 22051–22063. [Google Scholar] [CrossRef]
- Cheng, L.; He, Y.; Mao, Y.; Liu, Z.; Dang, X.; Dong, Y.; Wu, L. Personnel Detection in Dark Aquatic Environments Based on Infrared Thermal Imaging Technology and an Improved YOLOv5s Model. Sensors 2024, 24, 3321. [Google Scholar] [CrossRef] [PubMed]
- Hsu, W.; Yang, P. Pedestrian Detection Using Multi-Scale Structure-Enhanced Super-Resolution. IEEE Trans. Intell. Transp. Syst. 2023, 24, 12312–12322. [Google Scholar] [CrossRef]
- Zhang, T.; Ye, Q.; Zhang, B.; Liu, J.; Zhang, X.; Tian, Q. Feature Calibration Network for Occluded Pedestrian Detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4151–4163. [Google Scholar] [CrossRef]
- Mushtaq, Z.; Nasti, S.; Verma, C.; Raboaca, M.; Kumar, N.; Nasti, S. Super Resolution for Noisy Images Using Convolutional Neural Networks. Mathematics 2022, 10, 777. [Google Scholar] [CrossRef]
- Xu, X.; Wang, S.; Wang, Z.; Zhang, X.; Hu, R. Exploring Image Enhancement for Salient Object Detection in Low Light Images. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–19. [Google Scholar] [CrossRef]
- Gilroy, S.; Jones, E.; Glavin, M. Overcoming Occlusion in the Automotive Environment—A Review. IEEE Trans. Intell. Transp. Syst. 2020, 22, 23–35. [Google Scholar] [CrossRef]
- Lin, T.; Tan, D.; Tang, H.; Chien, S.; Chang, F.; Chen, Y.; Cheng, W. Pedestrian Detection from Lidar Data via Cooperative Deep and Hand-Crafted Features. In Proceedings of the IEEE International Conference on Image Processing, Athens, Greece, 7–10 October 2018; pp. 1922–1926. [Google Scholar]
- Qi, C.; Liu, W.; Wu, C.; Su, H.; Guibas, L. Frustum PointNets for 3D Object Detection from RGB-D Data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927. [Google Scholar]
- Maragos, P. Morphological Filtering for Image Enhancement and Feature Detection. In The Image and Video Processing Handbook, 2nd ed.; Bovik, A.C., Ed.; Elsevier Academic Press: Cambridge, MA, USA, 2005; pp. 135–156. [Google Scholar]
- Deng, G. A Generalized Unsharp Masking Algorithm. IEEE Trans. Image Process. 2011, 20, 1249–1261. [Google Scholar] [CrossRef]
- Ali, M.; Clausi, D. Using the Canny Edge Detector for Feature Extraction and Enhancement of Remote Sensing Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Sydney, Australia, 9–13 July 2001; pp. 2298–2300. [Google Scholar]
- Bochkovskiy, A.; Wang, C.; Liao, H. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Kaur, R.; Singh, S. A Comprehensive Review of Object Detection with Deep Learning. Digit. Signal Process. 2023, 132, 103812. [Google Scholar]
- Pham, M.; Courtrai, L.; Friguet, C.; Lefèvre, S.; Baussard, A. YOLO-Fine: One-Stage Detector of Small Objects Under Various Backgrounds in Remote Sensing Images. Remote Sens. 2020, 12, 2501. [Google Scholar] [CrossRef]
- Chen, K.; Li, J.; Lin, W.; See, J.; Wang, J.; Duan, L.; Chen, Z.; He, C.; Zou, J. Towards Accurate One-Stage Object Detection with AP-Loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5114–5122. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.; Berg, A. SSD: Single Shot Multibox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Cai, H.; Pang, W.; Chen, X.; Wang, Y.; Liang, H. A Novel Calibration Board and Experiments for 3D LiDAR and Camera Calibration. Sensors 2020, 20, 1130. [Google Scholar] [CrossRef] [PubMed]
- Xie, X.; Wang, C.; Li, M. A Fragile Watermark Scheme for Image Recovery Based on Singular Value Decomposition, Edge Detection and Median Filter. Appl. Sci. 2019, 9, 3020. [Google Scholar] [CrossRef]
- Tang, D.; Xu, Y.; Liu, X. Application of an Improved Laplacian-of-Gaussian Filter for Bearing Fault Signal Enhancement of Motors. Machines 2024, 12, 389. [Google Scholar] [CrossRef]
- Popkin, T.; Cavallaro, A.; Hands, D. Accurate and Efficient Method for Smoothly Space-Variant Gaussian Blurring. IEEE Trans. Image Process. 2010, 19, 1362–1370. [Google Scholar] [CrossRef]
- Ma, Y.; Ma, H.; Chu, P. Demonstration of Quantum Image Edge Extraction Enhancement through Improved Sobel Operator. IEEE Access 2020, 8, 210277–210285. [Google Scholar] [CrossRef]
- Kanopoulos, N.; Vasanthavada, N.; Baker, R. Design of an Image Edge Detection Filter Using the Sobel Operator. IEEE J. Solid-State Circuits 1988, 23, 358–367. [Google Scholar] [CrossRef]
- Pawar, K.; Nalbalwar, S. Distributed Canny Edge Detection Algorithm Using Morphological Filter. In Proceedings of the IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology, Bangalore, India, 20–21 May 2016; pp. 1523–1527. [Google Scholar]
- Zaghari, N.; Fathy, M.; Jameii, S.; Shahverdy, M. The improvement in obstacle detection in autonomous vehicles using YOLO non-maximum suppression fuzzy algorithm. J. Supercomput. 2021, 77, 13421–13446. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Behl, A.; Mohapatra, P.; Jawahar, C.; Kumar, M. Optimizing Average Precision Using Weakly Supervised Data. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 2545–2557. [Google Scholar] [CrossRef] [PubMed]
| Model | Filter Used for Depth Map Preprocessing | Brightness 100% | Brightness 70% | Brightness 40% |
|---|---|---|---|---|
| Depth Map | – | 83.49 | 83.49 | 83.49 |
| Depth Map + RGB | – | 91.99 | 91.32 | 85.60 |
| Maragos and Pessoa [12] | Morphology dilation | 91.94 | 91.00 | 86.05 |
| Deng [13] | Unsharp mask | 92.13 | 90.72 | 85.76 |
| Ali and Clausi [14] | Canny edge | 92.43 | 91.09 | 85.08 |
| Proposed model | Gaussian–Sobel | 92.07 | 91.49 | 87.03 |
| Model | Filter Used for Depth Map Preprocessing | Noise 0% | Noise 0.2% | Noise 0.5% |
|---|---|---|---|---|
| Depth Map | – | 83.49 | 69.84 | 51.26 |
| Depth Map + RGB | – | 85.60 | 80.34 | 75.37 |
| Maragos and Pessoa [12] | Morphology dilation | 86.05 | 74.88 | 65.55 |
| Deng [13] | Unsharp mask | 85.76 | 77.87 | 68.75 |
| Ali and Clausi [14] | Canny edge | 85.08 | 81.96 | 77.62 |
| Proposed model | Gaussian–Sobel | 87.03 | 86.29 | 84.72 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jung, M.; Cho, J. Enhancing Detection of Pedestrians in Low-Light Conditions by Accentuating Gaussian–Sobel Edge Features from Depth Maps. Appl. Sci. 2024, 14, 8326. https://doi.org/10.3390/app14188326