Research on the Method for Recognizing Bulk Grain-Loading Status Based on LiDAR
Figure 1. Schematic diagram of the bulk grain-loading site.
Figure 2. Flowchart of bulk grain-loading status recognition.
Figure 3. (a) Front view of the sensor installation locations; (b) side view of the sensor installation locations.
Figure 4. LiDAR coordinate system. The black coordinates represent the coordinate system before calibration; the red coordinates represent the coordinate system after calibration.
Figure 5. Dual LiDAR data fusion. The black coordinate system represents the LiDAR coordinate system, and the red coordinate system represents the loading-site coordinate system. The solid cube indicates the real target, while the dashed cube represents the target point cloud.
Figure 6. (a) Before point cloud filtering; (b) after point cloud filtering.
Figure 7. Five categories of targets. From left to right and top to bottom: pedestrians, cars, bicycles, trucks, and trailers.
Figure 8. Labeled body parts, where blue is the cab of the vehicle, cyan is the front wall of the cargo area, green is the cargo area, and red is the rear wall of the cargo area. (a) Truck. (b) Trailer.
Figure 9. PNGL network.
Figure 10. Set abstraction layer, where N and N′ are the numbers of points before and after sampling, respectively, K is the number of neighboring points, d denotes coordinate information, C is the number of features, C′ is the number of newly generated features, and GAP stands for global average pooling.
Figure 11. Feature propagation layer, where IDW stands for inverse distance weighted interpolation.
Figure 12. Bulk grain-loading statuses. Blue represents the vehicle’s cab, cyan the front wall of the cargo area, green the cargo area itself, and red the rear wall of the cargo area.
Figure 13. Point cloud classification.
Figure 14. Point cloud segmentation, where head is the cab of the vehicle, front is the front wall of the cargo area, rear is the rear wall of the cargo area, and body is the cargo area. (a) The point cloud segmentation result for a truck; (b) the point cloud segmentation result for a trailer.
Figure 15. Comparison of segmentation results. Blue represents the vehicle’s cab, cyan the front wall of the cargo area, green the cargo area itself, and red the rear wall of the cargo area.
Abstract
1. Introduction
2. Overview
2.1. LiDAR Detection Principle
2.2. Problem Overview
- Limited Perception Range: Large loading and unloading equipment, together with components on the vehicle body, occludes the LiDAR sensors’ view of the loading site.
- Interference from Irrelevant Points: Grain handling raises dust at the loading site, which interferes with feature extraction; the dust can also obscure parts of the target vehicle, leaving gaps in the point cloud data.
- False Detections and Missed Detections: During a loading task, the method must detect the target vehicle, determine its position, and extract the grain-loading shape without missed detections. When no loading task is underway, other engineering vehicles and personnel may enter the perception area, and the method must remain unaffected and avoid false detections on them.
- Loading Assistance: The driver needs to adjust the vehicle position dynamically according to the loading status, so as to make full use of the cargo-area space and avoid grain-overflow accidents.
2.3. Methods
- Installation of Two Multi-Line LiDAR Sensors: The two sensors complement each other, widening the perception range and reducing the effects of occlusions (a fusion sketch follows this list).
- Preprocessing of Raw Data: Preprocessing preserves the integrity and accuracy of the point cloud data, ensuring that the key features of the vehicle and grain point clouds are retained.
- Classification of Point Clouds: Accurately identifying the target vehicle prevents both false detections and missed detections.
- Segmentation of Vehicle Components: The point cloud is segmented into vehicle components, and the relationships between components determine the vehicle’s position; the grain point cloud is then extracted, and the loading status is recognized from its shape.
- Output of Prompts Based on Loading Status: Prompts guide the vehicle through the loading task and improve grain-loading efficiency.
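As a concrete illustration of the first two steps, the following is a minimal sketch (not the authors’ code) of dual-LiDAR fusion: two calibrated 4 × 4 extrinsic matrices, assumed to come from the calibration procedure in Section 3.2, map each scan into the common loading-site frame, after which the scans are concatenated and cropped to the loading area.

```python
import numpy as np

def to_site_frame(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous coords
    return (homo @ T.T)[:, :3]

def fuse_scans(cloud_a, cloud_b, T_a, T_b, site_bounds):
    """Merge two LiDAR scans in the site frame and crop to the loading area."""
    fused = np.vstack([to_site_frame(cloud_a, T_a), to_site_frame(cloud_b, T_b)])
    lo, hi = site_bounds  # axis-aligned box, e.g. ((x0, y0, z0), (x1, y1, z1))
    mask = np.all((fused >= np.asarray(lo)) & (fused <= np.asarray(hi)), axis=1)
    return fused[mask]
```

The box crop stands in for whatever region-of-interest filtering the preprocessing step applies before classification.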
3. Method for Recognizing Bulk Grain-Loading Status
3.1. Sensor Installation
3.2. Sensor Calibration and Fusion
3.3. Data Preprocessing
3.4. Dataset Construction
3.5. PNGL Network Design
- Feature Extraction: This part comprises three set abstraction layers that recursively extract multi-scale features at scales of {1/4, 1/64, 1/256} from an input point cloud of N points. These layers transform the raw point cloud into high-dimensional nonlinear representations and aggregate features over both local and global regions. Specifically, the first set abstraction layer takes an N × 3 matrix as input and outputs an N/4 × 64 matrix of N/4 subsampled points, each carrying a 64-dimensional feature vector that summarizes local contextual information. The second and third set abstraction layers proceed in the same manner, ultimately yielding an N/256 × 512 matrix. Figure 10 shows the structure of a set abstraction layer.
- Point Cloud Classification: The extracted features are fed into fully connected layers that output the corresponding category. To prevent overfitting and improve the model’s generalization ability, dropout is applied in the fully connected layers.
- Component Segmentation: This part consists of three feature propagation layers, which apply inverse distance weighted interpolation to the extracted high-dimensional features and concatenate them with the corresponding encoder features to form new features. Through progressive upsampling, each point obtains sufficient contextual information during segmentation, improving segmentation accuracy. The first feature propagation layer outputs an N/64 × 256 matrix (where 256 is the feature dimension), matching the corresponding encoder scale; the second and third feature propagation layers each take the preceding layer’s output as input. Ultimately, the target point cloud is segmented into 8 parts. Figure 11 shows the structure of a feature propagation layer (a sketch of the IDW interpolation is given after the next list).
- Sampling Layer: The input point cloud is subsampled by octree sampling to a set of N′ points, which define the centroids of local regions.
- Grouping Layer: Using a ball query, this layer constructs local region sets by finding K points within a sphere of radius R around each centroid. The output point set at this stage is N′ × K × (d + C), where K is the number of points in the neighborhood of each centroid. Each set abstraction module uses different values for the sampling number K and the sampling radius R, and both increase layer by layer, allowing set abstraction to capture local point cloud features at different scales; stacking multiple set abstraction layers yields the global features of the point cloud.
- PointNet Layer: This layer uses a small PointNet network to encode the local region patterns into feature vectors, employing a multi-layer perceptron (MLP) followed by global average pooling. The resulting point set after feature extraction by the PointNet layer is N′ × (d + C′). A simplified implementation of one set abstraction layer is sketched below.
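The following PyTorch sketch shows one way such a set abstraction layer could be implemented. It is an illustrative approximation, not the authors’ code: random subsampling stands in for octree sampling, the per-layer values of N′, K, and R are left as constructor arguments, and neighborhood pooling uses the GAP described above.

```python
import torch
import torch.nn as nn

class SetAbstraction(nn.Module):
    """Sampling + grouping + PointNet layer: (B, N, 3 + C) -> (B, N', 3 + C')."""

    def __init__(self, n_out: int, k: int, radius: float, c_in: int, c_out: int):
        super().__init__()
        self.n_out, self.k, self.radius = n_out, k, radius
        # Shared MLP applied point-wise inside each neighborhood (d = 3 coords + C feats).
        self.mlp = nn.Sequential(
            nn.Conv2d(3 + c_in, c_out, 1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 1), nn.BatchNorm2d(c_out), nn.ReLU())

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor):
        B, N, _ = xyz.shape
        # Sampling layer: random subset as a stand-in for octree sampling.
        idx = torch.randperm(N, device=xyz.device)[: self.n_out]
        centroids = xyz[:, idx, :]                               # (B, N', 3)
        # Grouping layer: K nearest points, zeroed outside the query ball.
        dist = torch.cdist(centroids, xyz)                       # (B, N', N)
        d_k, i_k = dist.topk(self.k, dim=-1, largest=False)      # (B, N', K)
        gather = lambda t: torch.gather(
            t.unsqueeze(1).expand(B, self.n_out, N, t.shape[-1]),
            2, i_k.unsqueeze(-1).expand(-1, -1, -1, t.shape[-1]))
        local_xyz = gather(xyz) - centroids.unsqueeze(2)         # local coordinates
        group = torch.cat([local_xyz, gather(feats)], dim=-1)    # (B, N', K, 3 + C)
        group = group * (d_k <= self.radius).unsqueeze(-1)       # enforce ball radius
        # PointNet layer: shared MLP, then GAP over each neighborhood.
        out = self.mlp(group.permute(0, 3, 1, 2))                # (B, C', N', K)
        return centroids, out.mean(dim=-1).transpose(1, 2)       # (B, N', C')
```

Stacking three such layers with increasing K and R and decreasing N′ would reproduce the N × 3 → N/4 × 64 → … → N/256 × 512 flow described above; for the first layer, the raw coordinates can be passed as the initial features.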
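For the decoder side, the inverse distance weighted (IDW) interpolation at the core of each feature propagation layer can be sketched as follows. This is an assumed formulation based on the description above; the skip concatenation with encoder features and the per-point MLP that follow it are omitted.

```python
import torch

def idw_upsample(xyz_dense, xyz_sparse, feats_sparse, k: int = 3, eps: float = 1e-8):
    """Propagate features from a sparse level (Ns points) onto a denser level
    (Nd points) by inverse-distance-weighted averaging over k nearest neighbors."""
    dist = torch.cdist(xyz_dense, xyz_sparse)         # (B, Nd, Ns)
    d_k, i_k = dist.topk(k, dim=-1, largest=False)    # (B, Nd, k)
    w = 1.0 / (d_k + eps)
    w = w / w.sum(dim=-1, keepdim=True)               # normalized IDW weights
    B, Nd, _ = d_k.shape
    C = feats_sparse.shape[-1]
    neigh = torch.gather(
        feats_sparse.unsqueeze(1).expand(B, Nd, -1, C),
        2, i_k.unsqueeze(-1).expand(-1, -1, -1, C))   # (B, Nd, k, C)
    return (w.unsqueeze(-1) * neigh).sum(dim=2)       # (B, Nd, C)
```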
3.6. Recognition of Loading Status
- Empty Vehicle: The average point cloud height inside the compartment is close to the height of the compartment floor above the ground, i.e., near the minimum possible value.
- Grain Loading: The height of the point cloud in the compartment below the discharge port gradually increases.
- Overheight Warning: The height of the point cloud in the compartment below the discharge port is about to exceed the height of the front wall of the compartment.
- Loading Complete: The average height inside the compartment is close to the height of the compartment walls, and the rear wall of the compartment is near the discharge port.
- Standby: There is no target vehicle in the loading area, and the system is in standby mode.
- Empty Vehicle: Calculate the vehicle’s relative position and direct the vehicle to move until its front wall slightly passes the discharge port.
- Grain Loading: Calculate the grain height and volume in the discharge-port area, compute the loading percentage, and feed it back to the driver in real time. When the grain height is about to exceed the height of the front wall of the compartment, transition to the Overheight Warning status.
- Overheight Warning: Prompt the driver to move the vehicle forward until the grain height in the discharge-port area returns to a safe range.
- Loading Complete: Stop the discharge from the port and prompt the vehicle to leave the loading area.
- Standby: The system waits for the next vehicle to enter the loading area. A minimal sketch of this decision logic follows the list.
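Under the status definitions above, one frame’s decision could be sketched as follows; the function signature, threshold names, and the buffer margin are illustrative assumptions rather than the paper’s calibrated values.

```python
import numpy as np

def recognize_status(cargo_z, floor_z, wall_z, rear_wall_near_port,
                     vehicle_present, buffer=0.1):
    """cargo_z: heights (m) of grain points below the discharge port;
    floor_z / wall_z: compartment floor and front-wall heights (m)."""
    if not vehicle_present:
        return "standby"
    if len(cargo_z) == 0:
        return "empty"                       # no grain points yet
    mean_h, max_h = float(np.mean(cargo_z)), float(np.max(cargo_z))
    if mean_h <= floor_z + buffer:
        return "empty"                       # compartment floor still exposed
    if mean_h >= wall_z - buffer and rear_wall_near_port:
        return "loading complete"            # full, and rear wall at the port
    if max_h >= wall_z - buffer:
        return "overheight warning"          # grain about to exceed the front wall
    return "grain loading"                   # pile still rising
```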
4. Experiments
4.1. Experimental Setup
4.2. Evaluation Metrics
4.3. Results of Experiments
4.4. Comparative Experiments
5. Discussion
- How does the PNGL network handle occlusions and dust interference compared to other methods? The PNGL network, combined with octree sampling and advanced feature propagation techniques, filters out noise and maintains high precision even in dusty and occluded environments. The improvement in performance metrics over the compared methods supports this point.
- What are the advantages of using the PNGL network in dynamic and sparse point cloud scenarios? The dynamic nature of grain loading presents challenges for point cloud processing. The hierarchical feature extraction and global average pooling of the PNGL network reduce sensitivity to sparsity and motion in point clouds, thereby achieving more reliable classification and segmentation results.
- Why is the PNGL network more suitable for bulk grain-loading applications? The bulk grain-loading process requires precise detection of load levels to prevent overflow and optimize space utilization. The PNGL network can accurately segment vehicle components and detect grain shapes, ensuring that it can provide real-time feedback for efficient and safe loading operations.
- Reliable Loading Status Recognition: By comparing the detected grain shape and height against predefined thresholds and setting buffer zones, the method ensures that all four vehicle statuses can be determined correctly (a debouncing sketch is given after this list). This prevents accidents in which occasional recognition errors would let the grain height exceed the cargo area’s limits.
- Intelligent Prompting: Based on the detected status and grain shape, the system outputs vehicle movement instructions to the driver, effectively guiding the loading process. This significantly reduces the labor intensity of workers, improves safety, and enhances loading efficiency.
- Higher Recognition Accuracy: Compared with commonly used deep learning models, this method shows significant accuracy improvements in both point cloud classification and segmentation tasks.
- Robustness in Complex Environments: This method does not require large-scale modifications to the loading site. It can accurately detect vehicle positions and recognize loading statuses even in the presence of dust interference and occlusion in the point cloud data.
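One simple way to realize the buffer zones mentioned above is to debounce the per-frame result, committing a status change only once it has persisted for several consecutive frames. The class below is a hypothetical sketch of that idea, not the authors’ implementation.

```python
from collections import deque

class StatusDebouncer:
    """Commit a status change only after it persists for hold_frames frames."""

    def __init__(self, hold_frames: int = 5):
        self.window = deque(maxlen=hold_frames)
        self.current = "standby"

    def update(self, frame_status: str) -> str:
        self.window.append(frame_status)
        # Only adopt the new status once the window is full and unanimous.
        if len(self.window) == self.window.maxlen and len(set(self.window)) == 1:
            self.current = self.window[0]
        return self.current
```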
6. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Parameter | Value |
---|---|
Measurement Principle | Time of Flight |
Detection Range | 200 m |
Measurement Accuracy | ±3 cm |
Scanning Frequency | 10 Hz |
Horizontal Field of View | 360° |
Vertical Field of View | −15° to 15° |
Angular Resolution | 0.18° (horizontal), 2° (vertical) |
Dimensions | 102 mm (diameter) × 81 mm (height) |
Categories | Pedestrians | Cars | Bicycles | Trucks | Trailers |
---|---|---|---|---|---|
Classification training | 125 | 108 | 22 | 186 | 864 |
Classification testing | 41 | 36 | 8 | 62 | 288 |
Segmentation training | — | — | — | 180 | 569 |
Segmentation testing | — | — | — | 60 | 261 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).