Developing a Method to Automatically Extract Road Boundary and Linear Road Markings from a Mobile Mapping System Point Cloud Using Oriented Bounding Box Collision-Detection Techniques
Figure 1. Comparing the linear regression model and RANSAC. The image is modified from Scikit-learn [32].
Figure 2. The flowchart of the proposed methodology.
Figure 3. Comparing three types of bounding boxes: axis-aligned bounding box (AABB), k-discrete-orientation polytope (k-DOP), and oriented bounding box (OBB).
Figure 4. (a) The radius of the search area is 1 m. (b) The radius of the search area is 50 cm.
Figure 5. (a) Setting a search area (the large orange circle in (a)) to extract a bottom plane. (b) Detecting the edge of the plane by setting the search area to include both the road surface and the boundary. The extracted edge of the plane (the sky-blue line in (b)) is a reference line for the oriented bounding box (OBB) (the pink rectangle in (b)).
Figure 6. The importance of the location of the search area in detecting the road boundary. (a) An edge-detection error occurs because the search area (the large orange circle in (a)) is not adjacent to the road surface. (b) No edge is identified because the search area (the large orange circle in (b)) includes only the road surface.
Figure 7. The cross product of two vectors for a normal vector (N).
Figure 8. Generating the oriented bounding box (OBB) of road boundaries. (a) Calculating normal vectors in the search area and selecting those over 85 degrees. (b) Making a plane using the k-NN algorithm and generating a reference line (the black solid line in (b)) at the boundary. (c) Generating the OBB (the black dotted rectangle in (c)) based on the reference line.
Figure 9. Extracting the road lane area based on the normal vector of the plane.
Figure 10. (a) The search area (the orange circles in (a,b)) detects linear road markings using intensity values. The center line (the sky-blue line in (a,b)) of the extracted area is the reference line of the OBB, and the width of 20 cm determines the OBB (the pink rectangle in (a,b)). (b) The search area can detect the linear road markings by spanning their width. (c) Linear road markings whose intensity values are out of range fail to be identified. (d) The search area cannot find linear road markings because it excludes the marking area.
Figure 11. The separating axis theorem (SAT).
Figure 12. Creating linear data with OBB centerline connections.
Figure 13. The procedure of the OBB collision-detection algorithm to extract road boundaries and linear road markings.
Figure 14. (a) Applying the OBB collision-detection algorithm and (b) connecting the center lines of OBB A and OBB B.
Figure 15. Study area for the experiments on the proposed automatic extraction algorithm.
Figure 16. (a) The yellow boxes indicate occluded areas. (b) The yellow circles mark areas with interference from bushes. (c) The yellow circles indicate rainwater basins in the case area.
Figure 17. Successful cases of automatic road boundary extraction in areas with little noise. (a) The reference line of the OBB is correctly generated, and (b) the road boundary is identified.
Figure 18. (a) The repeated search allows road boundaries to be automatically extracted despite short-distance occlusion (the red dotted circle in (a)). (b) Connecting a line between the OBBs can ignore fine noise (the red dotted circle in (b)) and extract road boundaries.
Figure 19. The red dotted lines in (a,b) represent occluded areas. The occlusion length in (a) is 47.43 m, and the one in (b) is 52.80 m.
Figure 20. (a,b) An error that occurred in generating two reference data values in the same place (the sky-blue lines in (a,b)) for road boundary data due to uneven road surface conditions. (c) The double reference results in a discrepancy between the ground truth (the yellow dotted line in (c)) and the automatically extracted road boundary (the green solid line in (c)).
Figure 21. (a) For lane markings, the search length is 8 m and the search width is 8 cm for the longer linear road markings (the sky-blue dotted box in (a)), and the search length is 6 m and the search width is 8 cm for the shorter ones (the yellow dotted box in (a)). (b) For edge markings (the red dotted box in (b)), the search length is 1 m and the search width is 4 cm.
Figure 22. Result of automatically extracted edge markings and lane markings using the proposed method.
Figure 23. The automatically extracted edge and lane markings (yellow lines in the figure) correspond to the ground truth found in the photographs. (a) A frontal image. (b) A right-front-side image. (c) A left-rear-side image. (d) A rear image.
Figure 24. Edge markings cannot be extracted automatically where the length of the occluded area is over 25 m.
Figure 25. (a) A location where the number of lanes changes produces an extraction error. (b) Unclear intensity values can generate curved lines.
Figure 26. Calculating error values by measuring the separation distance between ours (red line) and the manually digitized data (black line).
Figure 27. Visualizing the error level in road boundaries and linear road markings. The green line is the automatically extracted data (ours), and the yellow dotted line represents the manually digitized data. The RMSE for extracted road boundaries is +4.8 cm, and the RMSE for linear road markings is +5.3 cm.
Figure 28. Comparing the error of the automatically extracted road boundary and the manually digitized data.
Figure 29. Locations where errors occurred due to (a) occlusion and (b) data noise. The white circles indicate the discrepancy between ours (the green line) and the manually digitized data (the yellow dotted line).
Figure 30. Comparing the error of the automatically extracted edge and lane markings and the manually digitized data.
Figure 31. Locations where relatively high errors occurred due to (a) a change in the number of lanes and (b) missing data from occlusion. The green line indicates ours, and the yellow dotted line is the ground truth data.
Abstract
1. Introduction
2. Literature Review
3. Method of Automatic Road Boundary and Linear Road Marking Extraction
3.1. Creating Oriented Bounding Box of Road Boundaries Using Normal Vector
Algorithm 1: Calculating a normal vector (N)

```
Input:  point p1, point p2, point p3
Output: normal vector N(x, y, z)

vector u = …
vector v = …

// Calculate a plane to find the normal vector
vector N = crossProduct(u, v)   // cross product of two vectors on the plane

if vectorSize(N) > 0:
    m_n = normalize(N)   // after normalization, assign it to a new variable m_n
    …                    // the distance between the origin point and the plane
end

// Calculate the normal vector using orientation values
…
```

(Some right-hand sides were lost in extraction and are shown as `…`.)
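As a hedged illustration of Algorithm 1, the sketch below computes the unit normal of the plane through three points via the cross product. The construction `u = p2 - p1`, `v = p3 - p1` is one common choice and is an assumption here, since the corresponding lines of the pseudocode were lost in extraction; `plane_normal` is an illustrative name, not the authors' function.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points (None if the points are collinear)."""
    u = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v = np.asarray(p3, dtype=float) - np.asarray(p1, dtype=float)
    n = np.cross(u, v)          # normal of the plane spanned by u and v
    norm = np.linalg.norm(n)
    if norm == 0:
        return None             # degenerate: no unique plane
    return n / norm             # normalize, as in the m_n step of Algorithm 1

# Three points on the z = 0 plane give the +z normal
print(plane_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # → [0. 0. 1.]
```

The signed distance from the origin to the plane (the elided step of Algorithm 1) would then follow as the dot product of the unit normal with any of the three points.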
3.2. Creating Oriented Bounding Box of Edge Markings and Linear Road Markings Using Intensity Value
3.3. Automatic Extraction of Road Boundary and Road Markings Using Collision-Detection Algorithm
Algorithm 2: Overlapped OBB collision-detection algorithm

```
function OverlapOBB():
    for (… = 0; … < …; …):
        … = t…
        … = t…
        // calculate the length between nodes
        for (c = 1; c < …; c++):
            … = (… + …) / 2
            if (…):
                …
            else if (…):
                …
        // check whether the length is in the range between dMin and dMax
        if (…):
            // the OBBs are not overlapped
            return false
    return true
```

(Identifiers and conditions were lost in extraction and are shown as `…`.)
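Because Algorithm 2 lost its identifiers in extraction, the following is a minimal 2D sketch of the underlying technique, an OBB overlap test via the separating axis theorem (SAT, Figure 11): two convex boxes are disjoint exactly when some edge normal yields non-overlapping projections. `obb_corners`, `obb_overlap`, and all parameters are illustrative names, not the authors' implementation.

```python
import numpy as np

def obb_corners(center, half_w, half_h, angle):
    """Corner points of a 2D oriented bounding box, counter-clockwise."""
    c, s = np.cos(angle), np.sin(angle)
    axes = np.array([[c, s], [-s, c]])            # local x and y axes as rows
    ext = np.array([[ half_w,  half_h],
                    [ half_w, -half_h],
                    [-half_w, -half_h],
                    [-half_w,  half_h]])
    return np.asarray(center, dtype=float) + ext @ axes

def obb_overlap(a, b):
    """SAT test: the OBBs overlap iff no edge normal separates their projections."""
    for box in (a, b):
        for i in range(2):                        # two unique edge directions per box
            edge = box[(i + 1) % 4] - box[i]
            axis = np.array([-edge[1], edge[0]])  # perpendicular = candidate separating axis
            pa, pb = a @ axis, b @ axis           # project all corners onto the axis
            if pa.max() < pb.min() or pb.max() < pa.min():
                return False                      # gap found, so the boxes are separated
    return True

a = obb_corners([0.0, 0.0], 1.0, 0.5, 0.0)        # axis-aligned 2 m x 1 m box at the origin
b = obb_corners([0.5, 0.2], 1.0, 0.5, 0.3)        # nearby rotated box
print(obb_overlap(a, b))                          # True: the boxes intersect
```

In the paper's setting, consecutive road-boundary or marking OBBs that pass such a test would be chained by connecting their center lines (Figure 14).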
4. Experiments on Automatically Extracting Road Boundaries and Linear Road Markings
5. Results and Discussion
5.1. Result of the Experiments of Automatically Extracting Road Boundaries and Linear Road Markings
5.1.1. Result of Automatic Road Boundary Extraction
5.1.2. Result of Automatic Linear Road Marking Extraction
5.2. Comparing Positional Accuracy between Automatically Extracted Data and Manually Digitized Data and Verifying the Method
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Fujita, Y.; Hoshino, Y.; Ogata, S.; Kobayashi, I. Attribute Assignment to Point Cloud Data and Its Usage. Glob. J. Comput. Sci. Technol. 2015, 15, 11–18. [Google Scholar]
- Zhang, Y.; Yang, Y.; Gao, X.; Xu, L.; Liu, B.; Liang, X. Robust Extraction of Multiple-Type Support Positioning Devices in the Catenary System of Railway Dataset Based on MLS Point Clouds. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5702314. [Google Scholar] [CrossRef]
- Kang, Z.; Zhang, Q. Semi-Automatic Road Lane Marking Detection Based on Point-Cloud Data for Mapping. J. Phys. Conf. Ser. 2020, 1453, 012141. [Google Scholar] [CrossRef]
- Lin, C.; Guo, Y.; Li, W.; Liu, H.; Wu, D. An Automatic Lane Marking Detection Method with Low-Density Roadside LiDAR Data. IEEE Sens. J. 2021, 21, 10029–10038. [Google Scholar] [CrossRef]
- Ma, L.; Li, Y.; Li, J.; Yu, Y.; Junior, J.M.; Gonçalves, W.N.; Chapman, M.A. Capsule-Based Networks for Road Marking Extraction and Classification From Mobile LiDAR Point Clouds. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1981–1995. [Google Scholar] [CrossRef]
- Yang, R.; Li, Q.; Tan, J.; Li, S.; Chen, X. Accurate Road Marking Detection from Noisy Point Clouds Acquired by Low-Cost Mobile LiDAR Systems. ISPRS Int. J. Geo-Inf. 2020, 9, 608. [Google Scholar] [CrossRef]
- Yao, L.; Qin, C.; Chen, Q.; Wu, H. Automatic Road Marking Extraction and Vectorization from Vehicle-Borne Laser Scanning Data. Remote Sens. 2021, 13, 2612. [Google Scholar] [CrossRef]
- Tian, Y.; Gelernter, J.; Wang, X.; Chen, W.; Gao, J.; Zhang, Y.; Li, X. Lane Marking Detection via Deep Convolutional Neural Network. Neurocomputing 2018, 280, 46–55. [Google Scholar] [CrossRef]
- Wen, C.; Sun, X.; Li, J.; Wang, C.; Guo, Y.; Habib, A. A Deep Learning Framework for Road Marking Extraction, Classification and Completion from Mobile Laser Scanning Point Clouds. ISPRS J. Photogramm. Remote Sens. 2019, 147, 178–192. [Google Scholar] [CrossRef]
- Ye, C.; Zhao, H.; Ma, L.; Jiang, H.; Li, H.; Wang, R.; Chapman, M.A.; Junior, J.M.; Li, J. Robust Lane Extraction From MLS Point Clouds Towards HD Maps Especially in Curve Road. IEEE Trans. Intell. Transp. Syst. 2022, 23, 1505–1518. [Google Scholar] [CrossRef]
- Cheng, Y.-T.; Patel, A.; Wen, C.; Bullock, D.; Habib, A. Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds. Remote Sens. 2020, 12, 1379. [Google Scholar] [CrossRef]
- Kukolj, D.; Marinović, I.; Nemet, S. Road Edge Detection Based on Combined Deep Learning and Spatial Statistics of LiDAR Data. J. Spat. Sci. 2021, 68, 245–259. [Google Scholar] [CrossRef]
- Gao, Y.; Zhong, R.; Tang, T.; Wang, L.; Liu, X. Automatic Extraction of Pavement Markings on Streets from Point Cloud Data of Mobile LiDAR. Meas. Sci. Technol. 2017, 28, 085203. [Google Scholar] [CrossRef]
- Chang, Y.F.; Chiang, K.W.; Tsai, M.L.; Lee, P.L.; Zeng, J.C.; El-Sheimy, N.; Darweesh, H. The implementation of semi-automated road surface markings extraction schemes utilizing mobile laser scanned point clouds for HD maps production. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, XLVIII-1-W1-2023, 93–100. [Google Scholar] [CrossRef]
- Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
- Zeybek, M. Extraction of Road Lane Markings from Mobile Lidar Data. Transp. Res. Rec. 2021, 2675, 30–47. [Google Scholar] [CrossRef]
- Xu, S.; Wang, R. Power Line Extraction from Mobile LiDAR Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 734–743. [Google Scholar] [CrossRef]
- Chen, X.; Stroila, M.; Wang, R.; Kohlmeyer, B.; Alwar, N.; Bach, J. Next Generation Map Making: Geo-Referenced Ground Level LIDAR Point Clouds for Automatic Retro-Reflective Road Feature Extraction. In Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 4–6 November 2009. [Google Scholar]
- Tran, T.-T.; Ali, S.; Laurendeau, D. Automatic Sharp Feature Extraction from Point Clouds with Optimal Neighborhood Size. In Proceedings of the MVA2013 IAPR International Conference on Machine Vision Applications, Kyoto, Japan, 20–23 May 2013. [Google Scholar]
- Tian, P.; Hua, X.; Tao, W.; Zhang, M. Robust Extraction of 3D Line Segment Features from Unorganized Building Point Clouds. Remote Sens. 2022, 14, 3279. [Google Scholar] [CrossRef]
- Ma, L.; Li, Y.; Li, J.; Zhong, Z.; Chapman, M.A. Generation of Horizontally Curved Driving Lines in HD Maps Using Mobile Laser Scanning Point Clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1572–1586. [Google Scholar] [CrossRef]
- Miyazaki, R.; Yamamoto, M.; Harada, K. Line-Based Planar Structure Extraction from a Point Cloud with an Anisotropic Distribution. Int. J. Autom. Technol. 2017, 11, 657–665. [Google Scholar] [CrossRef]
- Lu, X.; Liu, Y.; Li, K. Fast 3D Line Segment Detection from Unorganized Point Cloud. arXiv 2019, arXiv:1901.02532. [Google Scholar]
- Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line Segment Extraction for Large Scale Unorganized Point Clouds. ISPRS J. Photogramm. Remote Sens. 2015, 102, 172–183. [Google Scholar] [CrossRef]
- Kang, X.; Li, J.; Fan, X. Line Feature Extraction from RGB Laser Point Cloud. In Proceedings of the 2018 11th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Beijing, China, 13–15 October 2018. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar]
- Bazazian, D.; Casas, J.; Ruiz-Hidalgo, J. Fast and Robust Edge Extraction in Unorganized Point Clouds. In Proceedings of the 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, Australia, 23–25 November 2015; pp. 1–8. [Google Scholar]
- Yang, M.Y.; Förstner, W. Plane Detection in Point Cloud Data. In Proceedings of the 2nd International Conference on Machine Control Guidance, Bonn, Germany, 9–11 March 2010; Volume 1, pp. 95–104. [Google Scholar]
- Ni, H.; Lin, X.; Ning, X.; Zhang, J. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods. Remote Sens. 2016, 8, 710. [Google Scholar] [CrossRef]
- Zhang, X.; Liu, J.; Liu, Z.; Zhang, L.; Qin, X.; Zhang, R.; Liu, X.; Wei, J. Collision Detection Based on OBB Simplified Modeling. J. Phys. Conf. Ser. 2019, 1213, 042079. [Google Scholar] [CrossRef]
- Prochazka, D.; Prochazkova, J.; Landa, J. Automatic Lane Marking Extraction from Point Cloud into Polygon Map Layer. Eur. J. Remote Sens. 2019, 52, 26–39. [Google Scholar] [CrossRef]
- Robust Linear Model Estimation Using RANSAC—Scikit-Learn 1.2.2 Documentation. Available online: https://scikit-learn.org/stable/auto_examples/linear_model/plot_ransac.html (accessed on 17 April 2023).
- Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. In Symposium of ISPRS Commission III: Photogrammetric Computer Vision PCV06. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences; FGAN-FOM, Research Institute for Optronics and Pattern Recognition: Ettlingen, Germany, 2006. [Google Scholar]
- Wu, H.; Yao, L.; Xu, Z.; Li, Y.; Ao, X.; Chen, Q.; Li, Z.; Meng, B. Road Pothole Extraction and Safety Evaluation by Integration of Point Cloud and Images Derived from Mobile Mapping Sensors. Adv. Eng. Inform. 2019, 42, 100936. [Google Scholar] [CrossRef]
- Zhang, J.; Zhao, X.; Chen, Z.; Lu, Z. A Review of Deep Learning-Based Semantic Segmentation for Point Cloud. IEEE Access 2019, 7, 179118–179133. [Google Scholar] [CrossRef]
- Balado, J.; Martínez-Sánchez, J.; Arias, P.; Novo, A. Road Environment Semantic Segmentation with Deep Learning from MLS Point Cloud Data. Sensors 2019, 19, 3466. [Google Scholar] [CrossRef]
- Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans. Pattern. Anal. Mach. Intell. 2021, 43, 4338–4364. [Google Scholar] [CrossRef]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the Advances in Neural Information Processing Systems; Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2017; Volume 30. [Google Scholar]
- Xu, T.; Gao, X.; Yang, Y.; Xu, L.; Xu, J.; Wang, Y. Construction of a Semantic Segmentation Network for the Overhead Catenary System Point Cloud Based on Multi-Scale Feature Fusion. Remote Sens. 2022, 14, 2768. [Google Scholar] [CrossRef]
- Liu, L.; Ma, H.; Chen, S.; Tang, X.; Xie, J.; Huang, G.; Mo, F. Image-Translation-Based Road Marking Extraction from Mobile Laser Point Clouds. IEEE Access 2020, 8, 64297–64309. [Google Scholar] [CrossRef]
- Siwei, H.; Baolong, L. Review of Bounding Box Algorithm Based on 3D Point Cloud. Int. J. Adv. Netw. Monit. Control. 2021, 6, 18–23. [Google Scholar] [CrossRef]
Number of Trials | The Length of Extracted Road Boundaries (m) | The Length of Failure Section (m)
---|---|---|
1 | 815.39 | 47.43 |
2 | 872.78 | - |
3 | 226.32 | - |
4 | 1186.24 | - |
5 | 51.89 | - |
Sum | 3152.62 | 47.43 |
Number of Trials | The Length of Extracted Road Boundaries (m) | The Length of Failure Section (m)
---|---|---|
1 | 1176.22 | 52.80 |
2 | 771.20 | - |
3 | 405.30 | - |
4 | 771.41 | - |
Sum | 3124.13 | 52.80 |
Items | Length of Correct Extraction (m) | Length of Extraction Error (m) | Total Length (m) | Success Rate of Extraction (%) | Error Ratio of Extraction (%) |
---|---|---|---|---|---|
right road boundaries | 2995.84 | 156.78 | 3152.62 | 95.03 | 4.97
left road boundaries | 2853.76 | 270.37 | 3124.13 | 91.35 | 8.65
Item | The Length of Extracted Lane Markings (m) | The Length of Failure Section (m) |
---|---|---|
Lane 1 | 3200.54 | - |
Lane 2 | 3198.11 | - |
Lane 3 | 1170.60, 240.28, 720.77, 892.35, 153.28 | 54.93, 35.72
Sum | 3114.28 | 90.65 |
Item | The Length of Extracted Edge Markings (m) | The Length of Failure Section (m) |
---|---|---|
center edge markings | 1575.97, 605.42, 155.20, 837.51 | 25.13
edge markings | 1151.82 | - |
Item | The Length of Extracted Markings (m) | Success Rate of Extraction (%) | The Length of Failed Extraction (m) | The Ratio of Extraction Error (%) |
---|---|---|---|---|
Edge markings | 4301.62 | 99.44 | 24.30 | 0.56 |
Lane markings | 9314.05 | 97.91 | 198.88 | 2.09 |
Item | Min (m) | Max (m) | Mean (m) | Standard Deviation (m) | RMSE (m) |
---|---|---|---|---|---|
Error in road boundary | 0.001 | 0.138 | 0.036 | 0.035 | 0.048 |
Error in edge markings and lane markings | 0.001 | 0.197 | 0.034 | 0.042 | 0.053 |
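Figure 26 describes how the error values above are obtained: the separation distance between the automatically extracted line and the manually digitized line, summarized as an RMSE. The following is a minimal sketch of that computation, under the assumption that the extracted line is sampled into points and the reference is a polyline; `point_to_segment` and `rmse_to_polyline` are illustrative names, not the authors' code.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Shortest distance from point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))      # distance to the closest point on ab

def rmse_to_polyline(points, polyline):
    """RMSE of each point's separation from the nearest polyline segment."""
    d = [min(point_to_segment(p, polyline[i], polyline[i + 1])
             for i in range(len(polyline) - 1)) for p in points]
    return float(np.sqrt(np.mean(np.square(d))))

# Extracted samples scattered around a digitized reference line along y = 0
extracted = np.array([[0.0, 0.03], [1.0, -0.05], [2.0, 0.04]])
reference = np.array([[0.0, 0.0], [2.0, 0.0]])
print(rmse_to_polyline(extracted, reference))    # ≈ 0.041 m
```

Applying this per-feature over the whole study area would yield the statistics in the table above (min, max, mean, standard deviation, RMSE of the separation distances).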
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kang, S.; Lee, J.; Lee, J. Developing a Method to Automatically Extract Road Boundary and Linear Road Markings from a Mobile Mapping System Point Cloud Using Oriented Bounding Box Collision-Detection Techniques. Remote Sens. 2023, 15, 4656. https://doi.org/10.3390/rs15194656