Advanced Methods for Point Cloud Processing and Simplification
Figure 2. Proposed point cloud processing block diagram.
Figure 3. Point cloud model.
Figure 4. Distribution of points in the point cloud model: (a) Input point cloud; (b) Histograms of all coordinate axes.
Figure 5. Outlier removal result: (a) Resultant point cloud; (b) New range of histograms.
Figure 6. Output point cloud from the registration process: (a) X–Y top view; (b) Histograms of all coordinate axes.
Figure 7. X-axis histogram: (a) Original histogram; (b) Result of Equation (5) applied to (a), with marked parts and their maxima.
Figure 8. Histograms of the X and Y axes for different angles of rotation: (a) Original angle of 0 degrees; (b) Angle of 4.3 degrees.
Figure 9. Sum of local maxima of the X- and Y-axis histograms: (a) X-axis sum; (b) Y-axis sum.
Figure 10. Point cloud with corrected orientation: (a) Output point cloud; (b) New histograms of point density in all coordinate axes.
Figure 11. Planar surface detection in the X-axis: (a) 2nd level selection in the X-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 12. Planar surface detection in the Y-axis: (a) 1st level selection in the Y-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 13. Planar surface detection in the Z-axis: (a) 1st level selection in the Z-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 14. Input point cloud and its level images: (a) Point cloud of the 1st level in the Z-axis; (b) Segmented point cloud from (a); (c) Level image after the application of morphological operations; (d) Filled level image.
Figure 15. Shape perimeter determination: (a) Shape with the marked optimal perimeter path; (b) Way of searching neighboring pixels on the borders of a shape.
Figure 16. Detected vertices: (a) Vertices of Figure 14a; (b) Vertices of Figure 13d.
Figure 17. Measurement illustration: (a) Developed 3D scanner; (b) Measurement frame.
Figure 18. 3D scanner precision: (a) Error graph; (b) Error in percent.
Figure 19. Input point cloud of the flat with marked scan positions: (a) Individual 3D scans; the green numbers in brackets correspond to the scanning positions in (b); (b) Flat top-view scheme with marked measurement positions; (c) Composed final point cloud.
Figure 20. Flat visualization with marked detected planes: red marks planes detected in the X-axis, green is used for the Y-axis, and blue marks planes detected in the Z-axis.
Figure 21. Planar surface area with a hole: (a) Level image; (b) Convex hull.
Figure 22. Planar surface visualization: (a) Point cloud with a hole; (b) Normal surface.
Figure 23. Fine planar surface range estimation: (a) Point histogram in the scanning dimension with the marked thinner range; (b) Thinner range selection in comparison with Figure 10a,b; (c) Points selected by this range; (d) Illustration of the problematic case.
Figure 24. Point cloud segmentation by the level image: (a) Input point cloud; (b) Input level image; (c) Point cloud of the segmented planar surface.
Figure 25. Segmentation of different planar surfaces: (a) Illustration of two segmented planar surfaces with different areas in the Y-axis; (b) First level segmentation in the X-axis; (c) Resultant point cloud.
Abstract
1. Introduction
1.1. Related Works
1.2. Our Point Cloud Processing Contribution
2. Point Cloud Processing Approach
3. Outlier Points Elimination
4. Correction of Initial Point Cloud Rotation
5. Planar Surface Detection Algorithm
6. Fill Gaps in Measurement Data
Algorithm 1. Fill Element Area
Input: Image level element E
Output: Filled element F
1 B = ElementBorderPositions(E)
2 B = Extend borders about 1 px
3 M = true(size(B))
4 M(B > 0) = false
// find separate areas (1 - surroundings, 2 - filled element)
5 L = ConnectedComponentLabeling(M, 4)
// add edges back
6 F = (L = 2) | B
7 F = Narrow borders about 1 px
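Algorithm 1 can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the level image is assumed to be a NumPy boolean array, and the explicit two-label connected-component step is replaced by a 4-connected flood fill from the padded border, which identifies the same "surroundings" region.

```python
import numpy as np

def fill_element_area(element):
    """Fill enclosed holes in a binary level-image element.

    Flood-fills the background from the extended image border using
    4-connectivity; any background pixel the fill cannot reach is a
    hole enclosed by the element and therefore gets filled.
    """
    h, w = element.shape
    # Extend borders by 1 px so the outer background forms a single region.
    padded = np.pad(element.astype(bool), 1, constant_values=False)
    reached = np.zeros_like(padded)
    stack = [(0, 0)]                      # padded corner is always background
    while stack:
        y, x = stack.pop()
        if reached[y, x] or padded[y, x]:
            continue
        reached[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h + 2 and 0 <= nx < w + 2 and not reached[ny, nx]:
                stack.append((ny, nx))
    # Everything that is neither element nor reachable background is a hole.
    filled = padded | ~reached
    # Narrow the borders back by 1 px.
    return filled[1:-1, 1:-1]
```

On a ring-shaped element the enclosed center pixel is filled, while a shape without holes passes through unchanged.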
7. Vectorization of Basic Space Features
Algorithm 2. Element Vertices Extraction
Input: Perimeter path P;
Score function S in the Y axis;
Condition of circle perimeter C
Output: Vertices of element V
1 k = 1
2 V(k++, ∀) = [P(1), 1]
3 for each P(n) from n = 2 do
4 if (V(k, 1) ≠ S(n)) then
5 // last position before change
6 V(k++, ∀) = [P(n − 1), n − 1]
7 // actual position
8 V(k++, ∀) = [P(n), n]
9 end
10 end
11 Exclude repeated items in V
12 if (C && V(1, ∀) = V(end, ∀)) then
13 Remove V(end, ∀)
14 end
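The vertex extraction can be sketched in Python as follows. This is a simplified illustration under assumptions: the paper's Y-axis score function is replaced by a plain direction-change test on the perimeter path, which keeps a pixel as a vertex exactly where the traversal direction changes, and the names are illustrative.

```python
def extract_vertices(path, closed=True):
    """Reduce a pixel perimeter path to its corner vertices.

    `path` is a list of (row, col) border pixels in traversal order.
    A pixel is kept as a vertex wherever the step direction between
    consecutive pixels changes; for a closed path, the duplicated
    first/last vertex is removed.
    """
    if len(path) < 3:
        return list(path)
    vertices = [path[0]]
    prev_dir = None
    for i in range(1, len(path)):
        d = (path[i][0] - path[i - 1][0], path[i][1] - path[i - 1][1])
        if prev_dir is not None and d != prev_dir:
            vertices.append(path[i - 1])  # last position before the change
        prev_dir = d
    vertices.append(path[-1])
    # Exclude repeated items.
    dedup = []
    for v in vertices:
        if not dedup or dedup[-1] != v:
            dedup.append(v)
    # Closed perimeter: drop the repeated closing vertex.
    if closed and len(dedup) > 1 and dedup[0] == dedup[-1]:
        dedup.pop()
    return dedup
```

On a square perimeter only the four corners survive; applied to the level images, this kind of reduction is what yields the large point decrease reported in the result tables.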
8. Results
8.1. Optical Rangefinder for Space Scanning
8.2. Input Point Cloud
8.3. Processing Results
8.4. Evaluation of Processing Algorithms
9. Conclusions
- Outliers and rotation: A simple point-concentration histogram makes it possible to remove outliers, which improved the final point cloud composed by the registration process. Moreover, the carefully chosen processing of histograms in two axes allows a relatively fast estimation of the initial point cloud orientation about the third coordinate axis, which is important mainly for decreasing the processing time. Only three passes of the scanning algorithm are needed to find all planar surfaces in each coordinate axis, as documented in the result tables.
- Hole removal: A point cloud composed from several scans may lack points in some parts of the captured surfaces. Filling the holes in a level image that represents a planar surface makes it possible to reconstruct the missing points. The filled holes allow the estimation of the real surface area; for a floor or ceiling, this even makes it possible to estimate the volume of a space when its height is known.
- Simplification of objects: The proposed approach successfully detects planar surfaces in the desired coordinate axes. It offers an alternative point cloud processing pipeline and extends the possibilities of present methods. The scanning algorithm describes each detected planar surface by statistical parameters. The standard deviation of the detected planar surfaces is small, which indicates their precise detection, as shown in the captured results; thanks to this, it is easy to detect parallel walls. Representing planar surfaces by level images makes it possible to estimate important space properties such as area and perimeter: image processing methods determine these parameters easily, in contrast to the difficult mathematical solution for irregular shapes. All the detected properties make it possible to classify each planar surface. The results also show the algorithm's ability to decrease the number of descriptive points: representing planar surfaces only by their vertices reduced the point count by about 99% in almost all cases.
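The histogram-based outlier removal summarized in the first point can be sketched as follows. This is a minimal illustration, not the paper's implementation; `bins` and `min_density` are assumed, illustrative parameters.

```python
import numpy as np

def remove_outliers(points, bins=100, min_density=5):
    """Remove sparse points using per-axis point-concentration histograms.

    A point survives only if, on every coordinate axis, the histogram bin
    it falls into contains at least `min_density` points.
    """
    keep = np.ones(len(points), dtype=bool)
    for axis in range(points.shape[1]):
        counts, edges = np.histogram(points[:, axis], bins=bins)
        # Bin index of every point along this axis.
        idx = np.clip(np.searchsorted(edges, points[:, axis], side="right") - 1,
                      0, bins - 1)
        keep &= counts[idx] >= min_density
    return points[keep]
```

A dense cluster survives intact while an isolated far-away point falls into an almost empty bin and is discarded.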
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Axis | lvlS | lvlRS | qD |
---|---|---|---|
X | 0.1 m | 2.5 | 4 cm |
Y | 0.1 m | 2.5 | 4 cm |
Z | 0.2 m | 1.5 | 4 cm |
Idx | dL (m) | Pos (m) | AE (m2) | PE (m) | σ (m) | PC p. | IL px. | V | P. decr. (%) |
---|---|---|---|---|---|---|---|---|---|
2 | 0.313 | 0.313, 3.935, 3 | 7.174 | 19.33 | 0.0008 | 7306 | 4420 | 92 | 98.74 |
4 | 3.521 | 3.52, 3.926, 3 | 9.862 | 18.177 | 0.0018 | 15,211 | 6148 | 71 | 99.53 |
5 | 3.687 | 3.687, 2.581, 2.88 | 11.645 | 21.41 | 0.0006 | 14,460 | 7166 | 92 | 99.36 |
14 | 7.162 | 7.162, 2.56, 2.96 | 13.278 | 20.833 | 0.0006 | 10,687 | 8112 | 87 | 99.19 |
16 | 7.371 | 7.371, 4.454, 2.8 | 8.754 | 11.96 | 0.001 | 16,025 | 5450 | 43 | 99.73 |
22 | 8.917 | 8.917, 0.291, 2.72 | 7.15 | 15.443 | 0.001 | 7424 | 4452 | 81 | 98.91 |
Idx | dL (m) | Pos (m) | AE (m2) | PE (m) | σ (m) | PC p. | IL px. | V | P. decr. (%) |
---|---|---|---|---|---|---|---|---|---|
1 | 0.081 | 3.926, 0.081, 2.88 | 13.334 | 15.226 | 0.0013 | 22,161 | 8270 | 82 | 99.63 |
7 | 2.647 | 3.653, 2.647, 2.92 | 9.7856 | 12.4 | 0.0008 | 13,456 | 6054 | 49 | 99.64 |
10 | 3.949 | 0.135, 3.949, 2.96 | 10.026 | 12.69 | 0.0007 | 14,802 | 6192 | 39 | 99.74 |
14 | 4.497 | 7.115, 4.497, 2.80 | 7.7008 | 16.89 | 0.0009 | 9847 | 4748 | 85 | 99.14 |
29 | 7.819 | 0.271, 7.819, 3.12 | 21.661 | 35.236 | 0.0018 | 26,529 | 13,840 | 162 | 99.39 |
Idx | dL (m) | Pos (m) | AE (m2) | PE (m) | σ (m) | PC p. | IL px. | V | P. decr. (%) |
---|---|---|---|---|---|---|---|---|---|
1 | 0.023 | 0.262, 0.018, 0.023 | 55.835 | 45.643 | 0.006 | 14,223 | 34,897 | 270 | 98.1 |
4 | 0.987 | 0.091, 3.936, 0.987 | 0.76 | 4.61 | 0.010 | 2673 | 475 | 36 | 98.65 |
6 | 0.942 | 4.415, 7.593, 0.942 | 0.739 | 4.537 | 0.014 | 1710 | 462 | 21 | 98.77 |
8 | 0.926 | 8.013, 5.763, 0.926 | 1.845 | 9.8 | 0.012 | 6629 | 1153 | 58 | 99.13 |
12 | 1.353 | 10.129, 5.853, 1.353 | 0.773 | 4.96 | 0.009 | 2555 | 483 | 22 | 99.14 |
21 | 2.956 | 0.102, 0.012, 2.672 | 59.246 | 39.319 | 0.019 | 16,173 | 37,029 | 162 | 99 |
Idx | AE (m2) | A phys (m2) | Δ A (%) | PE (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
2 | 7.174 | 6.97 | 2.84 | 19.33 | 18.95 | 1.97 |
4 | 9.862 | 8.976 | 8.98 | 18.177 | 16.856 | 7.27 |
5 | 11.645 | 11.728 | 0.71 | 21.41 | 21.68 | 1.26 |
14 | 13.278 | 13.489 | 1.59 | 20.833 | 21.34 | 2.43 |
16 | 8.754 | 8.28 | 5.41 | 11.96 | 11.22 | 6.19 |
22 | 7.15 | 6.95 | 2.80 | 15.443 | 15.133 | 2.01 |
Idx | AE (m2) | A phys (m2) | Δ A (%) | PE (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
1 | 13.334 | 12.45 | 6.63 | 15.226 | 14.92 | 2.01 |
7 | 9.7856 | 10.26 | 4.85 | 12.4 | 12.76 | 2.90 |
10 | 10.026 | 10.12 | 0.94 | 12.69 | 12.78 | 0.71 |
14 | 7.7008 | 7.43 | 3.52 | 16.89 | 16.27 | 3.67 |
29 | 21.661 | 22.3 | 2.95 | 35.236 | 36.44 | 3.42 |
Idx | AE (m2) | A phys (m2) | Δ A (%) | PE (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
1 | 55.835 | 54.32 | 2.71 | 45.643 | 44.28 | 2.99 |
4 | 0.76 | 0.79 | 3.95 | 4.61 | 4.86 | 5.42 |
6 | 0.739 | 0.712 | 3.65 | 4.537 | 4.33 | 4.56 |
8 | 1.845 | 1.765 | 4.34 | 9.8 | 9.45 | 3.57 |
12 | 0.773 | 0.81 | 4.79 | 4.96 | 5.15 | 3.83 |
21 | 59.246 | 56.392 | 4.82 | 39.319 | 38.507 | 2.07 |
Idx | ACH (m2) | A phys (m2) | Δ A (%) | PCH (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
2 | 11.461 | 6.97 | 39.19 | 13.338 | 18.95 | 42.08 |
4 | 11.425 | 8.976 | 21.44 | 13.38 | 16.856 | 25.98 |
5 | 14.121 | 11.728 | 16.95 | 15.343 | 21.68 | 41.30 |
14 | 14.948 | 13.489 | 9.76 | 15.632 | 21.34 | 36.51 |
16 | 9.1922 | 8.28 | 9.92 | 11.919 | 11.22 | 5.86 |
22 | 8.3255 | 6.95 | 16.52 | 11.031 | 15.133 | 37.19 |
Idx | ACH (m2) | A phys (m2) | Δ A (%) | PCH (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
1 | 13.494 | 12.45 | 7.74 | 14.693 | 14.92 | 1.54 |
7 | 9.8345 | 10.26 | 4.33 | 12.294 | 12.76 | 3.79 |
10 | 10.231 | 10.12 | 1.08 | 12.342 | 12.78 | 3.55 |
14 | 9.5068 | 7.43 | 21.85 | 11.878 | 16.27 | 36.98 |
29 | 29.797 | 22.3 | 25.16 | 25.862 | 36.44 | 40.90 |
Idx | ACH (m2) | A phys (m2) | Δ A (%) | PCH (m) | P phys (m) | Δ P (%) |
---|---|---|---|---|---|---|
1 | 69.044 | 54.32 | 21.33 | 32.062 | 44.28 | 38.11 |
4 | 0.938 | 0.79 | 15.78 | 5.07 | 4.86 | 4.23 |
6 | 0.789 | 0.712 | 9.76 | 4.60 | 4.33 | 5.80 |
8 | 4.033 | 1.765 | 56.24 | 8.40 | 9.45 | 12.52 |
12 | 0.948 | 0.81 | 14.56 | 4.92 | 5.15 | 4.61 |
21 | 71.703 | 56.392 | 21.35 | 32.786 | 38.507 | 17.45 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Chmelar, P.; Rejfek, L.; Nguyen, T.N.; Ha, D.-H. Advanced Methods for Point Cloud Processing and Simplification. Appl. Sci. 2020, 10, 3340. https://doi.org/10.3390/app10103340