Article

Advanced Methods for Point Cloud Processing and Simplification

1 Faculty of Electrical Engineering and Informatics, University of Pardubice, 532 10 Pardubice, Czech Republic
2 Wireless Communications Research Group, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
3 Faculty of Electrical Engineering and Computer Science, VSB—Technical University of Ostrava, 17. listopadu 2172/15, 708 00 Ostrava, Czech Republic
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(10), 3340; https://doi.org/10.3390/app10103340
Submission received: 9 March 2020 / Revised: 24 April 2020 / Accepted: 30 April 2020 / Published: 12 May 2020
Figure 1. Proposed point cloud processing block diagram.
Figure 2. Point cloud model.
Figure 3. Distribution of points in the point cloud model: (a) Input point cloud; (b) Histograms of all coordinate axes.
Figure 4. Outlier removal result: (a) Resultant point cloud; (b) New range of histograms.
Figure 5. Output point cloud from the registration process: (a) X–Y top view; (b) Histograms of all coordinate axes.
Figure 6. X-axis histogram: (a) Original histogram; (b) Result of Equation (5) applied to (a) with marked parts and their maxima.
Figure 7. Histograms of the X and Y axes for different angles of rotation: (a) Original angle of 0 degrees; (b) Angle of 4.3 degrees.
Figure 8. Sum of local maxima of the X and Y axes histograms: (a) X-axis sum; (b) Y-axis sum.
Figure 9. Point cloud with corrected orientation: (a) Output point cloud; (b) New histograms of point density in all coordinate axes.
Figure 10. Planar surface detection in the X-axis: (a) 2nd level selection in the X-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 11. Planar surface detection in the Y-axis: (a) 1st level selection in the Y-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 12. Planar surface detection in the Z-axis: (a) 1st level selection in the Z-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 13. Input point cloud and its level images: (a) Point cloud of the 1st level in the Z-axis; (b) Segmented point cloud from (a); (c) Level image after the application of morphological operations; (d) Filled level image.
Figure 14. Shape perimeter determination: (a) Shape with the marked optimal perimeter path; (b) Way of searching neighbor pixels on the borders of a shape.
Figure 15. Detected vertices: (a) Vertices of Figure 14a; (b) Vertices of Figure 13d.
Figure 16. Measurement illustration: (a) Developed 3D scanner; (b) Measurement frame.
Figure 17. 3D scanner precision: (a) Error graph; (b) Error in percent.
Figure 18. Input point cloud of the flat with marked scan positions: (a) Individual 3D scans, where the green numbers in brackets correspond to the scanning positions in (b); (b) Flat top view scheme with marked measurement positions; (c) Composed final point cloud.
Figure 19. Flat visualization with marked detected planes: red marks planes detected in the X-axis, green in the Y-axis, and blue in the Z-axis.
Figure 20. Planar surface area with a hole: (a) Level image; (b) Convex hull.
Figure 21. Planar surface visualization: (a) Point cloud with a hole; (b) Normal surface.
Figure 22. Fine planar surface range estimation: (a) Points histogram in the scanning dimension with the marked thinner range; (b) Thinner range selection in comparison with Figure 10a,b; (c) Points selected by this range; (d) Illustration of the problematic case.
Figure 23. Point cloud segmentation by the level image: (a) Input point cloud; (b) Input level image; (c) Point cloud of the segmented planar surface.
Figure 24. Segmentation of different planar surfaces: (a) Illustration of two segmented planar surfaces with different areas in the Y-axis; (b) First level segmentation in the X-axis; (c) Resultant point cloud.

Abstract

Nowadays, mobile robot exploration needs a rangefinder to obtain a large number of measurement points, called a point cloud, that give a detailed and precise description of the surrounding area and objects. However, a single scan does not cover the whole area, so multiple point cloud scans must be acquired and matched together in a process called registration. Registration requires further processing and places high demands on memory, especially for the small embedded devices in mobile robots. This paper describes a novel method to reduce the processing burden of multiple point cloud scans. We introduce an approach that preprocesses an input point cloud to detect planar surfaces, simplify the space description, fill gaps in the point cloud, and extract important space features. All of these steps are achieved by combining advanced image processing methods with the quantization of physical space points. The results show that our approach reliably detects close parallel walls with suitable parameter settings. More importantly, planar surface detection decreases the number of necessary descriptive points by 99% in almost all cases. The proposed approach is verified on real indoor point clouds.

1. Introduction

The main remote sensing task is to obtain a large number of measurement points that give a detailed and precise description of the surrounding area and objects in a scanned space. An inseparable part of this task is processing the points to obtain the input data required by the application for which the scanning was performed. There are many applications where the processing of remote sensing data is important; the following subsections survey how point clouds are handled and processed.
The main focus of our research is point cloud processing and simplification, with emphasis on the detection, statistical description, vectorization, and visualization of basic space features. This decreases the memory needed for physical data storage. For example, the individual points of a planar surface composed of a thousand points are not important for subsequent processing; we rather look for the vertices of the analyzed planar surface. Our contribution is an alternative way of processing point clouds. The main advantage of our approach lies in combining the quantization of physical space points with image processing methods. This connection allows: obtaining important space features such as the area, perimeter, and volume; spatial and statistical descriptions of planar surfaces, including a decrease in the number of descriptive points by vectorizing their vertices; the correction and recovery of missing space data; and the presentation of a planar surface as an image, which offers an alternative way to store the physical points and to visualize them. It can also serve other researchers as an extension of the methods presented in the following subsection. Moreover, the presented approach is usable not only for point clouds but also as a general level-based processing and analysis of any 3D data.
This paper presents several new methods for point cloud processing: outlier point removal, estimation of the initial point cloud rotation, point cloud data correction and recovery, and the detection of the vertices of a planar surface. The Introduction surveys different point cloud processing methods for basic object detection and segmentation, planar surface and space feature estimation, and their typical applications. The paper further describes our developed point cloud processing pipeline and presents the new methods in detail to give a comprehensive overview of the developed approach, including verification on real space data. In the results part, the processing of a complex point cloud with several rooms is presented. The results and algorithms are also evaluated in terms of precision against the physical space dimensions and compared with the convex hull results, and recommendations for this processing approach are stated.

1.1. Related Works

A wide survey of 3D surface reconstruction methods is provided in reference [1]. The authors accurately and clearly evaluated the state-of-the-art methods with their advantages and disadvantages. The survey includes a table of all methods, summarizing their main features and possible uses.
Valuable work is presented in reference [2]. The paper deals with the automatic reconstruction of fully volumetric 3D building models from oriented point clouds. The proposed method can process large, complex, real-world point clouds from their unfiltered form into separated room models, including the suppression of undesired objects inside a room. Several methods for 3D modelling of an indoor environment that rely explicitly on prior knowledge of the scanner positions are given in reference [3]. In many software products provided by scanner manufacturers, the scanning position is irretrievably lost; the authors therefore propose a method for reconstructing the original positions of the scanners, which works even under very unfavorable conditions. Further research estimates surface geometries directly from a point cloud, as in reference [4], which introduces a 3D surface estimation method for household objects inside an area. The authors developed a Global Principal Curvature Shape Descriptor (GPCSD) for categorizing objects into groups, with the main purpose of improving manipulation planning for a robotic hand.
Researchers in reference [5] introduced a point cloud segmentation method for urban environment data. Their method is based on robust normal estimation using an octree-based hierarchical representation of the input data. The achieved results prove the usability of the concept for urban environment point clouds, even though there is room for improving the detection of some regular objects with curved surfaces. An octree-based approach combined with PCA analysis is presented in reference [6]; its results are compared with other detection algorithms such as the 3D Kernel-based Hough Transform (3D-KHT) or the classical Random Sample Consensus (RANSAC). This method provides accurate results, robustness to noise, and the ability to detect planes with small angle variations. Similar research in reference [7] focuses on the plane segmentation of building point clouds; the proposed Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method also respects curvatures to provide the final fine segmentation. DBSCAN clustering for surface segmentation is also used in reference [8]; the presented results show the desired surface segmentation, but only on a simple point cloud, which is not a real-world case. Surfaces can also be extracted from RGBD images (see reference [9]); the proposed split-and-merge approach produces interesting visual results. Planes are detected from depth images by the Depth-driven Plane Detection (DPD) algorithm based on region growing (see reference [10]). A Hough transform (HT) approach called D-KHT can be used for plane segmentation from these images, as shown in reference [11]; due to the properties of the HT, this method can detect planes with discontinuities. Ground plane detection using depth maps captured by a stereo camera is presented in reference [12]; the camera is moving, and the system eliminates its roll angle for correct plane segmentation.
Deep learning, a topic that has grown in recent years, is also used for point cloud segmentation. One use case is indoor boundary estimation (see reference [13]); the described method relies on depth estimation and wall segmentation to generate the exterior point cloud, and deep supervised clustering helps fit the wall planes to obtain the resulting boundary map. In different research, the deep learning method called PCPNet is used to estimate local 3D shape properties in point clouds (see reference [14]); the main purpose is to classify objects and shapes or perform semantic labeling, and the presented results show a good estimation of normals and curvatures even in noisy point clouds. A deep learning approach is likewise used for automatic building detection and segmentation from aerial images or point clouds (see reference [15]); the main focus lies in improving and preparing high-quality training data, which allows better segmentation of the detected objects, and the presented results show a high detection accuracy (above 90%) on RGB-lidar and fused RGB-lidar data of urban scenes. Further research (see reference [16]) presents a new topology-based global 3D point cloud descriptor called Signature of Topologically Persistent Points (STPP); the topological grouping improves detection robustness and increases resistance to noise, and the complementary information also improves deep learning performance. A different topological method, TopoLAB, described in reference [17], does not use neural networks; it focuses on a pipeline to recover the broken topology of planar primitives during the reconstruction of complex building models. Due to scanning difficulties and variable point cloud density, some parts of a model can be missing; the proposed method allows the recovery of these parts, including the visualization of different levels of detail.
With respect to personal safety and health, laser scanning plays an important role in exploring abandoned or dangerous places such as various mines (see reference [18]). Similar is the scanning of rock masses (see reference [19]), where the proposed method for planar surface extraction is able to deal with rough and complex surfaces.
For Mobile Laser Scanning (MLS), a new plane detection and segmentation method based on the Planarity values of Scan Profile groups (PSPS) is proposed (see reference [20]). The presented results prove the quality of this method in comparison with other state-of-the-art algorithms for these types of sparse point clouds; for example, the segmented planar surfaces stand out by their compactness.
3D point cloud processing is important not only for obtaining physical shapes; it can also be used to quantitatively characterize the height, width, and volume of shrub plants. The authors of reference [21] analyze point clouds of different blueberry genotype groups to improve the mechanical harvesting process, using correlation to compare different shrubs. In the study described in reference [22], the researchers focus on a method for filtering leaves based on the manifold distance and normal estimation; the scanned leaves contain outliers and noise, and a precise estimation of the shape and area of a leaf is important for obtaining the tree growth index. Ground plane segmentation also plays an important role in the analysis of asparagus harvesting (see reference [23]); the asparagus stems are scanned by a modern time-of-flight camera, which ranks among the fastest scanning devices, and the proposed Hyun method outperformed the classical RANSAC method in a scene with high clutter. Related to the previous works is the parametrization of a forest stand described in reference [24]; the researchers propose to use, instead of the leaf area index, other space characteristics such as stand denseness, canopy density, and crown porosity, and in the practical part they found a high correlation among them. The geometric characterization of vines from 3D point clouds is important in the agronomy sector (see reference [25]), where the convex hull method is used for calculating the volume of the plants. Recent research in reference [26] also deals with this method, aiming to find the shortest path in 2D complexes with non-positive curvature. The convex hull method is often used to determine the area and perimeter of 2D shapes.
Point cloud processing, object detection, and segmentation are widely used in various research areas for different purposes, as shown above. The right choice of processing method helps with the extraction of the desired information, and transforming the input data into different forms makes it easy to obtain otherwise unknown features.

1.2. Our Point Cloud Processing Contribution

The first object detection work we proposed was an algorithm for unfolding a point cloud into a single plane with height preservation (see reference [27]). The presented method works correctly for rectangular rooms with four walls, a known scanner position, and walls not occupied by many objects. The aim of this work was to unfold all walls with their objects into a single plane; this representation allows walls close to zero to be removed and the remaining objects to be analyzed. Finally, this method proved to be a dead end for follow-up utilization: unfolding objects placed on a corner splitting line divides them into two parts, and the global position of each object is also lost.
Many researchers use depth images in their work (see references [9,10,11,12,13]). Depth images contain real physical sizes and distances that can be utilized further. We had a point cloud without depth information, so we developed an algorithm for depth map construction, described in reference [28]. The main advantage of this algorithm is the possibility to create a depth map from any point cloud with an arbitrary camera position and orientation. The concept of space quantization has proven useful for further space data processing. Moreover, we can process these images with image processing methods and, for example, transform depth pixels back into the point cloud by the reverse process. A depth image includes space information about the structure and quality of an analyzed surface and enables the detection of surfaces or other objects (see references [9,10,11,12,13]). However, a depth camera covers only the area seen by the camera itself, which is the main disadvantage for global object detection and segmentation: some objects occlude other objects and areas present in a scene. For this reason, a depth image is not suitable for global space segmentation and description, but the quantization of space into an image, combined with image processing methods, offers wide possibilities for basic object detection and detailed space description.
In papers [29,30], we introduced a novel algorithm called Level Connected Component Labeling (LCCL) for global 3D data analysis, which detects global levels with a high data concentration in selected space dimensions. The presence of a planar surface is indicated mainly by a high concentration of points at a specific level of a particular dimension. However, finding a high point concentration at a specific level is not sufficient for planar surface detection, because several planar surfaces can be present in one detected level. We therefore use the quantization of space into an image, which allows classical connected component labeling to separate the individual planar surfaces. A planar surface is represented by a tool called the level image (I_L), described in references [29,30,31,32]. For each I_L, its origin and rotation angle are known. The selected dimension of the surface is calculated as the average of the points on this surface. From the LCCL output, statistical parameters such as the mean value, variance, standard deviation, and data mode are also known; these parameters describe the quality of the detected planar surface.
The connection of an I_L with image processing methods allows the expression of a planar surface area and perimeter [30] or interactive visualization [31], including color presentation when available. One of our latest research works [32] deals with fine plane range estimation. In this paper, we present further methods for point cloud preprocessing and processing, which we have not published yet, that help to decrease the number of points needed for the planar surface description, fill the gaps in point clouds, and obtain other important space features.

2. Point Cloud Processing Approach

Figure 1 shows the processing diagram of a point cloud in the order in which it is introduced in this paper. Each block is marked with the section where the method is described or at least mentioned. This paper is mainly focused on four parts of the proposed processing approach that we have not presented before: outlier elimination; estimation of the initial point cloud rotation; point cloud correction, such as filling gaps in detected planar surfaces; and vectorization of the basic detected shapes to decrease the number of points.
The aim of this paper is to present methods for processing point cloud data from a 3D scanner to achieve the desired segmentation, visualization, and physical space description. The main contribution of our proposed approach is the connection of the algorithms with image processing, which allows the correction of scanned data, the easy extraction of important space features, and even color space visualization. Moreover, for planar surface detection, we can use any of the methods described in Section 1 to get segmented parts of a point cloud and apply the approach proposed in this paper for their further processing. The following sections describe the developed methods in detail; for their description, the point cloud model depicted in Figure 2 is used.
The point cloud model is a real edge of a room with a radiator. This simple model nicely illustrates an environment with several planar surfaces of different sizes and origins in space.

3. Outlier Points Elimination

Due to rangefinder measurement errors, a final point cloud may contain outlying points. These points have an unfavorable influence on the overall detail of the point cloud visualization, but mainly they cause problems with point cloud registration. Figure 3a illustrates the influence of several outlying points: five outlying points were added to the model from Figure 2, although even a single outlying point would cause the same undesired visual appearance. The histograms of the measurement point concentrations in all coordinate axes are shown in Figure 3b.
Outlying points cause problems mainly in point cloud registration; with them present, the processing pipeline in Figure 1 needs more iteration steps of the algorithms. The method for far point elimination uses just these histograms of point concentration at specific levels of the coordinate axes. The histograms are marked as h_X, h_Y, and h_Z, and h_d denotes the histogram of a general axis, where d stands for the coordinate axis. The proposed method uses a user-defined threshold h_T defining the minimal concentration of points. The histogram is described by the following equation:
h_d(n) = #X(v_d(n,1) : v_d(n,2), d),   (1)
where X is the input point cloud and the symbol # denotes the number of points selected by the individual density ranges of the histogram v_d. The value n ∈ [1, 2, …, N_d] is the actual range index out of the total count of values defined by the following equation:
N_d = (max(X(:,d)) − min(X(:,d))) · 1e3.   (2)
Equation (2) is multiplied by the constant 1e3, expressing the millimeter resolution denoted as the histogram step h_St. The v_d values are the step ranges of the histogram, obtained from the range of the analyzed axis h_Rd, defined as:
h_Rd = [min(X(:,d)), max(X(:,d))];   (3)
the total count of values N_d from Equation (2); and its step h_St. The values of v_d consist of N_d rows and two columns, marked as 1 and 2 in Equation (1), which define the starts and ends of the individual histogram ranges. The threshold value h_T for eliminating outliers is set by the user. It is applied on both sides of the histogram h_d as the first value satisfying:
h_d(n) ≥ h_T.   (4)
Figure 4 shows the point cloud from Figure 3 with the outliers removed using the threshold value h_T = 2. The new, narrower range of the histograms is marked by two red dots in Figure 4b.
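To make this concrete, the following NumPy sketch shows one possible implementation of the histogram-based outlier removal. The function name and default values are our illustrative choices; the 1 mm bin step mirrors Equation (2) and the threshold plays the role of Equation (4).

import numpy as np

def remove_outliers(X, h_T=2, step=1e-3):
    # X: (N, 3) point array in meters; h_T: minimal point count per bin;
    # step: histogram bin width (1 mm resolution, as in Equation (2)).
    keep = np.ones(len(X), dtype=bool)
    for d in range(3):                         # process the X, Y, and Z axes
        lo, hi = X[:, d].min(), X[:, d].max()
        bins = max(int(round((hi - lo) / step)), 1)
        h, edges = np.histogram(X[:, d], bins=bins, range=(lo, hi))
        dense = np.nonzero(h >= h_T)[0]        # bins satisfying Equation (4)
        if dense.size == 0:
            continue                           # threshold too high for this axis
        # first/last sufficiently dense bins define the new, narrower range
        keep &= (X[:, d] >= edges[dense[0]]) & (X[:, d] <= edges[dense[-1] + 1])
    return X[keep]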
The final point clouds are suitable for the registration process and further processing. The initial point cloud rotation is important for some object segmentation algorithms, such as our algorithm for surface detection; a known initial rotation substantially speeds up the detection process. Section 4 deals with the estimation of the initial rotation of a point cloud.

4. Correction of Initial Point Cloud Rotation

A 3D space scanner is usually placed at several positions in a scanned space sequentially. Each positioning of the scanner can be affected by a wrong heading, even by a few degrees. This is negligible for the registration process, but when, for example, the first point cloud is affected by this error, the composed final point cloud is rotated in the same way. Knowing the initial rotation angle of a point cloud is necessary for the best results and speed of our proposed methods. We could also rotate the point cloud over a prescribed range of angles (see reference [29]) and find the best match of points in the detected planar surfaces, but this is time-consuming and inefficient. We therefore developed the following approach for finding the initial rotation angle of a point cloud.
A density histogram of the points in the individual axes can also be used to correct the point cloud rotation. This correction can likewise be applied to the individual point clouds before the registration process, which improves the probability of successful registration. A known orientation angle of the point cloud significantly improves the processing time of our proposed methods.
The main assumption for its success is the presence of flat surfaces in the input point cloud, which is true in most indoor spaces. The point cloud with correct orientation in the X and Y axes is shown in Figure 5a; its bottom planar surface is parallel to the zero level in the Z-axis. The unknown parameter is the rotation about the Z-axis. Figure 5b shows the constructed histograms of all coordinate axes with a resolution of 1 mm.
The developed method for estimating the initial rotation determines the rotation angle about one coordinate axis from the two others. One main maximum is noticeable in the histogram of the Z-axis, as shown in Figure 5b; its position marks the floor of the edge. In the case of the X and Y axes, the maxima of the histograms represent the walls of the model and the radiator. The proposed method analyzes these maxima over a prescribed range of rotation angles to estimate the correction angle of the initial rotation.
In the following example, shown in Figure 6, the rotation about the Z-axis is estimated from the X and Y axes. Flat surfaces in the analyzed point cloud may occur at different levels; for this reason, it is necessary to analyze all local maxima, not only the biggest one. Their values may also vary depending on the concentration of points. To get these maxima from a histogram, we use a modified algorithm for connected component searching that respects gaps between two components: if a gap i_Z is smaller than a user-defined threshold, the two components are considered as one bigger component. For a better separation of local maxima, we recommend thresholding the histogram as follows:
h_d(h_d < (μ(h_d) + σ(h_d))) = 0,   (5)
where d marks the selected dimension as in Equation (1). The values of the constructed histogram h_d lower than its mean value μ(h_d) plus the standard deviation σ(h_d) are set to zero. After applying Equation (5), the modified connected component labeling algorithm with i_Z = 0.05 N_d is applied; the value i_Z denotes 5% of the total count of histogram elements N_d. The result, with the local maxima marked by red points and the components numbered, is shown in Figure 6b, presenting the histogram of the X-axis.
The sum of all local maxima found by the modified connected component labeling algorithm (which respects small gaps) is expressed in the following equation:
h_dΣ(k) = Σ_{n=1}^{L_N} max(h_d(L = n)),   (6)
where L_N is the count of components found in the index accumulator L. Figure 7 illustrates the k-th step of the algorithm, where h_dΣ(k) is estimated for one rotation given by the index k. The histograms of the X and Y axes are shown in Figure 7a for the initial angle of 0 degrees and in Figure 7b for an angle of 4.3 degrees; this serves as an illustration of different h_dΣ(k) values. The different local maxima are noticeable when comparing both histograms in Figure 7.
The algorithm rotates the input point cloud by one rotation step s_R over the range R_φ ∈ ⟨φ_Rs, φ_Re⟩. We recommend using a small rotation step such as 0.1 degrees for better resolution. The analyzed point cloud is first rotated to the start angle φ_Rs, and the rotation matrix of one iteration step R(s_R) is constructed. The sum of all local maxima is evaluated at each rotation step, and the actual point cloud is multiplied by R(s_R) until the end angle φ_Re is reached. Figure 8a gives the resulting sum curve h_XΣ for the X-axis and Figure 8b for the Y-axis. In this example, R_φ ∈ ⟨0, 8⟩ degrees and s_R = 0.1 degrees are used.
The positions of the two maxima differ somewhat. The searched orientation angle φ [°] is the average of both maxima positions. In this example, the maximum for the X-axis in Figure 8a is 3.5 degrees and the maximum for the Y-axis in Figure 8b is 4.7 degrees; the orientation angle is therefore φ = 4.1 degrees. Figure 9 shows the corrected point cloud from Figure 5, including the final histograms. The new histograms in Figure 9b are sharper and have several larger local maxima in the X and Y axes compared to Figure 5b; the Z-axis histogram is unchanged.
The range of the rotation interval R_φ cannot be selected arbitrarily: when a point cloud is rotated by 90 degrees, the X-axis becomes the Y-axis. The universal solution is to take the initial point cloud rotation as 0 degrees and the rotation range as R_φ = ±45 degrees. The algorithm is also fast because it uses a classical histogram, and its computation time depends mainly on the number of rotation steps; it estimated the rotation angle φ_o in 10 ms for the used range of angles R_φ. As mentioned above, this algorithm finds the point cloud rotation about one coordinate axis from the other two. The assumption for its success is the presence of flat surfaces, which is typically valid for indoor spaces.
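A simplified NumPy sketch of the described search follows. The helper names are our illustrative choices: local_maxima_sum() folds together the thresholding of Equation (5), the gap-respecting component merging, and the summation of Equation (6), and a fixed bin count stands in for the millimeter resolution used above.

import numpy as np

def local_maxima_sum(h, gap):
    # Equations (5)-(6): zero-out weak bins, then sum the maxima of the
    # connected components, merging components separated by <= gap zero bins.
    h = h.astype(float).copy()
    h[h < h.mean() + h.std()] = 0
    total, run_max, zeros = 0.0, 0.0, gap + 1
    for v in h:
        if v > 0:
            run_max, zeros = max(run_max, v), 0
        else:
            zeros += 1
            if zeros == gap + 1 and run_max > 0:   # component has ended
                total, run_max = total + run_max, 0.0
    return total + run_max

def estimate_rotation(X, start=-45.0, end=45.0, step=0.1, bins=1000):
    # Scan Z-rotations; return the average of the angles maximizing the
    # local-maxima sums of the X and Y histograms (cf. Figure 8).
    best = []
    for d in (0, 1):                               # X-axis, then Y-axis
        scores = []
        for a in np.arange(start, end + step / 2, step):
            c, s = np.cos(np.radians(a)), np.sin(np.radians(a))
            xy = X[:, :2] @ np.array([[c, -s], [s, c]]).T
            h, _ = np.histogram(xy[:, d], bins=bins)
            scores.append(local_maxima_sum(h, gap=int(0.05 * bins)))
        best.append(start + step * int(np.argmax(scores)))
    return float(np.mean(best))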

5. Planar Surface Detection Algorithm

The known orientation angle of a point cloud allows the detection of planar surfaces in all coordinate axes using only three passes, one for each axis. More details about the LCCL algorithm are described in papers [29,30,32]. In short, there are two important parameters: lvl_S, the maximal deviation from a level value, and the multiplier lvl_RS, denoting the new level searching range. The algorithm scans a selected coordinate axis with a global virtual plane that covers the whole horizontal range of the selected axis. Figure 10, Figure 11 and Figure 12 illustrate the planar surface detection by the LCCL algorithm and the level image in the X, Y, and Z axes, respectively.
The width of the analyzed data is defined as lvl_S · lvl_RS, and within the selected range, the level d_L with the highest point concentration is found using the histogram. The range of the detected level is defined as d_L ± lvl_S. We also developed an algorithm for finding a finer range, described in reference [32]. Figure 10c,d nicely illustrates the planar surface detection of the radiator and the erasing of selected undesired points by morphological opening; the thickness of the undesired points is smaller than the used structuring element.
The range for the next level estimation starts at the last d_L + lvl_S and spans the same width lvl_S · lvl_RS used above. This way of selecting ranges and using the histogram to find the highest point concentration allows all uniformly distributed flat surfaces in the analyzed point cloud to be found.
The LCCL algorithm itself is not sufficient for planar surface detection. As in Figure 10c, Figure 11c and Figure 12c, from each found level d_L, the level image I_L is created using the point space quantization parameter q_D. For this test, q_D = 4 cm, lvl_S = 5 cm, and lvl_RS = 2.5 were selected. The I_L is first processed by morphological operations, and all planar surfaces are then found by the connected component labeling algorithm. Searching all levels in all coordinate axes took about 80 ms. The main advantage of I_L is that the physical position of each pixel in space is uniquely determined. Processing the I_L also allows missing data to be filled in; the proposed method is described in the next section.
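A simplified sketch of one scanning pass is given below, assuming SciPy for the morphology and labeling. The function name detect_levels and the omission of the fine range estimation from reference [32] are simplifications for illustration.

import numpy as np
from scipy import ndimage

def detect_levels(X, d, lvl_S=0.05, lvl_RS=2.5, q_D=0.04):
    # One scanning pass of axis d; returns (d_L, level image, labels)
    # for each detected level.
    width = lvl_S * lvl_RS
    results, pos, end = [], X[:, d].min(), X[:, d].max()
    while pos < end:
        sel = X[(X[:, d] >= pos) & (X[:, d] < pos + width)]
        if len(sel) == 0:
            pos += width
            continue
        # the densest 1 mm bin of the scanning dimension gives the level d_L
        h, edges = np.histogram(sel[:, d], bins=max(int(width / 1e-3), 1))
        d_L = edges[int(np.argmax(h))]
        pts = X[np.abs(X[:, d] - d_L) <= lvl_S]    # points of the level range
        # quantize the remaining two axes into a binary level image
        ax = [a for a in range(3) if a != d]
        ij = ((pts[:, ax] - pts[:, ax].min(axis=0)) / q_D).astype(int)
        img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
        img[ij[:, 0], ij[:, 1]] = True
        img = ndimage.binary_opening(ndimage.binary_closing(img))
        labels, _ = ndimage.label(img)             # separate planar surfaces
        results.append((d_L, img, labels))
        pos = d_L + lvl_S                          # start of the next search
    return results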

6. Fill Gaps in Measurement Data

During the planar surface detection process, there can be a requirement to determine the area including inner holes. This situation occurs mainly when processing floor or ceiling areas. A point cloud with missing points on the floor is shown in Figure 13; this example describes the first level estimation in the Z-axis. During the point cloud analysis, we may assume that the floor is compact and require an estimate of its area. Using the I_L for point cloud processing offers a wide range of image processing algorithms, which help to determine important unknown parameters of the analyzed point cloud.
The I_L in Figure 13c contains undesired holes, which are easy to fill using image processing methods. We used a binary image and the connected component labeling algorithm for filling the presented holes. The pseudo-code of the proposed algorithm is shown in Algorithm 1.
Algorithm 1. Fill Element Area
Input: Level image element I_LEx
Output: Filled element I_Efill
1  E_xB = ElementBorderPositions(I_LEx)
2  E_xBe = Extend E_xB borders by 1 px
3  I_one = true(size(E_xBe))
4  I_one(E_xBe > 0) = false
   // find separate areas (1 - surroundings, 2 - filled element)
5  L = ConnectedComponentLabeling(I_one, 4)
   // add edges back
6  I_Efill = (L = 2) | E_xBe
7  I_Efill = Narrow I_Efill borders by 1 px
The level image generally consists of several elements representing individual planar surfaces. The analyzed element is marked as I_LEx. Using the algorithm for element border detection presented in our previous work [30], the borders E_xB of the element are extracted; the algorithm finds the border by subtracting the element eroded by one pixel from the original element. It is necessary to extend the binary image borders by at least one pixel, giving E_xBe, to preserve the element indexing expected in further processing. Then, a second image I_one of the same size with all values equal to one is created, and the detected border positions are set to zero, as shown in the following equation:
I_one(E_xBe = 1) = 0.   (7)
The connected component algorithm with 4-connectivity is applied to the obtained image. 8-connectivity cannot be used because the border is only one pixel thin, and the algorithm could cross it in the diagonal directions. The result of the labeling L is two areas with different indexes: index 1 is the surroundings, and index 2 is the desired filled area without borders. By adding the borders of the analyzed element to the area labeled with index 2, the original filled area is obtained as:
I_Efill = (L = 2) ∨ E_xBe.   (8)
The one-pixel extension is then removed to restore the original level image size. Figure 13d gives the final filled area, calculated from the original floor element with holes. The filled planar surface allows estimation of the real floor area or the space volume, as in Section 8. We can reconstruct the missing points from the fixed floor planar surface by the reverse process. Moreover, from the difference between Figure 13c,d, it is possible to reconstruct only the missing parts, and with the knowledge of the planar surface statistical parameters, we can generate the missing points with the same statistical distribution.
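The idea of Algorithm 1 can be condensed in Python with SciPy, whose default 2D labeling connectivity is the required 4-connectivity. The sketch below labels the non-element regions directly instead of building the border images explicitly; the component touching the extended frame corresponds to the surroundings (label 1), and everything else belongs to the filled element.

import numpy as np
from scipy import ndimage

def fill_element_area(element):
    # element: binary image of one level image element (bool array).
    # Extend the borders by 1 px so the surroundings form one component.
    padded = np.pad(element, 1, constant_values=False)
    labels, _ = ndimage.label(~padded)          # label the non-element areas
    surroundings = labels == labels[0, 0]       # the area touching the frame
    filled = ~surroundings                      # element plus its inner holes
    return filled[1:-1, 1:-1]                   # narrow by the 1 px extension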

7. Vectorization of Basic Space Features

One aim of point cloud processing is to decrease the number of descriptive points. By vectorization we mean describing a shape by its vertex points. For example, knowing the positions of all border pixels is unnecessary; only the vertex points are needed to reconstruct a shape such as the one in Figure 13d.
The proposed method for finding the vertex points is based on our method for perimeter estimation (see reference [30]). Figure 14 illustrates how the perimeter of a shape is determined.
The perimeter cannot be determined as simply as the area by counting border pixels. The borders, extracted in the same way as in Algorithm 1, require additional analysis for a precise perimeter estimate: we need to distinguish the direct and diagonal parts of the perimeter path. This problem is solved by a modified connected component algorithm that traces the borders and records the search direction of connected pixels, the perimeter path P_p. The algorithm uses a different mechanism of searching for connected pixels, see Figure 14b. When the first border pixel n is found, the neighbor pixel (n+1) is searched for in the directions denoted by the numbers in Figure 14b; as soon as it is found, the current search ends, pixel (n+1) becomes pixel n, and the search continues in the same way until all border pixels are found. Directions 1, 3, 5 and 7 are direct steps whose length is the pixel size; directions 2, 4, 6 and 8 are diagonal steps whose length equals the diagonal of one pixel. We also track a score function in the Y-axis determining the position of the last pixel of a round perimeter relative to the first perimeter pixel; for details, see paper [30]. Figure 14a gives an example of all possible directions occurring during the perimeter pixel search. The optimal perimeter is marked by the light red line; the numbers in the corners are the border pixel order, and the bigger numbers are the directions of their detection. In this simple example, the difference between the perimeter estimated with diagonal directions and simple pixel counting is 10%.
We developed an algorithm that extracts the vertex points of a shape represented by a level image from the known history of the search path P_p, the score function Y_s, and the round perimeter condition. The pseudo-code is shown in Algorithm 2.
Algorithm 2. Element Vertices Extraction
Input: Perimeter path P_p;
Score function in the Y-axis Y_s;
Condition of circular perimeter cir_P
Output: Vertices of element c_Pos
1  c_Pi = 0
2  c_Pos(c_Pi++, :) = [P_p(1), 1]
3  for each P_p(n) ∈ P_p from n = 2 do
4     if (c_Pos(c_Pi, 1) ≠ P_p(n)) then
5        // last position before the change
6        c_Pos(c_Pi++, :) = [P_p(n − 1), n − 1]
7        // actual position
8        c_Pos(c_Pi++, :) = [P_p(n), n]
9     end
10 end
11 Exclude repeated items in c_Pos
12 if (cir_P && Y_s > 0) then
13    Remove c_Pos(end, :)
14 end
The vertex positions are stored in the array c_Pos; the first found border direction is written into this array. The next step searches through the whole history of the direction path P_p from index n = 2. The algorithm looks for indexes where the path direction changes relative to the last found vertex c_Pi. When a direction change is detected, the actual path direction at position n is stored together with the border position index; the corresponding values at position (n − 1) are stored too. The reason is obvious in Figure 14a: for example, the direction change from 5 to 6 of the 9th pixel is detected at pixel 10, but the vertex is pixel 9. After searching P_p, doubled directions occur in the c_Pos array, for example, pixels 7 and 9 or 10 and 11; the first position of each doubled direction is removed. The last step of the algorithm is the decision about the last position of P_p: for a round border defined by cir_P, the position of the last pixel relative to the first has to be determined. When the score function Y_s is higher than or equal to one, the connection of the last pixel with the first is diagonal, and the last item of P_p is added; it is excluded for a diagonal direction longer than one pixel. The vertices detected for the test images of Figure 14a and Figure 13d are illustrated in Figure 15a,b, respectively.
The red pixels are the detected vertices of the input level image. The compact version of the level image in Figure 14a consists of 39 pixels, and only 13 pixels describe the vertices of the shape. The original image in Figure 13d is composed of 961 pixels, and 67 vertices were found. The presented method is valid for compact level images. When holes are present in an image, the image can be binary inverted; the complete vertex description of a shape (including the holes) is available if we also determine the vertices of the holes.
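Once the perimeter path P_p is recorded as a sequence of direction codes (1–8, as in Figure 14b), the perimeter length and the core of Algorithm 2 reduce to a few lines. The following Python sketch is illustrative and omits the circular-perimeter correction of steps 12–14:

import math

def extract_vertices(P_p):
    # P_p: direction codes recorded while tracing the border.
    # A vertex occurs wherever the tracing direction changes.
    vertices = [0]                        # the first border pixel is a vertex
    for n in range(1, len(P_p)):
        if P_p[n] != P_p[n - 1]:          # direction change detected at n
            vertices.append(n - 1)        # last position before the change
            vertices.append(n)            # actual position
    seen, out = set(), []                 # exclude repeated items (step 11)
    for v in vertices:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return out

def perimeter_length(P_p, px=0.04):
    # odd codes (1, 3, 5, 7) are direct steps of one pixel size;
    # even codes (2, 4, 6, 8) are diagonal steps of sqrt(2) pixel sizes
    return sum(px * (1.0 if c % 2 else math.sqrt(2.0)) for c in P_p)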

8. Results

Our previous papers [29,30,31] usually show the processing of a point cloud of only a single room. In this paper, we decided to show the processing of an entire flat; all blocks from the processing scheme in Figure 1 were used. This section is divided into four subsections: the first briefly describes our optical rangefinder; the second focuses on the input point cloud representing a real flat with three rooms and one corridor, including its scanning process and the construction of the complex output point cloud; the third presents the results obtained with the developed processing pipeline; and the last evaluates the algorithms in detail in terms of precision and possible uses.

8.1. Optical Rangefinder for Space Scanning

As the remote sensing device for space scanning, we used our own optical rangefinder. The described 3D range scanning system is part of a bigger project called ARES (Autonomous Research Exploration System), described in references [33,34]. The rangefinder sweeps a laser point into a vertical line, and for each point of this line, it can estimate the point position in space. The measurement principle is based on triangle similarity, described in reference [35]; for detailed information on how a 3D point is determined in a scanned space, see references [35,36]. The whole measuring device consists of a tripod; a high-quality Basler color camera with a resolution of 2590 × 1942 pixels; a green 200 mW laser diode; an optical filter with an angle of 90 degrees for the vertical sweeping of the laser beam; and a fast and powerful stepping motor for 360-degree rotation. Figure 16 shows a practical measurement (a) and a measurement frame (b).
During the development of this device, we replaced the camera lens with one of a lower focal length to cover a larger height range. The lower focal length caused an undesired barrel distortion effect; its elimination is described in reference [37]. In one of our recent papers [38], we developed an automatic algorithm for estimating the best undistortion parameters. The laser line is segmented from a static measurement frame by an HSV GMM (Gaussian Mixture Model); the HSV color model best covers the intensity range of the used laser (see references [39,40]). The intensity of laser pixels in the measurement image depends on the distance of the scanned object and its reflection of the laser spectrum. Nevertheless, pixels with low and high intensities similar to the spectrum of the laser or reflected laser light can occur in the measured frames. For robust position determination and for distinguishing the laser from laser-colored objects, it is important to analyze the laser pixel intensity as well as the laser element shape; a statistical analysis of both parameters allows the main laser line intensity to be detected correctly. More details about the best laser element selection and the main laser intensity estimation are described in references [41,42]. We also implemented the fusion of image data from the camera with points in space to achieve a colored point cloud, using a template matching method in two measurement frames (see reference [36]). The measurement system can determine the vertical laser line with half-pixel accuracy. The accuracy of the distance determination for the actual rangefinder configuration is shown in Figure 17.
The maximal error is 3.5% at the range of 15 m. At larger distances, the laser line is closer to the measurement frame center, and a change in position of about one pixel causes a bigger change in the measured distance. This dependency is non-linear and causes bigger measurement errors; more about this non-linearity and the calibration of the optical rangefinder is described in reference [43]. In comparison with today's expensive professional laser scanners, our rangefinder can be labeled as a home 3D scanner.

8.2. Input Point Cloud

As the scanning space, we selected an indoor flat with three rooms and one corridor. To cover the whole space, we placed our optical rangefinder at eight measurement positions, marked in the top view scheme in Figure 18b; the room with positions 3 and 4 is the kitchen. All constructed output point clouds are shown in Figure 18a, where the green numbers in brackets correspond to the measurement positions. Differences in color intensity in Figure 18a are caused by the different light conditions during the scanning process. The complete composed point cloud of the flat is shown in Figure 18c.
For the registration of the individual point clouds, we used the NDT (Normal Distributions Transform) registration algorithm implemented in the PCL (Point Cloud Library), described in reference [44]. This colored point cloud provides an overview of the scanned space, and from the coordinate axes, we can estimate the basic dimensions; however, obtaining other important space features requires further processing. The point clouds presented in Figure 18a are already clear of outlier points. These points influence the registration process, and their removal must be done before registration. For their elimination, we proposed the algorithm based on the histogram of point density in the individual coordinate axes, described in Section 3.
The outlier points were removed from each of the eight scans after scanning. Then, the output point cloud of the flat was composed by the NDT registration process. An initial rotation angle of −3.15 degrees was detected in the final point cloud by the algorithm presented in Section 4; the algorithm estimated the initial rotation angle in 3.35 s.
The preprocessed space data can be analyzed further in terms of the object detection and physical space description covered in Section 5, Section 6 and Section 7. The knowledge of the initial rotation angle allows the detection of planar surfaces in all coordinate axes by the proposed LCCL algorithm with only three passes. For complex plane detection with all possible orientation angles, the point cloud can be rotated over a prescribed range of angles and the best match of points in the detected planar surfaces found, according to [29]. Searching all levels in all coordinate axes took about 1.5 s.

8.3. Processing Results

Table 1 summarizes the used detection parameters for all coordinate axes. The choice of detection parameters depends on the quality of the used 3D scanner.
In our case, due to the worse scanner calibration, the range lvl_S is higher. We can choose bigger values of lvl_S and lvl_RS if we do not expect parallel surfaces close to each other within the selected range lvl_S · lvl_RS; the level d_L detection by the histogram will always find the highest point concentration. Figure 19 shows the visualization of the flat by the proposed processing, utilizing the level image, which also allows plane visualization (see reference [31]). The planar surfaces are marked by three colors according to their orientation (X, Y, and Z).
The detected planar surfaces and their physical parameters are described in Table 2, Table 3 and Table 4. The presented planes in the X- and Y-axis have an area bigger than 5 m². The attributes denote, from the left: Idx—the detection level index; dL—the detected level in the scanned dimension; Pos—the coordinates of the level image origin in space; AE—the area of the planar surface; PE—the perimeter of the plane; σ—the standard deviation of the points forming the planar surface; PC p.—the number of points representing the planar surface; IL px—the count of level image pixels; V—the number of vertices of the plane; and P. decr. %—the percentage decrease between the number of found vertices and the original points of the plane.
From the analysis of the tables, we can notice that indexes 4, 5 and 14, 16 in Table 2 show the ability of the scanning algorithm to detect parallel walls. The last column in all tables shows what percentage of storage can be saved if a planar surface is represented only by its vertices. The first planar surface in Table 3 is the floor of the flat. We can express the volume of the flat as the known floor area multiplied by the height of the flat; the planar surface 21, detected at the level of 2.956 m in Table 3, is the ceiling, so the volume of the flat is approximately 165 m³.
The obtained results show the advantages of the proposed processing. Using the level image, it is possible to easily express the statistical parameters of the represented planar surface, visualize the plane, and decrease the number of points necessary for the planar surface description. The presented algorithms were tested on a CPU i5-2410M at 2.3 GHz. The input point cloud consists of 219,216 points, and the detection time of all planar surfaces is not higher than three seconds.
The algorithms mentioned in Section 1 can be used for the detection of planar surfaces. The segmented raw point cloud of a planar surface can then be processed by our proposed approach, which utilizes a level image in connection with image processing methods. This concept allows us to easily obtain important physical space properties, decrease the number of descriptive points, or visualize the planar surface.
Table 5, Table 6 and Table 7 give the percentage difference between the real physical area and perimeter in each coordinate axis. The columns marked phys contain the physical values. The estimated values directly reflect the level image content, which depends mainly on how well the objects are covered by measurement points.
Comparing the percentage values, it is noticeable that some planar surfaces were segmented correctly, with an error lower than 3%. A few planar surfaces have a difference between the estimated and physical values close to 10%; these cases indicate worse coverage by the measurement points. As shown in the next subsection, our developed approach reflects the real input data content.
For a robust evaluation of the results, we also compared the physical values with the convex hull method. Table 8, Table 9 and Table 10 give the percentage difference from the real physical values; the columns marked CH contain the area and perimeter determined by the convex hull.
The comparison of Table 5, Table 6 and Table 7 with Table 8, Table 9 and Table 10 shows that in several cases, the percentage difference is within units of percent. In four cases, the convex hull method even outperforms our method in at least one compared value. However, in many cases, the difference exceeds 15%. This is caused by the nature of the convex hull, which finds the smallest convex polygon with no corner bent inwards, as shown in Figure 20. For simple compact planar shapes such as squares, rectangles, triangles, and similar shapes without corners bent inwards, this method gives precise results; this assumption is not usually valid in real environments.
Figure 20b shows the problem of finding a convex polygon with no corner bent inwards. Additionally, the convex hull does not detect holes, unlike the level image detection illustrated in Figure 20a. Besides area and perimeter determination, the level image can also detect holes, segment all planar surfaces in one detected level, and express the statistical properties of a planar surface. The precision of the level image depends mainly on a suitable selection of the quantization parameter q_D. The most important advantage of a level image is the representation of a planar surface in a different form that allows the use of image processing methods; this offers a wide range of tools for obtaining important planar surface properties, whereas a purely mathematical solution for complex shapes can be difficult. The image representation also offers the possibility of analyzing depth information, which can be used to detect 3D shapes in the future.
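For reference, a comparable convex hull baseline can be computed with SciPy, whose 2D ConvexHull reports the enclosed area in its volume attribute and the hull perimeter in its area attribute; the points below are hypothetical placeholders for one planar surface projected into its plane.

import numpy as np
from scipy.spatial import ConvexHull

pts = np.random.default_rng(0).random((500, 2))   # hypothetical 2D points
hull = ConvexHull(pts)
print("convex hull area:     ", hull.volume)      # in 2D, volume is the area
print("convex hull perimeter:", hull.area)        # in 2D, area is the perimeter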

8.4. Evaluation of Processing Algorithms

Figure 21 shows the visualization of the planar surfaces using the level images from Figure 13c,d. The level images were produced by the LCCL algorithm; for better illustration, the segmented points are also included. Both visualizations show a good coverage of all points. It follows that the results of the developed algorithms strongly depend on the quality of the input point cloud.
The results of the planar surface segmentation can be improved by the fine segmentation we introduced in reference [32]. Figure 22 gives an example of its usage for the level detection in Figure 10; for processing the final point cloud composing the flat, we did not use this fine ranging. Figure 22d shows the problematic case. The noisy curve is the histogram of the points representing a level in the scanning dimension dim_s; the green line is the moving average of the histogram with a selected window size of 1 cm. From the filtered curve, it is possible to reliably estimate the positions of both elbows, where the derivative is smaller than the desired threshold. However, due to measurement errors and worse results of the registration process, points from two scans covering one wall can be slightly shifted, so two local maxima of point concentration can lie close to each other. When this situation occurs, the algorithm ends with a result similar to Figure 22d, which can lead to the detection of double walls.
For this reason, it is better not to use this approach on data with measurement errors. Prior information about the quality of the point cloud data, or about the demands on the planar surface detection, is desirable. We focus on two main parameters: the expected standard deviation of points from a detected planar surface, and the minimal distance between two planar surfaces in the direction of the analyzed levels. This is reflected in the selection of the parameters lvlS and lvlRS. The lvlS value expresses the expected standard deviation of points from a planar surface; if it is set too high, two close planar surfaces can be detected as one. A higher lvlRS value is always recommended, as it ensures a sufficient data range selection in the scanning dimension, so that the histogram of point concentration finds the most probable level position for a planar surface. To decide which planar surface a point most probably belongs to, we can use the statistical parameters of all planar surfaces.
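One concrete way to make this last decision (our own sketch; the plane record layout and the score are assumptions, with the level positions and σ values taken from Table 2) is to score each candidate plane by the point's distance expressed in units of that plane's standard deviation:

```python
import numpy as np

def best_plane(point, planes):
    """Pick the most probable plane for a point: the smallest distance
    measured in units of the plane's point standard deviation."""
    scores = [abs(point[p["axis"]] - p["level"]) / p["sigma"] for p in planes]
    return int(np.argmin(scores))

# Two close walls in the X-axis, with levels and sigmas taken from
# Table 2 (Idx 4 and 5); the test point itself is illustrative.
planes = [{"axis": 0, "level": 3.521, "sigma": 0.0018},
          {"axis": 0, "level": 3.687, "sigma": 0.0006}]
print(best_plane(np.array([3.53, 1.0, 1.5]), planes))  # -> 0
```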
Next, Figure 23 shows the possibility of using the level image for point cloud segmentation. For a level image, its physical origin in space, orientation angle, and quantization parameter are known. From this information, we can easily extract the desired points, as shown in this example.
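A minimal sketch of this back-projection for a Z-axis level (our own illustration; the variable names and the axis choice are assumptions) maps each point to a level-image pixel and keeps the points that fall onto set pixels:

```python
import numpy as np

def segment_by_level_image(points, level_img, origin_xy, qD, z_range):
    """Select the 3D points covered by a Z-axis level image.
    points    : (N, 3) array of XYZ coordinates in metres
    level_img : 2D boolean level image
    origin_xy : physical XY position of pixel (0, 0)
    qD        : quantization parameter (pixel size) in metres
    z_range   : (z_min, z_max) of the detected level"""
    in_level = (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1])
    ij = np.floor((points[:, :2] - origin_xy) / qD).astype(int)
    inside = ((ij >= 0).all(axis=1)
              & (ij[:, 0] < level_img.shape[0])
              & (ij[:, 1] < level_img.shape[1]))
    keep = in_level & inside
    keep[keep] = level_img[ij[keep, 0], ij[keep, 1]]  # pixel must be set
    return points[keep]
```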
The next advantage lies in the extensibility of the well-known point cloud processing methods mentioned in Section 1. A segmented planar surface can be further processed by our developed algorithms. Figure 24 shows the segmentation of different planar surfaces. When processing data, we can also focus only on planar surfaces with the desired parameters, such as the area, perimeter, shape, or, for example, the planar surface variance (see Figure 24 and the sketch below). It is necessary to mention that the flat from Figure 19 is empty of furniture except for the kitchen units. Even if some furniture is present in the scanned scene, we can detect all the planar surfaces it contains. The presented point cloud processing pipeline detects, presents, and reflects everything that is physically present in the input data. The proposed processing concept therefore has a wide range of uses.
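Such parameter-based selection can be as simple as the following fragment (our own sketch; the record fields mirror the columns of Table 2 and Table 4, and the thresholds are illustrative):

```python
# Keep only large, very flat surfaces (e.g. walls rather than
# furniture faces); area in m^2 and sigma in m, as in Tables 2-4.
planes = [
    {"idx": 2,  "area": 7.174, "sigma": 0.0008},
    {"idx": 16, "area": 8.754, "sigma": 0.0010},
    {"idx": 4,  "area": 0.760, "sigma": 0.0100},
]
walls = [p for p in planes if p["area"] > 5.0 and p["sigma"] < 0.002]
print([p["idx"] for p in walls])  # -> [2, 16]
```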

9. Conclusions

In this paper, we presented several methods for point cloud processing, from basic preprocessing, such as removing outlier points and estimating the initial rotation angle against the base coordinate axes, to point cloud simplification and global space parameter estimation. The main assumption for their success is the presence of planar surfaces in the analyzed data, which is highly probable and almost always fulfilled in an indoor space. The results of these methods are summarized in the following three points:
  • Outliers and rotation—The simple point concentration histogram allows outliers to be removed. This improved the result of the final point cloud composition by the registration process. Moreover, the carefully selected processing of the histograms in two axes allows a relatively fast estimation of the initial point cloud orientation against the third coordinate axis. This is mainly important for decreasing the processing time. Only three passes of the scanning algorithm are needed to find all planar surfaces in each coordinate axis, as documented in the tables of results.
  • Hole removal—A point cloud composed from several scans may lack points in some parts of the present surfaces. Filling the holes in a level image representing a planar surface makes it possible to reconstruct the missing points. The filled holes allow the estimation of the real surface area. This even makes it possible to estimate the volume of a space when its height is known, in the case of the floor or ceiling.
  • Simplification of objects—The proposed approach showed successful planar surface detection in the desired coordinate axes. It offers an alternative way of point cloud processing and extends the possibilities of present methods. The scanning algorithm for planar surface detection allows these planar surfaces to be described by statistical parameters. The standard deviation of the detected planar surfaces is small, which indicates their precise detection, as shown in the presented results. Thanks to this, it is easy to detect, for example, parallel walls. The presentation of planar surfaces by level images makes it possible to estimate important space properties such as the area and perimeter. Image processing methods allow an easy determination of these parameters, in contrast to the difficult mathematical solution for irregular shapes. All the detected properties make it possible to classify each planar surface. The results also show the ability of the algorithm to decrease the number of descriptive points: planar surfaces represented only by their vertices showed a point reduction of about 99% in almost all cases.
Moreover, all features of the proposed processing pipeline are usable not only for point clouds; they are valid for the detection and analysis of different levels in any 3D data in general. Future research will focus on object detection and tracking. The planar surface representation allows the scanned space boundary to be removed so that only the desired objects are extracted. The analysis of the level image shape can be used to categorize objects into groups. Extending the level image content with depth information can offer the categorization of 3D objects. Another possible direction is to create a simulator based on the measured data, which would generate training spaces with targets for neural network training.

Author Contributions

T.N.N. and D.-H.H. helped with the work on the signal processing and revised the text for integrity. P.C. developed the main concept of the level scanning algorithm and the level image; he is the writer of the original draft. L.R. shared ideas, helped with the measurements, and is the main supporter and organizer of this research. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by ERDF/ESF “Cooperation in Applied Research between the University of Pardubice and companies, in the Field of Positioning, Detection and Simulation Technology for Transport Systems (PosiTrans)” (No. CZ.02.1.01/0.0/0.0/17_049/0008394).

Acknowledgments

Thanks to Paul Charles Hooper for the final language revision.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Berger, M.; Tagliasacchi, A.; Seversky, L.M.; Alliez, P.; Levine, J.A.; Sharf, A.; Silva, C.T. State of the Art in Surface Reconstruction from Point Clouds. Eurographics 2014, 161–185.
  2. Ochmann, S.; Vock, R.; Klein, R. Automatic reconstruction of fully volumetric 3D building models from oriented point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 151, 251–262.
  3. Mura, C.; Mattausch, O.; Jaspe Villanueva, A.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32.
  4. Jain, S.; Argall, B. Estimation of Surface Geometries in Point Clouds for the Manipulation of Novel Household Objects. In Proceedings of the RSS 2017 Workshop on Spatial-Semantic Representations in Robotics, Cambridge, MA, USA, 12–16 July 2017; Robotics Science and Systems: Corvallis, OR, USA, 2017.
  5. Zhao, R.; Pang, M.; Liu, C.; Zhang, Y. Robust Normal Estimation for 3D LiDAR Point Clouds in Urban Environments. Sensors 2019, 19, 1248.
  6. El-Sayed, E.; Abdel-Kader, R.F.; Nashaat, H.; Marei, M. Plane detection in 3D point cloud using octree-balanced density down-sampling and iterative adaptive plane extraction. IET Image Process. 2018, 12, 1595–1605.
  7. Czerniawski, T.; Sankaran, B.; Nahangi, M.; Haas, C.; Leite, F. 6D DBSCAN-based segmentation of building point clouds for planar object classification. Autom. Construct. 2018, 88, 44–58.
  8. Xu, X.; Luo, M.; Tan, Z.; Zhang, M.; Yang, H. Plane segmentation and fitting method of point clouds based on improved density clustering algorithm for laser radar. Infrared Phys. Technol. 2019, 96, 133–140.
  9. Zhang, Y.; Lu, T.; Yang, J.; Kong, H. Split and Merge for Accurate Plane Segmentation in RGB-D Images. In Proceedings of the 4th IAPR Asian Conference on Pattern Recognition (ACPR), Nanjing, China, 26–29 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 49–54.
  10. Jin, Z.; Tillo, T.; Zou, W.; Zhao, Y.; Li, X. Robust Plane Detection Using Depth Information from a Consumer Depth Camera. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 447–460.
  11. Vera, E.; Lucio, D.; Fernandes, L.A.F.; Velho, L. Hough Transform for real-time plane detection in depth images. Pattern Recognit. Lett. 2018, 103, 8–15.
  12. Skulimowski, P.; Owczarek, M.; Strumiłło, P. Ground Plane Detection in 3D Scenes for an Arbitrary Camera Roll Rotation Through "V-Disparity" Representation. In Proceedings of the Federated Conference on Computer Science and Information Systems, Prague, Czech Republic, 3–6 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 669–674.
  13. Phalak, A.; Chen, Z.; Yi, D.; Gupta, K.; Badrinarayanan, V.; Rabinovich, A. DeepPerimeter: Indoor Boundary Estimation from Posed Monocular Sequences. arXiv 2019, arXiv:1904.11595.
  14. Guerrero, P.; Kleiman, Y.; Ovsjanikov, M.; Mitra, N.J. PCPNet: Learning Local Shape Properties from Raw Point Clouds. Comput. Graph. Forum 2018, 37, 75–85.
  15. Griffiths, D.; Boehm, J. Improving public data for building segmentation from Convolutional Neural Networks (CNNs) for fused airborne lidar and image data using active contours. ISPRS J. Photogramm. Remote Sens. 2019, 154, 70–83.
  16. Beksi, W.J.; Papanikolopoulos, N. A topology-based descriptor for 3D point cloud modeling: Theory and experiments. Image Vis. Comput. 2019, 88, 84–95.
  17. Liu, X.; Zhang, Y.; Ling, X.; Wan, Y.; Liu, L.; Li, Q. TopoLAP: Topology Recovery for Building Reconstruction by Deducing the Relationships between Linear and Planar Primitives. Remote Sens. 2019, 11, 1372.
  18. Lian, X.; Hu, H. Terrestrial laser scanning monitoring and spatial analysis of ground disaster in Gaoyang coal mine in Shanxi. Environ. Earth Sci. 2017, 76, 287.
  19. Drews, T.; Miernik, G.; Anders, K.; Höfle, B.; Profe, J.; Emmerich, A.; Bechstädt, T. Validation of fracture data recognition in rock masses by automated plane detection in 3D point clouds. Int. J. Rock Mech. Min. Sci. 2018, 109, 19–31.
  20. Nguyen, H.L.; Belton, D.; Helmholz, P. Planar surface detection for sparse and heterogeneous mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 151, 141–161.
  21. Jiang, Y.; Li, C.; Takeda, F.; Kramer, E.A.; Ashrafi, H.; Hunter, J. 3D point cloud data to quantitatively characterize size and shape of shrub crops. Horticult. Res. 2019, 6, 43.
  22. Hu, C.; Pan, Z.; Li, P. A 3D Point Cloud Filtering Method for Leaves Based on Manifold Distance and Normal Estimation. Remote Sens. 2019, 11, 198.
  23. Peebles, M.; Lim, S.H.; Streeter, L.; Duke, M.; Au, C.K. Ground Plane Segmentation of Time-of-Flight Images for Asparagus Harvesting. In Proceedings of the International Conference on Image and Vision Computing, Auckland, New Zealand, 19–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
  24. Pascu, I.S.; Dobre, A.C.; Badea, O.; Tănase, M.A. Estimating forest stand structure attributes from terrestrial laser scans. Sci. Total Environ. 2019, 691, 205–215.
  25. Del-Campo-Sanchez, A.; Moreno, M.; Ballesteros, R.; Hernandez-Lopez, D. Geometric Characterization of Vines from 3D Point Clouds Obtained with Laser Scanner Systems. Remote Sens. 2019, 11, 2365.
  26. Lubiw, A.; Maftuleac, D.; Owen, M. Shortest paths and convex hulls in 2D complexes with non-positive curvature. Comput. Geom. 2020, 89, 101626.
  27. Chmelar, P.; Beran, L.; Kudriavtseva, N. Projection of Point Cloud for Basic Object Detection. In Proceedings of the ELMAR 2014, Zadar, Croatia, 10–12 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4.
  28. Chmelar, P.; Beran, L.; Rejfek, L. The Depth Map Construction from a 3D Point Cloud. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2016; Volume 75.
  29. Chmelar, P.; Rejfek, L.; Beran, L.; Dobrovolny, M. A point cloud decomposition by the 3D level scanning for planes detection. Int. J. Adv. Appl. Sci. 2017, 4, 121–126.
  30. Chmelar, P.; Beran, L.; Chmelarova, N.; Rejfek, L. Advanced Plane Properties by Using Level Image. In Proceedings of the 28th International Conference Radioelektronika (RADIOELEKTRONIKA), Prague, Czech Republic, 19–20 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
  31. Chmelar, P.; Rejfek, L.; Beran, L.; Chmelarova, N.; Dobrovolny, M. Point Cloud Plane Visualization by Using Level Image. J. Fundam. Appl. Sci. 2018, 10, 547–560.
  32. Chmelarova, N.; Chmelar, P.; Rejfek, L. The Fine Plane Range Estimation from Point Cloud. In Proceedings of the 29th International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 16–18 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5.
  33. Beran, L.; Chmelar, P.; Dobrovolny, M. Navigation of Robotic Platform with Using Inertial Measurement Unit and Direct Cosine Matrix. In Proceedings of the ELMAR 2014, Zadar, Croatia, 10–12 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4.
  34. Beran, L.; Chmelar, P.; Rejfek, L. Navigation of Robotics Platform Using Monocular Visual Odometry. In Proceedings of the 25th International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 21–22 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 213–216.
  35. Chmelar, P.; Dobrovolny, M. The Fusion of Ultrasonic and Optical Measurement Devices for Autonomous Mapping. In Proceedings of the 23rd International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 16–17 April 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 292–296.
  36. Chmelar, P.; Beran, L.; Rejfek, L.; Chmelarova, N. The Point Cloud Visualisation for Rotary Optical Rangefinders. In Proceedings of the 27th International Conference Radioelektronika (RADIOELEKTRONIKA), Brno, Czech Republic, 19–20 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
  37. Chmelar, P.; Beran, L.; Rejfek, L.; Kudriavtseva, N. Effective Lens Distortion Correction for 3D Range Scanning Systems. In Proceedings of the 57th International Symposium ELMAR, Zadar, Croatia, 28–30 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 37–40.
  38. Chmelarova, N.; Chmelar, P.; Rejfek, L. The Automatic Undistortion Strength Estimation for Any Describable Optical Distortion. In Proceedings of the 29th International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 16–18 April 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–5.
  39. Chmelar, P.; Benkrid, A. Efficiency of HSV Over RGB Gaussian Mixture Model for Fire Detection. In Proceedings of the 24th International Conference Radioelektronika (RADIOELEKTRONIKA), Bratislava, Slovakia, 15–16 April 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4.
  40. Chmelar, P.; Beran, L.; Kudriavtseva, N. The Laser Color Detection for 3D Range Scanning Using Gaussian Mixture Model. In Proceedings of the 25th International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 21–22 April 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 248–253.
  41. Chmelar, P.; Dobrovolny, M. The Laser Line Detection for Autonomous Mapping Based on Color Segmentation. Int. J. Comput. Inf. Eng. 2013, 7, 19–24.
  42. Chmelarova, N.; Chmelar, P.; Beran, L.; Rejfek, L. Improving Precision of Laser Line Detection in 3D Range Scanning Systems. In Proceedings of the 26th International Conference Radioelektronika (RADIOELEKTRONIKA), Kosice, Slovakia, 19–20 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 207–212.
  43. Chmelar, P.; Dobrovolny, M. The Optical Measuring Device for the Autonomous Exploration and Mapping of unknown Environments. Perners Contacts Univ. Pardubice 2012, 7, 41–50.
  44. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–4.
Figure 1. Proposed point cloud processing block diagram.
Figure 2. Point cloud model.
Figure 3. Distribution of points in the point cloud model: (a) Input point cloud; (b) Histograms of all coordinate axes.
Figure 4. Removal outliers result: (a) Resultant point cloud; (b) New range of histograms.
Figure 5. Output point cloud from the registration process: (a) X–Y top view; (b) Histograms of all coordinate axes.
Figure 6. X-axis histogram: (a) Original histogram; (b) Result of Equation (5) applied on (a) with marked parts and their maxima.
Figure 7. Histograms of the X and Y axes for a different angle of rotation: (a) Original angle of 0 degrees; (b) Angle of 4.3 degrees.
Figure 8. Sum of local maxima of the X and Y axes histograms: (a) X-axis sum; (b) Y-axis sum.
Figure 9. Point cloud with corrected orientation: (a) Output point cloud; (b) New histograms of point density in all coordinate axes.
Figure 10. Planar surface detection in X-axis: (a) 2nd level selection in the X-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 11. Planar surface detection in Y-axis: (a) 1st level selection in the Y-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 12. Planar surface detection in Z-axis: (a) 1st level selection in the Z-axis; (b) Segmented point cloud; (c) Resulting level image; (d) Morphologically closed and opened image.
Figure 13. Input point cloud and its level images: (a) Point cloud of the 1st level in the Z-axis; (b) Segmented point cloud from (a); (c) Level image after the application of morphological operations; (d) Filled level image.
Figure 14. Shape perimeter determination: (a) Shape with the marked optimal perimeter path; (b) Way of searching neighbor pixels on borders of a shape.
Figure 15. Detected vertices: (a) Vertices of Figure 14a; (b) Vertices of Figure 13d.
Figure 16. Measurement illustration: (a) Developed 3D scanner; (b) Measurement frame.
Figure 17. 3D scanner precision: (a) Error graph; (b) Error in percent.
Figure 18. Input point cloud of the flat with marked scan positions: (a) Individual 3D scans; the green number in brackets corresponds to the scanning positions in (b); (b) Flat top view scheme with marked measurement positions; (c) Composed final point cloud.
Figure 19. Flat visualization with marked detected planes: red marks planes detected in the X-axis, green in the Y-axis, and blue in the Z-axis.
Figure 20. Planar surface area with a hole: (a) Level image; (b) Convex hull.
Figure 21. Planar surface visualization: (a) Point cloud with a hole; (b) Normal surface.
Figure 22. Fine planar surface range estimation: (a) Points histogram in the scanning dimension with marked thinner range; (b) Thinner range selection in comparison with Figure 10a,b; (c) Selected points by this range; (d) Illustration of the problematic case.
Figure 23. Point cloud segmentation by the level image: (a) Input point cloud; (b) Input level image; (c) Point cloud of the segmented planar surface.
Figure 24. Segmentation of different planar surfaces: (a) Illustration of two segmented planar surfaces with different area in Y-axis; (b) First level segmentation in X-axis; (c) Resultant point cloud.
Table 1. Detection parameters for all coordinate axes.

| Axis | lvlS  | lvlRS | qD   |
|------|-------|-------|------|
| X    | 0.1 m | 2.5   | 4 cm |
| Y    | 0.1 m | 2.5   | 4 cm |
| Z    | 0.2 m | 1.5   | 4 cm |
Table 2. Detected planes with the physical description in the X-axis.

| Idx | dL (m) | Pos (m)            | AE (m²) | PE (m) | σ (m)  | PC p.  | IL px. | V  | P. decr. (%) |
|-----|--------|--------------------|---------|--------|--------|--------|--------|----|--------------|
| 2   | 0.313  | 0.313, 3.935, 3    | 7.174   | 19.33  | 0.0008 | 7306   | 4420   | 92 | 98.74        |
| 4   | 3.521  | 3.52, 3.926, 3     | 9.862   | 18.177 | 0.0018 | 15,211 | 6148   | 71 | 99.53        |
| 5   | 3.687  | 3.687, 2.581, 2.88 | 11.645  | 21.41  | 0.0006 | 14,460 | 7166   | 92 | 99.36        |
| 14  | 7.162  | 7.162, 2.56, 2.96  | 13.278  | 20.833 | 0.0006 | 10,687 | 8112   | 87 | 99.19        |
| 16  | 7.371  | 7.371, 4.454, 2.8  | 8.754   | 11.96  | 0.001  | 16,025 | 5450   | 43 | 99.73        |
| 22  | 8.917  | 8.917, 0.291, 2.72 | 7.15    | 15.443 | 0.0017 | 4244   | 4528   | 1  | 98.91        |
Table 3. Detected planes with the physical description in the Y-axis.

| Idx | dL (m) | Pos (m)            | AE (m²) | PE (m) | σ (m)  | PC p.  | IL px. | V   | P. decr. (%) |
|-----|--------|--------------------|---------|--------|--------|--------|--------|-----|--------------|
| 1   | 0.081  | 3.926, 0.081, 2.88 | 13.334  | 15.226 | 0.0013 | 22,161 | 8270   | 82  | 99.63        |
| 7   | 2.647  | 3.653, 2.647, 2.92 | 9.7856  | 12.4   | 0.0008 | 13,456 | 6054   | 49  | 99.64        |
| 10  | 3.949  | 0.135, 3.949, 2.96 | 10.026  | 12.69  | 0.0007 | 14,802 | 6192   | 39  | 99.74        |
| 14  | 4.497  | 7.115, 4.497, 2.80 | 7.7008  | 16.89  | 0.0009 | 9847   | 4748   | 85  | 99.14        |
| 29  | 7.819  | 0.271, 7.819, 3.12 | 21.661  | 35.236 | 0.0018 | 26,529 | 13,840 | 162 | 99.39        |
Table 4. Detected planes with the physical description in the Z-axis.

| Idx | dL (m) | Pos (m)              | AE (m²) | PE (m) | σ (m) | PC p.  | IL px. | V   | P. decr. (%) |
|-----|--------|----------------------|---------|--------|-------|--------|--------|-----|--------------|
| 1   | 0.023  | 0.262, 0.018, 0.023  | 55.835  | 45.643 | 0.006 | 14,223 | 34,897 | 270 | 98.1         |
| 4   | 0.987  | 0.091, 3.936, 0.987  | 0.76    | 4.61   | 0.010 | 2673   | 475    | 36  | 98.65        |
| 6   | 0.942  | 4.415, 7.593, 0.942  | 0.739   | 4.537  | 0.014 | 1710   | 462    | 21  | 98.77        |
| 8   | 0.926  | 8.013, 5.763, 0.926  | 1.845   | 9.8    | 0.012 | 6629   | 1153   | 58  | 99.13        |
| 12  | 1.353  | 10.129, 5.853, 1.353 | 0.773   | 4.96   | 0.009 | 2555   | 483    | 22  | 99.14        |
| 21  | 2.956  | 0.102, 0.012, 2.672  | 59.246  | 39.319 | 0.019 | 16,173 | 37,029 | 162 | 99           |
Table 5. Physical area and perimeter, difference in percent (X-axis).

| Idx | AE (m²) | A phys (m²) | ΔA (%) | PE (m) | P phys (m) | ΔP (%) |
|-----|---------|-------------|--------|--------|------------|--------|
| 2   | 7.174   | 6.97        | 2.84   | 19.33  | 18.95      | 1.97   |
| 4   | 9.862   | 8.976       | 8.98   | 18.177 | 16.856     | 7.27   |
| 5   | 11.645  | 11.728      | 0.71   | 21.41  | 21.68      | 1.26   |
| 14  | 13.278  | 13.489      | 1.59   | 20.833 | 21.34      | 2.43   |
| 16  | 8.754   | 8.28        | 5.41   | 11.96  | 11.22      | 6.19   |
| 22  | 7.15    | 6.95        | 2.80   | 15.443 | 15.133     | 2.01   |
Table 6. Physical area and perimeter, difference in percent (Y-axis).

| Idx | AE (m²) | A phys (m²) | ΔA (%) | PE (m) | P phys (m) | ΔP (%) |
|-----|---------|-------------|--------|--------|------------|--------|
| 1   | 13.334  | 12.45       | 6.63   | 15.226 | 14.92      | 2.01   |
| 7   | 9.7856  | 10.26       | 4.85   | 12.4   | 12.76      | 2.90   |
| 10  | 10.026  | 10.12       | 0.94   | 12.69  | 12.78      | 0.71   |
| 14  | 7.7008  | 7.43        | 3.52   | 16.89  | 16.27      | 3.67   |
| 29  | 21.661  | 22.3        | 2.95   | 35.236 | 36.44      | 3.42   |
Table 7. Physical area and perimeter, difference in percent (Z-axis).

| Idx | AE (m²) | A phys (m²) | ΔA (%) | PE (m) | P phys (m) | ΔP (%) |
|-----|---------|-------------|--------|--------|------------|--------|
| 1   | 55.835  | 54.32       | 2.71   | 45.643 | 44.28      | 2.99   |
| 4   | 0.76    | 0.79        | 3.95   | 4.61   | 4.86       | 5.42   |
| 6   | 0.739   | 0.712       | 3.65   | 4.537  | 4.33       | 4.56   |
| 8   | 1.845   | 1.765       | 4.34   | 9.8    | 9.45       | 3.57   |
| 12  | 0.773   | 0.81        | 4.79   | 4.96   | 5.15       | 3.83   |
| 21  | 59.246  | 56.392      | 4.82   | 39.319 | 38.507     | 2.07   |
Table 8. Physical area and perimeter, difference in percent from convex hull (X-axis).

| Idx | ACH (m²) | A phys (m²) | ΔA (%) | PCH (m) | P phys (m) | ΔP (%) |
|-----|----------|-------------|--------|---------|------------|--------|
| 2   | 11.461   | 6.97        | 39.19  | 13.338  | 18.95      | 42.08  |
| 4   | 11.425   | 8.976       | 21.44  | 13.38   | 16.856     | 25.98  |
| 5   | 14.121   | 11.728      | 16.95  | 15.343  | 21.68      | 41.30  |
| 14  | 14.948   | 13.489      | 9.76   | 15.632  | 21.34      | 36.51  |
| 16  | 9.1922   | 8.28        | 9.92   | 11.919  | 11.22      | 5.86   |
| 22  | 8.3255   | 6.95        | 16.52  | 11.031  | 15.133     | 37.19  |
Table 9. Physical area and perimeter, difference in percent from convex hull (Y-axis).

| Idx | ACH (m²) | A phys (m²) | ΔA (%) | PCH (m) | P phys (m) | ΔP (%) |
|-----|----------|-------------|--------|---------|------------|--------|
| 1   | 13.494   | 12.45       | 7.74   | 14.693  | 14.92      | 1.54   |
| 7   | 9.8345   | 10.26       | 4.33   | 12.294  | 12.76      | 3.79   |
| 10  | 10.231   | 10.12       | 1.08   | 12.342  | 12.78      | 3.55   |
| 14  | 9.5068   | 7.43        | 21.85  | 11.878  | 16.27      | 36.98  |
| 29  | 29.797   | 22.3        | 25.16  | 25.862  | 36.44      | 40.90  |
Table 10. Physical area and perimeter, difference in percent from convex hull (Z-axis).

| Idx | ACH (m²) | A phys (m²) | ΔA (%) | PCH (m) | P phys (m) | ΔP (%) |
|-----|----------|-------------|--------|---------|------------|--------|
| 1   | 69.044   | 54.32       | 21.33  | 32.062  | 44.28      | 38.11  |
| 4   | 0.938    | 0.79        | 15.78  | 5.07    | 4.86       | 4.23   |
| 6   | 0.789    | 0.712       | 9.76   | 4.60    | 4.33       | 5.80   |
| 8   | 4.033    | 1.765       | 56.24  | 8.40    | 9.45       | 12.52  |
| 12  | 0.948    | 0.81        | 14.56  | 4.92    | 5.15       | 4.61   |
| 21  | 71.703   | 56.392      | 21.35  | 32.786  | 38.507     | 17.45  |
