Search Results (654)

Search Parameters:
Keywords = point cloud classification

52 pages, 7911 KiB  
Review
Techniques for Canopy to Organ Level Plant Feature Extraction via Remote and Proximal Sensing: A Survey and Experiments
by Prasad Nethala, Dugan Um, Neha Vemula, Oscar Fernandez Montero, Kiju Lee and Mahendra Bhandari
Remote Sens. 2024, 16(23), 4370; https://doi.org/10.3390/rs16234370 - 22 Nov 2024
Viewed by 399
Abstract
This paper presents an extensive review of techniques for plant feature extraction and segmentation, addressing the growing need for efficient plant phenotyping, which is increasingly recognized as a critical application for remote sensing in agriculture. As understanding and quantifying plant structures become essential for advancing precision agriculture and crop management, this survey explores a range of methodologies, both traditional and cutting-edge, for extracting features from plant images and point cloud data, as well as segmenting plant organs. The importance of accurate plant phenotyping in remote sensing is underscored, given its role in improving crop monitoring, yield prediction, and stress detection. The review highlights the challenges posed by complex plant morphologies and data noise, evaluating the performance of various techniques and emphasizing their strengths and limitations. The insights from this survey offer valuable guidance for researchers and practitioners in plant phenotyping, advancing the fields of plant science and agriculture. The experimental section focuses on three key tasks: 3D point cloud generation, 2D image-based feature extraction, and 3D shape classification, feature extraction, and segmentation. Comparative results are presented using collected plant data and several publicly available datasets, along with insightful observations and inspiring directions for future research.
Figures: (1) Trend for publications on feature extraction of plants using remote sensing; (2) Overview of remote sensing techniques applied to canopy-, plant-, and organ-level phenotyping; (3) Support Vector Machine (SVM) segmentation pipeline; (4) Comparison between PointNet and SVM segmentation; (5) Comparison of PointNet vs. SVM performance; (6) PointNet loss and accuracy plots for training and validation; (7) Unmanned aerial vehicle data processing [205]; (8) Three-dimensional point clouds by remote sensing; (9) Three-dimensional data collection system for tomato plants; (10) (a) Zoomed image with annotation, (b) F1–confidence curve; (11) Data distribution before preprocessing; (12) Data distribution after preprocessing; (13) Leaf and stem classification and segmentation results; (14) Training loss and training accuracy.
13 pages, 46604 KiB  
Article
Human Activity Recognition Based on Point Clouds from Millimeter-Wave Radar
by Seungchan Lim, Chaewoon Park, Seongjoo Lee and Yunho Jung
Appl. Sci. 2024, 14(22), 10764; https://doi.org/10.3390/app142210764 - 20 Nov 2024
Viewed by 307
Abstract
Human activity recognition (HAR) technology is closely tied to human safety and convenience, so it must infer human activity accurately. Furthermore, it must consume low power at all times when detecting human activity and be inexpensive to operate. For this purpose, a low-power and lightweight design of the HAR system is essential. In this paper, we propose a low-power and lightweight HAR system using point-cloud data collected by radar. The proposed HAR system uses a pillar feature encoder that converts 3D point-cloud data into a 2D image and a classification network based on depth-wise separable convolution for lightweighting. The proposed classification network achieved an accuracy of 95.54%, with 25.77 M multiply–accumulate operations and 22.28 K network parameters implemented in a 32 bit floating-point format. This network achieved 94.79% accuracy with 4 bit quantization, which reduced memory usage to 12.5% compared to existing 32 bit format networks. In addition, we implemented a lightweight HAR system optimized for low-power design on a heterogeneous computing platform, a Zynq UltraScale+ ZCU104 device, through hardware–software implementation. One frame of HAR took 2.43 ms of execution time on the device, and the system consumed 3.479 W of power when running.
Figures: (1) Data collection setup; (2) Configuration of dataset classes and their corresponding point clouds: (a) Stretching, (b) Standing, (c) Taking medicine, (d) Squatting, (e) Sitting chair, (f) Reading news, (g) Sitting floor, (h) Picking, (i) Crawl, (j) Lying wave hands, (k) Lying; (3) Overview of the proposed HAR system; (4) Proposed classification network; (5) Training and test loss and accuracy curves; (6) Confusion matrix; (7) Environment used for FPGA implementation and verification.
31 pages, 2257 KiB  
Article
Evaluation of Cluster Algorithms for Radar-Based Object Recognition in Autonomous and Assisted Driving
by Daniel Carvalho de Ramos, Lucas Reksua Ferreira, Max Mauro Dias Santos, Evandro Leonardo Silva Teixeira, Leopoldo Rideki Yoshioka, João Francisco Justo and Asad Waqar Malik
Sensors 2024, 24(22), 7219; https://doi.org/10.3390/s24227219 - 12 Nov 2024
Viewed by 668
Abstract
Perception systems for assisted driving and autonomy enable the identification and classification of objects through a concentration of sensors installed in vehicles, including Radio Detection and Ranging (RADAR), camera, Light Detection and Ranging (LIDAR), ultrasound, and HD maps. These sensors ensure a reliable and robust navigation system. Radar, in particular, operates with electromagnetic waves and remains effective under a variety of weather conditions. It uses point cloud technology to map the objects ahead of the vehicle, making it easy to group these points and associate them with real-world objects. Numerous clustering algorithms have been developed and can be integrated into radar systems to identify, investigate, and track objects. In this study, we evaluate several clustering algorithms to determine their suitability for application in automotive radar systems. Our analysis covers a variety of current methods and their mathematical formulation and presents a comparison table of these algorithms, including Hierarchical Clustering, Affinity Propagation, Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Mini-Batch K-Means, K-Means, Mean Shift, OPTICS, Spectral Clustering, and Gaussian Mixture. We found that K-Means, Mean Shift, and DBSCAN are particularly suitable for these applications, based on performance indicators that assess suitability and efficiency, with DBSCAN showing the best performance among them. Furthermore, our findings highlight that the choice of radar significantly impacts the effectiveness of these object recognition methods.
(This article belongs to the Section Radar Sensors)
Figures: (1) FMCW radar system block diagram; (2) Basic topology of a radar system; (3) Radar measurement range classification: Short Range Radar (SRR), Middle Range Radar (MRR), Long Range Radar (LRR); (4) Architecture of the automotive ECU-Radar with its components, technologies, and applications enabled for DA features; (5) Information processing of the automotive ECU-Radar; (6) Neighborhood of a point: each point in a cluster has a neighborhood of a certain radius containing at least a certain number of points; (7) Direct density reachability: object p is directly density-reachable from object q when p is in the ε-neighborhood of q and q is a core point; (8) Density reachability: object p is density-reachable from object q in a set D if there is a chain of objects such that p is directly density-reachable from q with respect to MinPts; (9) The parameters of DBSCAN; (10) Point clustering using the K-Means algorithm; (11) Mean Shift algorithm parameters; (12) OPTICS algorithm parameters; (13) Mixture of Gaussians: three Gaussian functions are illustrated (K = 3), each explaining the data in one of the three clusters; (14) Automotive radar process from point cloud to clustering, object detection, and recognition; (15) Process for applying clustering in a radar system; (16) Radar detecting pedestrians in Driving Scenario Design; (17) Radar detecting a cyclist in Driving Scenario Design; (18) Radar detecting a stopped vehicle in Driving Scenario Design; (19) Radar detecting a moving vehicle in Driving Scenario Design; (20) Radar detecting many objects in Driving Scenario Design; (21) DBSCAN recognizing many objects; (22) Comparison of clustering algorithms; (23) Performance of the algorithms in tests.
18 pages, 982 KiB  
Review
Remote Sensing and GIS in Natural Resource Management: Comparing Tools and Emphasizing the Importance of In-Situ Data
by Sanjeev Sharma, Justin O. Beslity, Lindsey Rustad, Lacy J. Shelby, Peter T. Manos, Puskar Khanal, Andrew B. Reinmann and Churamani Khanal
Remote Sens. 2024, 16(22), 4161; https://doi.org/10.3390/rs16224161 - 8 Nov 2024
Viewed by 1110
Abstract
Remote sensing (RS) and Geographic Information Systems (GISs) provide significant opportunities for monitoring and managing natural resources across various temporal, spectral, and spatial resolutions. There is a critical need for natural resource managers to understand the expanding capabilities of image sources, analysis techniques, and in situ validation methods. This article reviews key image analysis tools in natural resource management, highlighting their unique strengths across diverse applications such as agriculture, forestry, water resources, soil management, and natural hazard monitoring. Google Earth Engine (GEE), a cloud-based platform introduced in 2010, stands out for its vast geospatial data catalog and scalability, making it ideal for global-scale analysis and algorithm development. ENVI, known for advanced multi- and hyperspectral image processing, excels in vegetation monitoring, environmental analysis, and feature extraction. ERDAS IMAGINE specializes in radar data analysis and LiDAR processing, offering robust classification and terrain analysis capabilities. Global Mapper is recognized for its versatility, supporting over 300 data formats and excelling in 3D visualization and point cloud processing, especially in UAV applications. eCognition leverages object-based image analysis (OBIA) to enhance classification accuracy by grouping pixels into meaningful objects, making it effective in environmental monitoring and urban planning. Lastly, QGIS integrates these remote sensing tools with powerful spatial analysis functions, supporting decision-making in sustainable resource management. Together, these tools, when paired with in situ data, provide comprehensive solutions for managing and analyzing natural resources across scales.
Figures: (1) Articles published using different image analysis tools in different time intervals; (2) Map of sites identified and included in the database.
18 pages, 5160 KiB  
Article
DPFANet: Deep Point Feature Aggregation Network for Classification of Irregular Objects in LIDAR Point Clouds
by Shuming Zhang and Dali Xu
Electronics 2024, 13(22), 4355; https://doi.org/10.3390/electronics13224355 - 6 Nov 2024
Viewed by 411
Abstract
Point cloud data acquired by scanning with Light Detection and Ranging (LiDAR) devices typically contain irregular objects, such as trees, which lead to low classification accuracy in existing point cloud classification methods. Consequently, this paper proposes a deep point feature aggregation network (DPFANet) that integrates adaptive graph convolution and space-filling curve sampling modules to effectively address the feature extraction problem for irregular object point clouds. To refine the feature representation, we utilize the affinity matrix to quantify inter-channel relationships and adjust the input feature matrix accordingly, thereby improving the classification accuracy of the object point cloud. To validate the effectiveness of the proposed approach, a TreeNet dataset was created, comprising four categories of tree point clouds derived from publicly available UAV point cloud data. The experimental findings illustrate that the model attains a mean accuracy of 91.4% on the ModelNet40 dataset, comparable to prevailing state-of-the-art techniques. When applied to the more challenging TreeNet dataset, the model achieves a mean accuracy of 88.0%, surpassing existing state-of-the-art methods in all classification metrics. These results underscore the high potential of the model for point cloud classification of irregular objects.
(This article belongs to the Special Issue Point Cloud Data Processing and Applications)
Figures: (1) Network architecture of DPFANet: the AGConv module (Section 3.2) learns different feature correspondences by generating an adaptive convolutional kernel, the FEAF module (Section 3.3) performs point feature extraction and fusion, and the CAA module (Section 3.4) is a channel-based attention mechanism designed for fine-grained feature representation; SA and SA(MSG) denote the set abstraction modules proposed by PointNet, and the LBRD layer comprises linear, BatchNorm, ReLU, and dropout layers; (2) Processing of a neighborhood target point x_i in AGConv: from the feature input on edge e_ij, the adaptive kernel ê_ijm is generated and convolved with the spatial input Δx_ij; the edge features h_ij are constructed by merging all dimensions of h_ijm and aggregated to obtain the center point's output feature x_i'; (3) Serialized point cloud neighborhood mapping sampling strategy for Z-order curve ordering: points are sampled at a fixed spacing of 3, a correlation tensor evaluates the relationship between local and structural features, and the CRF layer combines a 2D convolution layer, ReLU layer, and pooling layer; (4) Channel affinity attention module: the CCS component computes the similarity matrix between channels, while the CAE component uses this matrix to evaluate the weight matrix; (5) Point cloud feature maps for the TreeNet dataset; (6) Confusion matrices of the detailed classification results of each algorithm on the test set; (7) Precision bars for each category of the eight algorithms computed from the confusion matrix; (8) Recall bar charts for each category of the eight algorithms computed from the confusion matrix.
25 pages, 24649 KiB  
Article
Power Corridor Safety Hazard Detection Based on Airborne 3D Laser Scanning Technology
by Shuo Wang, Zhigen Zhao and Hang Liu
ISPRS Int. J. Geo-Inf. 2024, 13(11), 392; https://doi.org/10.3390/ijgi13110392 - 1 Nov 2024
Viewed by 692
Abstract
Overhead transmission lines are widely deployed across both mountainous and plain areas and serve as a critical infrastructure for China's electric power industry. The rapid advancement of three-dimensional (3D) laser scanning technology, with airborne LiDAR at its core, enables high-precision and rapid scanning of the detection area, offering significant value in identifying safety hazards along transmission lines in complex environments. In this paper, five transmission lines spanning a total of 160 km in the mountainous area of Sanmenxia City, Henan Province, China, serve as the primary research objects. The location and elevation of each power tower pole are determined using an Unmanned Aerial Vehicle (UAV), which assesses the direction and elevation changes in the transmission lines. Moreover, point cloud data of the transmission line corridor are acquired and archived using a UAV equipped with LiDAR during variable-height flight. Processing of the 3D laser point cloud of the power corridor involves denoising, line repair, thinning, and classification. By calculating the clearance, horizontal, and vertical distances between the power towers, transmission lines, and other surface features, in conjunction with safety distance requirements, information about potential hazards can be generated. Detection results for these five transmission lines reveal 54 general hazards, 22 major hazards, and one emergency hazard of the vegetation type. Under the current working conditions the hazards are mainly of the vegetation type, while the crossing hazards are power lines and buildings. The detection results were submitted to the local power department in a timely manner, and relevant measures were taken to eliminate hazards and ensure the normal supply of power. This research provides a basis and an important reference for identifying potential safety hazards of transmission lines in Henan Province and other complex environments and for solving existing problems in the manual inspection of transmission lines.
Figures: (1) Research method flow chart; (2) Location of the study area in Henan Province; (3) Distribution of transmission lines; (4) Topographic map of the study area: (a) DEM topographic map, (b) topographic profile; (5) Spatial location information of a power tower; (6) Schematic diagram of the UAV flying at variable altitude; (7) Flight path of the UAV at variable altitude; (8) Schematic diagram of crossing an intersecting line with the "6 points crossing method"; (9) Schematic diagram of the statistical filtering algorithm; (10) Point cloud map of the power corridor before denoising; (11) Point cloud map of the power corridor after denoising; (12) Point cloud map of the scene environment after denoising; (13) Flowchart for point cloud repair of transmission lines; (14) Color point cloud data generation; (15) Point cloud classification results of the power corridor; (16) True color tinted point cloud image of the power corridor; (17) Elevation tinted point cloud image of the power corridor; (18) Category tinted point cloud image of the power corridor; (19) Feature tinted point cloud image of the power corridor; (20) Result map of point cloud distance measurement (3D denotes clearance distance, 2D horizontal distance, and H vertical distance); (21) Detection result of a transmission line crossing another line; (22) Detection result of a transmission line crossing a road; (23) Clearance distance of vegetation not meeting the provision (No. 010-011); (24) Clearance distance of vegetation not meeting the provision (No. 024-025); (25) Elevation system.
17 pages, 10616 KiB  
Article
Filtering-Assisted Airborne Point Cloud Semantic Segmentation for Transmission Lines
by Wanjing Yan, Weifeng Ma, Xiaodong Wu, Chong Wang, Jianpeng Zhang and Yuncheng Deng
Sensors 2024, 24(21), 7028; https://doi.org/10.3390/s24217028 - 31 Oct 2024
Viewed by 496
Abstract
Point cloud semantic segmentation is crucial for identifying and analyzing transmission lines. Because of the huge number of points, complex scenes, and unbalanced sample proportions, mainstream machine learning methods for point cloud segmentation cannot provide high efficiency and accuracy when extended to transmission line scenes. This paper proposes a filter-assisted airborne point cloud semantic segmentation method for transmission lines. First, a large number of ground points is identified by introducing the well-established cloth simulation filter, alleviating the impact of the imbalanced target object proportions on classifier performance. Multi-dimensional features are then defined, and the classification model is trained to achieve multi-element semantic segmentation of the transmission line scene. The experimental results and analysis indicate that the proposed filter-assisted algorithm can significantly improve the semantic segmentation performance of the transmission line point cloud, enhancing point cloud segmentation efficiency and accuracy by more than 25.46% and 3.15%, respectively. The filter-assisted point cloud semantic segmentation method reduces the volume of sample data, the number of sample classes, and the sample imbalance index in power line scenarios to a certain extent, thereby improving the classification accuracy of classifiers and reducing time consumption. This research holds significant theoretical reference value and engineering application potential for scene reconstruction and intelligent understanding of airborne laser point clouds of transmission lines.
(This article belongs to the Special Issue Advances in Mobile LiDAR Point Clouds)
Figures: (1) Experimental data; (2) Key steps of the proposed method (the black triangle in step (3) represents the point cloud of the best field, and the colors and triangles in step (4) represent point clouds identified as a certain category); (3) Filtered ground and non-ground points of the experimental data; (4) Classification results of vegetation before and after filter assistance by different machine learning methods based on Data1 (①, ②, and ③ indicate vegetation point clouds of areas in the scene); (5) Classification results of power lines before and after filter assistance by different machine learning methods (① and ② indicate transmission line point clouds of two areas in the scene); (6) Classification results of different machine learning methods before and after filtering (①, ②, ③, and ④ number the four pylons in the scene); (7) Efficiency evaluation of the different classifiers: (a) time consumption and reduction rates of the four groups of experiments, (b) overall accuracy and growth rates of the four experiments; (8) Site classification accuracy for the different experimental data: (a–d) precision and precision growth rate of power line, pylon, and vegetation in Data0, Data1, Data2, and Data3.
18 pages, 6936 KiB  
Article
A Calculating Method for the Height of Multi-Type Buildings Based on 3D Point Cloud
by Yuehuan Wang, Shuwen Yang, Ruixiong Kou, Zhuang Shi and Yikun Li
Buildings 2024, 14(11), 3412; https://doi.org/10.3390/buildings14113412 - 27 Oct 2024
Viewed by 557
Abstract
Building height is a critical variable in urban studies, and the automated acquisition of precise building heights is essential for intelligent construction, safety, and the sustainable development of cities. The building height is often approximated by the building's highest point. However, the calculation method for the height of various roof types differs according to building codes, making it challenging to accurately calculate the height of buildings with complex roof structures or multiple upper appendages. Consequently, this paper utilizes point clouds to propose an automated method for calculating building heights conforming to design codes. The model considers roof types and allows for fast, automated, and highly accurate building height estimation. First, roofs are extracted from the point cloud by combining normal vector density clustering with a region-growing algorithm. Second, combined with variational Bayes, a Gaussian mixture model is employed to segment the roof surfaces. Finally, roofs are classified based on slope characteristics, achieving the automatic acquisition of building heights for various roof types over large areas. Experiments were conducted on the Vaihingen and STPLS3D datasets. In the Vaihingen area, the maximum error, root-mean-square error (RMSE), and mean absolute error (MAE) of the measured heights are 1.92 cm, 1.18 cm, and 1.13 cm, respectively. In the STPLS3D area, these values are 1.79 cm, 0.82 cm, and 0.68 cm, respectively. The results demonstrate that the proposed method is reliable and effective, offering valuable data for the development, construction, and planning of three-dimensional (3D) cities.
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
Figures: (1) Flowchart of building height calculation (in Step 3, ridge lines are shown in red and eave lines in blue); (2) Experimental data: (a) study area in the Vaihingen dataset, (b) study area in the STPLS3D dataset; (3) Point clouds of different types of roofs; (4) Flow of the density clustering algorithm based on normal vector features; (5) Variational Bayesian Gaussian mixture model flow; (6) Slope and direction calculation; (7) Schematic diagram of slope direction; (8) Building roof types; (9) Roof extraction results; (10) Roof segmentation results; (11) Representation of building height calculation; (12) Calculation error of building heights in the Vaihingen experimental area; (13) Calculation error of building heights in the STPLS3D experimental area.
17 pages, 13097 KiB  
Article
Airborne LiDAR Point Cloud Classification Using Ensemble Learning for DEM Generation
by Ting-Shu Ciou, Chao-Hung Lin and Chi-Kuei Wang
Sensors 2024, 24(21), 6858; https://doi.org/10.3390/s24216858 - 25 Oct 2024
Viewed by 598
Abstract
Airborne laser scanning (ALS) point clouds have emerged as a predominant data source for the generation of digital elevation models (DEMs) in recent years. Traditionally, the generation of a DEM from ALS point clouds involves point cloud classification or ground point filtering to extract ground points and labor-intensive post-processing to correct misclassified ground points. Current deep learning techniques leverage the ability of geometric recognition for ground point classification. However, deep learning classifiers are generally trained using 3D point clouds with simple geometric terrain, which decreases the performance of model inference. In this study, a point-based deep learning model with boosting ensemble learning and a set of geometric features as the model inputs is proposed. With the ensemble learning strategy, this study integrates specialized ground point classifiers designed for different terrains to boost classification robustness and accuracy. In experiments, ALS point clouds containing various terrains were used to evaluate the feasibility of the proposed method. The results demonstrate that the proposed method can improve point cloud classification and the quality of the generated DEMs. The classification accuracy and F1 score are improved from 80.9% to 92.2% and from 82.2% to 94.2%, respectively, by using the proposed method. In addition, the DEM generation error, in terms of root-mean-square error (RMSE), is reduced from 0.318–1.362 m to 0.273–1.032 m by using the proposed ensemble learning.
(This article belongs to the Section Radar Sensors)
Figures: (1) Network structure of the DGCNN segmentation model; (2) The edge convolution operation; (3) Inconsistent intensity values in point cloud data; (4) Workflow of ground point determination using ensemble learning; (5) Spatial distribution of the training datasets (left: locations of the mountain, urban, and mixed datasets marked in gray, orange, and pink; right: examples of the datasets); (6) Results of the urban classifier M_urban applied to an urban dataset (top: ground truth; bottom: prediction; the point cloud profile of the red line is shown at right, with ground points marked in orange); (7) Results of the mountain classifier M_mountain applied to mountain data (top: ground truth; bottom: prediction; the point cloud profile of the red line is shown at right, with ground points marked in orange); (8) Comparison of the prediction results of three ground point extraction processes on the mixed dataset (S_i and E_i mark the start and end of each profile; profile locations marked in red, ground points in orange); (9) Classification result of the AHN dataset using the proposed method (profile locations marked in red, ground points in orange); (10) Comparison of ground points in the ground truth and prediction (profile locations marked in red); (11) Error maps of the generated DEM.
26 pages, 19393 KiB  
Article
ML Approaches for the Study of Significant Heritage Contexts: An Application on Coastal Landscapes in Sardinia
by Marco Cappellazzo, Giacomo Patrucco and Antonia Spanò
Heritage 2024, 7(10), 5521-5546; https://doi.org/10.3390/heritage7100261 - 5 Oct 2024
Viewed by 800
Abstract
Remote Sensing (RS) and Geographic Information Science (GIS) techniques are powerful tools for spatial data collection, analysis, management, and digitization within cultural heritage frameworks. Despite their capabilities, challenges remain in automating data semantic classification for conservation purposes. To address this, leveraging airborne Light Detection And Ranging (LiDAR) point clouds, complex spatial analyses, and automated data structuring is crucial for supporting heritage preservation and knowledge processes. In this context, the present contribution investigates the latest Artificial Intelligence (AI) technologies for automating the structuring of existing LiDAR data, focusing on the case study of Sardinia coastlines. Moreover, the study preliminarily addresses automation challenges from the perspective of mapping historical defensive landscapes. Since historical defensive architectures and landscapes are characterized by several challenging complexities, including their association with dark periods in recent history and their chronological stratification, their digitization and preservation are highly multidisciplinary issues. This research aims to improve data structuring automation in these large heritage contexts with a multiscale approach by applying Machine Learning (ML) techniques to low-scale 3D Airborne Laser Scanning (ALS) point clouds. The study thus develops a predictive Deep Learning Model (DLM) for the semantic segmentation of sparse point clouds (<10 pts/m2), adaptable to large landscape heritage contexts and heterogeneous data scales. Additionally, a preliminary investigation into object-detection methods has been conducted to map specific fortification artifacts efficiently.
Figures: (1) Defensive heritage artifacts: (a) Capo Boi tower, Sinnai (Cagliari), (b) Sant'Ignazio fortress, Calamosca (Cagliari), (c) position no. 5 of Stronghold V, Porto Ferro (Alghero), (d) position no. 2 (hypothesis) of Stronghold XI, Alghero; (2) Map of the location of the presented case studies, Sardinia (Italy); (3) Case study 1, located in the southern region of Sardinia and covering the whole extension of Cagliari town and hinterland; (4) Case study 2, located in the northwest region of Sardinia and covering the whole extension of Alghero town and hinterland; (5) Map of the data provided by the Sardinia Region, locating the stripes available from the two airborne LiDAR surveys, which were carried out with two different sensors (Table 1); (6) Methodological schema: multiple existing airborne LiDAR datasets (1) feed unsupervised segmentation (2a) and data fusion (2b) strategies that prepare reference data for training the DL classification model (2c), followed by a preliminary investigation of object detection strategies (3) for mapping defensive heritage with point cloud deep learning approaches [56]; (7) Sentinel-2 data fusion approaches: (a) vector water mask generation for areal segmentation, (b) band 8 NIR projection on the DSM mesh for scalar value interpolation; (8) Map of the location of the training dataset (case study 1, Cagliari): red tiles are the training set, green blocks the validation tiles; (9) Map of the test dataset areas A, B, C, and D (case study 2, Alghero); (10) Label generation workflow: from a point feature class to a 3D geometry; (11) Comparison of point cloud echo information, reflectance intensity, and newly calculated scalar fields for geometric and digital-number filtering in unsupervised segmentation: (a) number of returns, (b) intensity, (c) data-fusion near infrared from Sentinel-2 band 8 (784–899.5 nm), (d) λ3 eigenvalue (normals) calculated on a 2.5 m radius; (12) Predictive model training results, evaluated with the validation data from the training dataset of case study 1 (Cagliari); (13) Predictive model testing results, evaluated on test datasets A, B, C, and D of case study 2 (Alghero); (14) Bounding box generation for reference data, aimed at 3D deep learning for defensive heritage mapping (the three areas focus on bunker-class objects); (15) Model validation graph showing training and validation logarithmic loss over epochs: the training loss decreases while the validation loss remains flat.
21 pages, 9982 KiB  
Article
Classification and Mapping of Fuels in Mediterranean Forest Landscapes Using a UAV-LiDAR System and Integration Possibilities with Handheld Mobile Laser Scanner Systems
by Raúl Hoffrén, María Teresa Lamelas and Juan de la Riva
Remote Sens. 2024, 16(18), 3536; https://doi.org/10.3390/rs16183536 - 23 Sep 2024
Viewed by 803
Abstract
In this study, we evaluated the capability of an unmanned aerial vehicle with a LiDAR sensor (UAV-LiDAR) to classify and map fuel types based on the Prometheus classification in Mediterranean environments. UAV data were collected across 73 forest plots located in the NE of Spain. Furthermore, data collected with a handheld mobile laser scanner system (HMLS) in 43 of the 73 plots were used to assess the extent of improvement in fuel identification resulting from the fusion of UAV and HMLS data. UAV three-dimensional point clouds (average density: 452 points/m2) allowed the generation of LiDAR metrics and indices related to vegetation structure. Additionally, voxels of 5 cm3 derived from HMLS three-dimensional point clouds (average density: 63,148 points/m2) facilitated the calculation of fuel volume at each Prometheus fuel type height stratum (0.60, 2, and 4 m). Two different models based on three machine learning techniques (Random Forest, Linear Support Vector Machine, and Radial Support Vector Machine) were employed to classify the fuel types: one including only UAV variables and the other incorporating HMLS volume data. The most relevant UAV variables introduced into the classification models, according to Dunn's test, were the 99th and 10th percentiles of the vegetation heights, the standard deviation of the heights, the total returns above 4 m, and the LiDAR Height Diversity Index (LHDI). The best classification using only UAV data was achieved with Random Forest (overall accuracy = 81.28%), with confusion mainly found between similar shrub and tree fuel types. The integration of fuel volume from HMLS data yielded a substantial improvement, especially for Random Forest (overall accuracy = 95.05%). The mapping from the UAV model correctly estimated the fuel types in the total area of 55 plots and in at least part of the area of 59 plots. These results confirm that UAV-LiDAR systems are valid and operational tools for forest fuel classification and mapping and show how fusion with HMLS data refines the identification of fuel types, contributing to more effective management of forest ecosystems.
(This article belongs to the Section Environmental Remote Sensing)
Figures: (1) Graphical representation, from the UAV colored point cloud, of the structural differences among the Prometheus fuel types considered in this study (units in meters; the yellow, blue, and red dotted lines correspond to the 0.60, 2, and 4 m height thresholds of the Prometheus classification); (2) Spatial distribution of study sectors and forest plots where UAV and HMLS data were collected in the Autonomous Community of Aragón (NE Spain); (3) (a) DJI Matrice 300 RTK UAV unit with DJI Zenmuse L1 LiDAR sensor, (b) general flight scheme for UAV data collection on two forest plots; (4) Mapping of Prometheus fuel types from the best classification model (RF) of the UAV data at 20 m spatial resolution for shrub-type plots: plots "vi17" and "vi18" (FT2), plots "vi39" and "vi40" (FT3), plots "al03" and "al04" (FT4); (5) Mapping of Prometheus fuel types from the best classification model (RF) of the UAV data at 20 m spatial resolution for tree-type plots: plots "ay47" and "ay48" (FT5), plots "ay06" and "ay15" (FT6), plots "ay17" and "ay18" (FT7); (6) Spatial distribution of the Prometheus fuel types identified by the best UAV classification model (A) and of the variables introduced into the models (B–F) in the UAV flight area of plots "ay12", "ay49", and "ay50"; (7) Distribution of values by Prometheus fuel type for the five UAV variables and the HMLS variable introduced in the classification models.
23 pages, 7346 KiB  
Article
Automatic Measurement of Seed Geometric Parameters Using a Handheld Scanner
by Xia Huang, Fengbo Zhu, Xiqi Wang and Bo Zhang
Sensors 2024, 24(18), 6117; https://doi.org/10.3390/s24186117 - 22 Sep 2024
Viewed by 714
Abstract
Seed geometric parameters are important in yield trait scoring, quantitative trait locus analysis, and species recognition and classification. A novel method for the automatic measurement of three-dimensional seed phenotypes is proposed. First, a handheld three-dimensional (3D) laser scanner is employed to obtain seed point cloud data in batches. Second, a novel point cloud-based phenotyping method is proposed to obtain a single-seed 3D model and extract 33 phenotypes. It is implemented as an automatic pipeline comprising single-seed segmentation, pose normalization, point cloud completion by an ellipse fitting method, Poisson surface reconstruction, and automatic trait estimation. Finally, two statistical models (one using 11 size-related phenotypes and the other using 22 shape-related phenotypes) based on principal component analysis are built. A total of 3400 samples of eight kinds of seeds with different geometrical shapes were tested. Experiments show that: (1) a single-seed 3D model can be automatically obtained with a point cloud completion error of 0.017 mm; (2) 33 phenotypes can be automatically extracted with high correlation to manual measurements (correlation coefficient (R2) above 0.9981 for size-related phenotypes and above 0.8421 for shape-related phenotypes); and (3) two statistical models are successfully built to achieve seed shape description and quantification.
(This article belongs to the Section Smart Agriculture)
Figures: (1) Flowchart for automatic measurement of seed geometric parameters based on a handheld scanner; (2) Data acquisition: (a) scanning process, (b) real-time monitoring of the obtained point clouds with rendered visualization, (c) details of peanut scanning (red laser crosses are laser beams, white points are marker points), (d) one sample of the original peanut point cloud, (e) the obtained peanut point clouds (filtered to 50% for visualization); (3) Point cloud processing: the scanned peanut point cloud, the points preserved after RANSAC plane detection, the region-growing clusters, the single-seed segmentation result with details, and a single peanut seed point cloud before and after pose normalization; (4) Seed longitudinal profile (YOZ) contour fitted by (a) a B-spline curve, (b) a circle, and (c) a least-squares ellipse; (5) Point cloud completion: sliced point clouds, one sliced profile, the fitted ellipse, the filled complete profile, the reconstructed peanut point cloud in three views (red: incomplete scan, blue: points completed by the proposed ellipse-fitting method), the filtered single-seed point cloud, and the triangle mesh and surface of the 3D model built by Poisson surface reconstruction; (6) Visualization of size-related phenotypes of a peanut: the AABB box of the single-peanut 3D model and the three perpendicular principal-component profiles; (7) 3D point cloud completion results: original scanned point clouds, completed point clouds, and ground truth point clouds; (8) 3D point cloud completion errors; (9) Weight comparisons among the eight seed types in the size-related statistical model; (10) Weight comparisons among the eight seed types in the shape-related statistical model; (11) Ellipse fitting based on (a) the original scanned point cloud, (b) the point cloud with half the data missing, (c) the point cloud with three-fourths of the data missing; (12) Comparison of 3D models reconstructed by (a) screened Poisson surface reconstruction, (b) a symmetry-based 3D reconstruction method, (c) the proposed method based on the incomplete scanned point cloud, and (d) commercial software (Geomagic Studio) based on an artificially completed scanned point cloud; (A1) Scanning and segmentation results; (A2) Seed phenotype measurement results; (A3) Comparison between automatically and manually measured seed phenotypes.
21 pages, 2923 KiB  
Article
Multi-Scale Classification and Contrastive Regularization: Weakly Supervised Large-Scale 3D Point Cloud Semantic Segmentation
by Jingyi Wang, Jingyang He, Yu Liu, Chen Chen, Maojun Zhang and Hanlin Tan
Remote Sens. 2024, 16(17), 3319; https://doi.org/10.3390/rs16173319 - 7 Sep 2024
Viewed by 874
Abstract
With the proliferation of large-scale 3D point cloud datasets, the high cost of per-point annotation has spurred the development of weakly supervised semantic segmentation methods. Current popular research mainly focuses on single-scale classification, which fails to address the significant feature scale differences between background and objects in large scenes. Therefore, we propose MCCR (Multi-scale Classification and Contrastive Regularization), an end-to-end semantic segmentation framework for large-scale 3D scenes under weak supervision. MCCR first aggregates features and applies random downsampling to the input data. Then, it captures the local features of a random point based on multi-layer features and the input coordinates. These features are then fed into the network to obtain the initial and final prediction results, and MCCR iteratively trains the model using strategies such as contrastive learning. Notably, MCCR combines multi-scale classification with contrastive regularization to fully exploit multi-scale features and weakly labeled information. We investigate both point-level and local contrastive regularization to leverage point cloud augmentor and local semantic information and introduce a Decoupling Layer to guide the loss optimization in different spaces. Results on three popular large-scale datasets, S3DIS, SemanticKITTI and SensatUrban, demonstrate that our model achieves state-of-the-art (SOTA) performance on large-scale outdoor datasets with only 0.1% labeled points for supervision, while maintaining strong performance on indoor datasets.
Figure 1. Multi-scale classification. The point cloud is processed to obtain multi-scale features, which are extracted and analyzed at different scales, and then fed into classifiers. The results are fused to produce the final prediction.
Full article ">Figure 2
<p>The architecture of MCCR. The original point clouds are first processed through random mirroring, rotation and jittering to generate augmented point clouds. Then, both the original and augmented points are subjected to Local Feature Aggregation and Random Sampling modules to obtain multi-scale features. By randomly selecting a point, its local features are captured at different scales and interpolated accordingly. The interpolated local features are, on the one hand, used for multi-scale classification, and on the other hand, they are fused and fed into a series of MLPs to obtain initial prediction results. These initial predictions are then utilized for local and point-level contrastive regularization, and combined with the multi-scale classification outcomes to derive the final predictions. Note that the red dot represents the input point, while the yellow and reddish-brown dots represent the predictions based on the original data and augmented data, respectively.</p>
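The augmentation named in the caption above (random mirroring, rotation and jittering) is a standard point cloud recipe. A minimal NumPy sketch follows; the mirrored axis, rotation axis, angle range, jitter scale, and the helper name are assumptions for illustration, not the paper's exact settings.

# Minimal sketch of random mirroring, rotation about the vertical axis, and jittering.
import numpy as np

def augment(points, jitter_sigma=0.01, rng=np.random.default_rng()):
    """points: (N, 3) array of xyz coordinates."""
    pts = points.copy()
    if rng.random() < 0.5:                        # random mirroring about the x-axis
        pts[:, 0] = -pts[:, 0]
    theta = rng.uniform(0.0, 2.0 * np.pi)         # random rotation about z (up)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = pts @ rot.T
    pts += rng.normal(0.0, jitter_sigma, pts.shape)   # small per-point jitter
    return pts

augmented = augment(np.random.rand(1024, 3))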
Full article ">Figure 3
<p>Visualization results on the validation set (Sequence 08) of SemanticKITTI. Red circles highlight where we outperform SQN*.</p>
Full article ">Figure 4
<p>Visualization results on the validation set of SensatUrban. Raw point cloud, ground truth, our results and the baseline are presented separately, from left to right, and we use black circles to highlight where we outperform SQN*.</p>
Full article ">Figure 5
<p>Visualization results on the test set of S3DIS Area-5. Red circles highlight where we outperform SQN*.</p>
Full article ">Figure 6
<p>Visualization results on the test set of S3DIS Area-5. Each one includes raw point cloud with segmentation results displayed in the red rectangle for the ground truth, baseline results and ours, respectively. With 1% labeled points, the segmentation results of the baseline method deviate from the ground truth, but our proposed MCCR is able to obtain results consistent with the ground truth.</p>
Full article ">Figure 7
<p>Visualization results on the test set of S3DIS Area-5. Each one includes the raw point cloud with segmentation results displayed in the red rectangle for the ground truth, baseline results and ours, respectively. With 1% labeled points, the segmentation results of our proposed MCCR deviate from the ground truth, but the baseline method is able to obtain results consistent with the ground truth.</p>
Full article ">
14 pages, 1578 KiB  
Technical Note
Instantaneous Material Classification Using a Polarization-Diverse RMCW LIDAR
by Cibby Pulikkaseril, Duncan Ross, Alexander Tofini, Yannick K. Lize and Federico Collarte
Sensors 2024, 24(17), 5761; https://doi.org/10.3390/s24175761 - 4 Sep 2024
Viewed by 720
Abstract
Light detection and ranging (LIDAR) sensors using a polarization-diverse receiver are able to capture polarimetric information about the target under measurement. We demonstrate this capability using a silicon photonic receiver architecture that operates on a shot-by-shot basis, enabling polarization analysis nearly instantaneously in the point cloud, and then use these data to train a material classification neural network. Using this classifier, we show an accuracy of 85.4% for classifying plastic, wood, concrete, and coated aluminum. Full article
(This article belongs to the Section Radar Sensors)
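As a rough sketch of the kind of classifier the abstract describes, the PyTorch snippet below builds a single-hidden-layer network with 64 nodes (matching the "64-node classifier" referenced in Figure 6) over per-return polarimetric features. The choice of S1, S2, S3 and SNR as inputs and the training details are assumptions, not the authors' exact pipeline, and the data here are random stand-ins.

# Minimal sketch of a small material classifier: 4 assumed input features -> 64 hidden nodes -> 4 classes.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(4, 64),    # assumed inputs: S1, S2, S3, SNR
    nn.ReLU(),
    nn.Linear(64, 4),    # classes: plastic, wood, concrete, coated aluminum
)

# Toy training step on random data standing in for labelled LIDAR returns.
features = torch.randn(256, 4)
labels = torch.randint(0, 4, (256,))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(classifier(features), labels)
loss.backward()
optimizer.step()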
Figure 1. (a) Example of a received RMCW signal corrupted by white noise, and (b) the resulting correlation signal showing a peak at the delayed time of the received waveform.
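The correlation step illustrated in Figure 1 can be reproduced with a toy NumPy example: a pseudo-random binary code is delayed, buried in noise, and recovered by circular cross-correlation, with the lag of the correlation peak giving the round-trip delay. The code length, delay, and noise level are arbitrary assumptions chosen only for illustration.

# Minimal sketch of recovering a delay from a noisy RMCW-style return via correlation.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1024)          # transmitted pseudo-random code
true_delay = 137                                    # round-trip delay in samples
received = np.roll(code, true_delay) + 2.0 * rng.normal(size=code.size)  # noisy echo

# Circular cross-correlation via the frequency domain; the peak sits at the delay.
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
estimated_delay = int(np.argmax(corr))
print(estimated_delay)   # should recover the true delay at this SNR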
Full article ">Figure 2
<p>(<b>a</b>) Mach–Zehnder modulator (MZM) transfer function; operating point shown in red denotes the desired bias point to operate in phase-modulation (<math display="inline"><semantics> <mrow> <mi>V</mi> <mi>π</mi> </mrow> </semantics></math> is the half-wave voltage). (<b>b</b>) Input electrical modulation in the form of high and low voltages. (<b>c</b>) Output optical modulation in the form of intensity and phase information. (<b>d</b>) The transmit (TX) portion of the photonic chip receives its input light from an external laser, which is then distributed between the MZM and the path leading to the local oscillator using a Mach–Zehnder interferometer (MZI).</p>
Full article ">Figure 3
<p>(<b>a</b>) Single channel polarization-diverse in-phase/quadrature (IQ) receiver (XIp, XIn = x-polarization in-phase pair; YIp, YIn = y-polarization in-phase pair; XQp, XQn = x-polarization quadrature pair; YQp, YQn = y-polarization quadrature pair; PSR = polarization splitter/rotator; PD = photodiode; LO = local oscillator). (<b>b</b>) X/Y polarization constellation diagrams. Two example measurements are shown with their component breakdowns, which are labelled on the receiver.</p>
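For readers unfamiliar with how the four receiver outputs map to polarization space, the NumPy sketch below forms normalized Stokes parameters (the S1, S2, S3 later plotted in Figure 9) from XI, XQ, YI and YQ samples. Sign conventions and normalization vary between texts, and the helper name is made up, so treat this as an illustrative calculation rather than the authors' processing chain.

# Minimal sketch: normalized Stokes parameters from polarization-diverse IQ samples.
import numpy as np

def stokes(xi, xq, yi, yq):
    ex = xi + 1j * xq                      # complex field, x-polarization
    ey = yi + 1j * yq                      # complex field, y-polarization
    s0 = np.abs(ex) ** 2 + np.abs(ey) ** 2
    s1 = np.abs(ex) ** 2 - np.abs(ey) ** 2
    s2 = 2.0 * np.real(ex * np.conj(ey))
    s3 = 2.0 * np.imag(ex * np.conj(ey))   # sign depends on convention
    return s1 / s0, s2 / s0, s3 / s0       # normalized S1, S2, S3

print(stokes(1.0, 0.0, 0.0, 0.0))  # pure x-polarized light -> (1.0, 0.0, 0.0)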
Full article ">Figure 4
<p>Experimental setup: we use a Tx BOSA for modulating the RMCW code on the transmit path, and an RX BOSA for performing the polarization-diverse IQ demodulation, using unmodulated light as the local oscillator (DBR: distributed Bragg reflector, EDFA: erbium-doped fiber amplifier, PC: polarization controller, PBS: polarization beamsplitter, Rx: received optical path, XI, XQ, YI, YQ: the in-phase and quadrature portions of the x- and y-polarization.).</p>
Full article ">Figure 5
<p>Materials used for experimental validation: (<b>a</b>) coated aluminum, (<b>b</b>) concrete, (<b>c</b>) black plastic, (<b>d</b>) engineered wood.</p>
Full article ">Figure 6
<p>Confusion matrix for the 64-node classifier on four different materials.</p>
Full article ">Figure 7
<p>Distribution of SNR values for all materials in the dataset.</p>
Full article ">Figure 8
<p>Classification accuracy as number of nodes in hidden layer increase.</p>
Full article ">Figure 9
<p>Scatter plots to demonstrate the clustering of measurements in polarization space, and with SNR. Points are colored by material, with concrete (red), coated metal (green), plastic (orange), and engineered wood (blue). (<b>a</b>) 3D visualization of S1, S2, and S3. (<b>b</b>) 2D scatter plot of SNR and S2.</p>
Full article ">
17 pages, 11761 KiB  
Article
Prediction of Useful Eggplant Seedling Transplants Using Multi-View Images
by Xiangyang Yuan, Jingyan Liu, Huanyue Wang, Yunfei Zhang, Ruitao Tian and Xiaofei Fan
Agronomy 2024, 14(9), 2016; https://doi.org/10.3390/agronomy14092016 - 4 Sep 2024
Viewed by 543
Abstract
Traditional deep learning methods employing 2D images can only classify healthy and unhealthy seedlings; consequently, this study proposes a method by which to further classify healthy seedlings into primary seedlings and secondary seedlings and finally to differentiate three classes of seedling through a 3D point cloud for the detection of useful eggplant seedling transplants. Initially, RGB images of three types of substrate-cultivated eggplant seedlings (primary, secondary, and unhealthy) were collected, and healthy and unhealthy seedlings were classified using ResNet50, VGG16, and MobileNetV2. Subsequently, a 3D point cloud was generated for the three seedling types, and a series of filtering processes (fast Euclidean clustering, point cloud filtering, and voxel filtering) were employed to remove noise. Parameters (number of leaves, plant height, and stem diameter) extracted from the point cloud were found to be highly correlated with the manually measured values. The box plot shows that the primary and secondary seedlings were clearly differentiated for the extracted parameters. The point clouds of the three seedling types were ultimately classified directly using the 3D classification models PointNet++, dynamic graph convolutional neural network (DGCNN), and PointConv, in addition to a point cloud completion operation for plants with missing leaves. The PointConv model demonstrated the best performance, with an average accuracy, precision, and recall of 95.83, 95.83, and 95.88%, respectively, and a model loss of 0.01. This method employs spatial feature information to analyse different seedling categories more effectively than two-dimensional (2D) image classification and three-dimensional (3D) feature extraction methods. However, there is a paucity of studies applying 3D classification methods to predict useful eggplant seedling transplants. Consequently, this method has the potential to identify different eggplant seedling types with high accuracy. Furthermore, it enables the quality inspection of seedlings during agricultural production. Full article
(This article belongs to the Section Precision and Digital Agriculture)
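The filtering chain in the abstract ends with voxel filtering; a minimal, dependency-free sketch of that stage is given below, binning points into a regular grid and replacing each occupied voxel with the centroid of its points. The voxel size, the use of centroids, and the helper name are illustrative assumptions, and the clustering and colour-threshold stages are omitted.

# Minimal sketch of voxel filtering: one centroid per occupied voxel.
import numpy as np

def voxel_filter(points, voxel_size=0.005):
    """points: (N, 3) xyz array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)      # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)  # group points by voxel
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)                            # per-voxel coordinate sums
    counts = np.bincount(inverse, minlength=n_voxels)[:, None]
    return sums / counts                                        # per-voxel centroids

downsampled = voxel_filter(np.random.rand(5000, 3), voxel_size=0.05)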
Figure 1. Flow diagram of the information processing process.
Full article ">Figure 2
<p>Schematic diagram of an image acquisition device comprising (<b>A</b>) computer (to process image data and for 3D reconstruction), (<b>B</b>) camera (image collection), (<b>C</b>) rotation platform (rotating while carrying seedlings), and (<b>D</b>) eggplant seedling (experimental materials).</p>
Full article ">Figure 3
<p>Point cloud reconstruction and preprocessing results: <b>A</b>(<b>1</b>)–<b>D</b>(<b>1</b>) shows the primary seedlings, <b>A</b>(<b>2</b>)–<b>D</b>(<b>2</b>) shows the secondary seedlings, and <b>A</b>(<b>3</b>)–<b>D</b>(<b>3</b>) shows the unhealthy seedlings; <b>A</b>(<b>1</b>)–<b>A</b>(<b>3</b>) shows the point cloud plants after 3D reconstruction; <b>B</b>(<b>1</b>)–<b>B</b>(<b>3</b>) shows the results of fast Euclidean clustering; <b>C</b>(<b>1</b>)–<b>C</b>(<b>3</b>) shows the results based on colour threshold filtering; <b>D</b>(<b>1</b>)–<b>D</b>(<b>3</b>) shows the results of voxel filtering.</p>
Full article ">Figure 4
<p>Completion of missing point clouds: (<b>A</b>) shows the plant containing missing leaves; (<b>B</b>) shows the segmented incomplete leaves; (<b>C</b>) shows the missing leaves with the RGB data removed; (<b>D</b>) shows the purple missing section generated by PF-Net prediction; (<b>E</b>) shows the completed leaves; (<b>F</b>) shows the entire plant after point cloud completion.</p>
Full article ">Figure 5
<p>The fitting performance of phenotype extraction values based on the 3D point cloud and manual measurements: (<b>A</b>) primary seedling plant height (actual vs. predicted); (<b>B</b>) primary seedling stem diameter (actual vs. predicted); (<b>C</b>) primary seedling number of leaves (random deviation obtained in the <span class="html-italic">x</span>- and <span class="html-italic">y</span>-axis directions for the same values) (actual vs. predicted); (<b>D</b>) secondary seedling plant height (actual vs. predicted); (<b>E</b>) secondary seedling stem diameter (actual vs. predicted); (<b>F</b>) secondary seedling number of leaves (random deviation obtained in the <span class="html-italic">x</span>- and <span class="html-italic">y</span>-axis directions for the same values) (actual vs. predicted).</p>
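A comparison like the one in Figure 5 is usually summarized with a coefficient of determination (R²) and an RMSE between manually measured and point-cloud-derived values. The short NumPy sketch below computes both; the numbers are fabricated toy values used purely to show the calculation, not results from the study.

# Minimal sketch of R^2 and RMSE between measured and extracted phenotypes (toy data).
import numpy as np

measured = np.array([10.2, 12.5, 9.8, 14.1, 11.7])    # e.g. plant height in cm (made-up values)
extracted = np.array([10.0, 12.9, 9.5, 13.8, 12.0])   # made-up point-cloud estimates

residual = measured - extracted
ss_res = np.sum(residual ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
rmse = np.sqrt(np.mean(residual ** 2))
print(round(r_squared, 3), round(rmse, 3))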
Full article ">Figure 6
<p>Box plot data distribution for each primary and secondary seedling parameter: (<b>A</b>) data distribution on the primary and secondary seedling number of leaves; (<b>B</b>) data distribution on the primary and secondary seedling plant heights; (<b>C</b>) data distribution on the primary and secondary seedling stem diameters.</p>
Full article ">Figure 7
<p>Model convergence in testing: (<b>A</b>) accuracy variation comparison; (<b>B</b>) loss variation comparison.</p>
Full article ">Figure 8
<p>Confusion matrix of different models.</p>
Full article ">