Search Results (9,392)

Search Parameters:
Keywords = feature points

28 pages, 20234 KiB  
Article
PLM-SLAM: Enhanced Visual SLAM for Mobile Robots in Indoor Dynamic Scenes Leveraging Point-Line Features and Manhattan World Model
by Jiale Liu and Jingwen Luo
Electronics 2024, 13(23), 4592; https://doi.org/10.3390/electronics13234592 - 21 Nov 2024
Abstract
This paper proposes an enhanced visual simultaneous localization and mapping (vSLAM) algorithm tailored for mobile robots operating in indoor dynamic scenes. By incorporating point-line features and leveraging the Manhattan world model, the proposed PLM-SLAM framework significantly improves localization accuracy and map consistency. This algorithm optimizes the line features detected by the Line Segment Detector (LSD) through merging and pruning strategies, ensuring real-time performance. Subsequently, dynamic point-line features are rejected based on Lucas–Kanade (LK) optical flow, geometric constraints, and depth information, minimizing the impact of dynamic objects. The Manhattan world model is then utilized to reduce rotational estimation errors and optimize pose estimation. High-precision line feature matching and loop closure detection mechanisms further enhance the robustness and accuracy of the system. Experimental results demonstrate the superior performance of PLM-SLAM, particularly in high-dynamic indoor environments, outperforming existing state-of-the-art methods. Full article
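
To make the dynamic-feature rejection step above concrete, here is a minimal Python/OpenCV sketch that tracks feature points with Lucas–Kanade optical flow and discards points whose depth changes sharply between frames. It is an illustrative approximation of the strategy summarized in the abstract, not the authors' implementation; the function name and threshold value are assumptions.

```python
# Hypothetical sketch: keep only feature points that look static, combining LK optical flow
# tracking with a simple depth-consistency check (threshold is an assumed value).
import cv2
import numpy as np

def keep_static_points(prev_gray, curr_gray, prev_pts, prev_depth, curr_depth,
                       depth_diff_thresh=0.15):
    """Track points with LK optical flow and keep only those whose depth stays consistent."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.astype(np.float32), None)
    static = []
    for p0, p1, ok in zip(prev_pts, curr_pts, status.ravel()):
        if not ok:
            continue
        x0, y0 = np.round(p0.ravel()).astype(int)
        x1, y1 = np.round(p1.ravel()).astype(int)
        if not (0 <= y1 < curr_depth.shape[0] and 0 <= x1 < curr_depth.shape[1]):
            continue
        # A large depth jump for a tracked point suggests it lies on a moving object.
        if abs(float(curr_depth[y1, x1]) - float(prev_depth[y0, x0])) > depth_diff_thresh:
            continue
        static.append(p1.ravel())
    return np.array(static)
```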
19 pages, 5999 KiB  
Article
Automated Pipeline for Robust Cat Activity Detection Based on Deep Learning and Wearable Sensor Data
by Md Ariful Islam Mozumder, Tagne Poupi Theodore Armand, Rashadul Islam Sumon, Shah Muhammad Imtiyaj Uddin and Hee-Cheol Kim
Sensors 2024, 24(23), 7436; https://doi.org/10.3390/s24237436 - 21 Nov 2024
Abstract
The health, safety, and well-being of household pets such as cats has become a challenging task in previous years. To estimate a cat’s behavior, objective observations of both the frequency and variability of specific behavior traits are required, which might be difficult to come by in a cat’s ordinary life. There is very little research on cat activity and cat disease analysis based on real-time data. Although previous studies have made progress, several key questions still need addressing: What types of data are best suited for accurately detecting activity patterns? Where should sensors be strategically placed to ensure precise data collection, and how can the system be effectively automated for seamless operation? This study addresses these questions by pointing out whether the cat should be equipped with a sensor, and how the activity detection system can be automated. Magnetic, motion, vision, audio, and location sensors are among the sensors used in the machine learning experiment. In this study, we collect data using three types of differentiable and realistic wearable sensors, namely, an accelerometer, a gyroscope, and a magnetometer. Therefore, this study aims to employ cat activity detection techniques to combine data from acceleration, motion, and magnetic sensors, such as accelerometers, gyroscopes, and magnetometers, respectively, to recognize routine cat activity. Data collecting, data processing, data fusion, and artificial intelligence approaches are all part of the system established in this study. We focus on One-Dimensional Convolutional Neural Networks (1D-CNNs) in our research, to recognize cat activity modeling for detection and classification. Such 1D-CNNs have recently emerged as a cutting-edge approach for signal processing-based systems such as sensor-based pet and human health monitoring systems, anomaly identification in manufacturing, and in other areas. Our study culminates in the development of an automated system for robust pet (cat) activity analysis using artificial intelligence techniques, featuring a 1D-CNN-based approach. In this experimental research, the 1D-CNN approach is evaluated using training and validation sets. The approach achieved a satisfactory accuracy of 98.9% while detecting the activity useful for cat well-being. Full article
(This article belongs to the Special Issue Advances in Sensing-Based Animal Biomechanics)
Figures:
Figure 1: Housing, monitoring, and husbandry environment of the cats.
Figure 2: Wearable sensors with internal features.
Figure 3: Data collection procedure. (A) Server room for real-time monitoring and storing data, (B) sensor device, (C) sensor device on the cat’s neck, (D) cat living space, including surveillance cameras, (E) transferring sensor data to the server.
Figure 4: Data distribution of activity detection.
Figure 5: Samples of bio-signals from the wearable devices on the cats.
Figure 6: The deep learning model architecture of our experimental research work.
Figure 7: Classification of the five activities.
Figure 8: The complete process of the automated pipeline.
Figure 9: Confusion matrix without normalization using the test dataset.
Figure 10: Confusion matrix with normalization using the test dataset.
Figure 11: Accuracy graph for the validation and training.
Figure 12: Loss graph for the validation and training.
Figure 13: Receiver operating characteristic (ROC) curves and AUCs for each class.
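
As a rough illustration of the 1D-CNN classifier described in the abstract, the following PyTorch sketch consumes windows of tri-axial accelerometer, gyroscope, and magnetometer signals and outputs activity logits. The window length, channel count, and layer sizes are assumptions, not the paper's architecture.

```python
# A minimal 1D-CNN sketch for classifying sensor windows into activity classes.
import torch
import torch.nn as nn

class Activity1DCNN(nn.Module):
    def __init__(self, in_channels=9, n_classes=5, window_len=128):  # assumed sizes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (window_len // 4), 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x))

# Example: 8 windows, 9 channels (3 sensors x 3 axes), 128 samples each -> (8, 5) logits.
logits = Activity1DCNN()(torch.randn(8, 9, 128))
```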
24 pages, 4679 KiB  
Article
The Coral Reefs and Fishes of St. Brandon, Indian Ocean Archipelago: Implications for Sustainable Fisheries
by Melanie Ricot, Sruti Jeetun, Shakeel Yavan Jogee, Deepeeka Kaullysing, Nawsheen Taleb-Hossenkhan, Maina Joseph Mbui, Beatriz Estela Casareto, Yoshimi Suzuki, Diah Permata Wijayanti and Ranjeet Bhagooli
Diversity 2024, 16(12), 710; https://doi.org/10.3390/d16120710 - 21 Nov 2024
Abstract
Understanding the factors influencing the variability in the composition of fish assemblages is essential for bolstering the resilience of coral reef ecosystems, effective coral reef management and maintaining sustainable fisheries. The benthic composition and reef fish assemblages at eight sites at the poorly studied St. Brandon, also known as a bank fisheries area in the Indian Ocean, were assessed to discern distribution patterns, including differences between channel (Passe Grand Capitaine, Passe Ile Longue-Canal Coco and Passe La Cayane) and non-channel (Chaloupe, Anchor Points 1 and 2, Bain des Dames, Pearl Island) sites and fisheries sustainability. The benthic composition exhibited clusters, revealing the distinct separation of Chaloupe which predominantly featured sand (75.26%) interspersed with sporadic coral patches characterized by live and dead corals and rubble. The three channel sites composed a cluster. Coral species across eight families were identified, with significant variability (p < 0.05) observed in their benthic cover, particularly live coral cover (LCC). Fish density and diversity analyses unveiled 58 fish species from 12 families, with no statistically significant disparity in density among sites. Total fish biomass (TFB) and target fish biomass (TB) ranged from 138.02 ± 65.04 to 4110.16 ± 3048.70 kg/ha and from 28.31 ± 24.52 to 3851.27 ± 2753.18 kg/ha, respectively. TFB and TB differed significantly (p < 0.05) among sites irrespective of channel and non-channel sites, with Pearl Island recording the highest biomass. TFB and TB recorded at five out of the eight surveyed sites exceeded the mean biomass benchmark (B0) for the Western Indian Ocean, set at 1150.00 and 560.00 kg/ha for TFB and TB, respectively. Functional group analysis unveiled six discrete groups influencing TFB, with scrapers being the most dominant. This study presents the first report on fish biomass surveys in St. Brandon, highlighting a case for sustainable fisheries in the waters of the Republic of Mauritius. Full article
(This article belongs to the Special Issue Biodiversity and Conservation of Coral Reefs)
Figures:
Figure 1: (A) Study sites around St Brandon. (B) Location of St. Brandon in the Western Indian Ocean.
Figure 2: Dendrogram classification of the eight sites around St Brandon. The similarity index was measured based on the benthic composition at each study site.
Figure 3: The occurrence and distribution of (A) benthic cover. The insert provides an enhanced view of the overlapping pie charts at Anchor Points 1 and 2 and (B) coral families at eight studied sites around St Brandon.
Figure 4: Fish density (number of individuals per hectare) at the study sites (bars represent mean ± standard deviation). The red dots represent the Shannon–Weiner diversity index value for the surveyed fish.
Figure 5: The total fish biomass and target biomass at 8 surveyed sites around St Brandon (bars represent mean ± standard deviation).
Figure 6: The occurrence and distribution of (A) different fish family and (B) different functional groups at eight sites around St Brandon. The insert provides an enhanced view of the overlapping pie charts at Anchor Points 1 and 2.
Figure 7: Canonical correspondence analysis between the benthic cover and the different fish functional groups for St Brandon. The substrate variables are live coral, dead coral, rubble, algae, crustose coralline algae (CCA) and sand, and are labeled in red. The sites are shown in green. The different functional groups are labeled in black.
Figure 8: Principal component analysis biplot conducted employing data from fish functional groups, fish families, benthic cover factors and hard coral families for channel and non-channel habitats. Ellipses were applied to group the different components. The two primary components account for 21.10% and 15.00% of the variance, respectively. The components associated with channel and non-channel sites are indicated in yellow and blue, respectively. Fish families are labeled in red, fish functional groups in green, benthic cover factors in purple and coral families in black.
Figure 9: Variations in the TFB across countries within the WIO and St. Brandon. Chagos = Chagos Archipelago, Mal = Maldives, Ken = Kenya, May = Mayotte, Moz = Mozambique, Tan = Tanzania, Mau = Mauritius, Mad = Madagascar, and Sey = the Seychelles [60,63,86].
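
The fish diversity values plotted in Figure 4 are Shannon–Wiener indices; a small Python helper makes the formula explicit. The species counts in the example are invented.

```python
# Illustrative helper for the Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i)).
import math

def shannon_wiener(counts):
    """Diversity index over species abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

print(shannon_wiener([30, 12, 5, 2]))  # diversity of a hypothetical four-species sample
```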
22 pages, 6594 KiB  
Article
Rice Growth-Stage Recognition Based on Improved YOLOv8 with UAV Imagery
by Wenxi Cai, Kunbiao Lu, Mengtao Fan, Changjiang Liu, Wenjie Huang, Jiaju Chen, Zaoming Wu, Chudong Xu, Xu Ma and Suiyan Tan
Agronomy 2024, 14(12), 2751; https://doi.org/10.3390/agronomy14122751 - 21 Nov 2024
Viewed by 135
Abstract
To optimize rice yield and enhance quality through targeted field management at each growth stage, rapid and accurate identification of rice growth stages is crucial. This study presents the Mobilenetv3-YOLOv8 rice growth-stage recognition model, designed for high efficiency and accuracy using Unmanned Aerial Vehicle (UAV) imagery. A UAV captured images of rice fields across five distinct growth stages from two altitudes (3 m and 20 m) across two independent field experiments. These images were processed to create training, validation, and test datasets for model development. Mobilenetv3 was introduced to replace the standard YOLOv8 backbone, providing robust small-scale feature extraction through multi-scale feature fusion. Additionally, the Coordinate Attention (CA) mechanism was integrated into YOLOv8’s backbone, outperforming the Convolutional Block Attention Module (CBAM) by enhancing position-sensitive information capture and focusing on crucial pixel areas. Compared to the original YOLOv8, the enhanced Mobilenetv3-YOLOv8 model improved rice growth-stage identification accuracy and reduced the computational load. With an input image size of 400 × 400 pixels and the CA implemented in the second and third backbone layers, the model achieved its best performance, reaching 84.00% mAP and 84.08% recall. The optimized model achieved parameters and Giga Floating Point Operations (GFLOPs) of 6.60M and 0.9, respectively, with precision values for tillering, jointing, booting, heading, and filling stages of 94.88%, 93.36%, 67.85%, 78.31%, and 85.46%, respectively. The experimental results revealed that the optimal Mobilenetv3-YOLOv8 shows excellent performance and has potential for deployment in edge computing devices and practical applications for in-field rice growth-stage recognition in the future. Full article
Figures:
Figure 1: A schematic diagram of the proposed method.
Figure 2: The study site and rice field experiment designs. (a) Spring rice field experiment, EXP.1. (b) Autumn rice field experiment, EXP.2.
Figure 3: Unmanned Aerial Vehicle photography.
Figure 4: Diagram of YOLOv8 model. Note: The color block in the figure simulates the process of YOLOv8 image input: the image enters the backbone network for feature extraction, passes through the standard convolution and the new C2F convolution structure, and finally enters the image classification function module.
Figure 5: Diagram of Mobilenetv3-YOLOv8 model. Note: Like Figure 4, the backbone part of YOLOv8 in the figure is replaced by Mobilenetv3: Conv2d is a two-dimensional convolution layer. Bneck is a special bottleneck structure of Mobilenetv3.
Figure 6: Overview diagram of CBAM mechanism.
Figure 7: Overview diagram of CA mechanism.
Figure 8: Recognition effect of Mobilenetv3-YOLOv8 model on images of different input sizes.
Figure 9: Performance comparison of different Mobilenet networks.
Figure 10: Performance comparison of different models.
Figure 11: Location map of CA mechanism. Notes: Subfigures (a–e) are five different discussions on adding one layer of attention mechanism, two layers, three layers, four layers and five layers to the backbone network. The blue color block is the backbone network work layer. The first layer adopts a bottleneck, which includes a 3 × 3 convolution, and the input feature has a spatial dimension of 320 × 320 and consists of 16 channels. The second layer adopts a bottleneck, which includes a 3 × 3 convolution, and the input feature has a spatial dimension of 160 × 160 and consists of 16 channels. The third layer adopts a bottleneck, which includes a 5 × 5 convolution, and the input feature has a spatial dimension of 80 × 80 and consists of 24 channels. The fourth layer adopts a bottleneck, which includes a 5 × 5 convolution, and the input feature has a spatial dimension of 40 × 40 and consists of 48 channels. The fifth layer adopts a bottleneck, which includes a 5 × 5 convolution, and the input feature has a spatial dimension of 20 × 20 and consists of 96 channels. The red color block is the added CA attention mechanism layer.
Figure 12: Confusion matrix for Mobilenetv3-YOLOv8 evaluated on the test dataset.
Figure 13: False-positive detection with (a) booting stage being falsely recognized as tillering stage and (b) filling stage being falsely recognized as jointing stage.
Figure 14: False-positive detection with booting stage being falsely recognized as tillering stage.
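
For readers unfamiliar with the Coordinate Attention (CA) block that the abstract inserts into the YOLOv8 backbone, the following PyTorch sketch shows the generic CA computation (directional pooling, a shared 1×1 convolution, and two sigmoid gates). The reduction ratio, channel counts, and placement are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a generic Coordinate Attention block (Hou et al.-style), not the paper's code.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Encode position by pooling along each spatial direction separately.
        x_h = x.mean(dim=3, keepdim=True)                       # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(1, 24, 80, 80)     # assumed backbone feature map size
out = CoordinateAttention(24)(feat)   # same shape, position-aware re-weighting
```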
13 pages, 46604 KiB  
Article
Human Activity Recognition Based on Point Clouds from Millimeter-Wave Radar
by Seungchan Lim, Chaewoon Park, Seongjoo Lee and Yunho Jung
Appl. Sci. 2024, 14(22), 10764; https://doi.org/10.3390/app142210764 - 20 Nov 2024
Viewed by 221
Abstract
Human activity recognition (HAR) technology is related to human safety and convenience, making it crucial for it to infer human activity accurately. Furthermore, it must consume low power at all times when detecting human activity and be inexpensive to operate. For this purpose, a low-power and lightweight design of the HAR system is essential. In this paper, we propose a low-power and lightweight HAR system using point-cloud data collected by radar. The proposed HAR system uses a pillar feature encoder that converts 3D point-cloud data into a 2D image and a classification network based on depth-wise separable convolution for lightweighting. The proposed classification network achieved an accuracy of 95.54%, with 25.77 M multiply–accumulate operations and 22.28 K network parameters implemented in a 32 bit floating-point format. This network achieved 94.79% accuracy with 4 bit quantization, which reduced memory usage to 12.5% compared to existing 32 bit format networks. In addition, we implemented a lightweight HAR system optimized for low-power design on a heterogeneous computing platform, a Zynq UltraScale+ ZCU104 device, through hardware–software implementation. It took 2.43 ms of execution time to perform one frame of HAR on the device and the system consumed 3.479 W of power when running. Full article
Figures:
Figure 1: Data collection setup.
Figure 2: Configuration of dataset classes and their corresponding point clouds: (a) Stretching; (b) Standing; (c) Taking medicine; (d) Squatting; (e) Sitting chair; (f) Reading news; (g) Sitting floor; (h) Picking; (i) Crawl; (j) Lying wave hands; (k) Lying.
Figure 3: Overview of the proposed HAR system.
Figure 4: Proposed classification network.
Figure 5: Training and test loss curve and accuracy curve: (a) Training and test loss curve; (b) Training and test accuracy curve.
Figure 6: Confusion matrix.
Figure 7: Environment used for FPGA implementation and verification.
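
A depth-wise separable convolution, the main lightweighting ingredient mentioned above, factors a standard convolution into a per-channel spatial filter followed by a 1×1 channel mixer. The PyTorch sketch below is generic; the channel sizes and the shape of the pillar-encoded pseudo-image are assumptions.

```python
# Illustrative depth-wise separable convolution block (generic, not the paper's network).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A pillar-style 2D "image" built from radar point clouds could then be classified cheaply.
pseudo_image = torch.randn(1, 32, 64, 64)                 # assumed pillar feature map
out = DepthwiseSeparableConv(32, 64)(pseudo_image)        # -> (1, 64, 64, 64)
```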
22 pages, 2388 KiB  
Article
DeFFace: Deep Face Recognition Unlocked by Illumination Attributes
by Xiangling Zhou, Zhongmin Gao, Huanji Gong and Shenglin Li
Electronics 2024, 13(22), 4566; https://doi.org/10.3390/electronics13224566 - 20 Nov 2024
Viewed by 161
Abstract
General face recognition is currently one of the key technologies in the field of computer vision, and it has achieved tremendous success with the support of deep-learning technology. General face recognition models currently exhibit extremely high accuracy on some high-quality face datasets. However, their performance decreases in challenging environments, such as low-light scenes. To enhance the performance of face recognition models in low-light scenarios, we propose a face recognition approach based on feature decoupling and fusion (DeFFace). Our main idea is to extract facial-related features from images that are not influenced by illumination. First, we introduce a feature decoupling network (D-Net) to decouple the image into facial-related features and illumination-related features. By incorporating the illumination triplet loss optimized with unpaired identity IDs, we regulate illumination-related features to minimize the impact of lighting conditions on the face recognition system. However, the decoupled features are relatively coarse. Therefore, we introduce a feature fusion network (F-Net) to further extract the residual facial-related features from the illumination-related features and fuse them with the initial facial-related features. Finally, we introduce a lighting-facial correlation loss to reduce the correlation between the two decoupled features in the specific space. We demonstrate the effectiveness of our method on four real-world low-light datasets and three simulated low-light datasets. We retrain multiple general face recognition methods using our proposed low-light training sets to further validate the advanced performance of our method. Compared to general face recognition methods, our approach achieves an average improvement of more than 2.11 percentage points on low-light face datasets. In comparison with image enhancement-based solutions, our method shows an average improvement of around 16 percentage points on low-light datasets, and it also delivers an average improvement of approximately 5.67 percentage points when compared to illumination normalization-based methods. Full article
Figures:
Figure 1: Diagrams of different approaches for low-light face image recognition. (a) Illumination-enhanced low-light face recognition method. (b) Illumination normalization-based low-light face recognition method using Retinex theory. (c) Face recognition based on near-infrared camera. (d) Our low-light face recognition method based on feature decoupling and fusion scheme (DeFFace).
Figure 2: The DeFFace architecture diagram primarily consists of four parts: backbone, D-Net, F-Net, and face recognition. The D-Net is constrained by the illumination triplet loss, the F-Net is constrained by the lighting–facial correlation loss, and the face recognition module is constrained by the Softmax-based loss.
Figure 3: Diagram showing the detailed network configuration of the D-Net and F-Net. Subfigure (a) presents the detailed configuration of D-Net, and subfigure (b) presents the detailed configuration of the F-Net.
Figure 4: Examples from the LowCASIA-Train, where green boxes indicate well-illuminated facial areas, blue boxes denote low-light facial areas, and yellow boxes represent randomly selected lighting triples.
Figure 5: Examples from the validation set. LFW* indicates the low-light version of LFW.
Figure 6: Left depicts the performance of decoupling sub-modules with identical layer configurations, while right illustrates the performance of decoupling sub-modules with varying numbers of layers.
Figure 7: The visualized results of rank-10 retrieval on the low-light face dataset LFW* using our method and the ArcFace method are presented. Using the person on the far left as an example, the green dashed box indicates a match as the same person, the yellow dashed box indicates a different person, and the blue and orange text boxes represent confidence scores. We display the top rank-10 visualization results from high to low.
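
The illumination triplet loss mentioned in the abstract can be sketched as a standard margin-based triplet objective applied to illumination-related embeddings: two images with similar lighting act as anchor and positive, a differently lit image as negative. The margin and batch shapes below are assumptions, not the paper's settings.

```python
# Hedged sketch of a margin-based triplet loss on illumination-related embeddings.
import torch
import torch.nn.functional as F

def illumination_triplet_loss(anchor, positive, negative, margin=0.3):
    """anchor/positive share lighting conditions; negative has different lighting."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Stand-in embeddings: a batch of 16 vectors with 128 dimensions each.
a, p, n = (torch.randn(16, 128) for _ in range(3))
loss = illumination_triplet_loss(a, p, n)
```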
16 pages, 9195 KiB  
Article
Simulating and Verifying a 2D/3D Laser Line Sensor Measurement Algorithm on CAD Models and Real Objects
by Rok Belšak, Janez Gotlih and Timi Karner
Sensors 2024, 24(22), 7396; https://doi.org/10.3390/s24227396 - 20 Nov 2024
Viewed by 231
Abstract
The increasing adoption of 2D/3D laser line sensors in industrial and research applications necessitates accurate and efficient simulation tools for tasks such as surface inspection, dimensional verification, and quality control. This paper presents a novel algorithm developed in MATLAB for simulating the measurements of any 2D/3D laser line sensor on STL CAD models. The algorithm uses a modified fast-ray triangular intersection method, addressing challenges such as overlapping triangles in assembly models and incorporating sensor resolution to ensure realistic simulations. Quantitative analysis shows a significant reduction in computation time, enhancing the practical utility of the algorithm. The simulation results exhibit a mean deviation of 0.42 mm when compared to real-world measurements. Notably, the algorithm effectively handles complex geometric features, such as holes and grooves, and offers flexibility in generating point cloud data in both local and global coordinate systems. This work not only reduces the need for physical prototyping, thereby contributing to sustainability, but also supports AI training by generating accurate synthetic data. Future work should aim to further optimize the simulation speed and explore noise modeling to enhance the realism of simulated measurements. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
Figures:
Figure 1: Simulation principle with the assembly CAD model and a 2D laser line sensor.
Figure 2: Generated laser lines with vectors within the working area.
Figure 3: Ray/triangle intersection principle.
Figure 4: Modified ray/triangle intersection method.
Figure 5: (a) STL CAD model with laser lines and (b) first and second intersection points of the i-th laser line.
Figure 6: (a) Assembly STL CAD model with critical overlapping area and (b) detailed overlapping area with a laser line intersecting four triangles.
Figure 7: (a) Generated plane and the CAD model and (b) principle for finding intersecting triangles with the plane.
Figure 8: Flowchart of the developed algorithm.
Figure 9: Experimental setup with a real Wenglor sensor, linear unit, and Matlab program.
Figure 10: Parts on which laser measurements have been simulated and their regions of interest.
Figure 11: Generated point cloud results of each part, where (a,c,e,g) are measured in the laser coordinate system, and (b,d,f,h) are measured in the global coordinate system.
Figure 12: Comparing simulated point cloud measurements with point cloud measurements from a real sensor.
Figure 13: Comparing simulated resolution of measurements to sensor resolution of measurements.
Figure 14: Detected algorithm deficiency at specific area of the assembly CAD STL model.
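
Fast ray/triangle intersection tests of the kind the abstract modifies are usually variants of the Möller–Trumbore algorithm. The NumPy sketch below shows the textbook version, not the paper's modified MATLAB implementation, so the geometric idea behind Figures 3 and 4 is concrete.

```python
# Textbook Moeller-Trumbore ray/triangle intersection test (illustrative, single ray).
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the triangle, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# Example: a ray along +z hitting a triangle lying in the z = 1 plane at t = 1.
hit = ray_triangle_intersect(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                             np.array([-1.0, -1.0, 1.0]), np.array([1.0, -1.0, 1.0]),
                             np.array([0.0, 1.0, 1.0]))
```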
25 pages, 12054 KiB  
Article
Towards 3D Reconstruction of Multi-Shaped Tunnels Utilizing Mobile Laser Scanning Data
by Xuan Ding, Shen Chen, Mu Duan, Jinchang Shan, Chao Liu and Chuli Hu
Remote Sens. 2024, 16(22), 4329; https://doi.org/10.3390/rs16224329 - 20 Nov 2024
Viewed by 223
Abstract
Using digital twin models of tunnels has become critical to their efficient maintenance and management. A high-precision 3D tunnel model is the prerequisite for a successful digital twin model of tunnel applications. However, constructing high-precision 3D tunnel models with high-quality textures and structural integrity based on mobile laser scanning data remains a challenge, particularly for tunnels of different shapes. This study addresses this problem by developing a novel method for the 3D reconstruction of multi-shaped tunnels based on mobile laser scanning data. This method does not require any predefined mathematical models or projection parameters to convert point clouds into 2D intensity images that conform to the geometric features of tunnel linings. This method also improves the accuracy of 3D tunnel mesh models by applying an adaptive threshold approach that reduces the number of pseudo-surfaces generated during the Poisson surface reconstruction of tunnels. This method was experimentally verified by conducting 3D reconstruction tasks involving tunnel point clouds of four different shapes. The superiority of this method was further confirmed through qualitative and quantitative comparisons with related approaches. By automatically and efficiently constructing a high-precision 3D tunnel model, the proposed method offers an important model foundation for digital twin engineering and a valuable reference for future tunnel model construction projects. Full article
Figures:
Figure 1: Workflow of the proposed method.
Figure 2: Location of the tunnel point cloud in the 3D coordinate system: (a) The Cartesian coordinate system; (b) The tunnel stationing coordinate system.
Figure 3: Intuitive illustration of extracting tunnel ring sections and unifying the mileage.
Figure 4: Diagram of calculating the minimum bounding circle.
Figure 5: Diagram of coordinate mapping.
Figure 6: Intuitive illustration of the standardization of tunnel cross-sectional point sets: (a) Missing points in the cross-section; (b) Abnormal depression in the cross-section; (c) Protrusion in the cross-section; (d) Linear interpolation for standardizing the tunnel cross-section.
Figure 7: Diagram of cylindrical projection unwrapping.
Figure 8: Diagram of image rectification.
Figure 9: Calculate the normal vector of every point.
Figure 10: Intuitive illustration of Poisson reconstruction of the tunnel cross-section: (a) Oriented points V; (b) Indicator gradient ∇χ_M; (c) Indicator function χ_M; (d) Model entity ∂M.
Figure 11: The tunnel model constructed by the traditional Poisson reconstruction algorithm: (a) 3D tunnel mesh model in wireframe mode; (b) Pseudo-surfaces and triangular mesh on the edges of the model.
Figure 12: Prototype system architecture and interface diagram: (a) Architecture of the 3D tunnel management system; (b) System interface.
Figure 13: Tunnel point cloud data: (a) Rectangular tunnel; (b) Elliptical tunnel; (c) Horseshoe tunnel; (d) Circular tunnel.
Figure 14: Preprocessing before and after cross-sections of different shaped tunnels: (a) Original tunnel point cloud cross-section; (b) Preprocessed tunnel point cloud cross-section.
Figure 15: The results of tunnel point cloud projection calculations and cross-section standardization: (a) Minimum bounding circle of the tunnel cross-section; (b) Standardized tunnel cross-section.
Figure 16: Intensity images with different grid resolutions: (a) Grid resolution = 0.0010; (b) Grid resolution = 0.0012; (c) Grid resolution = 0.0014; (d) Grid resolution = 0.0018.
Figure 17: Intuitive illustration of 2D intensity images: (a) Rectangular tunnel; (b) Elliptical tunnel; (c) Horseshoe tunnel; (d) Circular tunnel.
Figure 18: Rectification ratio of sub-images: (a) Rectangular tunnel; (b) Elliptical tunnel; (c) Horseshoe tunnel; (d) Circular tunnel.
Figure 19: Models constructed by Poisson surface reconstruction: (a) Rectangular tunnel; (b) Elliptical tunnel; (c) Horseshoe tunnel; (d) Circular tunnel.
Figure 20: Intuitive illustration of 3D tunnel model: (a) Clipped 3D mesh model; (b) 3D texture model.
Figure 21: The variation in the number of the points with the absolute distance difference between the reconstructed model and the input points: (a) Rectangular tunnel; (b) Elliptical tunnel; (c) Horseshoe tunnel; (d) Circular tunnel.
Figure 22: Qualitative image comparison and regions of interest: (a) Rectangle 1 and Rectangle 2 in tunnel images generated by cylindrical projection from the horseshoe tunnel point cloud; (b) Rectangle 1 and Rectangle 2 in tunnel images generated by the proposed method from the horseshoe tunnel point cloud; (c) Rectangle 1 and Rectangle 2 in tunnel images generated by cylindrical projection from the rectangular tunnel point cloud; (d) Rectangle 1 and Rectangle 2 in tunnel images generated by the proposed method from the rectangular tunnel point cloud.
Figure 23: Qualitative modeling comparison from (a) Delaunay triangulation and (b) ours. The quantitative statistics are plotted in (c), where the horizontal axis stands for the absolute distance error from the reconstructed model to the input LiDAR scan, and the vertical axis is the number of points.
Figure 24: Qualitative modeling comparison from (a) Alpha Shape reconstruction and (b) ours. The quantitative statistics are plotted in (c).
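
As a point of reference for the Poisson surface reconstruction step, the snippet below uses the off-the-shelf Open3D implementation plus a simple density-quantile trim as a stand-in for the paper's adaptive-threshold removal of pseudo-surfaces. The file names, octree depth, and quantile are assumptions.

```python
# Hedged sketch: generic Poisson reconstruction of a tunnel scan with Open3D, followed by
# trimming low-density vertices (a common proxy for removing spurious "pseudo-surfaces").
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("tunnel_scan.ply")   # hypothetical MLS point cloud file
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Vertices supported by few input points tend to belong to surfaces the solver invented
# far from the data; remove the lowest-density 5% as a crude cleanup.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("tunnel_mesh.ply", mesh)
```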
17 pages, 8599 KiB  
Article
Att-BEVFusion: An Object Detection Algorithm for Camera and LiDAR Fusion Under BEV Features
by Peicheng Shi, Mengru Zhou, Xinlong Dong and Aixi Yang
World Electr. Veh. J. 2024, 15(11), 539; https://doi.org/10.3390/wevj15110539 - 20 Nov 2024
Viewed by 230
Abstract
To improve the accuracy of detecting small and long-distance objects while self-driving cars are in motion, in this paper, we propose a 3D object detection method, Att-BEVFusion, which fuses camera and LiDAR data in a bird’s-eye view (BEV). First, the transformation from the camera view to the BEV space is achieved through an implicit supervision-based method, and then the LiDAR BEV feature point cloud is voxelized and converted into BEV features. Then, a channel attention mechanism is introduced to design a BEV feature fusion network to realize the fusion of camera BEV feature space and LiDAR BEV feature space. Finally, regarding the issue of insufficient global reasoning in the BEV fusion features generated by the channel attention mechanism, as well as the challenge of inadequate interaction between features. We further develop a BEV self-attention mechanism to apply global operations on the features. This paper evaluates the effectiveness of the Att-BEVFusion fusion algorithm on the nuScenes dataset, and the results demonstrate that the algorithm achieved 72.0% mean average precision (mAP) and 74.3% nuScenes detection score (NDS), with an advanced detection accuracy of 88.9% and 91.8% for single-item detection of automotive and pedestrian categories, respectively. Full article
Figures:
Figure 1: Comparison between BEVFusion and our proposed method, Att-BEVFusion, which shows that our method is able to effectively detect both distant and occluded objects.
Figure 2: Overall structure diagram of Att-BEVFusion.
Figure 3: Extraction of image features.
Figure 4: Transformation of LiDAR point cloud data to BEV features.
Figure 5: Structure of the channel attention mechanism (where r is the ratio of compression).
Figure 6: Structure of the self-attention mechanism.
Figure 7: Att-BEVFusion qualitative detection results.
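
A minimal sketch of SE-style channel attention applied to concatenated camera and LiDAR BEV features, in the spirit of the fusion network described above; the channel counts, BEV grid size, and compression ratio r are assumptions rather than the paper's values.

```python
# Hedged sketch: channel-attention gating over concatenated camera/LiDAR BEV feature maps.
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // r, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // r, channels, kernel_size=1), nn.Sigmoid(),
        )
        self.reduce = nn.Conv2d(channels, channels // 2, kernel_size=1)

    def forward(self, cam_bev, lidar_bev):
        fused = torch.cat([cam_bev, lidar_bev], dim=1)   # concatenate along channels
        fused = fused * self.gate(fused)                 # per-channel re-weighting
        return self.reduce(fused)                        # back to a single BEV feature map

cam = torch.randn(1, 80, 180, 180)                       # assumed BEV shapes
lidar = torch.randn(1, 80, 180, 180)
bev = ChannelAttentionFusion(160)(cam, lidar)            # -> (1, 80, 180, 180)
```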
27 pages, 28012 KiB  
Article
A Model Development Approach Based on Point Cloud Reconstruction and Mapping Texture Enhancement
by Boyang You and Barmak Honarvar Shakibaei Asli
Big Data Cogn. Comput. 2024, 8(11), 164; https://doi.org/10.3390/bdcc8110164 - 20 Nov 2024
Viewed by 231
Abstract
To address the challenge of rapid geometric model development in the digital twin industry, this paper presents a comprehensive pipeline for constructing 3D models from images using monocular vision imaging principles. Firstly, a structure-from-motion (SFM) algorithm generates a 3D point cloud from photographs. The feature detection methods scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and KAZE are compared across six datasets, with SIFT proving the most effective (matching rate higher than 0.12). Using K-nearest-neighbor matching and random sample consensus (RANSAC), refined feature point matching and 3D spatial representation are achieved via antipodal geometry. Then, the Poisson surface reconstruction algorithm converts the point cloud into a mesh model. Additionally, texture images are enhanced by leveraging a visual geometry group (VGG) network-based deep learning approach. Content images from a dataset provide geometric contours via higher-level VGG layers, while textures from style images are extracted using the lower-level layers. These are fused to create texture-transferred images, where the image quality assessment (IQA) metrics SSIM and PSNR are used to evaluate texture-enhanced images. Finally, texture mapping integrates the enhanced textures with the mesh model, improving the scene representation with enhanced texture. The method presented in this paper surpassed a LiDAR-based reconstruction approach by 20% in terms of point cloud density and number of model facets, while the hardware cost was only 1% of that associated with LiDAR. Full article
Figures:
Figure 1: Samples from Dataset 1 (Source: https://github.com/Abhishek-Aditya-bs/MultiView-3D-Reconstruction/tree/main/Datasets accessed on 18 November 2024) and samples from Dataset 2.
Figure 2: Demonstration of Dataset 3.
Figure 3: Diagram of SFM algorithm.
Figure 4: Camera imaging model.
Figure 5: Coplanarity condition of photogrammetry.
Figure 6: Process of surface reconstruction.
Figure 7: Demonstration of isosurface.
Figure 8: Demonstration of VGG network.
Figure 9: Demonstration of Gram matrix.
Figure 10: Style transformation architecture.
Figure 11: Texture mapping process.
Figure 12: Demonstration of the three kinds of feature descriptors used on Dataset 1 and Dataset 2.
Figure 13: Matching rate fitting of three kinds of image descriptors.
Figure 14: SIFT point matching for CNC1 object under different thresholds.
Figure 15: SIFT point matching for Fountain object under different thresholds.
Figure 16: Matching result of Dataset 2 using RANSAC method.
Figure 17: Triangulation presentation of feature points obtained from objects in Dataset 1.
Figure 18: Triangulation presentation of feature points obtained from objects in Dataset 2.
Figure 19: Point cloud data of objects in Dataset 1.
Figure 20: Point cloud data of objects in Dataset 2.
Figure 21: Normal vector presentation of the points set obtained from objects in Dataset 1.
Figure 22: Normal vector of the points set obtained from objects in Dataset 2.
Figure 23: Poisson surface reconstruction results of objects in Dataset 1.
Figure 24: Poisson surface reconstruction results of objects in Dataset 2.
Figure 25: Style transfer result of Statue object.
Figure 26: Style transfer result of Fountain object.
Figure 27: Style transfer result of Castle object.
Figure 28: Style transfer result of CNC1 object.
Figure 29: Style transfer result of CNC2 object.
Figure 30: Style transfer result of Robot object.
Figure 31: Training loss in style transfer for CNC1 object.
Figure 32: IQA assessment for CNC1 images after style transfer.
Figure 33: Results of texture mapping for Dataset 1.
Figure 34: Results of texture mapping for Dataset 2.
Figure A1: Results of camera calibration.
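
The feature-matching front end described in the abstract (SIFT detection, K-nearest-neighbour matching, RANSAC-based geometric filtering) maps directly onto standard OpenCV calls. The sketch below uses Lowe's ratio test and a RANSAC fundamental-matrix fit; the image file names, ratio, and thresholds are assumptions.

```python
# Hedged sketch of a SIFT + KNN ratio-test + RANSAC matching front end with OpenCV.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical image pair
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on 2-nearest-neighbour matches removes ambiguous correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC rejects matches inconsistent with the epipolar geometry of the two views.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print("inliers:", int(inlier_mask.sum()), "of", len(good))
```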
23 pages, 11324 KiB  
Article
Optimal Feature-Guided Position-Shape Dual Optimization for Building Point Cloud Facade Detail Enhancement
by Shiming Li, Fengtao Yan, Kaifeng Ma, Qingfeng Hu, Feng Wang and Wenkai Liu
Remote Sens. 2024, 16(22), 4324; https://doi.org/10.3390/rs16224324 - 20 Nov 2024
Viewed by 356
Abstract
Dense three-dimensional point clouds are the cornerstone of modern architectural 3D reconstruction, containing a wealth of semantic structural information about building facades. However, current methods struggle to automatically and accurately extract the complex detailed structures of building facades from unstructured point clouds, with detailed facade modeling often relying heavily on manual interaction. This study introduces an efficient method for semantic structural detail enhancement of building facade point clouds, achieved through feature-guided dual-layer optimization of position and shape. The proposed framework addresses three key challenges: (1) robust extraction of facade semantic feature point clouds to effectively perceive the underlying geometric features of facade structures; (2) improved grouping of similarly structured objects using Hausdorff distance discrimination, overcoming the impact of point cloud omissions and granularity differences; (3) position-shape double optimization for facade enhancement, achieving detailed structural optimization. Validated on three typical datasets, the proposed method not only achieved 98.5% accuracy but also effectively supplemented incomplete scan results. It effectively optimizes semantic structures that widely exist and have the characteristic of repeated appearance on building facades, providing robust support for smart city construction and analytical applications. Full article
Figures:
Figure 1: The workflow of the proposed method.
Figure 2: Calculate the vector from the point to the centroid to describe the local structural features. (The yellow point p_i represents a certain point in the point cloud during the traversal process, and a neighborhood will be constructed with it; the blue point p_j represents a certain point in the neighborhood of p_i; the purple point c_i is the traditional centroid, and the distance from the yellow point to the purple point is the magnitude of the traditional centroid displacement vector; the red point d_i is the weighted centroid, and the distance from the yellow point to the red point is the magnitude of the weighted centroid displacement vector.)
Figure 3: The locally weighted centroid displacement scheme. (The temporary point m_j is obtained through the parallelogram rule constructed by p_i, p_j, and c_i. It should be noted that each p_j will produce an m_j, and the sum of all m_j will result in the red point d_i. The specific calculation method is referred to as Formula (4).)
Figure 4: Region growing based on the slices. (First, the optimal slice height is determined through a sliding window to ensure that each slice represents a single layer of windows; then, region growing is used to perform preliminary segmentation on each layer of windows.)
Figure 5: Flowchart of Hausdorff distance based on FPFH.
Figure 6: Location optimization process of window semantic center point. (The blue point represents the initial center position of the window, which may be displaced due to errors during scanning. The yellow point represents the corrected center of the window, obtained from the intersection of the line fitted based on the blue point. The red point represents the originally obscured position of the window, inferred from the collinearity relationship.)
Figure 7: Center point distribution.
Figure 8: Diagram of class LINE.
Figure 9: Robust contour structures obtained by group object overlay and line segment optimization. (Firstly, the α-shape is used to extract the contour line segments of all windows and overlay them; secondly, the proposed similarity measure is applied to merge line segments with high similarity to complete shape optimization; finally, the optimized contours are placed in the optimized positions to achieve dual optimization of position and shape.)
Figure 10: Facade enhancement of building model.
Figure 11: The details of the datasets.
Figure 12: Experimental evaluation of dataset NCWU-MLS.
Figure 13: Experimental evaluation of datasets WHU-TLS and SJY-TLS.
Figure 14: An intuitive comparison between the proposed method and the F3D method.
Figure 15: Comparison of feature point extraction.
Figure 16: Hausdorff distance matrix-based object structure comparison and grouping.
Figure 17: Evaluation of center point location optimization.
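
Grouping similarly structured facade objects by Hausdorff distance, as described above, can be illustrated with SciPy's directed Hausdorff routine; the point sets and the grouping threshold below are invented examples rather than the paper's data or parameters.

```python
# Hedged sketch: symmetric Hausdorff distance between point sets as a structural similarity test.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two Nx3 point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

window_a = np.random.rand(200, 3)
window_b = window_a + np.random.normal(scale=0.01, size=window_a.shape)  # similar structure
window_c = np.random.rand(200, 3) * 3.0                                  # different structure

same_group = hausdorff(window_a, window_b) < 0.1   # likely True: group together
diff_group = hausdorff(window_a, window_c) < 0.1   # likely False: keep separate
```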
14 pages, 2321 KiB  
Article
Tumor Morphology for Prediction of Poor Responses Early in Neoadjuvant Chemotherapy for Breast Cancer: A Multicenter Retrospective Study
by Wen Li, Nu N. Le, Rohan Nadkarni, Natsuko Onishi, Lisa J. Wilmes, Jessica E. Gibbs, Elissa R. Price, Bonnie N. Joe, Rita A. Mukhtar, Efstathios D. Gennatas, John Kornak, Mark Jesus M. Magbanua, Laura J. van’t Veer, Barbara LeStage, Laura J. Esserman and Nola M. Hylton
Tomography 2024, 10(11), 1832-1845; https://doi.org/10.3390/tomography10110134 - 20 Nov 2024
Viewed by 303
Abstract
Background: This multicenter and retrospective study investigated the additive value of tumor morphologic features derived from the functional tumor volume (FTV) tumor mask at pre-treatment (T0) and the early treatment time point (T1) in the prediction of pathologic outcomes for breast cancer patients undergoing neoadjuvant chemotherapy. Methods: A total of 910 patients enrolled in the multicenter I-SPY 2 trial were included. FTV and tumor morphologic features were calculated from the dynamic contrast-enhanced (DCE) MRI. A poor response was defined as a residual cancer burden (RCB) class III (RCB-III) at surgical excision. The area under the receiver operating characteristic curve (AUC) was used to evaluate the predictive performance. The analysis was performed in the full cohort and in individual sub-cohorts stratified by hormone receptor (HR) and human epidermal growth factor receptor 2 (HER2) status. Results: In the full cohort, the AUCs for the use of the FTV ratio and clinicopathologic data were 0.64 ± 0.03 (mean ± SD [standard deviation]). With morphologic features, the AUC increased significantly to 0.76 ± 0.04 (p < 0.001). The ratio of the surface area to volume ratio between T0 and T1 was found to be the most contributing feature. All top contributing features were from T1. An improvement was also observed in the HR+/HER2- and triple-negative sub-cohorts. The AUC increased significantly from 0.56 ± 0.05 to 0.70 ± 0.06 (p < 0.001) and from 0.65 ± 0.06 to 0.73 ± 0.06 (p < 0.001), respectively, when adding morphologic features. Conclusion: Tumor morphologic features can improve the prediction of RCB-III compared to using FTV only at the early treatment time point. Full article
(This article belongs to the Section Cancer Imaging)
Figure 1: Illustration of radiomic feature extraction from the functional tumor volume (FTV) tumor mask in dynamic contrast-enhanced MRI. First, early percent enhancement (PE) and signal enhancement ratio (SER) thresholds were applied to generate the FTV tumor mask, from which the FTV was calculated. Second, multiple preprocessing steps were applied to the FTV tumor mask to fill small holes, smooth edges, and remove small clusters of connected voxels. Lastly, the Pyradiomics package was used to extract radiomic 3D shape features.
Figure 2: Data inclusion and exclusion. Patients were excluded from the analysis because of missing pathological outcomes, clinicopathologic data, or MRI, or because of unusable imaging data.
Figure 3: Boxplots of the area under the receiver operating characteristic curve (AUC) for the prediction of residual disease with and without shape features. AUCs were evaluated by optimal machine learning models independently in 20 stratified subsamples of the analysis cohort (n = 910) for the prediction of RCB-III. Model without shape: the FTV ratio (FTVR) and clinicopathologic data were used in the predictive model. Model with shape: shape features were added to the predictive model together with FTVR and clinicopathologic data.
Figure 4: Variable importance for the prediction of RCB-III using shape features together with the FTV ratio and demographic variables, by random forest. Variable importance was ranked according to the mean decrease in accuracy (%) when a variable was excluded; a higher value means that the variable was more important. T0: pretreatment time point. T1: early treatment time point.
Figure 5: Beeswarm plots with overlaid boxplots of MRI features. (a) Functional tumor volume (FTV) ratio between pretreatment and early treatment. (b) Ratio of surface area to volume between pretreatment and early treatment. The RCB-III group comprised 141 patients, and the nRCB-III group (RCB-0, -I, or -II) comprised 769 patients.
Figure 6: Example cases. Representative slices of post-contrast dynamic contrast-enhanced MRI at the pretreatment (T0) and early treatment (T1) time points are shown. FTV tumor masks were generated from voxels within the yellow region-of-interest box that had an early percent enhancement (PE) above 70% and are shown superimposed on the representative slices. Colors within the FTV tumor masks represent different levels of the signal enhancement ratio (SER): blue, 0 to 0.9; purple, 0.9 to 1.0; green, 1.0 to 1.3; red, 1.3 to 1.75; white, 1.75 and higher. A three-dimensional surface rendering of the preprocessed tumor mask is shown next to each representative slice. (a) An example of a patient with RCB-0. (b) An example of a patient with RCB-III.
Figure 7: Boxplots of the area under the receiver operating characteristic curve (AUC) for the prediction of residual disease with and without shape features. AUCs were evaluated by optimal machine learning models independently in 20 stratified subsamples of the analysis cohort (n = 910) for the prediction of RCB-III in the (a) HR+/HER2−, (b) triple-negative, and (c) HR+/HER2+ sub-cohorts. Model without shape: FTVR and clinicopathologic data were used in the predictive model. Model with shape: shape features were added to the predictive model together with FTVR and clinicopathologic data.
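The abstract and the Figure 1 caption above describe extracting 3D shape features from the FTV tumor mask with the Pyradiomics package and evaluating prediction with AUC. A minimal, hedged sketch of that kind of pipeline is shown below; the file paths, the random placeholder feature matrix, and the use of scikit-learn's random forest are illustrative assumptions, not the I-SPY 2 analysis code.

```python
# Minimal sketch (illustrative, not the study's code): extract 3D shape
# features from a tumor mask with Pyradiomics, then score a simple
# classifier with AUC, mirroring the kind of evaluation in the abstract.
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Configure Pyradiomics to compute only 3D shape descriptors.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("shape")

def shape_features(image_path: str, mask_path: str) -> dict:
    """Shape features (surface area, volume, sphericity, ...) for one case."""
    image = sitk.ReadImage(image_path)   # hypothetical DCE-MRI volume
    mask = sitk.ReadImage(mask_path)     # hypothetical preprocessed FTV mask
    result = extractor.execute(image, mask)
    return {k: float(v) for k, v in result.items() if k.startswith("original_shape")}

# Placeholder feature table (not called on real images here): rows = patients,
# columns = shape features at T0/T1 plus the FTV ratio; y = 1 for RCB-III.
rng = np.random.default_rng(42)
X = rng.normal(size=(910, 15))
y = rng.integers(0, 2, size=910)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

With real data, `shape_features` would be called once per patient and time point, and the resulting table would replace the random placeholder matrix.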
15 pages, 6086 KiB  
Article
Improved Visual SLAM Algorithm Based on Dynamic Scenes
by Jinxing Niu, Ziqi Chen, Tao Zhang and Shiyu Zheng
Appl. Sci. 2024, 14(22), 10727; https://doi.org/10.3390/app142210727 - 20 Nov 2024
Viewed by 241
Abstract
This work presents a novel RGB-D dynamic simultaneous localization and mapping (SLAM) method that relies on deep learning to improve the accuracy, stability, and efficiency of localization in dynamic environments, in contrast to traditional visual SLAM methods built for static scenes. Within the classic framework of traditional visual SLAM, the method replaces conventional feature extraction with a convolutional neural network, aiming to enhance the accuracy of feature extraction and localization and to improve the algorithm's ability to capture and represent the characteristics of the entire scene. A semantic segmentation thread then combines a target detection network with geometric methods to identify potential dynamic areas in the image and generate masks for dynamic objects. Finally, the standard deviation of the depth information of potential dynamic points is calculated to identify true dynamic feature points, guaranteeing that only static feature points are used for position estimation. Experiments on public datasets validate the feasibility of the proposed algorithm: the improved SLAM reduces absolute trajectory error (ATE) by approximately 97% compared to traditional static visual SLAM and by about 20% compared to traditional dynamic visual SLAM, while also cutting computation time by 68% compared to a well-known dynamic visual SLAM system, giving it clear advantages in both positioning accuracy and operational efficiency. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Figure 1: Overview of the enhanced SLAM system. The framework comprises four threads: semantic segmentation, tracking, local mapping, and loop closing.
Figure 2: GCNv2 feature extraction network structure, with channel numbers listed below each convolutional layer.
Figure 3: YOLOv5's network architecture diagram.
Figure 4: (a,b) and (c,d) show the semantic segmentation results based on the modified SLAM. Red indicates the detection boxes from YOLOv5x for object detection, while green represents the extracted feature points.
Figure 5: Comparison of feature point distribution between ORB and GCNv2. The scenes in (a,b) are cluttered with various objects, including computer screens, which made it difficult to obtain features. The images in (c,d) were taken from the corner of a table while the camera was moving, resulting in significant changes in viewpoint.
Figure 6: Comparison of the ATE of the improved SLAM and ORB-SLAM2 across five dynamic scene sequences from the fr3 dataset. (a–e) show the trajectory maps of ORB-SLAM2, while (f–j) show the trajectory maps of the improved SLAM.
Figure 7: Results for the fr3_walking_xyz sequence. Panels (a,b) illustrate the estimated trajectories compared to the ground truth, as well as the errors along the x, y, and z axes for ORB-SLAM2 and the improved SLAM. Panel (c) displays the time consumption for each method.
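The abstract above describes classifying potential dynamic feature points by examining the standard deviation of depth values associated with detected object regions. A minimal, hedged sketch of that idea is given below; the detection-box format, the depth units, the synthetic frame, and the threshold value are assumptions for illustration only, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): keep feature points as
# "static" unless they fall inside a detected object box whose depth values
# vary strongly, which suggests a moving object in an RGB-D frame.
import numpy as np

def filter_dynamic_points(keypoints, depth, boxes, std_threshold=0.15):
    """
    keypoints: (N, 2) array of (u, v) pixel coordinates of feature points.
    depth:     (H, W) depth image in meters (0 = invalid).
    boxes:     list of (u_min, v_min, u_max, v_max) detection boxes for
               potentially dynamic objects (e.g. people) from a detector.
    Returns a boolean mask of points considered static.
    """
    static = np.ones(len(keypoints), dtype=bool)
    for (u0, v0, u1, v1) in boxes:
        patch = depth[v0:v1, u0:u1]
        valid = patch[patch > 0]
        if valid.size == 0:
            continue
        # A large depth spread inside the box hints at a foreground object
        # standing out against the background; treat its points as dynamic.
        if np.std(valid) > std_threshold:
            inside = ((keypoints[:, 0] >= u0) & (keypoints[:, 0] < u1) &
                      (keypoints[:, 1] >= v0) & (keypoints[:, 1] < v1))
            static &= ~inside
    return static

# Toy usage with synthetic data.
rng = np.random.default_rng(1)
depth = np.full((480, 640), 3.0)
depth[100:300, 200:400] = rng.normal(1.5, 0.4, size=(200, 200))  # "moving person"
kps = rng.integers(0, (640, 480), size=(500, 2))
mask = filter_dynamic_points(kps, depth, boxes=[(200, 100, 400, 300)])
print(f"{mask.sum()} of {len(kps)} points kept as static")
```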
20 pages, 2662 KiB  
Review
No-Reference Objective Quality Metrics for 3D Point Clouds: A Review
by Simone Porcu, Claudio Marche and Alessandro Floris
Sensors 2024, 24(22), 7383; https://doi.org/10.3390/s24227383 - 19 Nov 2024
Viewed by 337
Abstract
Three-dimensional (3D) applications lead the digital transition toward more immersive and interactive multimedia technologies. Point clouds (PCs) are a fundamental element in capturing and rendering 3D digital environments, but they present significant challenges due to the large amount of data typically needed to represent them. Although PC compression techniques can reduce the size of PCs, they introduce degradations that can negatively impact the PC’s quality and therefore the object representation’s accuracy. This trade-off between data size and PC quality highlights the critical importance of PC quality assessment (PCQA) techniques. In this article, we review the state-of-the-art no-reference (NR) objective quality metrics for PCs, which can accurately estimate the quality of generated and compressed PCs solely based on feature information extracted from the distorted PC. These characteristics make NR PCQA metrics particularly suitable in real-world application scenarios where the original PC data are unavailable for comparison, such as in streaming applications. Full article
(This article belongs to the Section Sensing and Imaging)
Figure 1: The scheme of full-reference (FR), reduced-reference (RR), and no-reference (NR) metrics.
Figure 2: Examples of distorted PCs from the SJTU-PCQA dataset [13]. (a) Original Shiva PC. (b) OcTree-based compression (85%). (c) Color noise (70%). (d) Downscaling (90%).
Figure 3: Examples of distorted PCs from the LS-PCQA dataset [19]. (a) Original Asterix PC. (b) Gamma noise with parameter 1. (c) Gamma noise with parameter 7. (d) Multiplicative Gaussian noise with parameter 1. (e) Multiplicative Gaussian noise with parameter 7. (f) Poisson reconstruction with parameter 3. (g) Poisson reconstruction with parameter 7. (h) Original Aya PC. (i) Poisson noise with parameter 3. (j) Poisson noise with parameter 7. (k) GPCC lossless geometry and lossy attributes with parameter 3. (l) GPCC lossless geometry and lossy attributes with parameter 7. (m) AVS limited lossy geometry and lossy attributes with parameter 3. (n) AVS limited lossy geometry and lossy attributes with parameter 7.
Figure 4: The scheme of model-based, projection-based, and hybrid NR PCQA approaches.
Figure 5: Performance comparison of six SOTA NR PCQA models on the SJTU-PCQA dataset, as provided in [25]. (a) PLCC, SRCC, and KRCC. (b) RMSE.
Figure 6: Performance comparison of six SOTA NR PCQA models, in terms of PLCC, on the various distortion types of the SJTU-PCQA dataset, as provided in [25].
Figure 7: Performance comparison of six SOTA NR PCQA models, in terms of SRCC, on the various distortion types of the SJTU-PCQA dataset, as provided in [25].
Figure 8: Performance comparison of most SOTA NR PCQA models on the WPC dataset, as provided in [25]. (a) PLCC, SRCC, and KRCC. (b) RMSE.
Figure 9: Performance comparison of most of the SOTA NR PCQA models, in terms of PLCC, on the various distortion types of the WPC dataset, as provided in [25].
Figure 10: Performance comparison of most of the SOTA NR PCQA models, in terms of SRCC, on the various distortion types of the WPC dataset, as provided in [25].
Figure 11: Performance comparison of most SOTA NR PCQA models on the SIAT-PCQD dataset, as provided in [25]. (a) PLCC, SRCC, and KRCC. (b) RMSE.
Figure 12: Performance comparison of most of the SOTA NR PCQA models, in terms of PLCC and SRCC, on the M-PCCD (a) and LS-PCQA (b) datasets, as provided in [25].
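The figure captions above report PLCC, SRCC, KRCC, and RMSE, the usual criteria for comparing predicted quality scores against subjective mean opinion scores. A minimal, hedged sketch of how such criteria can be computed with SciPy is shown below; the score arrays are synthetic placeholders, and no particular PCQA model is implied.

```python
# Minimal sketch: agreement between objective quality predictions and
# subjective scores, using the four criteria named in the figures above.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def pcqa_criteria(predicted: np.ndarray, subjective: np.ndarray) -> dict:
    """PLCC, SRCC, KRCC, and RMSE between predicted and subjective scores."""
    plcc, _ = pearsonr(predicted, subjective)
    srcc, _ = spearmanr(predicted, subjective)
    krcc, _ = kendalltau(predicted, subjective)
    rmse = float(np.sqrt(np.mean((predicted - subjective) ** 2)))
    return {"PLCC": plcc, "SRCC": srcc, "KRCC": krcc, "RMSE": rmse}

# Synthetic example: noisy monotonic relation between model output and MOS.
rng = np.random.default_rng(7)
mos = rng.uniform(1, 5, size=100)                  # subjective mean opinion scores
pred = 0.9 * mos + rng.normal(0, 0.3, size=100)    # hypothetical model predictions
print(pcqa_criteria(pred, mos))
```

In published comparisons, PLCC and RMSE are typically computed after fitting a monotonic (e.g., logistic) mapping from predictions to subjective scores; the sketch omits that step for brevity.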
40 pages, 28645 KiB  
Article
Underwater Paleotopographic and Geoarchaeological Investigations at Le Castella (Crotone, Italy): New Data on the Late Holocene Coastline Changes and the Presence of Two Disappeared Islets
by Salvatore Medaglia, Daniela Basso, Valentina Alice Bracchi, Fabio Bruno, Emilio Cellini, Ercole Gaetano, Antonio Lagudi, Fabrizio Mauri, Francesco Megna, Sante Francesco Rende, Umberto Severino and Armando Taliano Grasso
Heritage 2024, 7(11), 6392-6431; https://doi.org/10.3390/heritage7110299 - 19 Nov 2024
Viewed by 210
Abstract
A submerged elevation located off the coast of Le Castella, a small village on the Ionian Coast of Calabria (Italy) that has been populated for thousands of years and features notable archaeological remains from Great Greece (Magna Graecia) and the Middle Ages, was investigated through in-depth, multidisciplinary, geoarchaeological research. This submarine elevation, once aligned with the MIS 3 marine terrace of Le Castella and still completely emerged between 10 and 8 ka ago, slowly sank due to erosion and local tectonic-structural subsidence, a process also favoured by a submerged normal fault that cuts the terrace in two. The dismantling and sinking of this part of the marine terrace significantly changed the Late Holocene shorelines, with notable consequences at the topographic and archaeological level. In fact, one of the consequences of the sinking of this ancient promontory was the disappearance of two small islands that were reported to be right in front of Le Castella by numerous historical and cartographic sources. In recent decades there has been a scientific debate over the existence of these islets, but until now no convincing evidence of their actual presence had been found. This research, funded by the Marine Protected Area "Capo Rizzuto", was conducted by means of underwater archaeological and geological surveys, geophysical seabed mapping systems, and both direct and instrumental optical surveys made with an Autonomous Surface Vehicle. The outcomes allow us to confirm the presence of these two partially emerged rock bodies up to half a millennium ago. In addition, the presence of anthropogenic extrabasinal materials in a marine area corresponding to one of the highest points of the submerged elevation allows us to define the exact position of one of the two islets. These archaeological findings have been subjected, for the first time, to a thorough topographical and architectural analysis and then compared with other nearby, very similar submerged structures. On the basis of these comparisons, the findings should be attributed to the Byzantine Age or, at most, to the Middle Ages. In-depth archival research on portolan charts and navigation maps, in many cases unpublished and dating from the Middle Ages to the early 18th century, supports the results of our marine investigations from a historical point of view. Full article
Figure 1: The fortress of Le Castella as seen by a drone (from Sail in History – Magna Grecia Cruise, 2019, MAGNA Project brochure; photo by F. Mauri).
Figure 2: The geographical and geological setting of the studied area. (a) A simplified geographical setting, with the studied area indicated by the black box. (b) A simplified geological map of the study area, showing the Sila Massif, mainly composed of Paleozoic intrusive and low- to high-grade metamorphic rocks, and the sedimentary Crotone Basin. The position of the two main left-lateral shear zones affecting this area and limiting the Crotone Basin (the Rossano-San Nicola, RSSZ, and the Petilia-Sosti, PSSZ) is also reported. The red box indicates the study site. (c) The study site with the indication of the Marine Isotope Stage 3 marine terrace (green line), the proximal margin marked by a normal fault, and the inferred buried normal fault (dotted yellow line).
Figure 3: The ARPACAL (Regional Agency for Environmental Protection, Calabria) boat used to carry out the SBP surveys. The arrow indicates the instrument's "pole" arrangement on the side. Some curious dolphins took part in the survey activities (photo by F. Mauri).
Figure 4: The SBP route profiles viewed on an MBES and LiDAR DTM (Digital Terrain Model) by ISPRA, derived from LiDAR scanning on an aerial platform acquired by the PON-MAMPIRA (Monitoring of Marine Protected Areas Affected by Environmental Crimes) project and integrated with ISPRA MBES (Multibeam Echo Sounder) surveys.
Figure 5: Locations of the underwater surveys.
Figure 6: Artificial atoll with the GPS Atlaslink™ GNSS Smart Antenna.
Figure 7: The scuba divers and surface operators while conducting a GPS positioning.
Figure 8: The Autonomous Surface Vehicle (ASV) named DEVSS.
Figure 9: A portion of folio 56v belonging to the Italian Manuscript 2115 (17th century) preserved in the National Library of France, Manuscripts Department: "[Da Squ]illace alla fossa Carpina miglie 15. [Da] Caprina alle Castella miglie 15. Castella è una Città, alla quale lontano un miglio vi sono [d]ue Isole, intorno à quelle non è troppo netto, et il fondo è […]itino, e così alle Stanze della Città, quale è quasi Isolata […] anti di Levante, quanto di Ponenti, il fondo è brutto, et [int]orno la Città, è fondo cattivo pieno de sassi". Source: https://gallica.bnf.fr – https://gallica.bnf.fr/ark:/12148/btv1b55002492h (accessed on 20 October 2024).
Figure 10: Some excerpts from the navigation charts: (A) Giacomo de Giroldi, Chart of the central Mediterranean and the Adriatic Sea, 1425–1450 (British Library, Add. 18665); (B) Petrus Roselli, Chart of the Mediterranean and the Black Sea, 1450–1500 (Bibliothèque Nationale de France, Département Cartes et Plans, CPL GE C-15118; gallica.bnf.fr, accessed on 24 October 2024); (C) Battista Agnese, Chart of the central Mediterranean and Adriatic Sea, c. 1536 (Saxon State and University Library Dresden, Mscr. Dresd. F 140b, PDM 1.0); (D) Vesconte Maggiolo, Chart of the central and western Mediterranean, 1511 (John Carter Brown Library, 3-SIZE Codex Z); (E) Petrus Russus, Chart of the Mediterranean, Black Sea, and western Europe, 1516 (Bibliothèque Nationale de France, Département Cartes et Plans, CPL GE B-1425; gallica.bnf.fr, accessed on 24 October 2024); and (F) Zuan from Naples, Chart of the central and eastern Mediterranean and Black Sea, before 1489 (Cornaro Atlas, Egerton MS 73, British Library).
Figure 11: Piri Re'is' Kitab-i Bahriyye: excerpts depicting the stretch of coast between Capo Colonna and Le Castella. On the left: Bibliothèque Nationale de France, Département des Manuscrits, Supplément turc 956, 244v (gallica.bnf.fr, accessed on 24 October 2024); on the right: Table 212b of ms. Baltimore, Walters Art Museum 658.
Figure 12: Giovanni Battista Cavallini, Chart of the Mediterranean and Black Sea, 1639 (gallica.bnf.fr / Bibliothèque Nationale de France, Département Cartes et Plans, CPL GE DD-2019 RES).
Figure 13: The DTM (Digital Terrain Model) from the MBES survey conducted by CSRM, ARPA Calabria (elaboration by F. Mauri).
Figure 14: The marine sector in front of Le Castella: two examples of SBP profiles.
Figure 15: Le Castella, Sector A. The yellow arrows indicate the location of the submerged stone piles (Google Earth satellite image).
Figure 16: Le Castella, Sector A. Photos of the piles made up of extrabasinal materials along the eastern flank of the islet.
Figure 17: Le Castella, Sector B. The extrabasinal materials scattered on the seabed.
Figure 18: Le Castella, Sector B. The extrabasinal materials scattered on the seabed.
Figure 19: Le Castella, Sector B. The extrabasinal materials scattered on the seabed.
Figure 20: Le Castella, Area B. The figures show the approximate location of the extrabasinal stone piles. The presence of Posidonia oceanica prevents the exact demarcation of the site's boundaries (indicated by a dashed line).
Figure 21: The submerged elevation mapped with MBES and LiDAR surveys by the PON-MAMPIRA (Monitoring of Marine Protected Areas Affected by Environmental Crimes) project, revised by ISPRA, and with MBES by ARPA Calabria. (A) The mobile seafloor composed of Cutro clays; (B) the depression characterised by the exposed base of the clays; (C) the extensive debris plateau; (D) the depression likely related to the paleo-mouth of the Acquavrara torrent; and (E) the gullies belonging to the continental margin.
Figure 22: The sections of the submerged elevation south of Le Castella.
Figure 23: Le Castella, Area A. Comparison among some extrabasinal stones (from left to right, 1 to 4) and the calcarenite found in megablocks (right).
Figure 24: Le Castella. An ortho-photogrammetric reconstruction from 3D optical surveys showing one of the submerged areas featuring the extrabasinal materials within Sector B.
Figure 25: Capo Rizzuto. The localisation of the extrabasinal materials pertaining to a supposedly Byzantine marine facility.
Figure 26: The DTM from MBES with the outcomes of the paleo-geographic simulation of the area with a sea level 9.5 m lower than today, considering both the local uplift and sliding (for details, see the Geological setting). The white isobath indicates a fixed isoline created with Global Mapper and rigidly placed at −9.5 m, while the light beige fill indicates the result of an algorithm that hypothesises the water drainage on a 3D surface; for this last model, the "Simulate Water Level Rise/Flooding" tool of Global Mapper was used. To improve the readability of the plan, closed isolines with a perimeter of less than 300 m have been excluded. The red ellipse roughly outlines the localisation of the extrabasinal materials, as shown in Figure 19. Simulation elaborated by F. Mauri.
Figure 27: Le Castella. An underwater operator carrying out the 3D optical survey shown in Figure 23.
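The Figure 26 caption describes a paleo-geographic reconstruction obtained by placing an isoline at −9.5 m on the DTM and by simulating water levels in Global Mapper. As a rough, hedged illustration of the simpler of the two steps, the sketch below thresholds a synthetic elevation grid at −9.5 m to separate emerged from submerged cells; it is not the Global Mapper workflow, and the grid values are invented.

```python
# Minimal sketch (illustrative only): mark grid cells of a topographic/bathymetric
# DTM as emerged or submerged for an assumed relative sea level of -9.5 m.
import numpy as np

SEA_LEVEL = -9.5  # metres relative to present sea level, as in the Figure 26 caption

def emerged_mask(dtm: np.ndarray, sea_level: float = SEA_LEVEL) -> np.ndarray:
    """True where the terrain would stand above the assumed sea level."""
    return dtm > sea_level

# Synthetic 1 km x 1 km grid at 10 m resolution: a gentle slope with a local high.
x, y = np.meshgrid(np.linspace(0, 1000, 100), np.linspace(0, 1000, 100))
dtm = -15 + 0.004 * x + 6 * np.exp(-((x - 600) ** 2 + (y - 400) ** 2) / 2e4)

mask = emerged_mask(dtm)
print(f"Emerged cells: {mask.sum()} of {mask.size} ({100 * mask.mean():.1f}% of the grid)")
```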