Search Results (330)

Search Parameters: Keywords = time-of-flight camera

21 pages, 27582 KiB  
Article
Multi-Level Spectral Attention Network for Hyperspectral BRDF Reconstruction from Multi-Angle Multi-Spectral Images
by Liyao Song and Haiwei Li
Remote Sens. 2025, 17(5), 863; https://doi.org/10.3390/rs17050863 - 28 Feb 2025
Viewed by 175
Abstract
With the rapid development of hyperspectral applications using unmanned aerial vehicles (UAVs), the traditional assumption that ground objects exhibit Lambertian reflectance is no longer sufficient to meet the high-precision requirements for quantitative inversion and airborne hyperspectral data applications. Therefore, it is necessary to establish a hyperspectral bidirectional reflectance distribution function (BRDF) model suitable for the area of imaging. However, obtaining multi-angle information from UAV push-broom hyperspectral data is difficult. Achieving uniform push-broom imaging and flexibly acquiring multi-angle data is challenging due to spatial distortions, particularly under heightened roll or pitch angles, and the need for multiple flights; this extends acquisition time and exacerbates uneven illumination, introducing errors in BRDF model construction. To address these issues, we propose leveraging the advantages of multi-spectral cameras, such as their compact size, lightweight design, and high signal-to-noise ratio (SNR) to reconstruct hyperspectral multi-angle data. This approach enhances spectral resolution and the number of bands while mitigating spatial distortions and effectively captures the multi-angle characteristics of ground objects. In this study, we collected UAV hyperspectral multi-angle data, corresponding illumination information, and atmospheric parameter data, which can solve the problem of existing BRDF modeling not considering outdoor ambient illumination changes, as this limits modeling accuracy. Based on this dataset, we propose an improved Walthall model, considering illumination variation. Then, the radiance consistency of BRDF multi-angle data is effectively optimized, the error caused by illumination variation in BRDF modeling is reduced, and the accuracy of BRDF modeling is improved. In addition, we adopted Transformer for spectral reconstruction, increased the number of bands on the basis of spectral dimension enhancement, and conducted BRDF modeling based on the spectral reconstruction results. For the multi-level Transformer spectral dimension enhancement algorithm, we added spectral response loss constraints to improve BRDF accuracy. In order to evaluate BRDF modeling and quantitative application potential from the reconstruction results, we conducted comparison and ablation experiments. Finally, we solved the problem of difficulty in obtaining multi-angle information due to the limitation of hyperspectral imaging equipment, and we provide a new solution for obtaining multi-angle features of objects with higher spectral resolution using low-cost imaging equipment.
(This article belongs to the Section Remote Sensing Image Processing)
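The paper's illumination-corrected Walthall variant is not spelled out in the abstract. As a point of reference, the classic Walthall formulation is usually written as ρ(θv, φ) = a·θv² + b·θv·cos φ + c, and a minimal least-squares fit of that baseline form can be sketched as follows; the function name, synthetic data, and coefficient values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def fit_walthall(theta_v, phi, reflectance):
    """Fit the classic Walthall BRDF model
    rho = a * theta_v**2 + b * theta_v * cos(phi) + c
    by ordinary least squares.

    theta_v     : view zenith angles in radians (1D array)
    phi         : sun-sensor relative azimuths in radians (1D array)
    reflectance : observed reflectances for one band (1D array)
    Returns the coefficients (a, b, c).
    """
    # Design matrix: one row per multi-angle observation.
    A = np.column_stack([theta_v**2, theta_v * np.cos(phi), np.ones_like(theta_v)])
    coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return coeffs

# Example with synthetic multi-angle observations (illustrative values only).
rng = np.random.default_rng(0)
theta_v = np.deg2rad(rng.uniform(0, 40, 50))
phi = np.deg2rad(rng.uniform(0, 180, 50))
rho = 0.02 * theta_v**2 - 0.01 * theta_v * np.cos(phi) + 0.3
a, b, c = fit_walthall(theta_v, phi, rho + rng.normal(0, 1e-3, 50))
print(a, b, c)
```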
Figure 1: Multi-level BRDF spectral reconstruction network. (a) Single-level spectral transformer module SST; (b) multi-level spectral reconstruction network.
Figure 2: The structure of each component in the SST module. (a) Spectral Multi-Head Attention Module S-MSA; (b) Dual RsFFN; (c) Spectral Attention Module SAB.
Figure 3: UAV nested multi-rectangular flight routes.
Figure 4: Changes in aerosol and water vapor content on the day of the experiment (left column: the first day; right column: the second day).
Figure 5: Schematic diagram of observation angles at the moment of UAV imaging.
Figure 6: Processing flow of multi-angle data.
Figure 7: Comparison of true color results of BRDF data reconstructed using different methods at different observation zenith angles.
Figure 8: Comparison of mean spectral curves and error curves reconstructed by different BRDF methods at different angles.
Figure 9: Error heat map comparison of the reconstruction results of different reconstruction methods in the 5th and 15th bands.
Figure 10: Comparison of Walthall model data distribution with/without considering the illumination variation.
Figure 11: Hyperspectral BRDF modeling with the illumination-corrected Walthall model.
Figure 12: Multi-angle spectral reconstruction BRDF modeling considering the illumination-corrected Walthall model.
Figure 13: Error analysis comparison between the spectral reconstruction BRDF model and the hyperspectral BRDF model.
37 pages, 7441 KiB  
Review
Practical Guidelines for Performing UAV Mapping Flights with Snapshot Sensors
by Wouter H. Maes
Remote Sens. 2025, 17(4), 606; https://doi.org/10.3390/rs17040606 - 10 Feb 2025
Viewed by 945
Abstract
Uncrewed aerial vehicles (UAVs) have transformed remote sensing, offering unparalleled flexibility and spatial resolution across diverse applications. Many of these applications rely on mapping flights using snapshot imaging sensors for creating 3D models of the area or for generating orthomosaics from RGB, multispectral, hyperspectral, or thermal cameras. Based on a literature review, this paper provides comprehensive guidelines and best practices for executing such mapping flights. It addresses critical aspects of flight preparation and flight execution. Key considerations in flight preparation covered include sensor selection, flight height and GSD, flight speed, overlap settings, flight pattern, direction, and viewing angle; considerations in flight execution include on-site preparations (GCPs, camera settings, sensor calibration, and reference targets) as well as on-site conditions (weather conditions, time of the flights) to take into account. In all these steps, high-resolution and high-quality data acquisition needs to be balanced with feasibility constraints such as flight time, data volume, and post-flight processing time. For reflectance and thermal measurements, BRDF issues also influence the correct setting. The formulated guidelines are based on literature consensus. However, the paper also identifies knowledge gaps for mapping flight settings, particularly in viewing angle pattern, flight direction, and thermal imaging in general. The guidelines aim to advance the harmonization of UAV mapping practices, promoting reproducibility and enhanced data quality across diverse applications.
(This article belongs to the Section Remote Sensing Image Processing)
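The review weighs flight height, GSD, overlap, and flight time against one another. A minimal sketch of the standard GSD relation (GSD = pixel pitch × flight height / focal length) and of the image spacing implied by a target overlap is given below; the camera values in the example are illustrative assumptions, not figures taken from the review.

```python
def ground_sampling_distance(flight_height_m, focal_length_mm, pixel_pitch_um):
    """Standard GSD relation: GSD = pixel pitch * flight height / focal length.
    Returns GSD in centimetres per pixel."""
    pixel_pitch_m = pixel_pitch_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return 100.0 * pixel_pitch_m * flight_height_m / focal_length_m

def footprint_spacing(image_size_px, gsd_cm, overlap_fraction):
    """Distance between consecutive image centres (m) that yields the
    requested forward or side overlap."""
    footprint_m = image_size_px * gsd_cm / 100.0
    return footprint_m * (1.0 - overlap_fraction)

# Illustrative values (not taken from the review):
gsd = ground_sampling_distance(flight_height_m=50, focal_length_mm=8, pixel_pitch_um=3.0)
print(f"GSD = {gsd:.2f} cm/px")
print(f"Trigger every {footprint_spacing(4000, gsd, 0.80):.1f} m along track for 80% overlap")
```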
Graphical abstract
Figure 1: Overview of the UAV mapping process. This review focuses on the areas in bold and green.
Figure 2: Schematic overview of the solar and sensor viewing angles.
Figure 3: BRDF influence on spectral reflectance. (a) Images obtained with a UAV from a meadow from different sensor zenith and azimuth angles (Canon S110 camera, on a Vulcan hexacopter with an AV200 gimbal (PhotoHigher, Wellington, New Zealand), obtained on 28 July 2015 over a meadow near Richmond, NSW, Australia (lat: 33.611°S, lon: 150.732°E)). (b) Empirical BRDF in the green wavelength over a tropical forest (Robson Creek, Queensland, Australia (lat: 17.118°S, lon: 145.630°E)), obtained with the same UAV and camera on 16 August 2015, from [21]. (c–e) Simulations of reflectance in the red (c) and near-infrared (d) spectrum and for NDVI (e) (SCOPE; for a vegetation of 1 m height, LAI of 2, chlorophyll content of 40 μg/cm², and a fixed solar zenith angle of 30°).
Figure 4: General workflow for the flight planning with an indication of the most important considerations in each step.
Figure 5: The effect of ground sampling distance (GSD) on the image quality, in this case for weed detection in a corn field. Image taken on 14/07/2022 in Bottelare, Belgium (lat: 50.959°N, lon: 3.767°E), with a Sony α7R IV camera equipped with an 85 mm lens, flying at 18 m altitude on a DJI M600 Pro UAV. Here, a small section of the orthomosaic, created in Agisoft Metashape, is shown. The original GSD was 0.85 mm, which was downscaled and exported at different GSD using Agisoft Metashape.
Figure 6: (a) The 1 ha field of which the simulation was done. (b) The effect of GSD on the estimated flight time and the number of images required for mapping this area. Here, we calculated the flight time and number of images for a multispectral camera (MicaSense RedEdge-MX Dual). The simulation was performed using the DJI Pilot app, with horizontal and vertical overlap set at 80%, and the maximum flight speed set at 5 m s⁻¹.
Figure 7: Illustration of the terrain following option (b) relative to the standard flight height option (a). The colors in (b) represent the actual altitude of the UAV above sea level, in m. Output (print screen) of DJI Pilot 2, here for a Mavic 3E (RGB) camera with 70% overlap and a GSD of 2.7 cm.
Figure 8: (a) Schematic figure of a standard (parallel) mapping mission over a target area (orange line) with the planned locations for image capture (dots) illustrating the vertical and horizontal overlap. (b) The same area, but now covered with a grid flight pattern (Section 3.5.1).
Figure 9: (a) The number of images collected for mapping a 1 ha area (100 m × 100 m field, see Figure 6) with a MicaSense RedEdge-MX multispectral camera as a function of the vertical and horizontal overlap. Image number was estimated in DJI Pilot. (b) Simulated number of cameras seen per point for the same range of overlap and camera. (c) Simulated coordinates of the cameras (in m) seeing the center point (black +, relative coordinates of (0,0)) for different overlaps (same horizontal and vertical overlap, see color scale) for the same camera, flown at 50 m flight height.
Figure 10: Adjusted overlap (overlap needed to be given as input in the flight app) as a function of flight height and vegetation height, when the target overlap is 80%.
Figure 11: Orthomosaic ((a) full field; (b) detail) of a flight generated with a flight overlap of 80% in horizontal and vertical direction. The yellow lines indicate the area taken from each single image. Notice the constant pattern in the core of the images, whereas the edges typically have larger areas from a single image, increasing the risk of anisotropic effects. Image taken from Agisoft Metashape from a dataset of multispectral imagery (MicaSense RedEdge-MX Dual), acquired on 07/10/2024 over a potato field in Bottelare, Belgium (lat: 50.9612°N, lon: 3.7677°E), at a flight height of 32 m.
Figure 12: Illustration of different viewing angle options available. (a) Standard nadir option; (b) limited number of oblique images from a single direction ("Elevation Optimization"); and (c–f) oblique mapping under four different viewing angles. Output (print screen) of the DJI Pilot 2 app, here for a Zenmuse P1 RGB camera (50 mm lens) on a DJI M350, with 65% horizontal and vertical overlap and a GSD of 0.22 cm.
Figure 13: Schematic overview of the corrections of thermal measurements: atmospheric correction (L_atm, τ) and the additional correction for emissivity (ε) and longwave incoming radiation (L_in, W m⁻²) needed to retrieve surface temperature (T_s, K). (L_sensor = at-sensor radiance, W m⁻²; L_atm = upwelling at-sensor radiance, W m⁻²; τ = atmospheric transmittance (−); σ = Stefan–Boltzmann constant = 5.67 × 10⁻⁸ W m⁻² K⁻⁴.)
Figure 14: Overall summary of flight settings and flight conditions for the different applications. * More for larger or complex terrains.
11 pages, 2174 KiB  
Technical Note
Using Night-Time Drone-Acquired Thermal Imagery to Monitor Flying-Fox Productivity—A Proof of Concept
by Jessica Meade, Eliane D. McCarthy, Samantha H. Yabsley, Sienna C. Grady, John M. Martin and Justin A. Welbergen
Remote Sens. 2025, 17(3), 518; https://doi.org/10.3390/rs17030518 - 3 Feb 2025
Viewed by 659
Abstract
Accurate and precise monitoring of species abundance is essential for determining population trends and responses to environmental change. Species, such as bats, that have slow life histories, characterized by extended lifespans and low reproductive rates, are particularly vulnerable to environmental changes, stochastic events, and human activities. An accurate assessment of productivity can improve parameters for population modelling and provide insights into species’ capacity to recover from population perturbations, yet data on reproductive output are often lacking. Recently, advances in drone technology have allowed for the development of a drone-based thermal remote sensing technique to accurately and precisely count the numbers of flying-foxes (Pteropus spp.) in their tree roosts. Here, we extend that method and use a drone-borne thermal camera flown at night to count the number of flying-fox pups that are left alone in the roost whilst their mothers are out foraging. We show that this is an effective method of estimating flying-fox productivity on a per-colony basis, in a standardized fashion, and at a relatively low cost. When combined with a day-time drone flight used to estimate the number of adults in a colony, this can also provide an estimate of female reproductive performance, which is important for assessments of population health. These estimates can be related to changes in local food availability and weather conditions (including extreme heat events) and enable us to determine, for the first time, the impacts of disturbances from site-specific management actions on flying-fox population trajectories.
Figure 1: Campbelltown flying-fox roost; perimeter of colony is shown in yellow. Diurnal visual colony composition surveys were conducted at 10 points across the roost (numbered white dots).
Figure 2: Orthomosaics recorded on 15 December 2022 with flying-foxes marked as red dots. Symbols indicate whether drone flights were conducted in the day-time or night-time.
22 pages, 25824 KiB  
Article
NoctuDroneNet: Real-Time Semantic Segmentation of Nighttime UAV Imagery in Complex Environments
by Ruokun Qu, Jintao Tan, Yelu Liu, Chenglong Li and Hui Jiang
Drones 2025, 9(2), 97; https://doi.org/10.3390/drones9020097 - 27 Jan 2025
Viewed by 510
Abstract
Nighttime semantic segmentation represents a challenging frontier in computer vision, made particularly difficult by severe low-light conditions, pronounced noise, and complex illumination patterns. These challenges intensify when dealing with Unmanned Aerial Vehicle (UAV) imagery, where varying camera angles and altitudes compound the difficulty. In this paper, we introduce NoctuDroneNet (Nocturnal UAV Drone Network, hereinafter referred to as NoctuDroneNet), a real-time segmentation model tailored specifically for nighttime UAV scenarios. Our approach integrates convolution-based global reasoning with training-only semantic alignment modules to effectively handle diverse and extreme nighttime conditions. We construct a new dataset, NUI-Night, focusing on low-illumination UAV scenes to rigorously evaluate performance under conditions rarely represented in standard benchmarks. Beyond NUI-Night, we assess NoctuDroneNet on the Varied Drone Dataset (VDD), a normal-illumination UAV dataset, demonstrating the model’s robustness and adaptability to varying flight domains despite the lack of large-scale low-light UAV benchmarks. Furthermore, evaluations on the Night-City dataset confirm its scalability and applicability to complex nighttime urban environments. NoctuDroneNet achieves state-of-the-art performance on NUI-Night, surpassing strong real-time baselines in both segmentation accuracy and speed. Qualitative analyses highlight its resilience to under-/over-exposure and small-object detection, underscoring its potential for real-world applications like UAV emergency landings under minimal illumination.
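Segmentation accuracy for models such as NoctuDroneNet is typically reported as mean intersection over union. The sketch below is a generic mIoU computation over integer label maps, not the authors' evaluation code; the toy three-class example mirrors the roof/obstacle/background label set described for NUI-Night.

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=None):
    """Generic mean intersection-over-union for semantic segmentation.
    pred and target are integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        if ignore_index is not None and c == ignore_index:
            continue
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:          # class absent in both maps: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with 0: background, 1: roof, 2: obstacle.
pred   = np.array([[0, 1, 1], [2, 2, 0], [0, 1, 2]])
target = np.array([[0, 1, 1], [2, 0, 0], [0, 1, 1]])
print(mean_iou(pred, target, num_classes=3))
```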
Figure 1: Representative dataset samples. Each row displays the original nighttime aerial image, the corresponding semantic annotation mask (green: roof, red: obstacle, black: background), and a combined visualization that overlays the mask on a grayscale transformation of the original image. These examples illustrate the dataset's complexity in terms of lighting conditions, rooftop structures, and obstacle configurations. We concentrate on roof and obstacle as they are critical to UAV emergency landing decisions, while other objects like cars or trees are grouped into background.
Figure 2: Overview of the NoctuDroneNet architecture. The upper stream is a single-branch CNN pipeline enhanced with Conv-Transformer-Aligned Blocks (CTAB-Blocks) for global context modeling. The lower, training-only branch employs additional alignment modules to distill semantic information, improving nighttime UAV segmentation accuracy.
Figure 3: Detailed structure of the CTAB-Block. By leveraging convolutional operations with learnable kernels and a softmax normalization, the CTAB-Block simulates Transformer-like global reasoning capabilities while maintaining efficiency. Residual connections and feed-forward layers further refine the semantic representations.
Figure 4: Illustration of the alignment modules used during training. The left part shows the process of channel attention and channel-wise distillation (CWD) loss. The right part demonstrates the class-aware alignment and boundary-aware enhancement integrated into a unified context distillation (UCD) module, improving class discrimination and spatial refinement at the decoder stage.
Figure 5: Visualization results on the NUI-Night dataset.
Figure 6: Visualization results on the VDD dataset.
Figure 7: Visualization results on the Night-City dataset.
Figure 8: Visualization of attention distributions in CTAB-Blocks under different ablation settings. The red dashed boxes highlight critical regions (e.g., building rooftops, small obstacles) where the full model's attention is distinctly focused. Omitting ASMA, UCD, or attention results in more scattered or weaker responses, confirming the importance of each module to robust nighttime segmentation.
Figure 9: UAV emergency landing in low-light conditions. (a) UAV flight under low-light conditions with navigation lights. (b) Downward-view image captured by the UAV's onboard camera. (c) UAV utilized for safe landing task execution. (d) Semantic segmentation results for environment perception, where green indicates safe areas and red represents obstacles. (e) Selection of the optimal rooftop landing zone based on semantic segmentation results, with the designated area highlighted.
17 pages, 4505 KiB  
Article
An Application of SEMAR IoT Application Server Platform to Drone-Based Wall Inspection System Using AI Model
by Yohanes Yohanie Fridelin Panduman, Radhiatul Husna, Noprianto, Nobuo Funabiki, Shunya Sakamaki, Sritrusta Sukaridhoto, Yan Watequlis Syaifudin and Alfiandi Aulia Rahmadani
Information 2025, 16(2), 91; https://doi.org/10.3390/info16020091 - 24 Jan 2025
Viewed by 527
Abstract
Recently, artificial intelligence (AI) has been adopted in a number of Internet of Things (IoT) application systems to enhance intelligence. We have developed a ready-made server with rich built-in functions to collect, process, display, analyze, and store data from various IoT devices, the SEMAR (Smart Environmental Monitoring and Analytics in Real-Time) IoT application server platform, in which various AI techniques have been implemented to enhance its capabilities. In this paper, we present an application of SEMAR to a drone-based wall inspection system using an object detection AI model called You Only Look Once (YOLO). This system aims to detect wall cracks at high places using images taken via a camera on a flying drone. An edge computing device is installed to control the drone, sending the taken images through the Kafka system, storing them with the drone flight data, and sending the data to SEMAR. The images are analyzed via YOLO through SEMAR. For evaluations, we implemented the system using Ryze Tello for the drone and Raspberry Pi for the edge, and we evaluated the detection accuracy. The preliminary experiment results confirmed the effectiveness of the proposal.
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
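The abstract describes an edge device that forwards drone images and flight data through Kafka to SEMAR. A minimal sketch of that publishing step using the kafka-python client is shown below; the broker address, topic name, JPEG encoding, and header layout are assumptions for illustration, not details taken from the paper.

```python
import json
import time
import cv2
from kafka import KafkaProducer

# Hypothetical broker address; replace with the actual Kafka bootstrap server.
producer = KafkaProducer(bootstrap_servers="semar-broker:9092")

def publish_frame(frame, flight_data, topic="wall-inspection-frames"):
    """Encode one camera frame as JPEG and send it together with the drone
    flight data (carried here as a message header)."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return
    headers = [("flight_data", json.dumps(flight_data).encode("utf-8"))]
    producer.send(topic, value=jpeg.tobytes(), headers=headers,
                  timestamp_ms=int(time.time() * 1000))

# e.g. publish_frame(camera_frame, {"alt_m": 3.2, "yaw_deg": 90})
producer.flush()
```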
Graphical abstract
Figure 1: Design overview of the SEMAR IoT application server platform.
Figure 2: Design overview of AI techniques in the SEMAR IoT application server platform.
Figure 3: System overview of the drone-based wall inspection system.
Figure 4: Flow of the Real-Time AI function for detecting wall cracks.
Figure 5: Comparison of average total transmission time at different fps.
Figure 6: Average RAM use of the Kafka and RabbitMQ communication protocols.
Figure 7: Comparison of average CPU usage rate for communication protocols.
Figure 8: Box Loss validation results of the generated model.
Figure 9: Photo of the drone capturing an image of wall cracks.
Figure 10: User interface for detection of the SEMAR server.
Figure 11: Computation time of crack detection.
20 pages, 10708 KiB  
Article
Evaluation of 3D Models of Archaeological Remains of Almenara Castle Using Two UAVs with Different Navigation Systems
by Juan López-Herrera, Serafín López-Cuervo, Enrique Pérez-Martín, Miguel Ángel Maté-González, Consuelo Vara Izquierdo, José Martínez Peñarroya and Tomás R. Herrero-Tejedor
Heritage 2025, 8(1), 22; https://doi.org/10.3390/heritage8010022 - 10 Jan 2025
Viewed by 685
Abstract
Improvements in the navigation systems incorporated into unmanned aerial vehicles (UAVs) and new sensors are improving the quality of 3D mapping results. In this study, two flights were compared over the archaeological remains of the castle of Almenara, situated in Cuenca, Spain. We performed one with a DJI Phantom 4 (DJI Innovations Co., Ltd., Shenzhen, China) and the other with a Matrice 300 RTK (DJI Innovations Co., Ltd., Shenzhen, China) and the new Zenmuse P1 camera (45 mp, RGB sensor). With the help of the new software incorporated into the Zenmuse P1 camera gimbal, we could significantly reduce the flight time. We analysed the data obtained with these two UAVs and the built-in RGB sensors, comparing the flight time, the point cloud, and its resolution and obtaining a three-dimensional reconstruction of the castle. We describe the work and the flights carried out, depending on the type of UAV and its RTK positioning system. The improvement in the positioning system provides improvements in flight accuracy and data acquisition. We compared the results obtained in similar studies, and thanks to the advances in UAVs and their sensors with better resolution, we managed to reduce the data collection time and obtained 3D models with the same results as those from other types of sensors. The accuracies obtained with the RTK and the P1 camera are very high. The volumes calculated for a future archaeological excavation are precise, and the 3D models obtained by these means are excellent for the preservation of the cultural asset. These models can have various uses, such as the preservation of an asset of cultural interest, or even its dissemination and analysis in various studies. We propose to use this technology for similar studies of archaeological documentation and the three-dimensional reconstruction and visualisation of cultural heritage in virtual visits on the web.
(This article belongs to the Special Issue 3D Reconstruction of Cultural Heritage and 3D Assets Utilisation)
Figure 1: The castle of Almenara (a), located in Cuenca (Spain) (b) in the municipality of Puebla de Almenara (2°50′31″ W, 39°47′28″ N) (c). View of the municipality of Puebla de Almenara and the castle of Almenara, Cuenca (Spain), in the foothills of Sierra Jarameña (d). WGS84 spatial reference system.
Figure 2: The location of the castle treated in this study with its lights and shadows. Own source image from drone flight using the Phantom 4. Spatial reference system WGS_1984_UTM_Zone_30N.
Figure 3: Flight patterns used (own source): (a) Phantom 4 nadiral flight; (b) Phantom 4 oblique flight; and (c) Matrice 300 RTK–P1 with one nadiral and four independent oblique flights, one for each direction.
Figure 4: Flight patterns used: Matrice 300 RTK–P1 SmartOblique flight Omega angles: blue (135° SE), red (45° NW), green (45° NE), and yellow (135° SW); the flight combined all with a Kappa angle.
Figure 5: Workflow followed in the process of UAV data acquisition (a); processing and 3D model creation (b); and 3D model evaluation (c).
Figure 6: Three-dimensional point cloud model of the castle of Almenara. (a) Northeast, (b) northwest, (c) southeast, and (d) southwest views.
Figure 7: Planimetry of the walled enclosure obtained from the generated orthophotography and the 3D point cloud model of the Almenara castle.
Figure 8: Three-dimensional model of the point cloud of the Almenara castle with recreation of virtual walls.
Figure 9: A comparative plot of accuracies obtained between both point clouds (P1 vs. Phantom 4) and castle control points at different altitudes, with an R² of 0.9.
Figure 10: Profile and errors obtained at different altitudes with the quality control performed with Total Stations and GNSS RTK.
Figure 11: Images (a,c) correspond to the Phantom 4 point cloud and (b,d) correspond to the Matrice 300 RTK and P1.
Figure 12: Images (a,c) correspond to the Phantom 4 point cloud and (b,d) correspond to the Matrice 300 RTK and P1.
Figure 13: Images (a,c) correspond to the Phantom 4 point cloud and (b,d) correspond to the Matrice 300 RTK and P1.
Figure 14: Images (a,c) correspond to the Phantom 4 point cloud and (b,d) correspond to the Matrice 300 RTK and P1.
Figure 15: A digital model of surfaces in the area near the castle and generation of the TIN model for the estimate of volume (in grey). The perimeter of the study is defined as the base area.
Figure 16: A DSM in the area near the collapsed wall and cross-sectional profile of the terrain.
24 pages, 9850 KiB  
Article
RTAPM: A Robust Top-View Absolute Positioning Method with Visual–Inertial Assisted Joint Optimization
by Pengfei Tong, Xuerong Yang, Xuanzhi Peng and Longfei Wang
Drones 2025, 9(1), 37; https://doi.org/10.3390/drones9010037 - 7 Jan 2025
Viewed by 570
Abstract
In challenging environments such as disaster aid or forest rescue, unmanned aerial vehicles (UAVs) have been hampered by inconsistent or even denied global navigation satellite system (GNSS) signals, resulting in UAVs becoming incapable of operating normally. Currently, there is no unmanned aerial vehicle (UAV) positioning method that is capable of substituting or temporarily replacing GNSS positioning. This study proposes a reliable UAV top-down absolute positioning method (RTAPM) based on a monocular RGB camera that employs joint optimization and visual–inertial assistance. The proposed method employs a bird’s-eye view monocular RGB camera to estimate the UAV’s moving position. By comparing real-time aerial images with pre-existing satellite images of the flight area, utilizing components such as template geo-registration, UAV motion constraints, point–line image matching, and joint state estimation, a method is provided to substitute satellites and obtain short-term absolute positioning information of UAVs in challenging and dynamic environments. Based on two open-source datasets and real-time flight experimental tests, the method proposed in this study has significant advantages in positioning accuracy and system robustness over existing typical UAV absolute positioning methods, and it can temporarily replace GNSS for application in challenging environments such as disaster aid or forest rescue.
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)
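The template geo-registration component compares real-time aerial frames with pre-existing satellite imagery. The sketch below illustrates the general idea with OpenCV template matching and a pixel-to-map conversion; it is a simplified stand-in under assumed inputs (pre-scaled grayscale images, a north-up tile with known origin and GSD), not the RTAPM implementation.

```python
import cv2

def geo_register(aerial_gray, sat_gray, sat_origin_xy, sat_gsd_m):
    """Illustrative template geo-registration: locate a downward-looking UAV
    frame inside a geo-referenced satellite tile and return the approximate
    map coordinates of the frame centre.

    aerial_gray   : grayscale UAV frame, already rotated/scaled to the tile GSD
    sat_gray      : grayscale satellite tile (larger than the frame)
    sat_origin_xy : map coordinates (x, y) of the tile's top-left pixel
    sat_gsd_m     : metres per pixel of the satellite tile
    """
    result = cv2.matchTemplate(sat_gray, aerial_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best match at the maximum
    h, w = aerial_gray.shape
    centre_px = (top_left[0] + w / 2.0, top_left[1] + h / 2.0)
    x = sat_origin_xy[0] + centre_px[0] * sat_gsd_m
    y = sat_origin_xy[1] - centre_px[1] * sat_gsd_m  # image rows grow southwards
    return (x, y), score
```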
Figure 1: Schematic diagram of the absolute visual positioning and navigation of a UAV based on prior satellite images [2].
Figure 2: The framework of RTAPM. The green arrows represent the data flow of visual–inertial odometry; the red arrows represent map matching; and the blue arrows represent pose evaluation. The black arrows represent the common data flow for visual–inertial odometry and map matching.
Figure 3: Coordinate transformation in the template geographic registration.
Figure 4: The diagram illustrates the solution to the motion constraints. The purple and blue dashed boxes represent the potential positions of the subsequent frame of the image based on a single-direction evaluation. The red solid line represents the potential position based on a combined evaluation of the two directions, while the green solid line represents the anticipated actual position.
Figure 5: A schematic comparison of the straight line elements (nine straight line segments listed in different colors) in two adjacent aerial images of a landscape of cities. (a,b) are two adjacent aerial frames, and 1–9 represent the same straight line segment features in different images.
Figure 6: Schematic of anchor-based geographic image point–line matching. Frame I_f and Frame I_t are adjacent frames. The three straight lines indicate line features, while the red dots represent anchor points on the line segments. Employing the optical flow procedure, the anchor points are utilized as feature points to move on frame I_t in an attempt to find the corresponding anchor points to retrieve the line features.
Figure 7: Framework of the RTAPM experimental system.
Figure 8: UAV experimental platform for RTAPM.
Figure 9: UAV experimental platform Mavros communication system structure.
Figure 10: Relationship between image parameters and computing resources. (a) Relationship between image sampling frequency and computing time; (b) relationship between image sampling frequency and positioning error.
Figure 11: Comparing the number of point and line features extracted every frame, as well as the running time spent to extract features employing different approaches on various datasets. (a) The number of point and line features extracted from each frame; (b) the running time of feature extraction.
Figure 12: Comparison of the positioning trajectories of three algorithms on Dataset 1. The true trajectory is represented as a yellow circle, VIO's trajectory as a red solid line, Geo-ref. + VIO's trajectory as a green solid line, and RTAPM's trajectory as a blue solid line.
Figure 13: (a) Horizontal position estimation error in the X and Y directions (top and bottom); (b) cumulative distribution of positioning errors in Dataset 1.
Figure 14: Comparison of the positioning trajectories of three algorithms on Dataset 2. The true trajectory is represented as a yellow circle, VIO's trajectory as a red solid line, Geo-ref. + VIO's trajectory as a green solid line, and RTAPM's trajectory as a blue solid line.
Figure 15: (a) Horizontal position estimation error in the X and Y directions (top and bottom); (b) cumulative distribution of positioning errors in Dataset 2.
Figure 16: Comparison of the positioning trajectories of three algorithms on Dataset 3*. The true trajectory is represented as a yellow circle, VIO's trajectory as a red solid line, Geo-ref. + VIO's trajectory as a green solid line, and RTAPM's trajectory as a blue solid line.
Figure 17: (a) Horizontal position estimation error in the X and Y directions (top and bottom); (b) cumulative distribution of positioning errors in Dataset 3 (real-world flight experiment).
Figure 18: Box plots are employed to compare the three algorithms' positioning error distributions within the three datasets. The box plot's small black circles indicate error values that are either too small or too large and are not within the range of the mean. (a–c) show the statistical results for positioning errors on Datasets 1, 2, and 3, respectively. Dataset 3* is composed of images, IMU, and GNSS data collected by the UAV experimental platform for RTAPM in real-world flight tests.
Figure 19: Comparison of the three algorithms' positioning errors on three datasets (maximum error, mean error, and root mean square error). Dataset 3* is composed of images, IMU, and GNSS data collected by the UAV experimental platform for RTAPM in real-world flight tests.
Figure 20: Comparison of the three approaches' 3-sigma distributions of positioning errors among three datasets. Dataset 3* is composed of images, IMU, and GNSS data collected by the UAV experimental platform for RTAPM in real-world flight tests.
17 pages, 6147 KiB  
Article
A Fire Detection Method for Aircraft Cargo Compartments Utilizing Radio Frequency Identification Technology and an Improved YOLO Model
by Kai Wang, Wei Zhang and Xiaosong Song
Electronics 2025, 14(1), 106; https://doi.org/10.3390/electronics14010106 - 30 Dec 2024
Viewed by 530
Abstract
During flight, aircraft cargo compartments are in a confined state. If a fire occurs, it will seriously affect flight safety. Therefore, fire detection systems must issue alarms within seconds of a fire breaking out, necessitating high real-time performance for aviation fire detection systems. In addressing the issue of fire target detection, the YOLO series models demonstrate superior performance in striking a balance between computational efficiency and recognition accuracy when compared with alternative models. Consequently, this paper opts to optimize the YOLO model. An enhanced version of the FDY-YOLO object detection algorithm is introduced in this paper for the purpose of instantaneous fire detection. Firstly, the FaB-C3 module, modified based on the FasterNet backbone network, replaces the C3 component in the YOLOv5 framework, significantly decreasing the computational burden of the algorithm. Secondly, the DySample module is used to replace the upsampling module and optimize the model’s ability to extract the features of small-scale flames or smoke in the early stages of a fire. We introduce RFID technology to manage the cameras that are capturing images. Finally, the model’s loss function is changed to the MPDIoU loss function, improving the model’s localization accuracy. Based on our self-constructed dataset, compared with the YOLOv5 model, FDY-YOLO achieves a 0.8% increase in mean average precision (mAP) while reducing the computational load by 40%.
(This article belongs to the Special Issue RFID Applied to IoT Devices)
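The loss-function change mentioned above replaces the default box-regression loss with MPDIoU. The sketch below implements an MPDIoU-style loss as it is commonly formulated in the literature, namely IoU penalised by the normalised squared distances between matching box corners; whether the paper uses exactly this variant and normalisation is an assumption, not something stated in the abstract.

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """MPDIoU-style bounding-box loss, L = 1 - MPDIoU (common formulation,
    assumed here). Boxes are (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # Intersection-over-union
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared distances between top-left and bottom-right corners,
    # normalised by the squared image diagonal.
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    diag2 = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / diag2 - d2 / diag2
    return (1.0 - mpdiou).mean()
```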
Figure 1: FDY-YOLOv5 structure diagram.
Figure 2: Comparison diagram of C3 module and FaB-C3 module structures.
Figure 3: Diagram of the calculation process for nearest-neighbor upsampling.
Figure 4: Diagram of the DySample dynamic upsampler structure.
Figure 5: MPDIoU loss function considering the coordinates of the top-left and bottom-right points of the bounding box.
Figure 6: Image acquisition system based on RFID technology.
Figure 7: Example of the dataset image.
Figure 8: Loss function convergence plot.
Figure 9: Comparison of heatmaps before and after model improvement.
Figure 10: Comparison of detection effects. (a) Only the original YOLOv5, (b) YOLOv5 + FaB-C3, (c) YOLOv5 + FaB-C3 + DySample, and (d) FDY-YOLO.
Figure 11: Example of a D-Fire dataset image.
Figure 12: Comparison of detection effects. (a) YOLOv5s, (b) YOLOv8s, (c) FDY-YOLO, and (d) Faster R-CNN.
Figure 13: Diagram of the calculation process for nearest-neighbor upsampling.
16 pages, 7234 KiB  
Article
Key Parameters for Performance and Resilience Modeling of 3D Time-of-Flight Cameras Under Consideration of Signal-to-Noise Ratio and Phase Noise Wiggling
by Niklas Alexander Köhler, Marcel Geis, Claudius Nöh, Alexandra Mielke, Volker Groß, Robert Lange, Keywan Sohrabi and Jochen Frey
Sensors 2025, 25(1), 109; https://doi.org/10.3390/s25010109 - 27 Dec 2024
Viewed by 571
Abstract
Because of their resilience, Time-of-Flight (ToF) cameras are now essential components in scientific and industrial settings. This paper outlines the essential factors for modeling 3D ToF cameras, with specific emphasis on analyzing the phenomenon known as “wiggling”. Through our investigation, we demonstrate that wiggling not only causes systematic errors in distance measurements, but also introduces periodic fluctuations in statistical measurement uncertainty, which compounds the dependence on the signal-to-noise ratio (SNR). Armed with this knowledge, we developed a new 3D camera model, which we then made computationally tractable. To illustrate and evaluate the model, we compared measurement data with simulated data of the same scene. This allowed us to individually demonstrate various effects on the signal-to-noise ratio, reflectivity, and distance.
(This article belongs to the Special Issue Computational Optical Sensing and Imaging)
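The wiggling analysis builds on the standard 4-phase (four-bucket) continuous-wave ToF demodulation, in which phase is recovered from four taps sampled a quarter period apart and converted to distance via the modulation frequency. A minimal sketch of that demodulation, with Poisson shot noise added per tap in the spirit of the paper's simulations, is given below; the tap levels, modulation frequency, and sign convention are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def four_phase_demodulation(a0, a1, a2, a3, f_mod_hz):
    """Standard 4-phase CW-ToF demodulation: taps a0..a3 are sampled at
    0, 90, 180, and 270 degrees of the modulation period.
    Returns (distance_m, amplitude, offset); sign conventions vary by sensor."""
    phase = np.arctan2(a3 - a1, a0 - a2)        # wrapped to (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)            # map to [0, 2*pi)
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    offset = 0.25 * (a0 + a1 + a2 + a3)
    distance = C * phase / (4 * np.pi * f_mod_hz)
    return distance, amplitude, offset

# Example: emulate shot noise by drawing Poisson-distributed electron counts
# per tap, then look at the resulting mean distance and distance noise.
rng = np.random.default_rng(1)
taps = [rng.poisson(lam, size=10_000).astype(float) for lam in (2000, 1400, 2000, 2600)]
d, amp, off = four_phase_demodulation(*taps, f_mod_hz=30e6)
print(d.mean(), d.std())
```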
Figure 1: Proposed 3D camera model. The data from the modeled scene (box at the top) are processed step by step with the experimentally determined camera parameters (boxes on the right side). Finally, a complete dataset per pixel (distance error, intensity, and distance) is obtained.
Figure 2: Experimental set-up. Left: Schematic layout of the measuring environment. Right: Photo of the actual measurement.
Figure 3: Reflectivity of the target wall (red dots) and the curtains (blue dots) measured at an angle of 90°.
Figure 4: Histograms of correlation coefficients between adjacent pixels. Histogram (a): filters enabled and low light condition; histogram (b): filters disabled and low light condition; histogram (c): filters enabled and high light condition; histogram (d): filters disabled and high light condition. For characterization and modeling, it is crucial that the pixel signals are independent of each other, meaning no correlation is present. Without filters, the flexx2 fulfills this condition. All following measurements in this paper are made without filtering.
Figure 5: Analysis of 100 measurements on a flat target with constant reflectivity. Top: Mean distance value. Apart from the fixed-pattern noise, no further anomalies are evident. Bottom: Mean amplitude value. The distribution arises from the illumination characteristics and the detector optics in conjunction with the sensor array. This characteristic pattern can be normalized and used for the camera model.
Figure 6: Intensity profiles (diagrams (a,c)) at various distances. Normalized intensities (diagrams (b,d)) exhibit significant overlap. As a result, a normalized intensity can be used for all distances.
Figure 7: Normalized intensity of the sensor array. An intensity value of 1 is attributed to the brightest region on the sensor. This region is not necessarily required to be at the center of the array.
Figure 8: Intensity versus integration time of the center pixel. Intensity varies linearly with exposure time. This is valid until saturation is reached.
Figure 9: Intensity versus distance of the center pixel. Intensity varies as expected with distance. This is valid until saturation is reached.
Figure 10: Statistical error (distance noise) versus distance and intensity. The error decreases with increasing intensity and shows a strong dependency on distance (phase noise wiggling). As the signal intensity decreases with distance, the dataset is incomplete for low intensities at short distances. The same applies to long distances. Due to technical limitations, the shortest distance measured is 0.37 m.
Figure 11: Simulation of native phase measurement of a ToF measurement using the 4-phase approach ((a): idealized result without simulation of noise sources; (b): result with added shot noise for each tap). Simulation conditions: ideal demodulation, 25% duty cycle of received optical signal, pixel full-well capacity: 100,000 electrons, signal amplitude: 1% of the full-well, signal offset: 2% of the full-well.
Figure 12: Absolute phase wiggling: Simulation of absolute error of native phase measurement of a ToF measurement using the 4-phase approach ((a): idealized result without simulation of noise sources; (b): result with added shot noise for each tap). Simulation conditions: cf. Figure 11.
Figure 13: Phase noise wiggling: Simulation of expected standard deviation of ToF measurement using the 4-phase approach, with simulation conditions as of Figure 11 and Figure 12. The results represent the standard deviation of the simulation of Figure 12b, repeated 500 times (a) and 5000 times (b).
Figure 14: Phase noise wiggling: Simulation of standard deviation of ToF phase measurement. Same as Figure 13b but with higher signal amplitude: 2.5% of the full-well. Note that with this higher signal amplitude, the total noise level decreases, as expected.
Figure 15: Phase noise wiggling: Variation in standard deviation dR with phase delay of received signal for constant signal amplitudes. Comparison of measurement (a) and simulation (b). Note: logarithmic scale. (Simulation parameter: signal amplitude = 1% of the full-well); shortest measured distance: 0.37 m.
Figure 16: Phase noise wiggling: Variation in standard deviation dR with phase delay of received signal for constant signal amplitudes. Comparison of measurement (a) and simulation (b). Note: logarithmic scale. (Simulation parameter: signal amplitude = 2.5% of the full-well); shortest measured distance: 0.37 m.
Figure 17: Measurement setting for evaluating the simulation data. The measurement setting consists of a flexx2 camera mounted on a tripod and positioned laterally to the surface to be measured.
Figure 18: Results of the simulated (top) and measured (bottom) distance, amplitude, and noise for the experimental setting from Figure 17.
Figure 19: Programmatic model based on the mathematical operations and the theoretical approach (cf. Figure 1), displaying results for noise, intensity, reflectivity, and distance.
16 pages, 6553 KiB  
Article
IR Pulsed Laser Ablation of Carbon Materials in High Vacuum
by Lorenzo Torrisi, Alfio Torrisi and Mariapompea Cutroneo
Appl. Sci. 2024, 14(24), 11744; https://doi.org/10.3390/app142411744 - 16 Dec 2024
Cited by 1 | Viewed by 601
Abstract
This work aimed to understand how the energy released by short laser pulses can produce different effects in carbon targets with different allotropic states. The IR pulse laser ablation, operating at 1064 nm wavelength, 3 ns pulse duration, and 100 mJ pulse energy, has been used to irradiate different types of carbon targets in a high vacuum. Graphite, highly oriented pyrolytic graphite, glassy carbon, active carbon, and vegetable carbon have exhibited different mass densities and have been laser irradiated. Time-of-flight (TOF) measurements have made it possible to determine the maximum carbon ion acceleration in the generated plasma (of about 200 eV per charge state) and the maximum yield emission (96 μg/pulse in the case of vegetal carbon) along the direction normal to the irradiated surface. The ion energy analyzer measured the carbon charge states (four) and their energy distributions. Further plasma investigations have been performed using a fast CCD camera image and surface profiles of the generated craters to calculate the angular emission and the ablation yield for each type of target. The effects as a function of the target carbon density and binding energy have been highlighted. Possible applications for the generation of thin films and carbon nanoparticles are discussed.
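The ion energies quoted above come from time-of-flight spectra, where the kinetic energy follows from E = ½·m·(L/t)². A quick sketch of that conversion is shown below; the 1 m drift length and arrival time are illustrative values chosen to land near the roughly 200 eV per charge state reported in the abstract, not the experiment's actual geometry.

```python
AMU = 1.66053906660e-27     # kg per atomic mass unit
EV = 1.602176634e-19        # J per electronvolt

def ion_energy_eV(flight_length_m, tof_s, mass_amu=12.011):
    """Kinetic energy of an ion from its time of flight over a field-free drift:
    E = 1/2 * m * (L / t)**2, returned in electronvolts."""
    velocity = flight_length_m / tof_s
    return 0.5 * mass_amu * AMU * velocity**2 / EV

# Example: a carbon ion arriving 17.6 microseconds after the laser pulse over
# an assumed 1.0 m drift length corresponds to roughly 200 eV.
print(f"{ion_energy_eV(1.0, 17.6e-6):.0f} eV")
```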
Figure 1: Sketch of the experimental setup.
Figure 2: ICR-TOF spectra relative to the six laser-irradiated targets: HOPG (a), graphite (b), pencil graphite (c), glassy carbon (d), active carbon (e), and vegetable carbon (f).
Figure 3: IEA-TOF spectra relative to the graphite target analyzed at different E/z filter ratios: 100 eV/z (a), 200 eV/z (b), 400 eV/z (c), 600 eV/z (d), 800 eV/z (e), and 1000 eV/z (f).
Figure 4: IEA measurements of C ion energy and charge state distributions.
Figure 5: CCD image of the carbon plasma produced in 1 μs from the laser ablation of HOPG (a), graphite (b), pencil graphite (c), glassy carbon (d), active carbon (e), and vegetal graphite (f).
Figure 6: IC-TOF spectra acquired using laser ablation at a 45° incidence angle on the graphite target and at different angles of detection around the normal direction: 0° (a), 22° (b), 45° (c), and 60° (d).
Figure 7: Carbon ion angular emission distribution in graphite, for yield (a), maximum energy (b), photon intensity (c), and comparison between yield emission from graphite and vegetable carbon target (d).
Figure 8: Crater depth profiles of the six targets irradiated in the same experimental conditions: HOPG (a), graphite (b), pencil graphite (c), glassy carbon (d), active carbon (e), and vegetable carbon (f).
12 pages, 2922 KiB  
Article
Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)
by Connor C. Mullins, Travis J. Esau, Qamar U. Zaman, Ahmad A. Al-Mallahi and Aitazaz A. Farooque
J. Imaging 2024, 10(12), 324; https://doi.org/10.3390/jimaging10120324 - 15 Dec 2024
Viewed by 1013
Abstract
This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red–green–blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthew’s correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey’s HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
(This article belongs to the Section Image and Video Processing)
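The study renders the Z axis of each point cloud as a jet-colormapped depth image with the background forced to blue. A minimal sketch of that conversion for a dense depth map is given below; the depth range, image size, and background handling are assumptions for illustration rather than the authors' preprocessing pipeline.

```python
import numpy as np
from matplotlib import cm

def depth_to_jet(depth_mm, z_min, z_max, background_value=0.0):
    """Convert a dense depth map (mm) to a jet-coloured RGB image, with
    invalid/background pixels mapped to the low (blue) end of the colormap."""
    valid = depth_mm > background_value
    norm = np.zeros_like(depth_mm, dtype=float)
    norm[valid] = np.clip((depth_mm[valid] - z_min) / (z_max - z_min), 0.0, 1.0)
    rgb = cm.jet(norm)[..., :3]          # drop the alpha channel
    rgb[~valid] = cm.jet(0.0)[:3]        # background -> blue
    return (rgb * 255).astype(np.uint8)

# Example: a synthetic 480x640 depth frame between 300 mm and 900 mm.
depth = np.random.default_rng(2).uniform(300, 900, size=(480, 640))
img = depth_to_jet(depth, z_min=300, z_max=900)
print(img.shape, img.dtype)
```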
Show Figures
Figure 1: Example of wild blueberries (Vaccinium angustifolium Ait.) at time of harvest, illustrating the irregular clustering.
Figure 2: Visual demonstration of the conversion from point cloud to depth map using the jet colormap as the Z-axis representation in mm, where the background color of the depth map was set to blue.
Figure 3: Dual camera mount setup for data collection with the Basler Blaze-101 (67° by 51° in the X and Y axes, respectively) and the Lucid Vision Labs Triton (60° by 46° in the X and Y axes, respectively).
Figure 4: Visualization of segmentation mask correctness of YOLO masks for the ToF 3D camera and the 2D RGB camera, with true positives in green, true negatives in blue, false positives in red, and false negatives in orange.
Figure 5: Visualization of segmentation mask correctness of Detectron2 masks for the ToF 3D and 2D RGB cameras, with true positives in green, true negatives in blue, false positives in red, and false negatives in orange.
Figure 6: Sample confusion matrices of Detectron2 R50 with FPN and YOLOv8n-seg on the test split of the depth image dataset.
Figure 7: Sample confusion matrices of Detectron2 R50 with FPN and YOLOv8n-seg on the test split of the RGB image dataset.
Full article
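">
For readers who want to reproduce the kind of per-mask evaluation reported above (intersection over union and Matthews correlation coefficient against a ground-truth mask), a minimal sketch using the ultralytics YOLOv8 segmentation API might look like the following. The weight file, image path, and label file are placeholders, and the mask handling is an assumption rather than the authors' released code.

```python
import numpy as np
from ultralytics import YOLO  # assumes the ultralytics package is installed

def binary_metrics(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """IoU and Matthews correlation coefficient for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = ((tp * tn - fp * fn) / denom) if denom else 0.0
    return float(iou), float(mcc)

# Hypothetical segmentation weights and inputs, not the study's artifacts
model = YOLO("yolov8n-seg.pt")
result = model("depth_jet.png")[0]  # single-image inference
if result.masks is not None:
    pred_mask = (result.masks.data > 0.5).any(dim=0).cpu().numpy()  # union of instance masks
    truth_mask = np.load("ground_truth_mask.npy")                   # assumed label file
    print(binary_metrics(pred_mask, truth_mask))
```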
25 pages, 44855 KiB  
Article
Burned Olive Trees Identification with a Deep Learning Approach in Unmanned Aerial Vehicle Images
by Christos Vasilakos and Vassilios S. Verykios
Remote Sens. 2024, 16(23), 4531; https://doi.org/10.3390/rs16234531 - 3 Dec 2024
Viewed by 789
Abstract
Olive tree orchards are suffering from wildfires in many Mediterranean countries. Following a wildfire event, identifying damaged olive trees is crucial for developing effective management and restoration strategies, while rapid damage assessment can support potential compensation for producers. Moreover, the implementation of real-time [...] Read more.
Olive tree orchards are suffering from wildfires in many Mediterranean countries. Following a wildfire event, identifying damaged olive trees is crucial for developing effective management and restoration strategies, while rapid damage assessment can support potential compensation for producers. Moreover, the implementation of real-time health monitoring in olive groves allows producers to carry out targeted interventions, reducing production losses and preserving crop health. This research examines the use of deep learning methodologies on true-color images from Unmanned Aerial Vehicles (UAV) to detect damaged trees, including withering and desiccation of branches and leaf scorching. More specifically, the object detection and image classification computer vision techniques are applied and compared. In the object detection approach, the algorithm aims to localize and identify burned/dry and unburned/healthy olive trees, while in the image classification approach, the classifier categorizes an image showing a tree as burned/dry or unburned/healthy. Training data included true-color UAV images of olive trees damaged by fire, obtained with multiple cameras at multiple flight heights and therefore at various resolutions. For object detection, a Residual Neural Network was used as the backbone of a Single-Shot Detector. In the image classification application, two approaches were evaluated: in the first, a new shallow network was developed, while in the second, transfer learning from pre-trained networks was applied. According to the results, the object detection approach identified healthy trees with an average accuracy of 74%, while for trees with drying, the average accuracy was 69%. However, the optimal network identified olive trees (healthy or unhealthy) that the user did not detect during data collection. In the image classification approach, the application of convolutional neural networks achieved significantly better results, with an F1-score above 0.94, both when training the new network and when applying transfer learning. In conclusion, the use of computer vision techniques on UAV images identified damaged olive trees, and the image classification approach performed significantly better than object detection. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
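The transfer-learning route described in the abstract (a pre-trained backbone with a new two-class head for burned/dry vs. unburned/healthy) can be sketched with torchvision as below. The backbone choice, frozen layers, input size, and learning rate are illustrative assumptions, not the configuration reported in the paper, and a recent torchvision (≥0.13) is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pre-trained ImageNet backbone; only the new classification head is trained here.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # burned/dry vs. unburned/healthy

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # assumed input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of UAV tree crops (placeholder data loader)."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Fine-tuning, as shown in the paper's Figure 6 workflow, would amount to unfreezing some of the deeper backbone layers after the head has converged.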
Show Figures
Figure 1: Burn severity map of the study area.
Figure 2: Aerial image of unburned, partially burned, and fully burned olive trees.
Figure 3: Masking and labeling of unburned (healthy) and burned (dry) trees used as training data in the object detection approach.
Figure 4: Flowchart of the methodology.
Figure 5: Architecture of the shallow CNN developed for image classification of burned olive trees.
Figure 6: Schematic workflow of transfer learning and fine-tuning.
Figure 7: Average precision and Log Average Miss Rate on the testing dataset for three anchors in the SSD model.
Figure 8: Average precision and Log Average Miss Rate on the testing dataset for four anchors in the SSD model.
Figure 9: Average precision and Log Average Miss Rate on the testing dataset for five anchors in the SSD model.
Figure 10: Average precision and Log Average Miss Rate on the testing dataset for six anchors in the SSD model.
Figure 11: Average precision and Log Average Miss Rate on the testing dataset for seven anchors in the SSD model.
Figure 12: Ground truth data (left) and object detection (right).
Figure 13: Ground truth data (left) and object detection (right).
Figure 14: Ground truth data (left) and object detection (right).
Figure 15: Ground truth data (left) and object detection (right).
Figure 16: Confusion matrices of the testing dataset for the seven trained models in the image classification approach.
Figure 17: Actual and predicted classes, with the corresponding scores, for a subset of images from the testing dataset.
Full article
11 pages, 16191 KiB  
Proceeding Paper
Lens Distortion Measurement and Correction for Stereovision Multi-Camera System
by Grzegorz Madejski, Sebastian Zbytniewski, Mateusz Kurowski, Dawid Gradolewski, Włodzimierz Kaoka and Wlodek J. Kulesza
Eng. Proc. 2024, 82(1), 85; https://doi.org/10.3390/ecsa-11-20457 - 26 Nov 2024
Viewed by 459
Abstract
In modern autonomous systems, measurement repeatability and precision are crucial for robust decision-making algorithms. Stereovision, which is widely used in safety applications, provides information about an object’s shape, orientation, and 3D localisation. The camera’s lens distortion is a common source of systematic measurement [...] Read more.
In modern autonomous systems, measurement repeatability and precision are crucial for robust decision-making algorithms. Stereovision, which is widely used in safety applications, provides information about an object’s shape, orientation, and 3D localisation. A camera’s lens distortion is a common source of systematic measurement errors, which can be estimated and then eliminated, or at least reduced, using a suitable correction/calibration method. In this study, a set of cameras equipped with Basler lenses (C125-0618-5M F1.8 f6mm) and Sony IMX477R matrices is calibrated using the state-of-the-art Zhang–Duda–Frese method. The resulting distortion coefficients are used to correct the images. The calibrations are evaluated with the aid of two novel methods for lens distortion measurement. The first is based on linear regression with images of vertical and horizontal line patterns. Based on the evaluation tests, outlying cameras are eliminated from the test set by applying the 2σ criterion. For the remaining cameras, the MSE was reduced by up to 75.4 times, to 1.8–6.9 px. The second method is designed to evaluate the impact of lens distortion on stereovision applied to bird tracking around wind farms. A bird’s flight trajectory is synthetically generated to estimate changes in disparity and distance before and after calibration. The method shows that, at the margins of the image, lens distortion might introduce distance measurement errors of +17% to +20% for cameras with the same distortion and from −41% up to + for camera pairs with different lens distortions. These results highlight the importance of having well-calibrated cameras in systems that require precision, such as stereovision bird tracking in bird–turbine collision risk assessment systems. Full article
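The paper applies the Zhang–Duda–Frese calibration; as a rough analogue, the snippet below shows the generic OpenCV checkerboard calibration and undistortion workflow that the evaluation images would then be corrected with. The board geometry, image folder, and file names are placeholders, and this is a sketch of the standard procedure, not the authors' implementation.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)  # inner corners of an assumed checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("camera_1751/*.png"):  # placeholder calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Intrinsics K and distortion coefficients estimated from the detected corners
ret, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)

# Undistort one of the line-pattern evaluation images
img = cv2.imread("horizontal_lines.png")
cv2.imwrite("horizontal_lines_corrected.png", cv2.undistort(img, K, dist))
```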
Show Figures
Figure 1: Photographs used to calibrate camera 1751, with the background blurred for anonymity. (a) Image with the checkerboard parallel to the camera scene. (b) Image with a yaw transformation applied to the checkerboard. (c) Image with a pitch transformation applied to the checkerboard.
Figure 2: Process of lens distortion correction (undistorting). The original images are stretched, especially at the corners, which produces the empty blackened areas in the corrected images. (a) Horizontal lines, original image. (b) Horizontal lines, corrected image. (c) Vertical lines, original image. (d) Vertical lines, corrected image.
Figure 3: Comparison of heatmaps of pixel displacement for the camera calibrations. The distortion level is the length of the vector by which the original pixel must be moved to obtain an undistorted image. (a) Heatmap for the calibration of camera 159. (b) Heatmap for the calibration of camera 183. (c) Heatmap of the differences between heatmaps (a) and (b).
Figure 4: The upper part of the horizontal line images for camera 183, distorted on the left and corrected on the right. Linear regression is applied to the photographed lines (black) by sampling the points shown in red; the resulting regression lines are shown in blue.
Figure 5: Comparison of synthetic bird tracks before (blue) and after (red) distortion. (a) Undistorted and distorted Track 1 for camera 159. (b) Undistorted and distorted Track 2 for camera 159. (c) Undistorted and distorted Track 1 for top camera 183 and bottom camera 2186. (d) Undistorted and distorted Track 1 for top camera 2186 and bottom camera 183.
Full article
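">
The reported distance errors can be understood through the standard stereo relation Z = f·B/d: an uncorrected pixel shift at the image margin biases the disparity d and therefore the estimated distance. The small worked sketch below illustrates this propagation with a focal length, baseline, and disparity bias that are assumptions for demonstration, not the parameters of the system described above.

```python
def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance from disparity for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed parameters (not the paper's system): f = 1500 px, B = 1.0 m
f_px, baseline = 1500.0, 1.0
true_disparity = 10.0                    # px, a distant bird
biased_disparity = true_disparity - 1.7  # px, distortion-induced shift near the margin

z_true = stereo_distance(f_px, baseline, true_disparity)
z_biased = stereo_distance(f_px, baseline, biased_disparity)
print(f"true Z = {z_true:.1f} m, biased Z = {z_biased:.1f} m, "
      f"error = {100 * (z_biased - z_true) / z_true:+.1f}%")
# A ~1.7 px disparity bias at d = 10 px inflates the distance by roughly +20%,
# the same order of magnitude as the errors reported in the abstract.
```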
11 pages, 1531 KiB  
Article
Kinematical and Physiological Responses of Overground Running Gait Pattern at Different Intensities
by Ana Sofia Monteiro, João Paulo Galano, Filipa Cardoso, Cosme F. Buzzachera, João Paulo Vilas-Boas and Ricardo J. Fernandes
Sensors 2024, 24(23), 7526; https://doi.org/10.3390/s24237526 - 25 Nov 2024
Cited by 1 | Viewed by 785
Abstract
Runners achieve forward locomotion through diverse techniques. However, understanding the behavior of the involved kinematical variables remains incomplete, particularly when running overground and along an intensity spectrum. We aimed to characterize the biomechanical and physiological adaptations while running at low, moderate, heavy and [...] Read more.
Runners achieve forward locomotion through diverse techniques. However, understanding of the behavior of the involved kinematical variables remains incomplete, particularly when running overground and across an intensity spectrum. We aimed to characterize the biomechanical and physiological adaptations while running at low, moderate, heavy and severe intensities. Ten middle- and long-distance runners completed an incremental intermittent protocol of 800 m steps until exhaustion (1 km·h⁻¹ velocity increments and 30 s intervals) on an outdoor track. Biomechanical data were captured using two high-resolution video cameras, and linear and angular kinematic variables were analyzed. As intensity rose, a decrease in stride, step and contact times ([0.70–0.65], [0.35–0.33] and [0.42–0.37] s) and an increase in stride length, stride frequency and flight time ([3.13–3.52] m, [1.43–1.52] Hz and [0.28–0.29] s; p < 0.05) were observed, together with an increase in oxygen uptake and blood lactate concentrations ([54.7–67.6] mL·kg⁻¹·min⁻¹ and [3.1–10.2] mmol·L⁻¹). A more flexed hip at initial contact and toe-off ([152.02–149.36] and [165.70–163.64]) and knee at initial contact ([162.64–159.57]; p < 0.05) were also observed. A consistent gait pattern was exhibited along each protocol step, with only minor changes of no practical significance. Runners constantly adapt their gait pattern, which is reflected in both biomechanical and physiological responses, and both should be considered for better characterization. Full article
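The linear gait variables reported above follow from simple relations between stride timing and running velocity (stride frequency = 1/stride time, stride length = velocity × stride time, two steps per stride). The sketch below illustrates these relations; the stride time is taken from the abstract's low-intensity values, while the running velocity is an assumed figure, not the study's raw data.

```python
def gait_linear_variables(stride_time_s: float, velocity_kmh: float) -> dict:
    """Derive basic linear gait variables from stride time and running velocity."""
    velocity_ms = velocity_kmh / 3.6
    stride_frequency_hz = 1.0 / stride_time_s      # strides per second
    stride_length_m = velocity_ms * stride_time_s  # distance covered per stride
    step_time_s = stride_time_s / 2.0              # two steps per stride
    return {
        "stride_frequency_hz": round(stride_frequency_hz, 2),
        "stride_length_m": round(stride_length_m, 2),
        "step_time_s": round(step_time_s, 2),
    }

# Example with a 0.70 s stride and an assumed velocity of ~16 km/h:
# yields ~1.43 Hz and ~3.1 m per stride, of the same order as the
# low-intensity values quoted in the abstract.
print(gait_linear_variables(stride_time_s=0.70, velocity_kmh=16.0))
```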
Show Figures
Figure 1: Hip, knee and ankle joint angle determination at initial contact and toe-off moments.
Figure 2: Individual (light circles and triangles), mean/median and standard deviation/interquartile range of the linear kinematical variables from the first (dark blue) to the second (dark red) laps of each protocol step corresponding to low, moderate, heavy and severe intensity domains. * Indicates differences between laps (p < 0.05).
Figure 3: Individual (light circles and triangles), mean and standard deviation of the angular kinematical variables from the first (dark blue) to the second (dark red) laps of each protocol step corresponding to low, moderate, heavy and severe intensity domains. * Indicates differences between laps (p < 0.05).
Full article
27 pages, 7620 KiB  
Article
Maturity Prediction in Soybean Breeding Using Aerial Images and the Random Forest Machine Learning Algorithm
by Osvaldo Pérez, Brian Diers and Nicolas Martin
Remote Sens. 2024, 16(23), 4343; https://doi.org/10.3390/rs16234343 - 21 Nov 2024
Viewed by 985
Abstract
Several studies have used aerial images to predict physiological maturity (R8 stage) in soybeans (Glycine max (L.) Merr.). However, information for making predictions in the current growing season using models fitted in previous years is still necessary. Using the Random Forest machine [...] Read more.
Several studies have used aerial images to predict physiological maturity (R8 stage) in soybeans (Glycine max (L.) Merr.). However, information for making predictions in the current growing season using models fitted in previous years is still necessary. Using the Random Forest machine learning algorithm and time series of RGB (red, green, blue) and multispectral images taken from a drone, this work aimed to study, in three breeding experiments of plant rows, how maturity predictions are impacted by a number of factors. These include the type of camera used, the number of flights and the time between them, and whether models fitted with data obtained in one or more environments can be used to make accurate predictions in an independent environment. Applying principal component analysis (PCA), it was found that, compared to the full set of 8–10 flights (R² = 0.91–0.94; RMSE = 1.8–1.3 days), using data from three to five flights before harvest had almost no effect on the prediction error (RMSE increase ~0.1 days). Similar prediction accuracy was achieved using either a multispectral or an affordable RGB camera, and the excess green index (ExG) was found to be the most important feature for making predictions. Using a model trained with data from two previous years and field notes from check cultivars planted in the test season, the R8 stage was predicted in 2020 with an error of 2.1 days. Periodically adjusted models could help soybean breeding programs save time when characterizing the cycle length of thousands of plant rows each season. Full article
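A minimal sketch of the core modelling step is shown below: the excess green index is computed from the RGB bands and fed, together with label-encoded categorical descriptors, to a Random Forest regressor that predicts the R8 day of year. The column names, the chromatic-coordinate ExG formulation, and the synthetic data are placeholders and assumptions, not the study's dataset or exact feature set.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def excess_green(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b on chromatic coordinates (one common formulation)."""
    total = r + g + b + 1e-9
    return 2 * g / total - r / total - b / total

# Placeholder plot-row table: mean band values per grid cell for one flight date
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "red": rng.uniform(0, 1, n),
    "green": rng.uniform(0, 1, n),
    "blue": rng.uniform(0, 1, n),
    "block": rng.integers(0, 6, n),     # breeding block, label-encoded
    "observer": rng.integers(0, 5, n),  # individual who took the field note
})
df["exg"] = excess_green(df.red.values, df.green.values, df.blue.values)
df["r8_doy"] = rng.uniform(255, 280, n)  # synthetic day-of-year labels

features = ["red", "green", "blue", "exg", "block", "observer"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["r8_doy"], test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("RMSE (days):", float(np.sqrt(np.mean((pred - y_test.values) ** 2))))
```

In practice, one such feature set per flight date would be concatenated, which is how the time-series aspect described in the abstract enters the model.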
Show Figures
Graphical abstract
Figure 1: Pipeline workflow diagram of a high-throughput phenotyping platform for predicting soybean physiological maturity (R8 stage) in three breeding experiments (2018–2020) containing trials divided into plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. On the top right, overlapped on the satellite image (© Google, 2024 [31]), three selected orthophotos corresponding to these experiments were taken from a drone on the same flight date (10 September). The colored polygons indicate the effective area of the soybean breeding blocks (trials) for which physiological maturity was predicted. The magnified orthophoto (10 September 2019) shows the cell grid used to associate the pixels within each cell with the day of the year on which the plant row reached the R8 stage.
Figure 2: Partial visualization of composed orthophotos obtained from time series of images taken from a drone flying over three soybean breeding experiments (2018–2020). The experiments, containing plant rows of F4:5 experimental lines, were grown at the University of Illinois Research and Education Center near Savoy, IL. The imagery was collected on a total of eight flight dates in 2018, ten in 2019, and nine in 2020, although only four flight dates per year are shown, matched by day of the year. The raster information within each grid cell was used to predict the day of the year on which the plant row reached physiological maturity. All the orthophotos show the three visible spectral bands (red, green, and blue); however, while the images were taken with a digital RGB camera in 2018, in 2019 and 2020 they were taken with a multispectral camera with five bands: red, green, blue, red edge, and near-infrared.
Figure 3: The histograms (in green) show the distribution of soybean physiological maturity (R8 stage) dates for three experiments of plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL (2018–2020). The histograms (in blue) also show the distribution of the R8 stage dates, but according to which plant rows were assigned to each individual (A–F) for taking the field notes.
Figure 4: The boxplots show the prediction bias (days) for soybean physiological maturity (R8 stage) according to the individuals (A–F) who together took 9252, 11,742, and 11,197 field notes from three experiments: 2018 (top), 2019 (middle), and 2020 (bottom), respectively. The experiments contained plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. The Random Forest algorithm was used to adjust the predictive models using different training data sizes according to which plant rows were assigned to each individual (A–F). The empty boxplot spaces mean that 44.2%, 28.5%, and 27.2% of the field notes, taken respectively by A, B, and C, were used to train the models in 2018. In 2019, the proportions were 21.2%, 37.9%, 11.1%, 12.8%, and 17.0% (A, D–G); and in 2020, they were 45.3%, 19.6%, 17.5%, and 17.7% (A, B and C, D, and E).
Figure 5: Soybean physiological maturity (R8 stage) predictions corresponding to three breeding experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL (2018–2020). The Random Forest algorithm was applied to associate the field-recorded values with three classification variables (breeding block, the individual who took the field notes, and the check cultivar) and 32 image features (red, green, blue, and a calculated excess green index, ExG) obtained from eight drone flights. (a–c) The relationship between predicted and field-recorded values using all the field records, and (d–f) the same, but after filtering out records of plant rows that reached the R8 stage after the last drone flight date (26, 24, and 30 September for 2018, 2019, and 2020, respectively). An equal training:test data ratio (80:20) was maintained for the three experiments (n = test data). The deviation of the regression line (blue) from the 1:1 line (gray) indicates the model’s prediction bias.
Figure 6: Variable importance measure of the 15 most relevant variables for predicting soybean physiological maturity (R8 stage) in three experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. Spectral bands extracted from time series of drone images and the excess green index (ExG) were included in the models as explanatory variables, along with three classification variables: the breeding block (Block), the individual who took the field notes (Ind.), and the check cultivar (which did not show relevant importance). In 2018, the images were taken from a drone with a digital RGB (red, green, blue) camera, whereas in 2019 and 2020 they were taken with a multispectral camera. For the latter two years, the analyses were divided into using only the red (R), green (G), and blue (B) bands (simulating a digital RGB camera) and using the five spectral bands: R, G, B, red edge, and near-infrared (NIR).
Figure 7: Principal component analysis (PCA) of 32 variables belonging to a time series of RGB (red, green, blue) images and a calculated excess green index (ExG). The images were taken across eight drone flights carried out over a soybean breeding experiment (planted on 22 May 2018) containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. (a) A regression analysis between PC1 scores and soybean physiological maturity (R8 stage); and (b) the a posteriori association between the response variable (R8 stage) and the image features, where A and S indicate August and September 2018, respectively.
Figure 8: Soybean physiological maturity (R8 stage) predictions for 2020 using four models trained with field-recorded values collected from two previous experiments (2018–2019). The three experiments were breeding experiments containing plant rows of F4:5 experimental lines grown at the University of Illinois Research and Education Center near Savoy, IL. The four models were adjusted by applying the Random Forest algorithm to associate the field-recorded values with a time series of the excess green index (ExG) and three classification variables (breeding block, the individual who took the field notes, and the check cultivar). Calculated from the red, green, and blue spectral bands, ExG was obtained from digital images taken with a drone. The four models were adjusted using the following training:test data splits: (a) Training 2019:Test 2020 (n = 51:49); (b) Training 2019 plus 2020 checks:Test 2020 without checks (n = 53:47); (c) Training 2018–2019:Test 2020 (n = 65:35); and (d) Training 2018–2019 plus 2020 checks:Test 2020 without checks (n = 67:33). The deviation of the regression line (blue) from the 1:1 line (gray) indicates the model’s prediction bias. The table below the figures gives the data used to train the models in each panel (a–d).
Figure 9: (a) Frequencies, (b) residuals, and (c) images showing prediction deviations for soybean physiological maturity (R8 stage) collected in a breeding experiment with plant rows of F4:5 experimental lines in 2020. In (b), the mean residual (red line) indicates the prediction bias over time compared with predictions with zero bias from the observed R8 dates (gray dashed line). The images on the right show the excess green index (ExG), which is calculated from the red, green, and blue bands (images on the left). The top of (c) shows the three worst maturity predictions identified in (b); the bottom shows three examples of predictions with errors of 2, 1, and 0 days from 30 September. The maturity predictions were carried out using a model (Figure 8b) trained with data collected in a breeding experiment planted in 2019 (n = 11,197) and on the eight check cultivars replicated in the 2020 experiment. The 2020 experiment minus the checks (n = 11,197 − 493) was used to test the model, which was adjusted with the Random Forest algorithm using time series of ExG and three classification variables (breeding block, the individual who took the field notes, and the check cultivar).
Full article