Search Results (5,503)

Search Parameters:
Keywords = airborne

15 pages, 1367 KiB  
Article
Concentration Characteristics of Culturable Airborne Microbes in the Urban Forests of Yangzhou
by Xin Wan, Sumei Qiu, Cong Xu, Liwen Li, Wei Xing and Yingdan Yuan
Forests 2025, 16(2), 378; https://doi.org/10.3390/f16020378 - 19 Feb 2025
Abstract
Culturable airborne microorganisms significantly impact air quality and human health in urban forest land. Their concentrations serve as key air quality indicators. Over a year, this study analyzed airborne microorganisms in six forest stands within the Zhuyu Bay Scenic Area, Yangzhou, Jiangsu Province, China, to assess concentration characteristics and seasonal variations. Results showed that bacterial concentrations peaked in spring and summer, while fungal concentrations were highest in March. Microbial levels remained elevated from April to June, with variations among forest stands. A correlation analysis linked humidity, temperature, negative ion concentration, particulate matter 2.5 (PM2.5), and air pressure to microorganism fluctuations. To further explore the impact mechanism of urban microclimate on air microorganism concentrations, this study confirmed a strong positive influence of climatic factors on microorganism concentrations, particularly temperature and humidity. In conclusion, this study identifies seasonal patterns and microclimate interactions that affect airborne microorganism concentrations in urban forests. Findings contribute to ecosystem assessment, urban ecological planning, and climate improvement strategies, supporting informed decision-making. Full article
(This article belongs to the Section Urban Forestry)
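The correlation analysis described in this abstract can be illustrated with a minimal Python sketch. The file name and column names below are hypothetical placeholders, not taken from the study, and Spearman correlation is assumed as one reasonable choice for seasonal monitoring data.

```python
# Illustrative sketch only: a correlation screen between microbial
# concentrations and microclimate factors, mirroring the analysis described
# in the abstract. File and column names are hypothetical, not from the paper.
import pandas as pd

df = pd.read_csv("zhuyu_bay_monitoring.csv")  # hypothetical monitoring table

factors = ["temperature_c", "humidity_pct", "negative_ion_cm3", "pm25_ugm3", "pressure_hpa"]
targets = ["bacteria_cfu_m3", "fungi_cfu_m3"]

# Spearman is used here because microbial and microclimate series are seasonal
# and rarely linear; this is an assumption, not necessarily the paper's choice.
corr = df[factors + targets].corr(method="spearman").loc[factors, targets]
print(corr.round(2))
```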
21 pages, 684 KiB  
Article
A High Performance Air-to-Air Unmanned Aerial Vehicle Target Detection Model
by Hexiang Hao, Yueping Peng, Zecong Ye, Baixuan Han, Xuekai Zhang, Wei Tang, Wenchao Kang and Qilong Li
Drones 2025, 9(2), 154; https://doi.org/10.3390/drones9020154 - 19 Feb 2025
Abstract
In air-to-air UAV target detection tasks, existing algorithms suffer from low precision, low recall, and a high dependence on device processing power, which makes it difficult to detect small UAV targets efficiently. To solve these problems, this paper proposes a high-precision model, ATA-YOLOv8. We analyze the problem of UAV small-target detection from the perspective of the effective receptive field. The proposed model is evaluated using two air-to-air UAV image datasets, MOT-FLY and Det-Fly, and compared with YOLOv8n and other SOTA algorithms. The experimental results show that the mAP50 of ATA-YOLOv8 is 94.9% and 96.4% on the MOT-FLY and Det-Fly datasets, respectively, which is 25% and 5.9% higher than the mAP of YOLOv8n, while maintaining a model size of 5.1 MB. The methods in this paper improve the accuracy of UAV target detection in air-to-air scenarios. The proposed model's small size, fast speed, and high accuracy make real-time air-to-air UAV detection on edge-computing devices possible. Full article
Show Figures

Figure 1. The framework of the ATA-YOLOv8.
Figure 2. The structure of the cross stage detail enhance block.
Figure 3. The structure of the Efficient Multi-Scale Attention Module.
Figure 4. The structure of cross-stage-partial_partial multi-scale feature aggregation.
Figure 5. The structure of the Omni-Kernel Module.
Figure 6. The structure of the Detail Enhanced Rep Shared Convolutional Detection Head. The shared block in this figure shows the structure. VC, ADC, CDC, VDC and HDC represent the five parallel convolution layers, which are the standard convolution, angle differential convolution, center differential convolution, vertical differential convolution and horizontal differential convolution, respectively.
Figure 7. The mAP50, GFLOPs and parameters of ablation experiments.
Figure 8. Effective receptive fields of the 13th, 17th, 18th and 21st layers of the network, which correspond to (a–d), respectively.
Figure 9. The heat map using HiResCAM, where (a) is the original image; (b) is the heatmap of the YOLOv8 model; (c) is the heatmap of the YOLOv8 + CSDE model; (d) is the heatmap of the YOLOv8 + CSDE + CSP_PMSA + OmniKernel model; (e) is the heatmap of the ATA-YOLOv8 model.
Figure 10. Visualization of the network's effective receptive fields.
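As an aside on the effective-receptive-field perspective mentioned in the abstract and visualized in Figures 8 and 10, the sketch below estimates an ERF with the common gradient-based approach: back-propagate a unit gradient from the centre of a feature map and inspect the input gradient. The small convolutional backbone is a stand-in, not ATA-YOLOv8.

```python
# Illustrative sketch (not the authors' code): gradient-based estimate of a
# layer's effective receptive field. The backbone here is a placeholder.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # placeholder feature extractor
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
)

x = torch.zeros(1, 3, 256, 256, requires_grad=True)
feat = backbone(x)

# Unit gradient at the centre of the feature map, zero everywhere else.
grad_seed = torch.zeros_like(feat)
grad_seed[0, :, feat.shape[2] // 2, feat.shape[3] // 2] = 1.0
feat.backward(grad_seed)

erf = x.grad.abs().sum(dim=1)[0]          # aggregate over input channels
erf = erf / erf.max()                     # normalise for visualisation
print("non-zero ERF extent:", (erf > 1e-3).nonzero().shape[0], "pixels")
```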
31 pages, 21485 KiB  
Article
UAV-SfM Photogrammetry for Canopy Characterization Toward Unmanned Aerial Spraying Systems Precision Pesticide Application in an Orchard
by Qi Bing, Ruirui Zhang, Linhuan Zhang, Longlong Li and Liping Chen
Drones 2025, 9(2), 151; https://doi.org/10.3390/drones9020151 - 18 Feb 2025
Viewed by 175
Abstract
The development of unmanned aerial spraying systems (UASSs) has significantly transformed pest and disease control methods for crop plants. Precisely adjusting pesticide application rates based on the target conditions is an effective way to improve pesticide use efficiency. In orchard spraying, the structural characteristics of the canopy are crucial for guiding the pesticide application system to adjust spraying parameters. This study selected mango trees as the research sample and evaluated the differences between UAV aerial photography processed with a Structure from Motion (SfM) algorithm and airborne LiDAR in extracting canopy parameters. The maximum canopy height, canopy projection area, and canopy volume were extracted from the canopy height model derived from SfM (CHM_SfM) and the canopy height model derived from LiDAR (CHM_LiDAR) using grids with the same width as the planting rows (5.0 m) and 14 different heights (0.2 m, 0.3 m, 0.4 m, 0.5 m, 0.6 m, 0.8 m, 1.0 m, 2.0 m, 3.0 m, 4.0 m, 5.0 m, 6.0 m, 8.0 m, and 10.0 m). Linear regression equations were used to fit the canopy parameters obtained from the different sensors. The correlation was evaluated using R² and rRMSE, and a t-test (α = 0.05) was employed to assess the significance of the differences. The results show that as the grid height increases, the R² values for the maximum canopy height, projection area, and canopy volume extracted from CHM_SfM and CHM_LiDAR increase, while the rRMSE values decrease. When the grid height is 10.0 m, the R² for the maximum canopy height extracted from the two models is 92.85%, with an rRMSE of 0.0563; for the canopy projection area, the R² is 97.83%, with an rRMSE of 0.01; and for the canopy volume, the R² is 98.35%, with an rRMSE of 0.0337. When the grid height exceeds 1.0 m, the t-test results for the three parameters are all greater than 0.05, accepting the hypothesis that there is no significant difference in the canopy parameters obtained by the two sensors. Additionally, using the coordinate x0 of the intersection of the linear regression line with y = x as a reference, CHM_SfM tends to overestimate lower canopy maximum heights and projection areas and underestimate higher ones compared with CHM_LiDAR, which partly reflects the smoother surface of CHM_SfM. This study demonstrates the effectiveness of extracting canopy parameters to guide UASS systems for variable-rate spraying based on UAV oblique photography combined with the SfM algorithm. Full article
(This article belongs to the Special Issue Recent Advances in Crop Protection Using UAV and UGV)
Show Figures

Figure 1. In the 1800 m² study area, the mango trees are in rows 4.8 m apart and the distance between mango trees in the same planting row is 3.5 m. We utilized an airborne LiDAR and an aerial drone with an RGB camera to collect point cloud data and aerial imagery from the study area.
Figure 2. Diagram of Structure from Motion (SfM) data acquisition: overlapping images of the object are obtained and the camera's positional parameters recorded to reconstruct a three-dimensional model of the object. The diagram illustrates the spatial position coordinates of matching points based on UAV multi-view images.
Figure 3. The process of CHM generation: (a) digital surface model (DSM); (b) digital terrain model (DTM); (c) canopy height model (CHM) of the mango trees in the orchard. The transition of pixel color from green to yellow to red represents an increase in elevation at that point on the model.
Figure 4. One of the mango trees in the orchard: (a) flat view image; (b) top view image; (c) top view of its 3D model generated by the SfM algorithm.
Figure 5. Canopy height model in raster format: (a) canopy height model generated from the SfM algorithm; (b) canopy height model generated from the LiDAR point cloud. The height of points on the model is represented by color: from green to yellow to red, the canopy height increases. Ground pixels were not separated from the canopy height model; a lower generated DSM or a higher generated DTM could both result in a normalized CHM with lower elevations or negative ground point elevation values.
Figure 6. Diagram of pixel division into a grid. distance: distance between the coordinate origin and the pixel; α: angle between the north direction and the positive direction of the y axis; β: angle between the line connecting the origin to the pixel and the positive direction of the y axis; d_x = sin β × distance; d_y = cos β × distance; x-coordinate of the pixel's corresponding grid: grid_x = [d_x / grid_width]; y-coordinate of the pixel's corresponding grid: grid_y = [d_y / grid_height].
Figure 7. Grid division schematic diagram (using a 5 m × 5 m grid as an example).
Figure 8. Data processing flow of this study.
Figure 9. Scatter plots of the maximum canopy height extracted from the two canopy height models using grids of different sizes. As the grid height increases, the number of scatter points obtained from the study area gradually decreases. The slopes of the fitted lines y = a + bx are always less than 1; to the left of the intersection with the line y = x, the canopy height obtained through SfM modeling is greater than that obtained through LiDAR modeling.
Figure 10. A scatter plot of the maximum canopy height extracted from the two canopy height models using grids of different sizes. The distribution of maximum-height fitting points extracted from grids of different sizes shows no clear separation and is concentrated in the same areas. The points located on the far right or top represent the maximum canopy height obtained by LiDAR or the SfM algorithm across the entire study area. The points marked within the red rectangles differ by more than 1 m in canopy height between the two models; this may indicate that a grid's pixels in one model are primarily ground points or lower canopy, while the grid with the same coordinates in the other model includes a higher canopy, a situation that can be caused by horizontal modeling errors. There are fewer significantly deviating points in red rectangle B than in red rectangle A.
Figure 11. Box plot of maximum canopy height within grids of varying sizes. Box plots of the same color represent the maximum canopy height within grids of the same height extracted from the two canopy height models. Regardless of the grid height used, the median (Q2) and upper quartile (Q3) of the maximum canopy heights extracted from CHM_SfM are consistently higher than those extracted from CHM_LiDAR.
Figure 12. A selected profile line (a) was used to create profile diagrams (b) for the two canopy height models. A and B represent the starting and ending positions of the profile line. The results indicate that, compared with CHM_LiDAR, the surface of CHM_SfM is smoother; additionally, CHM_SfM tends to overestimate lower canopies.
Figure 13. Schematic diagram of the positions of the field-measured height points within the orchard. The sampling points include those at the edges of the canopy as well as those at the center of the canopy.
Figure 14. The field-measured canopy heights are compared with those obtained from the canopy height model, and an analysis is conducted.
Figure 15. Canopy projection area extracted from the two canopy height models using grids of different heights, with scatter plots for grids at the same coordinates. The canopy projection area within a grid cannot exceed the grid area, so the scatter plot forms a clear boundary near the grid area. The intersection of the fitted line with y = x indicates that, compared with CHM_LiDAR, CHM_SfM tends to overestimate smaller canopy projection areas and underestimate larger ones, using the intersection point x_0 as the boundary.
Figure 16. Integrated scatter plots of the canopy projection areas extracted from grids of all sizes in this study.
Figure 17. Scatter plot of canopy volume extracted within grids of varying heights. As the grid height increases, the number of grids decreases, the correlation between scatter points gradually increases, and the rRMSE gradually decreases.
Figure 18. Scatter plots of canopy data extracted using grids of different sizes. As the grid height increases, the scatter points show a good linear trend.
Figure 19. Comparison of canopy surface area calculated from CHM_LiDAR and CHM_SfM: (a) the relationship between CUA_LiDAR-filtered and CUA_SfM, R² = 87.0%; (b) the relationship between CUA_LiDAR and CUA_SfM, R² = 94.6%.
Figure 20. Diagram of the CHM in point cloud format constructed from (a) SfM and (b) LiDAR point clouds.
Figure 21. Removal of ground points using the CSF algorithm.
Figure 22. Comparison of the point clouds of the same mango tree generated by the SfM algorithm and LiDAR: (a) the point cloud of the mango tree collected by LiDAR; (b) the point cloud of the mango tree generated by the SfM algorithm. For the same tree, the LiDAR point cloud is clearly denser than the SfM point cloud.
Figure 23. Violin plot of the elevation of the study-area point clouds generated by the LiDAR sensor and the SfM algorithm. A violin plot displays the shape of the data distribution, the degree of data concentration, and the presence of outliers. The canopy point cloud generated by LiDAR is mainly concentrated around a height of 2.1 m, while the canopy point cloud generated by SfM displays two peaks, at heights of 2.0 m and 0.7 m.
Figure 24. Cross-sectional view of the two point clouds at the same location in the orchard: (a) cross-sectional view of the point cloud generated by the SfM algorithm; (b) cross-sectional view of the point cloud obtained by LiDAR.
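The grid-based extraction of canopy parameters described in the abstract, and the pixel-to-grid assignment sketched in Figure 6, can be illustrated with a minimal Python example. It assumes pixel offsets along the grid axes are already available (folding in the sin β / cos β step), treats the square brackets in the caption as the floor operation, and runs on synthetic data; it is not the authors' code.

```python
# Illustrative sketch only: assign CHM pixels to analysis grids and take the
# per-grid maximum canopy height. Grid sizes follow the paper's setup; the
# arrays themselves are synthetic placeholders.
import numpy as np

grid_width, grid_height = 5.0, 1.0        # metres; one of the tested grid sizes

# px, py: pixel offsets along the grid axes (assumed already rotated so the
# y axis follows the planting rows); chm: canopy height of each pixel.
rng = np.random.default_rng(0)
px = rng.uniform(0, 50, 10_000)
py = rng.uniform(0, 40, 10_000)
chm = rng.uniform(0, 4, 10_000)

gx = np.floor(px / grid_width).astype(int)   # grid_x = [d_x / grid_width]
gy = np.floor(py / grid_height).astype(int)  # grid_y = [d_y / grid_height]

# Per-grid maximum canopy height via a simple dictionary reduction.
max_h = {}
for ix, iy, h in zip(gx, gy, chm):
    key = (ix, iy)
    if h > max_h.get(key, -np.inf):
        max_h[key] = h
print(f"{len(max_h)} grids, tallest canopy {max(max_h.values()):.2f} m")
```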
16 pages, 1006 KiB  
Systematic Review
Composite Dust Toxicity Related to Restoration Polishing: A Systematic Review
by Kamila Kucharska, Anna Lehmann, Martyna Ortarzewska, Jakub Jankowski and Kacper Nijakowski
J. Compos. Sci. 2025, 9(2), 90; https://doi.org/10.3390/jcs9020090 - 18 Feb 2025
Viewed by 190
Abstract
An integral part of daily dental practice is preparing and polishing placed composite restorations. When these procedures are performed, significant amounts of composite dust are released from the grinding material. This systematic review aims to enhance the existing body of knowledge, encourage further dialogue, and expand the understanding of composite dust and its related risks. Following inclusion and exclusion criteria, twelve studies were included. Several studies highlight that composite dust contains nanoparticles capable of deep lung penetration, posing significant health risks to both dental staff and patients. Inhalation of composite dust can lead to respiratory diseases such as pneumoconiosis. Studies have shown that water cooling during composite grinding reduces dust emissions but does not eliminate them completely. Researchers suggest that thermal degradation of the composite material, not just filler particles, may be the source of the nanoparticles. In vitro studies have shown the toxicity of composite dust to bronchial and gingival epithelial cells, especially at high concentrations. Further research is needed on the health effects of composite dust and the development of effective methods to protect staff and patients. Full article
(This article belongs to the Special Issue Feature Papers in Journal of Composites Science in 2024)
Show Figures

Figure 1. PRISMA flow diagram presenting search strategy.
Figure 2. Quality assessment, including the main potential risk of bias (risk level: green—low, yellow—unspecified, red—high; quality score: green—good, yellow—intermediate, red—poor) [14,17,18,19,20,30,31,32,33,34,35,36].
17 pages, 8025 KiB  
Article
Improving the Sensitivity of a Dark-Resonance Atomic Magnetometer
by Hao Zhai, Wei Li and Guangxiang Jin
Sensors 2025, 25(4), 1229; https://doi.org/10.3390/s25041229 - 18 Feb 2025
Viewed by 87
Abstract
The combination of unmanned aerial vehicles and atomic magnetometers can be used for detection applications such as mineral resource exploration, environmental protection, and earthquake monitoring, as well as the detection of sunken ships and unexploded ordnance. A dark-resonance atomic magnetometer offers the significant advantages of a fully optical probe and omnidirectional measurement with no dead zones, making it an ideal choice for airborne applications on unmanned aerial vehicles. Enhancing the sensitivity of such atomic magnetometers is an essential task. In this study, we sought to enhance the sensitivity of a dark-state resonance atomic magnetometer. Initially, through theoretical analysis, we compared the excitation effects of coherent population trapping (CPT) resonance on the D1 and D2 transitions of 133Cs thermal vapor. The results indicate that excitation via the D1 line yields an increase in resonance contrast and a reduction in linewidth when compared with excitation through the D2 line, aligning with theoretical predictions. Subsequently, considering the impact of various quantum system parameters on sensitivity, as well as their interdependent characteristics, two experimental setups were developed for empirical investigation. One setup focused on parameter optimization experiments, where we compared the linewidth and contrast of CPT resonances excited by both D1 and D2 transitions; this led to an optimization of atomic cell size, buffer gas pressure, and operating temperature, resulting in an ideal parameter range. The second setup was employed to validate these optimized parameters using a coupled dark-state atom magnetometer experiment, achieving approximately a 10-fold improvement in sensitivity. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Three-level Λ system.
Figure 2. Contrast of CPT resonance.
Figure 3. D1 and D2 lines of 133Cs.
Figure 4. D1 spectral line excitation scheme within the hyperfine structure of 133Cs.
Figure 5. Parameter optimization experimental setup.
Figure 6. Coupled dark-state atomic magnetometer experimental setup.
Figure 7. Doppler absorption spectrum of the VCSEL laser after passing through the atomic gas cell.
Figure 8. Linewidth and contrast in cell: D1 and D2.
Figure 9. The CPT resonance linewidth in relation to the wavelength and laser power of the VCSEL.
Figure 10. The relationship between CPT resonance contrast and the wavelength and laser power of a VCSEL.
Figure 11. Laser noise power spectrum.
Figure 12. Different-sized cesium cells.
Figure 13. Relationship between different sizes of cesium cells and linewidth and contrast.
Figure 14. Contrast at different cesium cell temperatures and laser power.
Figure 15. CPT resonance signal linewidth and buffer gas pressure.
Figure 16. Improving the magnetic field measurement sensitivity.
26 pages, 5777 KiB  
Article
Three-Stage Up-Scaling and Uncertainty Estimation in Forest Aboveground Biomass Based on Multi-Source Remote Sensing Data Considering Spatial Correlation
by Xiangyuan Ding, Erxue Chen, Lei Zhao, Yaxiong Fan, Jian Wang and Yunmei Ma
Remote Sens. 2025, 17(4), 671; https://doi.org/10.3390/rs17040671 - 16 Feb 2025
Viewed by 247
Abstract
Airborne LiDAR (ALS) data have been extensively utilized for aboveground biomass (AGB) estimation; however, the high acquisition costs make it challenging to attain wall-to-wall estimation across large regions. Some studies have leveraged ALS data as intermediate variables to amplify sample sizes, thereby reducing costs and enhancing sample representativeness and model accuracy, but the cost issue remains in larger-scale estimations. Satellite LiDAR data, offering a broader dataset that can be acquired quickly at lower cost, can serve as an alternative intermediate variable for sample expansion. In this study, we employed a three-stage up-scaling approach to estimate forest AGB and introduced a method for quantifying estimation uncertainty. Building on the established three-stage general-hierarchical-model-based estimation inference (3sGHMB), an RK-3sGHMB inference method is proposed that makes use of the regression-kriging (RK) method; it is then compared with conventional model-based inference (CMB), general hierarchical model-based inference (GHMB), and improved general hierarchical model-based inference (RK-GHMB) to estimate forest AGB and uncertainty at both the pixel and forest farm levels. This study was carried out by integrating plot data, sampled ALS data, wall-to-wall Sentinel-2A data, and airborne P-SAR data. The results show that the accuracy of CMB (R²adj = 0.37, RMSE = 33.95 t/ha, EA = 63.28%) is lower than that of GHMB (R²adj = 0.38, RMSE = 33.72 t/ha, EA = 63.53%), while it is higher than that of 3sGHMB (R²adj = 0.27, RMSE = 36.58 t/ha, EA = 60.43%). Notably, RK-GHMB (R²adj = 0.60, RMSE = 27.07 t/ha, EA = 70.72%) and RK-3sGHMB (R²adj = 0.55, RMSE = 28.55 t/ha, EA = 69.13%) demonstrate significant accuracy enhancements compared to GHMB and 3sGHMB. For population AGB estimation, the precision of the proposed RK-3sGHMB (p = 94.44%) is the highest, provided that the sample size in the third stage is sufficient, followed by RK-GHMB (p = 93.32%) with a sufficient sample size in the second stage, GHMB (p = 90.88%), 3sGHMB (p = 88.91%), and CMB (p = 87.96%). Further analysis reveals that the three-stage model, considering spatial correlation at the third stage, can improve estimation accuracy, but the prerequisite is that the sample size in the third stage must be sufficient. For large-scale estimation, the proposed RK-3sGHMB model offers certain advantages. Full article
(This article belongs to the Special Issue Forest Biomass/Carbon Monitoring towards Carbon Neutrality)
Show Figures

Figure 1. Location and coverage of the study area.
Figure 2. (a) Spatial distribution of sample plots and of Sentinel-2A, P-SAR, and GEDI data in the study area; (b) the spatial distribution of ALS strips and GEDI data within the strips.
Figure 3. Overall workflow of the study; the green square blocks represent field sample plots; the purple square blocks represent ALS sample plots; the yellow circles represent the GEDI sample plots; the black bars represent the ALS strips; the elements enclosed within the blue boxes collectively constitute a unified whole.
Figure 4. Variation in the feature selection iteration curves using genetic algorithms according to F_ml (a), F_ms (b), Q_lg (c), G_ls (d), and Q_gs (e).
Figure 5. Model accuracy evaluation of F_ml (a), Q_lg (b), MB (c), GHMB (d), RK-GHMB (e), 3sGHMB (f), and RK-3sGHMB (g) using independent samples.
Figure 6. Residual variogram and fitted model with exponential, Gaussian, spherical, and linear fitting using RK-GHMB (a) and RK-3sGHMB (b).
Figure 7. The results for forest AGB and uncertainty: the spatial distribution of forest AGB and uncertainty estimated by 3sGHMB (a), GHMB (b), RK-3sGHMB (c), RK-GHMB (d), and CMB (e); (f) pixel-level uncertainty (RMSE) with respect to predicted AGB.
Figure 8. The influence of the sample size used for V(μ̄_model) estimation using MB, GHMB, RK-GHMB, 3sGHMB, and RK-3sGHMB.
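For readers unfamiliar with the regression-kriging (RK) idea the abstract builds on, the sketch below shows the generic two-step pattern: a regression trend on remote-sensing predictors plus spatial interpolation of its residuals, with a Gaussian process standing in for the kriging step. It is an assumption-laden illustration, not the RK-GHMB or RK-3sGHMB estimators; all names and data are synthetic.

```python
# Illustrative sketch only (not the authors' estimators): generic
# regression-kriging, i.e. trend model + spatial interpolation of residuals.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(1)
xy_train = rng.uniform(0, 1000, (200, 2))            # plot coordinates (m), synthetic
X_train = rng.normal(size=(200, 4))                   # spectral/SAR predictors, synthetic
agb_train = 80 + X_train @ np.array([10, 5, -3, 2]) + rng.normal(0, 15, 200)

# 1) Trend: ordinary regression of AGB on the predictors.
trend = LinearRegression().fit(X_train, agb_train)
resid = agb_train - trend.predict(X_train)

# 2) "Kriging" of residuals: GP regression on the plot coordinates
#    (Matern with nu=0.5 behaves like an exponential variogram model).
gp = GaussianProcessRegressor(
    kernel=Matern(length_scale=200.0, nu=0.5) + WhiteKernel(1.0),
    normalize_y=True,
).fit(xy_train, resid)

# Prediction at new pixels = regression trend + interpolated residual.
xy_new = rng.uniform(0, 1000, (5, 2))
X_new = rng.normal(size=(5, 4))
agb_hat = trend.predict(X_new) + gp.predict(xy_new)
print(np.round(agb_hat, 1))
```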
28 pages, 8850 KiB  
Article
Real-Time Runway Detection Using Dual-Modal Fusion of Visible and Infrared Data
by Lichun Yang, Jianghao Wu, Hongguang Li, Chunlei Liu and Shize Wei
Remote Sens. 2025, 17(4), 669; https://doi.org/10.3390/rs17040669 - 16 Feb 2025
Viewed by 175
Abstract
Advancements in aviation technology have made intelligent navigation systems essential for improving flight safety and efficiency, particularly in low-visibility conditions. Radar and GPS systems face limitations in bad weather, making visible–infrared sensor fusion a promising alternative. This study proposes a salient object detection (SOD) method that integrates visible and infrared sensors for robust airport runway detection in complex environments. We introduce a large-scale visible–infrared runway dataset (RDD5000) and develop a SOD algorithm capable of detecting salient targets from unaligned visible and infrared images. To enable real-time processing, we design a lightweight dual-modal fusion network (DCFNet) with an independent–shared encoder and a cross-layer attention mechanism to enhance feature extraction and fusion. Experimental results show that the MobileNetV2-based lightweight version achieves 155 FPS on a single GPU, significantly outperforming previous methods such as DCNet (4.878 FPS) and SACNet (27 FPS), making it suitable for real-time deployment on airborne systems. This work offers a novel and efficient solution for intelligent navigation in aviation. Full article
Show Figures

Figure 1. The examples of several airport runway datasets: (a) airport runway datasets using remote sensing imagery [44,45,46], (b) airport runway datasets generated by simulation [47,48].
Figure 2. The examples of the proposed RDD5000 dataset consisting of 10 sets of visible images (first row and fifth row), visible ground truth (second row and sixth row), infrared images (third row and seventh row), and infrared ground truth (fourth row and eighth row).
Figure 3. The overall architecture of the proposed DCFNet.
Figure 4. The proposed MAFM framework.
Figure 5. Examples of haze image generation.
Figure 6. Visual comparison of speed and accuracy on the RDD5000 datasets.
Figure 7. PR curves and threshold F-measure curves (from left to right) of different models on the RDD5000 visible dataset.
Figure 8. Visual comparisons with other SOTA methods under different challenging scenarios, including small objects (Rows 1 and 2), medium objects (Rows 3 and 4), and large objects (Rows 5 and 6).
Figure 9. Max F-measure scores and MAE on RDD5000 in the whole training procedure to verify the effectiveness of the MFEM.
Figure 10. Visual comparison of the feature heatmaps with/without the MFEM.
Figure 11. Visual comparison of the features with/without AM in the MAFM.
24 pages, 11602 KiB  
Article
Nonoverlapping Spectral Ranges’ Hyperspectral Data Fusion Based on Combined Spectral Unmixing
by Yihao Wang, Jianyu Chen, Xuanqin Mou, Jia Liu, Tieqiao Chen, Xiangpeng Feng, Bo Qu, Jie Liu, Geng Zhang and Siyuan Li
Remote Sens. 2025, 17(4), 666; https://doi.org/10.3390/rs17040666 - 15 Feb 2025
Viewed by 276
Abstract
Due to the development of spectral remote sensing imaging technology, hyperspectral data in different spectral ranges, such as visible and near-infrared, short-wave infrared, etc., can be acquired simultaneously. Data fusion between hyperspectral data from these nonoverlapping spectral ranges has become an urgent task. Most existing hyperspectral data fusion methods focus on two types of hyperspectral data with overlapping spectral ranges and require spectral response functions as a necessary condition, which is not applicable to this task. To address this issue, we propose the combined spectral unmixing fusion (CSUF) method, an unsupervised method with certain physical significance. It effectively solves the problem of fusing hyperspectral data with nonoverlapping spectral ranges through point spread function estimation between the two hyperspectral datasets and combined spectral unmixing. Experiments on airborne datasets and HJ-2 satellite data show that, compared with various leading methods, our method achieves the best performance in terms of reference evaluation indicators such as the PSNR and SAM, as well as the non-reference evaluation indicator QNR. Furthermore, we analyze in depth the spectral response relationship and the impact of the ratio of spectral bands between the fused datasets on the fusion effect, providing a reference for future research. Full article
Show Figures

Figure 1. Examples of hyperspectral remote sensing data across different spectral ranges include visible near-infrared (VNIR), short-wave infrared (SWIR), mid-wave infrared (MWIR), and long-wave infrared (LWIR). The spectral curves for different spectral ranges in the figure are represented by different colors. Due to sensor technology and other factors, the spatial resolution decreases as the wavelength increases. The reflectance range on the vertical axis is from 0 to 100.
Figure 2. Flowchart of the CSUF method, mainly divided into three parts, including the estimation of the PSF using images from the adjacent spectral bands between X and Y, combining downsampled XL and Y to ZL, combining upsampled ŶH and X to ZH, and alternating the spectral unmixing of ZL, X and ZH. The circled numbers 1 to 3 represent the sequence of spectral unmixing for ZL, X, and ZH.
Figure 3. Airborne hyperspectral datasets: (a,b) are the visible and near-infrared spectral bands hyperspectral data split from the Pavia University dataset, and (c,d) are the visible and near-infrared spectral bands hyperspectral data split from the Chikusei dataset.
Figure 4. HJ-2 satellite VNIR and SWIR real in-orbit remote sensing hyperspectral data, where (a,b) are the VNIR and SWIR hyperspectral data of river scenes, and (c,d) are the VNIR and SWIR hyperspectral data of farmland scenes.
Figure 5. False color band composite image of the Pavia University dataset fusion result.
Figure 6. SAM maps of the Pavia University dataset fusion result.
Figure 7. False color band composite image of the Chikusei dataset fusion result.
Figure 8. SAM maps of the Chikusei dataset fusion result.
Figure 9. False color band composite image of the HJ2-River data fusion result.
Figure 10. False color band composite image of the HJ2-Farmland data fusion result.
Figure 11. False color images of reconstructed data based on the estimated SRF in both overlapping and nonoverlapping spectral ranges: (a–d) are from the PaviaU dataset, while (e–h) are from the Chikusei dataset; (a,b,e,f) represent cases with overlapping spectral ranges, while (c,d,g,h) represent cases with nonoverlapping spectral ranges.
Figure 12. Spectral curves of reconstructed data based on the SRF estimation in both overlapping and nonoverlapping spectral ranges: (a,b) are from the PaviaU dataset, while (c,d) are from the Chikusei dataset; (a,c) represent cases with overlapping spectral ranges, while (b,d) represent cases with nonoverlapping spectral ranges.
Figure 13. The impact of the spectral band ratio of two sets of hyperspectral data on the PSNR and SAM metrics of the fusion results: (a,b) are from the PaviaU dataset, while (c,d) are from the Chikusei dataset.
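The reference-based metrics named in the abstract, SAM and PSNR, are standard and can be computed as in the following sketch; array shapes and data are hypothetical, and this is not the authors' evaluation code.

```python
# Illustrative sketch only: spectral angle mapper (SAM) and PSNR between a
# reference hyperspectral cube and a fused cube, both of shape (H, W, B).
import numpy as np

def sam_degrees(ref: np.ndarray, fused: np.ndarray, eps: float = 1e-12) -> float:
    """Mean per-pixel spectral angle, in degrees."""
    r = ref.reshape(-1, ref.shape[-1])
    f = fused.reshape(-1, fused.shape[-1])
    cos = np.sum(r * f, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def psnr(ref: np.ndarray, fused: np.ndarray) -> float:
    """PSNR in dB, with the peak taken from the reference cube."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    return float(10.0 * np.log10(ref.max() ** 2 / mse))

ref = np.random.rand(64, 64, 120)                  # synthetic reference cube
fused = ref + 0.01 * np.random.randn(64, 64, 120)  # synthetic fusion result
print(f"SAM = {sam_degrees(ref, fused):.2f} deg, PSNR = {psnr(ref, fused):.1f} dB")
```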
21 pages, 4483 KiB  
Article
DEM Generation Incorporating River Channels in Data-Scarce Contexts: The “Fluvial Domain Method”
by Jairo R. Escobar Villanueva, Jhonny I. Pérez-Montiel and Andrea Gianni Cristoforo Nardini
Hydrology 2025, 12(2), 33; https://doi.org/10.3390/hydrology12020033 - 14 Feb 2025
Viewed by 498
Abstract
This paper presents a novel methodology to generate Digital Elevation Models (DEMs) in flat areas, incorporating river channels from relatively coarse initial data. The technique primarily utilizes filtered dense point clouds derived from SfM-MVS (Structure from Motion-Multi-View Stereo) photogrammetry of available crewed aerial imagery datasets. The methodology operates under the assumption that the aerial survey was carried out during low-flow or drought conditions so that the dry (or almost dry) riverbed is detected, although in an imprecise way. Direct interpolation of the detected elevation points yields unacceptable river channel bottom profiles (often exhibiting unrealistic artifacts) and even distorts the floodplain. In our Fluvial Domain Method, channel bottoms are represented like “highways”, perhaps overlooking their (unknown) detailed morphology but gaining in general topographic consistency. For instance, we observed an 11.7% discrepancy in the river channel long profile (with respect to the measured cross-sections) and a 0.38 m RMSE in the floodplain (with respect to the GNSS-RTK measurements). Unlike conventional methods that utilize active sensors (satellite and airborne LiDAR) or classic topographic surveys—each with precision, cost, or labor limitations—the proposed approach offers a more accessible, cost-effective, and flexible solution that is particularly well suited to cases with scarce base information and financial resources. However, the method’s performance is inherently limited by the quality of input data and the simplification of complex channel morphologies; it is most suitable for cases where high-resolution geomorphological detail is not critical or where direct data acquisition is not feasible. The resulting DEM, incorporating a generalized channel representation, is well suited for flood hazard modeling. A case study of the Ranchería river delta in the Northern Colombian Caribbean demonstrates the methodology. Full article
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)
Show Figures

Figure 1. Study area: lower Ranchería River basin sector (green polygon), Riohacha (Colombia). The study reach focuses on the main channel from the “Aremasain” station to the branch named “Riito”.
Figure 2. Dense vegetation context along the studied river reach.
Figure 3. Deployment of GNSS-RTK points (gray dots) used for subsequent DEM adjustments from the photogrammetric process and validation (red triangles) of DSM/DEM products.
Figure 4. Example of a longitudinal channel profile (dashed red line) from the preliminary SfM-MVS DEM (without channel correction). Note the significant altimetric variability resulting from interpolation artifacts and the overestimation of the channel width due to the artificial lowering of the floodplain surface along the riverbanks. Flow direction is represented by the black arrow.
Figure 5. General outline of the proposed method. The workflow starts with the input data at the bottom and culminates in the final product at the top: (1) input data and preprocessing, (2) elevation extraction from the preliminary DEM, (3) bathymetric channel correction, and (4) channel integration with the preliminary DEM.
Figure 6. Visualization of the error distribution and accuracy assessment of the digital models using histograms (a) and box plots (b). The DSM appears in blue and the DTM in yellow. The expected normal distribution curves are superimposed on the histograms, and white circles represent outliers.
Figure 7. Comparison between raw (blue) and smoothed elevation profiles of the channel obtained by SfM-MVS photogrammetry and the proposed method (red): (a) smoothed channel bottom, location of reference cross-sections and GNSS-RTK observations (triangles); (b) refinement of the channel longitudinal profile using GNSS-RTK adjustment in the last river reach (7 and 8). Purple boxes represent the cross-section locations along the elevation profile.
Figure 8. Comparison of the cross-sectional depth (h) geometry along the studied river (n = 8). Black lines represent depths estimated by the proposed method; purple lines represent reference (observed) depths.
Figure 9. Comparison of maximum depths obtained from field measurements and estimated using the proposed method at eight reference cross-sections of the Ranchería River: (a) relative error and Mean Absolute Percentage Error (MAPE) analysis; (b) scatter plot showing the relationship between the observed and estimated depths. The solid black line represents the linear regression fit to the depth data (grey boxes), with the corresponding equation and R-squared value shown (the dashed red line indicates perfect agreement).
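The accuracy checks quoted in the abstract (a 0.38 m RMSE against GNSS-RTK points) and the MAPE analysis in Figure 9 rest on simple error statistics, illustrated below with hypothetical depth values; none of the numbers are the study's data.

```python
# Illustrative sketch only: RMSE and MAPE between observed values (e.g.
# GNSS-RTK elevations or measured channel depths) and DEM-derived values.
import numpy as np

def rmse(observed: np.ndarray, estimated: np.ndarray) -> float:
    return float(np.sqrt(np.mean((observed - estimated) ** 2)))

def mape(observed: np.ndarray, estimated: np.ndarray) -> float:
    return float(np.mean(np.abs((observed - estimated) / observed)) * 100.0)

obs_depth = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0])  # metres, hypothetical
est_depth = np.array([1.6, 2.3, 1.4, 2.2, 2.1, 2.0, 1.8, 2.1])  # metres, hypothetical

print(f"RMSE = {rmse(obs_depth, est_depth):.2f} m, MAPE = {mape(obs_depth, est_depth):.1f}%")
```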
21 pages, 35742 KiB  
Article
LandNet: Combine CNN and Transformer to Learn Absolute Camera Pose for the Fixed-Wing Aircraft Approach and Landing
by Siyuan Shen, Guanfeng Yu, Lei Zhang, Youyu Yan and Zhengjun Zhai
Remote Sens. 2025, 17(4), 653; https://doi.org/10.3390/rs17040653 - 14 Feb 2025
Viewed by 262
Abstract
Camera localization approaches often degrade in challenging environments characterized by illumination variations and significant viewpoint changes, presenting critical limitations for fixed-wing aircraft landing applications. To address these challenges, we propose LandNet, a novel absolute camera pose estimation network specifically designed for airborne scenarios. Our framework processes images from forward-looking aircraft cameras to directly predict 6-DoF camera poses, subsequently enabling aircraft pose determination through rigid transformation. As a first step, we design two encoders, one built on a Transformer and one on CNNs, to capture complementary spatial–temporal features. Furthermore, a novel Feature Interactive Block (FIB) is employed to fully utilize spatial cues from the CNN encoder and temporal cues from the Transformer encoder. We also introduce a novel Attentional Convtrans Fusion Block (ACFB) to fuse the feature maps from the CNN encoder and the Transformer encoder, which enhances the image representations and promotes the accuracy of the camera pose. Finally, two Multi-Layer Perceptron (MLP) heads are applied to estimate the 6-DoF camera position and orientation, respectively. The position and orientation estimated by LandNet can then be used to acquire the pose and orientation of the aircraft through the rigid connection between the airborne camera and the aircraft. Experimental results from simulation and real flight data demonstrate the effectiveness of our proposed method. Full article
Show Figures

Figure 1. Coordinate definitions in the fixed-wing aircraft landing. A, B, C, and D are the runway vertices.
Figure 2. Illustration of the ECEF and ENU coordinates.
Figure 3. Transform matrix between the navigation frame and the body frame.
Figure 4. Illustration of aircraft landing procedures. Points A, B, and C correspond to altitudes of 1000 feet, 200 feet, and 100 feet, respectively.
Figure 5. Overall architecture of the proposed camera localization network.
Figure 6. Two types of residual structures: (a) residual structure without downsampling; (b) residual structure with downsampling.
Figure 7. Illustration of the Transformer encoder.
Figure 8. Illustration of the proposed FIB.
Figure 9. Structure of the proposed ACFB.
Figure 10. The simulated landing scene of the UAV.
Figure 11. Data acquisition platform.
Figure 12. Images captured by the FLIR camera.
Figure 13. Landing trajectories.
Figure 14. Trajectory comparisons at various flight altitudes.
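The rigid camera-to-aircraft step mentioned in the abstract (recovering the aircraft pose from the estimated camera pose through the fixed mounting transform) can be sketched as follows; the extrinsic values are invented placeholders and the frame conventions are assumptions, not taken from the paper.

```python
# Illustrative sketch only: compose an estimated camera pose with a fixed,
# hypothetical camera-to-body mounting transform to obtain the aircraft pose.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical network output: camera orientation (world<-camera) and camera
# position expressed in the world frame.
R_wc = R.from_euler("zyx", [5.0, -3.0, 1.5], degrees=True)
t_wc = np.array([120.0, -40.0, 300.0])

# Hypothetical mounting extrinsics: body<-camera rotation and the camera
# position in the body frame (lever arm).
R_bc = R.from_euler("zyx", [0.0, 90.0, 0.0], degrees=True)
t_bc = np.array([2.0, 0.0, -0.5])

# world<-body = (world<-camera) * (camera<-body) = R_wc * R_bc^{-1}
R_wb = R_wc * R_bc.inv()
t_wb = t_wc - R_wb.apply(t_bc)            # body origin in the world frame

yaw, pitch, roll = R_wb.as_euler("zyx", degrees=True)
print(f"aircraft attitude: yaw={yaw:.1f}, pitch={pitch:.1f}, roll={roll:.1f} deg")
print("aircraft position:", np.round(t_wb, 1))
```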
30 pages, 8823 KiB  
Article
General Approach for Forest Woody Debris Detection in Multi-Platform LiDAR Data
by Renato César dos Santos, Sang-Yeop Shin, Raja Manish, Tian Zhou, Songlin Fei and Ayman Habib
Remote Sens. 2025, 17(4), 651; https://doi.org/10.3390/rs17040651 - 14 Feb 2025
Viewed by 297
Abstract
Woody debris (WD) is an important element in forest ecosystems. It provides critical habitats for plants, animals, and insects. It is also a source of fuel contributing to fire propagation and sometimes leads to catastrophic wildfires. WD inventory is usually conducted through field surveys using transects and sample plots. Light Detection and Ranging (LiDAR) point clouds are emerging as a valuable source for the development of comprehensive WD detection strategies. Results from previous LiDAR-based WD detection approaches are promising. However, there is no general strategy for handling point clouds acquired by different platforms with varying characteristics such as the pulse repetition rate and sensor-to-object distance in natural forests. This research proposes a general and adaptive morphological WD detection strategy that requires only a few intuitive thresholds, making it suitable for multi-platform LiDAR datasets in both plantation and natural forests. The conceptual basis of the strategy is that WD LiDAR points exhibit non-planar characteristics and a distinct intensity and comprise clusters that exceed a minimum size. The developed strategy was tested using leaf-off point clouds acquired by Geiger-mode airborne, uncrewed aerial vehicle (UAV), and backpack LiDAR systems. The results show that using the intensity data did not provide a noticeable improvement in the WD detection results. Quantitatively, the approach achieved an average recall of 0.83, indicating a low rate of omission errors. Datasets with a higher point density (i.e., from UAV and backpack LiDAR) showed better performance. As for the precision evaluation metric, it ranged from 0.40 to 0.85. The precision depends on commission errors introduced by bushes and undergrowth. Full article
Figure 1: Data acquisition systems used in this study: (a) Geiger-mode high-altitude airborne, (b) UAV, and (c) backpack LiDAR systems.
Figure 2: Location of forest areas and spatial distribution of validation regions.
Figure 3: Sample close-up views of acquired datasets at different sites—perspective views colored by height and intensity (left and middle columns) and point density maps/statistics (right column): (a) Geiger-mode, (b) UAV, and (c) backpack LiDAR systems.
Figure 4: Proposed workflow for the WD detection strategy.
Figure 5: Illustration of a sample point cloud from the McCormick Woods dataset: (a) original point cloud, (b) normalized height point cloud, and (c) isolated point cloud close to the forest floor.
Figure 6: Illustration of a sample region in McCormick Woods acquired by Geiger-mode LiDAR: (a) normalized height point cloud and (b) corresponding planarity map.
Figure 7: Illustration of the two-step classification strategy using planarity and intensity attributes for WD detection.
Figure 8: Illustration of sample point cloud collected by a backpack system showing (a) original intensity and (b) normalized intensity.
Figure 9: Illustration of intensity normalization: (a) procedure for intensity normalization and (b) intensity histogram before normalization and (c) after normalization.
Figure 10: Illustration of derived confusion matrix before (a) and after (b) intensity normalization together with the precision, recall, and F1-score metrics.
Figure 11: Illustration of the refinement of WD detection based on cluster spread: (a) DBSCAN segmentation of hypothesized WD, (b) maximum spread for selected clusters, shown by black arrows, and (c) WD detection result after eliminating small clusters (the Geiger_MCNF_2021 dataset is used for this illustration).
Figure 12: Clipped view of the Geiger_MCNF_2021 dataset: (a) normalized height point cloud colored by height and (b) point cloud colored by original intensity; and WD detection results (randomly colored by cluster ID) (c) using planarity criterion, (d) using planarity and intensity criteria, and (e) reference data.
Figure 13: Clipped view of the UAV-MNF_Plot_4d_a-2021 dataset: (a) normalized height point cloud colored by height, (b) point cloud colored by original intensity, and (c) point cloud colored by normalized intensity; and WD detection results (randomly colored by cluster ID) (d) using planarity criterion, (e) using planarity and original intensity criteria, (f) using planarity and normalized intensity criteria, and (g) reference data.
Figure 14: Clipped view of the BP_MNF_Plot_4d_b_2022 dataset: (a) normalized height point cloud colored by height, (b) point cloud colored by original intensity, and (c) point cloud colored by normalized intensity; and WD detection results (randomly colored by cluster ID) (d) using planarity criterion, (e) using planarity and original intensity criteria, (f) using planarity and normalized intensity criteria, and (g) reference data.
Figure 15: Comparison of F1-scores from pixel-based and object-based analyses of Geiger-mode, UAV, and backpack LiDAR data using (a) planarity criterion alone and (b) planarity combined with intensity—the original intensity was only used for the Geiger-mode LiDAR, whereas, for the UAV and backpack LiDAR, intensity was normalized.
Figure 16: Impact of point density on WD detection for aerial and terrestrial datasets showing clipped view of the point cloud colored by height (top) and WD detection results using planarity criterion (bottom): (a) Geiger_MNF_Plot_4d_a_2021, (b) UAV_MNF_Plot_4d_a_2021, (c) BP_MNF_Plot_4d_b_2022, and (d) reference data.
Figure 17: Clipped view of the UAV_MNF_Plot_4d_b_2022 (top) and BP_MNF_Plot_4d_b_2022 (bottom) datasets: (a) normalized height point cloud colored by height, (b) WD detection results using planarity criterion, (c) zoom-in view of WD detection results, and (d) reference data.
15 pages, 3863 KiB  
Article
Floating Multi-Focus Metalens for High-Efficiency Airborne Laser Wireless Charging
by Zheting Meng, Yuting Xiao, Lianwei Chen, Si Wang, Yao Fang, Jiangning Zhou, Yang Li, Dapeng Zhang, Mingbo Pu and Xiangang Luo
Photonics 2025, 12(2), 150; https://doi.org/10.3390/photonics12020150 - 12 Feb 2025
Viewed by 485
Abstract
Laser wireless power transfer (LWPT) offers a transformative approach to wireless energy transmission, addressing critical limitations of unmanned aerial vehicles (UAVs) such as limited battery capacity. However, challenges such as beam divergence, non-uniform irradiation, and alignment instability limit its practical application. Here, we present a lightweight air-floating metalens platform to overcome these barriers. The lens focuses laser beams near the photovoltaic receiver; the energy distribution uniformity across a single spot at the focal plane is 50 times greater than that of a conventional Gaussian beam spot, and the multi-spot energy distribution uniformity theoretically reaches up to 99%. Experimentally, we achieved 75% uniformity using a metalens sample. At the same time, the system maintains superior beam quality over a dynamic range of 4 m and enhances charging efficiency by a factor of 1.5. Our research provides a robust technical solution for improving UAV endurance, enabling efficient, long-range wireless power transfer with broader technological implications. Full article
(This article belongs to the Special Issue Recent Advances in Diffractive Optics)
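The uniformity figures quoted in the abstract can be explored with a simple metric computed over an intensity map. The sketch below uses one common convention, uniformity = 1 − (Imax − Imin)/(Imax + Imin) over the illuminated region; this definition and the background threshold are assumptions on our part, and the paper may define μ and Δ differently.

```python
import numpy as np

def spot_uniformity(intensity, threshold=0.05):
    """Uniformity of an illuminated spot: 1 - (Imax - Imin) / (Imax + Imin).

    intensity: 2-D array of measured (or simulated) irradiance.
    threshold: fraction of the peak below which pixels are treated as background.
    This definition is a common convention, not necessarily the one used in the paper.
    """
    region = intensity[intensity > threshold * intensity.max()]
    i_max, i_min = region.max(), region.min()
    return 1.0 - (i_max - i_min) / (i_max + i_min)

# Example: a flat-top (square) spot scores close to 1, a Gaussian spot much lower.
x = np.linspace(-1, 1, 201)
xx, yy = np.meshgrid(x, x)
flat_top = ((np.abs(xx) < 0.5) & (np.abs(yy) < 0.5)).astype(float)
gaussian = np.exp(-(xx**2 + yy**2) / (2 * 0.25**2))
print(spot_uniformity(flat_top), spot_uniformity(gaussian))
```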
Figure 1: (a) Schematic of the metalens designed to generate multiple focal spots. When light beams with arbitrary polarization states are incident, a 3 × 3 array of nine focal spots is formed on the focal plane. The uniformity across the focal spots reaches 99%, while the non-uniformity within each individual focal spot remains below 5; (b) Optimized phase profiles of the metalens used to generate nine focal spots and square-shaped focal spots, respectively; (c) Calculated uniformity of the nine focal spots as a function of the distance from the focal plane. The blue curve shows the trend for a metalens generating Gaussian focal spots. The yellow curve represents the uniformity trend for a metalens generating square focal spots prior to optimization, while the purple curve shows the improved trend after optimization.
Figure 2: Simulation results of Gaussian beam spots versus optimized metalens square beam spots at the focal plane (the maximum light intensity has been normalized to a unit value). The focal spot is primarily divided into nine regions, denoted as A1, A2, A3, A4, A5, A6, A7, A8, and A9: (a) Energy distribution of nine Gaussian beam spots; (b) Energy distribution of nine square beam spots; (c) Energy distribution of a single Gaussian beam spot; (d) Energy distribution of a single square beam spot; (e) Comparison of energy proportions between Gaussian and square beam spots across nine sub-regions; (f) Comparison of non-uniformity Δ across the nine sub-regions.
Figure 3: Measurement of the dynamic range of energy distribution (the maximum light intensity has been normalized to a unit value): (a) Energy distribution of Gaussian beam spots across various propagation distances; (b) Energy distribution of square beam spots across various propagation distances before optimization; (c) Energy distribution of optimized square beam spots across various propagation distances; (d) Comparison of energy proportions in the sub-regions of optimized square beam spots at different propagation distances; (e) Comparison of non-uniformity (Δ) within individual sub-regions of optimized square beam spots at different propagation distances; (f) Overall illumination uniformity (μ) of optimized square beam spots across various propagation distances.
Figure 4: Experimental verification of the metalens and floating structure: (a) Schematic diagram of the experimental setup for metalens testing; (b) Comparison between experimental and simulation results of square beam spots at various propagation distances; (c) Experimental results of the overall uniformity (μ) of the beam spots at different propagation distances; (d,e) 3D models of the floating structure; (f,g) Experimental verification images of the balloon-supported floating structure; (h) Laser-transmitted power at different positions of the flexible lens.
23 pages, 3449 KiB  
Article
Machine Learning to Forecast Airborne Parietaria Pollen in the North-West of the Iberian Peninsula
by Gonzalo Astray, Rubén Amigo Fernández, María Fernández-González, Duarte A. Dias-Lorenzo, Guillermo Guada and Francisco Javier Rodríguez-Rajo
Sustainability 2025, 17(4), 1528; https://doi.org/10.3390/su17041528 - 12 Feb 2025
Viewed by 427
Abstract
Pollen forecasting models are helpful tools for predicting environmental processes and allergenic risk events. Parietaria belongs to the Urticaceae family and, due to its high pollen production, is responsible for many cases of severe pollinosis reactions. This research aims to develop machine learning models, namely random forest (RF), support vector machine (SVM), and artificial neural network (ANN) models, to predict Parietaria pollen concentrations in the atmosphere of northwest Spain using 24 years of data, from 1999 to 2022. The results show an increase in the duration and intensity of the Parietaria main pollen season in the Mediterranean region (Ourense). The machine learning models demonstrated their capacity to forecast Parietaria pollen concentrations one, two, and three days ahead. The best selected models presented high correlation coefficients, between 0.713 and 0.859, with root mean squared errors between 5.55 and 7.66 pollen grains·m−3 in the testing phase. The models could be further improved by increasing the number of years of data, studying other hyperparameter ranges, or analyzing different data distributions. Full article
(This article belongs to the Section Pollution Prevention, Mitigation and Sustainability)
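As a rough illustration of the modeling setup described in the abstract, forecasting pollen concentrations one to three days ahead from lagged pollen and meteorological variables, the sketch below trains the three model families with scikit-learn. The column names, lag choices, and hyperparameters are placeholders, not the configuration used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def make_supervised(df, horizon=1, lags=3):
    """Build lagged features to predict pollen `horizon` days ahead.

    df needs daily columns 'pollen', 'tmax', 'tmin', 'rh', 'rain' (hypothetical names).
    """
    feats = pd.DataFrame(index=df.index)
    for lag in range(lags):
        for col in ["pollen", "tmax", "tmin", "rh", "rain"]:
            feats[f"{col}_lag{lag}"] = df[col].shift(lag)
    target = df["pollen"].shift(-horizon)
    data = pd.concat([feats, target.rename("y")], axis=1).dropna()
    return data.drop(columns="y"), data["y"]

def evaluate(df, horizon):
    """Fit RF, SVM, and ANN models on a chronological split and report r and RMSE."""
    X, y = make_supervised(df, horizon=horizon)
    split = int(len(X) * 0.8)
    models = {
        "RF": RandomForestRegressor(n_estimators=300, random_state=0),
        "SVM": SVR(C=10.0, epsilon=0.5),
        "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X.iloc[:split], y.iloc[:split])
        pred = model.predict(X.iloc[split:])
        rmse = mean_squared_error(y.iloc[split:], pred) ** 0.5
        r = np.corrcoef(y.iloc[split:], pred)[0, 1]
        print(f"{horizon} day(s) ahead, {name}: r={r:.3f}, RMSE={rmse:.2f} grains/m3")
```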
Figure 1: Microscopic photograph (40×) of Parietaria pollen.
Figure 2: Location of the study area (Ourense) in Galicia and Europe.
Figure 3: Schematic diagram of the procedure carried out to develop the three machine learning models.
Figure 4: Scheme of a random forest model (inspired by Machado et al. (2015) [37]), a support vector regression model (inspired by Keshtegar et al. (2019) [38]), and an artificial neural network model (inspired by Abdolrasol et al. (2021) [39]).
Figure 5: Aerobiological and meteorological Mann–Kendall trends for each study region. The horizontal black bar shows the significance level. (A) Main aerobiological parameters of the MPS: onset of the MPS (st.jd; days), length of the pollen season (ln.ps; days), end of the MPS (end.jd; days), SPIn (sm.ps; pollen grains), pollen peak (pk.val; pollen grains·m−3), and pollen peak day (pk.jd; days); main aerobiological parameters of the pre-peak period: length (ln.prpk; days) and SPIn_pre peak (sm.prpk; pollen grains); main aerobiological parameters of the post-peak period: length (ln.pspk; days) and SPIn_post peak (sm.pspk; pollen grains). (B) Meteorological parameters studied: relative humidity (RH; %); rainfall (Rainfall; mm); maximum temperature (Max T; °C), mean temperature (Avg T, °C), and minimum temperature (Min T, °C). Precipitation trends were also calculated for the pre- (Rainfall_Pre) and post-peak (Rainfall_Post) periods.
Figure 6: Real and predicted values for Parietaria pollen concentration during the testing years 2018 to 2022 for 1 day (top), 2 days (centre), and 3 days ahead (bottom), using the best selected models developed.
35 pages, 25233 KiB  
Article
Assessment of the Solar Potential of Buildings Based on Photogrammetric Data
by Paulina Jaczewska, Hubert Sybilski and Marlena Tywonek
Energies 2025, 18(4), 868; https://doi.org/10.3390/en18040868 - 12 Feb 2025
Viewed by 532
Abstract
In recent years, a growing demand for alternative energy sources, including solar energy, has been observed. This article presents a methodology for assessing the solar potential of buildings using images from Unmanned Aerial Vehicles (UAVs) and point clouds from airborne LIDAR. The proposed method includes the following stages: DSM generation, extraction of building footprints, determination of roof parameters, mapping of solar energy generation, removal of areas that are not suitable for the installation of solar systems, calculation of power for each building, conversion of solar irradiance into energy, and mapping of the potential for solar power generation. This paper also describes a photovoltaic panel detection algorithm based on deep learning techniques. The proposed algorithm enabled the efficiency of photovoltaic panels to be assessed, the resulting maps of the solar potential of buildings to be compared, and the areas that require optimization to be identified. The results of the analysis, conducted in test areas in a village and on a university campus, confirmed the usefulness of the proposed methods. The analysis shows that UAV image data enable the generation of solar potential maps with higher accuracy (MAE = 8.5 MWh) than LIDAR data (MAE = 10.5 MWh). Full article
(This article belongs to the Special Issue Advanced Applications of Solar and Thermal Storage Energy)
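The conversion of solar irradiance into energy mentioned in the workflow reduces, per roof segment, to multiplying annual irradiation, usable area, module efficiency, and a performance ratio. The sketch below shows that arithmetic; the efficiency and performance-ratio values are typical assumed figures, not parameters taken from the article.

```python
def annual_pv_energy(irradiation_kwh_m2, usable_area_m2,
                     module_efficiency=0.20, performance_ratio=0.8):
    """Estimated annual PV yield of one roof segment in kWh.

    irradiation_kwh_m2: annual in-plane solar irradiation [kWh/m^2/year]
    usable_area_m2:     roof area suitable for modules [m^2]
    The efficiency and performance ratio are typical assumed values.
    """
    return irradiation_kwh_m2 * usable_area_m2 * module_efficiency * performance_ratio

# Example: a 40 m^2 south-facing roof segment receiving 1100 kWh/m^2 per year.
print(f"{annual_pv_energy(1100, 40):.0f} kWh/year")  # -> about 7040 kWh/year
```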
Figure 1: Research areas: (a) Military University of Technology [Google Earth]; (b) Wodziczna village [own photo].
Figure 2: Methodology of generating the map of the solar potential of buildings.
Figure 3: The methodology of detecting objects with the use of deep learning.
Figure 4: Location of the detected photovoltaic systems. Photovoltaic systems marked with numbers 1–6 correspond to the houses marked in Figure 5.
Figure 5: Power generation potential in Wodziczna village (low-altitude data). The houses marked with numbers 1–6 correspond to the photovoltaic systems marked in Figure 4.
Figure 6: Assessment of the solar potential of buildings for the campus of the Military University of Technology: (a) Solar map of the campus; (b) power generation potential of the campus.
Figure 7: The 2.5D visualization of the potential for solar power generation. Fragment of the MUT campus, visualization based on DSM + orthomosaic (mesh size: 50 cm, resolution of the orthomosaic: 5 cm).
Figure 8: Maps of power generation potential for three different datasets: (a) Power generation potential in Wodziczna village [DSM (mesh size 1.75 cm) based on imagery data]; (b) Power generation potential in Wodziczna village [DSM (mesh size 10 cm) based on LIDAR data]; (c) Power generation potential in Wodziczna village [DSM (mesh size 100 cm) based on LIDAR data].
Figure 9: Comparison of grid size from different datasets: low-altitude imagery data and LIDAR data from ALS, at various mesh sizes.
Figure 10: Position of houses on the map of the power generation potential [DSM (mesh size 1.75 cm), based on image data].
Figure 11: The 2.5D visualization of the power generation potential: (a) Fragment of Wodziczna village, visualization based on DSM (mesh size 1.75 cm) + orthomosaic (pixel size 1.75 cm), DSM based on low-altitude photogrammetric data; (b) Fragment of Wodziczna village, visualization based on DSM (mesh size 10 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data; (c) Fragment of Wodziczna village, visualization based on DSM (mesh size 100 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data.
Figure 12: Shaded photovoltaic panels (mounting error)—System 3.
Figure 13: Comparison of the power generation for roof surfaces with the data on power generation from existing photovoltaic systems.
Figure 14: Power generation from photovoltaic systems—vectorized areas of photovoltaic panels. (a) vectorization of a photovoltaic system where the panels are separated; (b) vectorization of a photovoltaic system.
Figure 15: Comparison of the power generation for vectorized surfaces of photovoltaic systems with the data on power generation from existing photovoltaic systems.
15 pages, 2892 KiB  
Article
Diagnosis of Winter Wheat Nitrogen Status Using Unmanned Aerial Vehicle-Based Hyperspectral Remote Sensing
by Liyang Huangfu, Jundang Jiao, Zhichao Chen, Lixiao Guo, Weidong Lou and Zheng Zhang
Appl. Sci. 2025, 15(4), 1869; https://doi.org/10.3390/app15041869 - 11 Feb 2025
Viewed by 371
Abstract
The nitrogen nutrition index (NNI) is a significant agronomic statistic used to assess the nitrogen nutrition status of crops. The use of remote sensing to invert it is crucial for accurately diagnosing and managing nitrogen nutrition in crops during critical periods. This study used the UHD185 airborne hyperspectral imager and the ASD Field Spec3 portable spectrometer to acquire hyperspectral remote sensing data and agronomic parameters of the winter wheat canopy during the jointing and flowering stages. The objective was to estimate the NNI of winter wheat through a winter wheat nitrogen gradient experiment conducted in Leling, Shandong Province. The ASD spectral reflectance data of the winter wheat canopy were selected as the reference standard and compared with the UHD185 hyperspectral data obtained from an unmanned aerial vehicle (UAV). The comparison focused on the trends in the spectral curves and the spectral correlation between the two datasets. The findings indicated a strong agreement between the UHD185 hyperspectral data and the ASD spectral data in the range of 450–830 nm. A spectral index was developed to estimate the NNI using the bands within this range. The linear model based on the first-order derivative ratio spectral index RSI (FD666, FD826) demonstrated the highest accuracy in estimating the NNI of winter wheat; it yielded R2 values of 0.85 and 0.75, respectively, and can be represented by the equation y = −2.0655x + 0.156. The results serve as a benchmark for the future use of UHD185 hyperspectral data in estimating agronomic characteristics of winter wheat. Full article
(This article belongs to the Special Issue State-of-the-Art Agricultural Science and Technology in China)
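The reported estimator is a linear function of a ratio spectral index built from first-derivative reflectances at 666 nm and 826 nm. A minimal sketch of that computation is given below; the derivative is taken with a simple finite difference, and it is assumed that y in the reported equation is the NNI and x is RSI (FD666, FD826). The exact preprocessing used in the study may differ.

```python
import numpy as np

def first_derivative(wavelengths, reflectance):
    """First derivative of a reflectance spectrum via finite differences."""
    return np.gradient(np.asarray(reflectance, float), np.asarray(wavelengths, float))

def rsi_fd(wavelengths, reflectance, band_a=666.0, band_b=826.0):
    """Ratio spectral index of first-derivative reflectance: FD(band_a) / FD(band_b)."""
    wl = np.asarray(wavelengths, float)
    fd = first_derivative(wl, reflectance)
    fd_a = fd[np.argmin(np.abs(wl - band_a))]   # first derivative nearest 666 nm
    fd_b = fd[np.argmin(np.abs(wl - band_b))]   # first derivative nearest 826 nm
    return fd_a / fd_b

def estimate_nni(wavelengths, reflectance):
    """Linear model reported in the abstract: y = -2.0655 x + 0.156 (y assumed to be the NNI)."""
    return -2.0655 * rsi_fd(wavelengths, reflectance) + 0.156
```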
Figure 1: Geographical position map of the study area and field setup.
Figure 2: Flowchart showing the steps taken to process the hyperspectral imagery.
Figure 3: Comparison between UHD185 spectral curve and resampled ASD spectral curve at different stages.
Figure 4: Correlation between spectral reflectance of UHD185 and resampled ASD in different periods. (a) Jointing period; (b) Blooming period.
Figure 5: Contour map of the coefficients of determination between spectral indices composed of any two bands of spectra and the nitrogen nutrient index of the plant. (a) NDSI; (b) RSI; (c) DSI; (d) NDSI_FD; (e) RSI_FD; (f) DSI_FD.
Figure 6: Comparison of predicted and measured values of NNI in winter wheat retrieved by the regression model.