Search Results (767)

Search Parameters:
Keywords = multi-spectral remote sensing mapping

20 pages, 4530 KiB  
Article
Mapping Forest Aboveground Biomass Using Multi-Source Remote Sensing Data Based on the XGBoost Algorithm
by Dejun Wang, Yanqiu Xing, Anmin Fu, Jie Tang, Xiaoqing Chang, Hong Yang, Shuhang Yang and Yuanxin Li
Forests 2025, 16(2), 347; https://doi.org/10.3390/f16020347 - 15 Feb 2025
Viewed by 228
Abstract
Aboveground biomass (AGB) serves as an important indicator for assessing the productivity of forest ecosystems and exploring the global carbon cycle. However, accurate estimation of forest AGB remains a significant challenge, especially when integrating multi-source remote sensing data, and the effects of different feature combinations on AGB estimation results remain unclear. In this study, we proposed a method for estimating forest AGB by combining Gao Fen 7 (GF-7) stereo imagery with data from Sentinel-1 (S1), Sentinel-2 (S2), the Advanced Land Observing Satellite digital elevation model (ALOS DEM), and field surveys. The continuous tree height (TH) feature was derived using GF-7 stereo imagery and the ALOS DEM. Spectral features were extracted from S1 and S2, and topographic features were extracted from the ALOS DEM. Using these features, 15 feature combinations were constructed. The recursive feature elimination (RFE) method was used to optimize each feature combination, which was then input into the extreme gradient boosting (XGBoost) model for AGB estimation. The different feature combinations used to estimate forest AGB were compared, and the best model was selected for mapping AGB distribution at 30 m resolution. The results showed that the best forest AGB model was composed of 13 features, including TH, topographic features, and spectral features extracted from S1 and S2 data. This model achieved the best prediction performance, with a determination coefficient (R2) of 0.71 and a root mean square error (RMSE) of 18.11 Mg/ha. TH was found to be the most important predictive feature, followed by S2 optical features, topographic features, and S1 radar features.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
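The pipeline this abstract outlines — RFE feature selection feeding an XGBoost regressor — can be sketched in a few lines. Below is a minimal sketch with synthetic placeholder features standing in for the TH, topographic, and S1/S2 spectral variables; it is not the authors' code or data.

```python
# Sketch of RFE feature selection wrapped around an XGBoost regressor.
# All data below are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 20))                # stand-ins for TH, topographic, S1/S2 features
y = 100 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 5, 200)  # synthetic AGB (Mg/ha)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RFE keeps the strongest predictors (the paper's best model retained 13 features).
selector = RFE(XGBRegressor(n_estimators=200, max_depth=4), n_features_to_select=13)
selector.fit(X_tr, y_tr)

model = XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X_tr[:, selector.support_], y_tr)
pred = model.predict(X_te[:, selector.support_])
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} Mg/ha")
```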
Figure 1. True-color image map of the study area. The red and blue lines show the spatial coverage of the forward and backward images of GF-7, respectively. The red, green, and blue triangular markers indicate the locations of the 2012, 2022, and 2024 sampling data, respectively.
Figure 2. Flowchart for the overall workflow of estimating forest AGB using combined multi-source remote sensing data.
Figure 3. Distribution of GCPs and TPs used for DSM generation. (a) Ground control points, (b) tie points.
Figure 4. Performance of XGBoost models for AGB estimation using different feature combinations. (a) TS1S2D, (b) TS2D, (c) TS1S2.
Figure 5. Relative importance ranking of features based on the XGBoost model built with the TS1S2D feature combination.
Figure 6. Spatial distribution of AGB in the research area. (a) Distribution of AGB predicted by the XGBoost model with the TS1S2D combination; (b) S2 true-color image of the region within the red box; (c) zoomed-in view of the red box in the AGB distribution map.
Figure 7. Forest AGB estimation based on the XGBoost model and TS1S2D feature combination. (a) Coniferous forests, (b) broadleaf forests.
Figure 8. Feature importance rankings for different forest types based on the XGBoost model and TS1S2D feature combination. (a) Coniferous forests, (b) broadleaf forests.
Figure 9. AGB difference maps. (a) TS2D − TS1S2D, (b) TS1S2 − TS1S2D.
Figure 10. Comparison of AGB distributions in this study and published datasets (Zhang et al. [41], Yang et al. [44], Chang et al. [52]). The horizontal line in each box plot represents the median, the black dot indicates the mean, and the width of the violin plot reflects the data proportion.
27 pages, 4395 KiB  
Article
Impact of Land Use Pattern and Heavy Metals on Lake Water Quality in Vidarbha and Marathwada Region, India
by Pranaya Diwate, Prasanna Lavhale, Suraj Kumar Singh, Shruti Kanga, Pankaj Kumar, Gowhar Meraj, Jatan Debnath, Dhrubajyoti Sahariah, Md. Simul Bhuyan and Kesar Chand
Water 2025, 17(4), 540; https://doi.org/10.3390/w17040540 - 13 Feb 2025
Viewed by 391
Abstract
Lakes are critical resources that support the ecological balance and provide essential services for human and environmental well-being. However, their quality is increasingly threatened by both natural and anthropogenic processes. This study assessed the water quality and the presence of heavy metals in 15 lakes in the Vidarbha and Marathwada regions of Maharashtra, India. To understand the extent of pollution and its sources, physico-chemical parameters were analyzed, including pH, turbidity, total hardness, orthophosphate, residual free chlorine, chloride, fluoride, and nitrate, as well as heavy metals such as iron, lead, zinc, copper, arsenic, chromium, manganese, cadmium, and nickel. The results revealed significant pollution in several lakes, with Lonar Lake showing a pH value of 12, exceeding the Bureau of Indian Standards (BIS) limit. Lonar Lake also showed elevated levels of fluoride (2 mg/L), nitrate (45 mg/L), and orthophosphate (up to 2 mg/L). Rishi Lake had elevated concentrations of nickel (0.2 mg/L) and manganese (0.7 mg/L), crossing permissible BIS limits. Rishi Lake and Salim Ali Lake exhibited higher copper levels than the other lakes. Cadmium was detected in most of the lakes at 0.1–0.4 mg/L, exceeding BIS limits. The highest turbidity levels were observed in Rishi Lake and Salim Ali Lake, at 25 NTU. The total hardness observed in Kharpudi Lake, 400 mg/L, was the highest among the lakes under study. The spatial analysis, which utilized remote sensing and GIS techniques, including Sentinel-2 multispectral imagery for land use and land cover mapping and a Digital Elevation Model (DEM) for watershed delineation, provided insights into the topography and drainage patterns affecting these lakes. The findings emphasize the urgent need for targeted management strategies to mitigate pollution and protect these vital freshwater ecosystems, with broader implications for public health and ecological sustainability in regions reliant on these water resources.
(This article belongs to the Special Issue Monitoring and Modelling of Contaminants in Water Environment)
Figure 1. Study area map. (A) India, (B) State of Maharashtra, (C) Watersheds of the Vidarbha region.
Figure 2. Study area map. (A) India, (B) Watersheds of the Marathwada region.
Figure 3. Methodology flowchart for water quality and watershed analysis.
Figure 4. Values in ppm (mg/L) of physico-chemical parameters.
Figure 5. Watersheds in the Vidarbha region.
Figure 6. Watersheds in the Marathwada region.
Figure 7. Land use land cover map of watersheds in the Vidarbha region.
Figure 8. Land use land cover map of watersheds in the Marathwada region.
32 pages, 3009 KiB  
Review
Satellite Remote Sensing Techniques and Limitations for Identifying Bare Soil
by Beth Delaney, Kevin Tansey and Mick Whelan
Remote Sens. 2025, 17(4), 630; https://doi.org/10.3390/rs17040630 - 12 Feb 2025
Viewed by 430
Abstract
Bare soil (BS) identification through satellite remote sensing can potentially play a critical role in understanding and managing soil properties essential for climate regulation and ecosystem services. Drawing on 191 papers, this review synthesises advancements in BS detection methodologies, such as threshold masking and classification algorithms, while highlighting persistent challenges such as spectral confusion and inconsistent validation practices. The analysis reveals an increasing reliance on satellite data for applications such as digital soil mapping, land use monitoring, and environmental impact mapping. While multispectral sensors like Landsat and Sentinel dominate current methodologies, limitations remain in distinguishing BS from spectrally similar surfaces, such as crop residues and urban areas. This review emphasises the critical need for robust validation practices to ensure reliable estimates. By integrating technological advancements with improved methodologies, accurate, large-scale BS detection can contribute significantly to combating land degradation and supporting global food security and climate resilience efforts.
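Threshold masking, one of the main BS identification methods covered by this review, reduces to a per-pixel comparison of a spectral index against a cutoff. A minimal sketch using NDVI, with a threshold taken from the 0.09–0.10 consensus range the review reports; the reflectance arrays are illustrative placeholders.

```python
# Sketch of NDVI threshold masking for bare-soil (BS) detection.
# The 0.10 cutoff reflects the review's reported consensus range (0.09-0.10);
# the toy reflectance arrays are placeholders, not real imagery.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, guarding against division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def bare_soil_mask(nir, red, threshold=0.10):
    """Flag pixels with NDVI below the threshold as bare soil."""
    return ndvi(nir, red) < threshold

# Toy 2x2 reflectance example mixing vegetated and bare pixels.
nir = np.array([[0.45, 0.20], [0.50, 0.22]])
red = np.array([[0.10, 0.18], [0.08, 0.20]])
print(bare_soil_mask(nir, red))   # [[False  True] [False  True]]
```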
Figure 1. PRISMA flow chart describing the process of extracting relevant papers based on the initial search criteria.
Figure 2. Trends in the use of BS identification methods in published articles with satellite remote sensing data.
Figure 3. (a) Number of articles per research area. (b) Number of articles using each distinct BS methodology. (c) Method of BS identification for each research area. BSS, Bare Soil Separation; VM, Vegetation Mapping; EIRM, Environmental Impact and Risk Mapping; LULC, Land Use Land Cover; DSM, Digital Soil Mapping.
Figure 4. Top five most commonly used indices for threshold masking to identify BS pixels, by frequency of use within the literature.
Figure 5. The variation in NDVI threshold ranges used to identify BS pixels. The threshold ranges are arranged by use within the literature, with bar thickness and values representing the frequency of use. The range with the most overlap of NDVI values is depicted by the vertical line (0.09–0.10).
Figure 6. Land cover classes most often confused with bare soil in classification algorithms.
Figure 7. Comparison of machine learning model types and statistical models based on F1 score.
Figure 8. Distribution of validation types across different BS methods (%).
25 pages, 12059 KiB  
Article
Albufera Lagoon Ecological State Study Through the Temporal Analysis Tools Developed with PerúSAT-1 Satellite
by Bárbara Alvado, Luis Saldarriaga, Xavier Sòria-Perpinyà, Juan Miguel Soria, Jorge Vicent, Antonio Ruíz-Verdú, Clara García-Martínez, Eduardo Vicente and Jesus Delegido
Sensors 2025, 25(4), 1103; https://doi.org/10.3390/s25041103 - 12 Feb 2025
Viewed by 387
Abstract
The Albufera of Valencia (Spain) is a representative case of pressure on water quality: the hypertrophic state of the lake completely changed an ecosystem that once featured crystal-clear waters. PerúSAT-1 is the first Peruvian remote sensing satellite, developed for natural disaster monitoring. Its high spatial resolution makes it an ideal sensor for capturing highly detailed products, useful for a variety of applications, and its ability to change acquisition geometry improves its revisit capability. The main objective of this study is to assess the potential of PerúSAT-1's multispectral images for developing multi-parameter algorithms to evaluate the ecological state of the Albufera lagoon. During five field campaigns, samples were taken and measurements of ecological indicators (chlorophyll-a, Secchi disk depth, total suspended matter, and its organic and inorganic fractions) were made. All possible combinations of two bands were obtained and subsequently correlated with the biophysical variables by fitting a linear regression between the field data and the band combinations. The equations for estimating the water variables yield the following R2 values: 0.76 for chlorophyll-a (NRMSE: 16%), 0.75 for Secchi disk depth (NRMSE: 15%), 0.84 for total suspended matter (NRMSE: 11%), 0.76 for the inorganic fraction (NRMSE: 15%), and 0.87 for the organic fraction (NRMSE: 9%). Finally, the equations were applied to the Albufera lagoon images to obtain thematic maps for all variables.
(This article belongs to the Special Issue Application of Satellite Remote Sensing in Geospatial Monitoring)
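The exhaustive two-band search described in the abstract — fitting a linear regression between each two-band index and a field-measured variable, then keeping the best pair — can be sketched as follows. The band values and the fitted variable are synthetic placeholders, not the PerúSAT-1 campaign data.

```python
# Sketch of the two-band normalized-difference (ND) search: fit a linear
# regression of each ND index against field data and keep the best pair.
# All arrays are synthetic placeholders.
import itertools
import numpy as np
from scipy import stats

bands = {f"B{i}": np.random.default_rng(i).random(20) for i in range(1, 5)}
# Synthetic "field" chlorophyll-a driven by the (B4, B1) ND index plus noise.
chl_a = 3.0 * (bands["B4"] - bands["B1"]) / (bands["B4"] + bands["B1"]) + \
        np.random.default_rng(9).normal(0, 0.1, 20)

best = max(
    ((b1, b2, stats.linregress((bands[b1] - bands[b2]) /
                               (bands[b1] + bands[b2]), chl_a))
     for b1, b2 in itertools.combinations(bands, 2)),
    key=lambda t: t[2].rvalue ** 2,
)
b1, b2, fit = best
print(f"best ND pair: ({b1} - {b2})/({b1} + {b2}), R2 = {fit.rvalue ** 2:.2f}")
```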
Figure 1. Study area location, L'Albufera de València. Green dots are the sampling points.
Figure 2. PerúSAT-1 images over the Albufera lagoon. (a) TOA image, without atmospheric correction; (b) BOA image, with atmospheric correction.
Figure 3. Validation of atmospheric correction data.
Figure 4. Boxplot of the value range for the water quality parameters. The box bounds the interquartile range (IQR: 25–75 percentile), the horizontal line inside the box indicates the median, and the error bars indicate the 90th percentile above and the 10th percentile below. Dots indicate the outliers.
Figure 5. Chl-a in situ as a function of ND (B4 − B1)/(B4 + B1).
Figure 6. SDD in situ as a function of ND (B4 − B1)/(B4 + B1).
Figure 7. TSM in situ as a function of SR (B1/B4).
Figure 8. PIM in situ as a function of ND (B3/B1).
Figure 9. POM in situ as a function of SR (B3/B4).
Figure 10. Estimation maps of Chl-a (μg/L). From left to right and top to bottom: winter, spring, summer, and autumn.
Figure 11. Estimation maps of SDD (m). From left to right and top to bottom: winter, spring, summer, and autumn.
Figure 12. Estimation maps of TSM (mg/L). From left to right and top to bottom: winter, spring, summer, and autumn.
Figure 13. Estimation maps of PIM (mg/L). From left to right and top to bottom: winter, spring, summer, and autumn.
Figure 14. Estimation maps of POM (mg/L). From left to right and top to bottom: winter, spring, summer, and autumn.
Figure 15. 10 m pixel (left) of the S2 image vs. 2.8 m pixel (right) of the PerúSAT-1 image.
Figure 16. Subset of the "Obera ditch" area for the PerúSAT-1 product (top) and the S2 product (bottom).
Figure A1. Estimation maps of Chl-a. Acquisition date: 20 April 2023. Left: S2 image; right: PerúSAT-1 image.
Figure A2. Estimation maps of SDD. Acquisition date: 20 April 2023. Left: S2 image; right: PerúSAT-1 image.
Figure A3. Estimation maps of TSM. Acquisition date: 20 April 2023. Left: S2 image; right: PerúSAT-1 image.
Figure A4. Estimation maps of PIM. Acquisition date: 20 April 2023. Left: S2 image; right: PerúSAT-1 image.
Figure A5. Estimation maps of POM. Acquisition date: 20 April 2023. Left: S2 image; right: PerúSAT-1 image.
Figure A6. Subset of the "Obera ditch" area for the PerúSAT-1 Chl-a product (top) and the S2 Chl-a product (bottom).
37 pages, 7441 KiB  
Review
Practical Guidelines for Performing UAV Mapping Flights with Snapshot Sensors
by Wouter H. Maes
Remote Sens. 2025, 17(4), 606; https://doi.org/10.3390/rs17040606 - 10 Feb 2025
Viewed by 794
Abstract
Uncrewed aerial vehicles (UAVs) have transformed remote sensing, offering unparalleled flexibility and spatial resolution across diverse applications. Many of these applications rely on mapping flights using snapshot imaging sensors to create 3D models of an area or to generate orthomosaics from RGB, multispectral, hyperspectral, or thermal cameras. Based on a literature review, this paper provides comprehensive guidelines and best practices for executing such mapping flights, addressing critical aspects of both flight preparation and flight execution. Key considerations covered for flight preparation include sensor selection, flight height and GSD, flight speed, overlap settings, flight pattern, direction, and viewing angle; considerations for flight execution include on-site preparations (GCPs, camera settings, sensor calibration, and reference targets) as well as on-site conditions (weather conditions, time of the flights). In all these steps, high-resolution and high-quality data acquisition must be balanced against feasibility constraints such as flight time, data volume, and post-flight processing time. For reflectance and thermal measurements, BRDF issues also influence the correct settings. The formulated guidelines are based on literature consensus; however, the paper also identifies knowledge gaps in mapping flight settings, particularly regarding viewing angle patterns, flight direction, and thermal imaging in general. The guidelines aim to advance the harmonization of UAV mapping practices, promoting reproducibility and enhanced data quality across diverse applications.
(This article belongs to the Section Remote Sensing Image Processing)
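One relation underlying the flight height discussion is the ground sampling distance (GSD), which grows linearly with flight height for a nadir-pointing camera. A minimal sketch of the standard photogrammetric formula; the camera parameters in the example are illustrative, not a specific product's specifications.

```python
# Standard photogrammetric GSD relation (not code from the paper):
# GSD = flight height * pixel pitch / focal length.
def gsd_cm(flight_height_m: float,
           focal_length_mm: float,
           sensor_width_mm: float,
           image_width_px: int) -> float:
    """GSD in cm/pixel for a nadir-pointing camera."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return (flight_height_m * 100.0) * pixel_pitch_mm / focal_length_mm

# Example with illustrative multispectral-camera values: a 5.3 mm lens and a
# 4.8 mm-wide, 1280 px sensor flown at 50 m gives roughly a 3.5 cm GSD.
print(f"{gsd_cm(50, 5.3, 4.8, 1280):.1f} cm/px")
```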
Graphical abstract
Figure 1. Overview of the UAV mapping process. This review focuses on the areas in bold and green.
Figure 2. Schematic overview of the solar and sensor viewing angles.
Figure 3. BRDF influence on spectral reflectance. (a) Images obtained with a UAV from a meadow from different sensor zenith and azimuth angles (Canon S110 camera, on a Vulcan hexacopter with an AV200 gimbal (PhotoHigher, Wellington, New Zealand), obtained on 28 July 2015 over a meadow near Richmond, NSW, Australia (lat: 33.611°S, lon: 150.732°E)). (b) Empirical BRDF in the green wavelength over a tropical forest (Robson Creek, Queensland, Australia (lat: 17.118°S, lon: 145.630°E), obtained with the same UAV and camera on 16 August 2015, from [21]). (c–e) Simulations of reflectance in the red (c) and near-infrared (d) spectrum and for NDVI (e) (SCOPE; for a vegetation of 1 m height, LAI of 2, chlorophyll content of 40 μg/cm² and a fixed solar zenith angle of 30°).
Figure 4. General workflow for flight planning, with an indication of the most important considerations in each step.
Figure 5. The effect of ground sampling distance (GSD) on image quality, in this case for weed detection in a corn field. Image taken on 14/07/2022 in Bottelare, Belgium (lat: 50.959°N, lon: 3.767°E), with a Sony α7R IV camera equipped with an 85 mm lens, flying at 18 m altitude on a DJI M600 Pro UAV. Here, a small section of the orthomosaic, created in Agisoft Metashape, is shown. The original GSD was 0.85 mm, which was downscaled and exported at different GSDs using Agisoft Metashape.
Figure 6. (a) The 1 ha field for which the simulation was done. (b) The effect of GSD on the estimated flight time and the number of images required for mapping this area. Here, we calculated the flight time and number of images for a multispectral camera (MicaSense RedEdge-MX Dual). The simulation was performed using the DJI Pilot app, with horizontal and vertical overlap set at 80% and the maximum flight speed set at 5 m s⁻¹.
Figure 7. Illustration of the terrain-following option (b) relative to the standard flight height option (a). The colors in (b) represent the actual altitude of the UAV above sea level, in m. Output (print screen) of DJI Pilot 2, here for a Mavic 3E (RGB) camera with 70% overlap and a GSD of 2.7 cm.
Figure 8. (a) Schematic figure of a standard (parallel) mapping mission over a target area (orange line) with the planned locations for image capture (dots), illustrating the vertical and horizontal overlap. (b) The same area, but now covered with a grid flight pattern (Section 3.5.1).
Figure 9. (a) The number of images collected for mapping a 1 ha area (100 m × 100 m field, see Figure 6) with a MicaSense RedEdge-MX multispectral camera as a function of the vertical and horizontal overlap. Image number was estimated in DJI Pilot. (b) Simulated number of cameras seen per point for the same range of overlap and camera. (c) Simulated coordinates of the cameras (in m) seeing the center point (black +, relative coordinates of (0,0)) for different overlaps (same horizontal and vertical overlap, see color scale) for the same camera, flown at 50 m flight height.
Figure 10. Adjusted overlap (overlap needed as input in the flight app) as a function of flight height and vegetation height, when the target overlap is 80%.
Figure 11. Orthomosaic ((a) full field; (b) detail) of a flight generated with a flight overlap of 80% in the horizontal and vertical direction. The yellow lines indicate the area taken from each single image. Notice the constant pattern in the core of the images, whereas the edges typically have larger areas from a single image, increasing the risk of anisotropic effects. Image taken from Agisoft Metashape from a dataset of multispectral imagery (MicaSense RedEdge-MX Dual), acquired on 07/10/2024 over a potato field in Bottelare, Belgium (lat: 50.9612°N, lon: 3.7677°E), at a flight height of 32 m.
Figure 12. Illustration of the different viewing angle options available. (a) Standard nadir option; (b) limited number of oblique images from a single direction ("Elevation Optimization"); (c–f) oblique mapping under four different viewing angles. Output (print screen) of the DJI Pilot 2 app, here for a Zenmuse P1 RGB camera (50 mm lens) on a DJI M350, with 65% horizontal and vertical overlap and a GSD of 0.22 cm.
Figure 13. Schematic overview of corrections of thermal measurements: atmospheric correction (L_atm, τ) and the additional correction for emissivity (ε) and longwave incoming radiation (L_in, W m⁻²) needed to retrieve surface temperature (T_s, K). (L_sensor = at-sensor radiance, W m⁻²; L_atm = upwelling at-sensor radiance, W m⁻²; τ = atmospheric transmittance (-); σ = Stefan–Boltzmann constant = 5.67 × 10⁻⁸ W m⁻² K⁻⁴.)
Figure 14. Overall summary of flight settings and flight conditions for the different applications. * More for larger or complex terrains.
31 pages, 18303 KiB  
Article
A Novel Approach for Maize Straw Type Recognition Based on UAV Imagery Integrating Height, Shape, and Spectral Information
by Xin Liu, Huili Gong, Lin Guo, Xiaohe Gu and Jingping Zhou
Drones 2025, 9(2), 125; https://doi.org/10.3390/drones9020125 - 9 Feb 2025
Viewed by 352
Abstract
Accurately determining the distribution and quantity of maize straw types is of great significance for evaluating the effectiveness of conservation tillage, precisely estimating straw resources, and predicting the risk of straw burning. The widespread adoption of conservation tillage technology has greatly increased the diversity and complexity of maize straw coverage in fields after harvest. To improve the precision and effectiveness of remote sensing recognition for maize straw types, a novel method was proposed. This method utilized unmanned aerial vehicle (UAV) multispectral imagery, integrated the Stacking Enhanced Straw Index (SESI) introduced in this study, and combined height, shape, and spectral characteristics to improve recognition accuracy. Using the original five-band multispectral imagery, a new nine-band image of the study area was constructed by integrating the calculated SESI, Canopy Height Model (CHM), Product Near-Infrared Straw Index (PNISI), and Normalized Difference Vegetation Index (NDVI) through band combination. An object-oriented classification method, utilizing a “two-step segmentation with multiple algorithms” strategy, was employed to integrate height, shape, and spectral features, enabling rapid and accurate mapping of maize straw types. The results showed that height information obtained from the CHM and spectral information derived from SESI were essential for accurately classifying maize straw types. Compared to traditional methods that relied solely on spectral information for recognition of maize straw types, the proposed approach achieved a significant improvement in overall classification accuracy, increasing it by 8.95% to reach 95.46%, with a kappa coefficient of 0.94. The remote sensing recognition methods and findings for maize straw types presented in this study can offer valuable information and technical support to agricultural departments, environmental protection agencies, and related enterprises.
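Two steps of this workflow lend themselves to a compact sketch: deriving the Canopy Height Model (CHM = DSM − DEM) and stacking index layers onto the original five bands to form the nine-band classification input. In the sketch below, NumPy arrays stand in for real raster I/O, and placeholder layers stand in for SESI and PNISI, whose formulas are defined in the paper.

```python
# Sketch of CHM derivation and nine-band stacking. Arrays are synthetic
# placeholders; SESI/PNISI layers are random stand-ins for the paper's indices.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
dsm = 200.0 + rng.random((H, W)) * 3.0   # digital surface model (m)
dem = 200.0 + rng.random((H, W)) * 0.5   # digital elevation model (m)
chm = dsm - dem                          # straw/canopy height above ground

bands5 = rng.random((5, H, W))           # original 5 multispectral bands
red, nir = bands5[2], bands5[4]          # assumed band order, for illustration
ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
sesi = rng.random((H, W))                # placeholder: SESI defined in the paper
pnisi = rng.random((H, W))               # placeholder: PNISI defined in the paper

image9 = np.concatenate([bands5, chm[None], ndvi[None], sesi[None], pnisi[None]])
print(image9.shape)                      # (9, 64, 64): the nine-band image
```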
Figure 1. The UAV's multispectral image of the study area, captured on October 20, 2021: (a) the study area, (b) the image by RGB bands, (c) the image by other bands.
Figure 2. The DJI Phantom 4 Multispectral UAV used in this study.
Figure 3. Primary maize straw types in the study area: (a) upright maize straw, (b) heaped maize straw, (c) bundled maize straw, (d) chopped maize straw, (e) bare land.
Figure 4. The DSM, DEM sample point distribution, and DEM image for the typical straw type area in Zhaodong City, Heilongjiang Province.
Figure 5. Workflow of the study.
Figure 6. The original 5-band spectral difference image of maize straw types.
Figure 7. Typical multi-feature index imagery in the study area.
Figure 8. Typical multi-feature index imagery in the study area (to enhance visualization, the SESI values in the image were reduced by a factor of ten for display).
Figure 9. Decision tree for maize straw type classification.
Figure 10. The thematic map of maize straw types in Zhaodong City, Heilongjiang Province. (a) The object-oriented recognition thematic map of maize straw types, integrating height, shape, and spectral features; (b) the object-oriented recognition thematic map of maize straw types based on spectral features.
28 pages, 23880 KiB  
Article
Inversion of Leaf Chlorophyll Content in Different Growth Periods of Maize Based on Multi-Source Data from “Sky–Space–Ground”
by Wu Nile, Su Rina, Na Mula, Cha Ersi, Yulong Bao, Jiquan Zhang, Zhijun Tong, Xingpeng Liu and Chunli Zhao
Remote Sens. 2025, 17(4), 572; https://doi.org/10.3390/rs17040572 - 8 Feb 2025
Viewed by 325
Abstract
Leaf chlorophyll content (LCC) is a key indicator of crop growth condition. Real-time, non-destructive, rapid, and accurate LCC monitoring is of paramount importance for precision agriculture management. This study proposes an improved method based on multi-source data, combining the Sentinel-2A spectral response function (SRF) and computer algorithms, to overcome the limitations of traditional methods. First, the equivalent remote sensing reflectance of Sentinel-2A was simulated by combining UAV hyperspectral images with ground experimental data. Then, using grey relational analysis (GRA) and the maximum information coefficient (MIC) algorithm, we explored the complex relationship between the vegetation indices (VIs) and LCC, and further selected feature variables. Meanwhile, we utilized three spectral indices (DSI, NDSI, RSI) to identify sensitive band combinations for LCC and further analyzed the response relationship of the original bands to LCC. On this basis, we selected three nonlinear machine learning models (XGBoost, RFR, SVR) and one multiple linear regression model (PLSR) to construct the LCC inversion model, and we chose the optimal model to generate spatial distribution maps of maize LCC at the regional scale. The results indicate that there is a significant nonlinear correlation between the VIs and LCC, with the XGBoost, RFR, and SVR models outperforming the PLSR model. Among them, the XGBoost_MIC model achieved the best LCC inversion results during the tasseling stage (VT) of maize growth. In the UAV hyperspectral data, the model achieved an R2 = 0.962 and an RMSE = 5.590 mg/m2 in the training set, and an R2 = 0.582 and an RMSE = 6.019 mg/m2 in the test set. For the Sentinel-2A-simulated spectral data, the training set had an R2 = 0.923 and an RMSE = 8.097 mg/m2, while the test set showed an R2 = 0.837 and an RMSE = 3.250 mg/m2, which indicates an improvement in test set accuracy. On a regional scale, the LCC inversion model also yielded good results (train R2 = 0.76, test R2 = 0.88, RMSE = 18.83 mg/m2). In conclusion, the method proposed in this study not only significantly improves the accuracy of traditional methods but also, with its outstanding versatility, can achieve rapid, non-destructive, and precise crop growth monitoring in different regions and for various crop types, demonstrating broad application prospects and significant practical value in precision agriculture.
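The band simulation step — convolving UAV hyperspectral reflectance with the Sentinel-2A spectral response function (SRF) — amounts to an SRF-weighted average over wavelength. A minimal sketch follows, with a Gaussian stand-in for the real Sentinel-2A SRFs (which are published by ESA and tabulated per band) and a placeholder spectrum.

```python
# Sketch of simulating a Sentinel-2A band from narrow-band hyperspectral
# reflectance: band value = SRF-weighted average of the spectrum.
# The Gaussian SRF and the spectrum below are placeholders.
import numpy as np

wavelengths = np.arange(450, 951, 4.0)               # 4 nm sampling, S185-like
reflectance = 0.3 + 0.2 * np.sin(wavelengths / 80)   # placeholder spectrum

def simulate_band(wl, refl, center_nm, fwhm_nm):
    """SRF-weighted band reflectance using a Gaussian stand-in SRF."""
    sigma = fwhm_nm / 2.355                          # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wl - center_nm) / sigma) ** 2)
    return np.trapz(srf * refl, wl) / np.trapz(srf, wl)

# e.g., a red band near 665 nm with ~30 nm width (nominal S2A band 4 values)
print(f"simulated band reflectance: {simulate_band(wavelengths, reflectance, 665, 30):.3f}")
```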
Graphical abstract
Figure 1. Main flow chart.
Figure 2. Location of the study area. (a) The geographical location of Inner Mongolia, (b) land use types in Inner Mongolia, (c) a DEM map of Hohhot, Inner Mongolia, (d) the flight area, (e) an RGB plot of the different growth periods from V2 to R1.
Figure 3. Data collection system. (a) The data collection equipment, (b) the UAV hyperspectral data collection site, (c) the UAV flight course.
Figure 4. Experiment site. (a) Collection of the ASD spectral information set, (b) sample site, (c) collection of ground chlorophyll content.
Figure 5. Verification of the reliability of the S185 hyperspectral data. (a) The difference between the spectral curves of S185 and ASD, (b) the correlation of spectral reflectance between S185 and ASD.
Figure 6. Comparison of spectra and ground spectral curves before and after simulation. (a) The spectral comparison between UAV and Sentinel-2A-simulated data, (b) the Sentinel-2A-simulated spectra versus Sentinel-2A satellite spectra.
Figure 7. Correlation plots for different growth periods of maize. (A) Three columns show UAV data; (B) three columns show Sentinel-2A simulation data. (a) V2 stage, (b) V4 stage, (c) V6 stage, (d) V10 stage, (e) V12 stage, (f) VT stage, (g) R1 stage.
Figure 8. Correlation between VIs and LCC in different growth periods of maize. (a) UAV_GRA, (b) UAV_MIC, (c) SRF_GRA, (d) SRF_MIC.
Figure 9. Measured and predicted LCC values based on the best band combined with the best stage of the four ML models. (a) UAV_DSI_XGBoost for the VT growth stage, (b) UAV_NDSI_RFR for the V12 growth stage, (c) UAV_NDSI_SVR for the V12 growth stage, (d) UAV_RSI_PLSR for the V6 stage.
Figure 10. Measured and predicted values of maize in different growth periods for the UAV_XGBoost model.
Figure 11. Measured and predicted values of maize in different growth periods for the SRF_XGBoost model.
Figure 12. RFR and XGBoost models for estimating the spatial distribution of LCC in different growth periods of maize in the experimental area.
Figure 13. Changes in LCC and spectral reflectance in different growth periods of maize. (a) LCC changes, (b) spectral reflectance.
Figure 14. Accuracy of measured and predicted LCC values of different ML models in different growth periods of maize. (a) Accuracy of measured and predicted LCC values of UAV + spectral index; (b) accuracy of measured and predicted LCC values of Sentinel-2A-simulated spectral data + spectral index; (c) accuracy of measured and predicted LCC values of UAV hyperspectral data + VI; (d) accuracy of measured and predicted LCC values of Sentinel-2A-simulated spectral data + VI.
Figure 15. Spatial distribution of LCC inversion of maize in different growth periods in Saihan District. (a) V2, (b) V4, (c) V6, (d) V10, (e) V12, (f) VT, (g) R1.
18 pages, 4411 KiB  
Article
High-Resolution Mapping of Topsoil Sand Content in Planosol Regions Using Temporal and Spectral Feature Optimization
by Jiaying Meng, Nanchen Chu, Chong Luo, Huanjun Liu and Xue Li
Remote Sens. 2025, 17(3), 553; https://doi.org/10.3390/rs17030553 - 6 Feb 2025
Viewed by 401
Abstract
Soil sand content is an important characterization index of soil texture, which directly affects soil water regulation, nutrient cycling, and crop growth potential. High-precision spatial distribution information on sand content is therefore of great importance for agricultural resource management and land use. In this study, a remote sensing prediction method combining temporal-phase optimization and spectral feature selection is proposed to improve the mapping accuracy of sand content in the tillage layer of a planosol area. The study first analyzed the prediction performance of single-time-phase images, screened the optimal time phase (May), and constructed a single-time-phase model, which achieved significant prediction accuracy, with a coefficient of determination (R2) of 0.70 and a root mean square error (RMSE) of 1.26%. The model was then further optimized by combining multiple time phases, improving the prediction accuracy to R2 = 0.77 and reducing the RMSE to 1.10%. At the feature level, the recursive feature elimination (RF-RFE) method was used to select 19 key spectral variables from the initial feature set, among which the short-wave infrared bands (b11, b12) and the visible bands (b2, b3, b4) contributed most significantly to the prediction. Finally, multi-temporal, multi-feature fusion modeling further improved the prediction accuracy to R2 = 0.79 and RMSE = 1.05%. The spatial distribution map of sand content generated by the optimized model shows that areas with high sand content are primarily located in the northern and central regions of Shuguang Farm. This study not only provides a new technical path for accurate mapping of soil texture in planosol areas, but also offers a reference for improving remote sensing monitoring methods in other typical soil areas and for mapping high-resolution soil sand maps over wider areas in the future.
(This article belongs to the Special Issue GIS and Remote Sensing in Soil Mapping and Modeling (Second Edition))
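The temporal-phase optimization described here can be sketched as a greedy search: score each single-date image, then add dates in rank order and keep the best-scoring combination. The sketch below uses synthetic placeholders, and a random forest regressor is assumed as the learner (consistent with the RF-RFE naming, though the abstract does not state the model here).

```python
# Sketch of greedy temporal-phase selection: rank single-date models by
# cross-validated R2, then grow the combination date by date.
# Data and the choice of learner are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, n_dates, bands_per_date = 120, 6, 10
X_by_date = [rng.random((n, bands_per_date)) for _ in range(n_dates)]
sand = 30 + 10 * X_by_date[1][:, 0] + rng.normal(0, 1, n)  # May-like signal

def score(X):
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    return cross_val_score(rf, X, sand, cv=5, scoring="r2").mean()

ranked = sorted(range(n_dates), key=lambda d: score(X_by_date[d]), reverse=True)
best_r2, best_k = -np.inf, 0
for k in range(1, n_dates + 1):
    r2 = score(np.hstack([X_by_date[d] for d in ranked[:k]]))
    if r2 > best_r2:
        best_r2, best_k = r2, k
print(f"best combination uses the top {best_k} dates, CV R2 = {best_r2:.2f}")
```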
Figure 1. Overview of the study area: (a) location of the study area, (b) location of sampling points and topography of the study area, (c) remote sensing images of the study area.
Figure 2. Flow chart.
Figure 3. Spectral curves of Sentinel-2: (a,b) spectral characteristic curves for different sand grain contents in April, (c,d) spectral characteristic curves for different sand grain contents in May.
Figure 4. Prediction results of sand content for different numbers of images based on the optimal temporal phase sequencing method. The yellow line indicates the best result for a single-date image.
Figure 5. Prediction results of sand content for multi-feature combinations based on the recursive feature elimination method. The brown line indicates the prediction result after recursive feature elimination.
Figure 6. Spatial distribution of soil sands.
Figure 7. Spatial distribution and accuracy of the sand content of the mapping product (double preferred) proposed in this study compared to other products. (a) Spatial distribution of sand content in the National Grid Map product, (b) spatial distribution of sand content in the product of this study, (c) comparison of the two products.
Figure 8. Comparison of straw cover and white pulpification of the tillage layer in bare soil images of different months: (a) bare soil image of April, (b) bare soil image of May.
Figure 9. The weights of different input variables (a) and different types of input variables (b) on the RFE model.
Figure 10. Predicted sand content map.
26 pages, 4406 KiB  
Article
Inter-Annual Variability of Peatland Vegetation Captured Using Phenocam- and UAV Imagery
by Gillian Simpson, Tom Wade, Carole Helfter, Matthew R. Jones, Karen Yeung and Caroline J. Nichol
Remote Sens. 2025, 17(3), 526; https://doi.org/10.3390/rs17030526 - 4 Feb 2025
Viewed by 462
Abstract
Plant phenology is an important driver of inter-annual variability in peatland carbon uptake. However, the use of traditional phenology datasets (e.g., manual surveys, satellite remote sensing) to quantify this link is hampered by their limited spatial and temporal coverage. This study examined the use of phenology cameras (phenocams) and uncrewed aerial vehicles (UAVs) for monitoring phenology in a Scottish temperate peatland. Data were collected at the site over multiple growing seasons using a UAV platform fitted with a multispectral Parrot Sequoia camera. We found that greenness indices calculated using data from both platforms were in strong agreement with each other, and exhibited strong correlations with rates of gross primary production (GPP) at the site. Greenness maps generated with the UAV data were combined with fine-scale vegetation classifications, and highlighted the variable sensitivity of different plant species to dry spells over the study period. While a lack of suitable weather conditions for surveying limited the UAV data temporally, the phenocam provided a near-continuous record of phenology. The latter revealed substantial temporal variability in the relationship between canopy greenness and peatland GPP, which although strong over the growing season as a whole (rs = 0.88, p < 0.01), was statistically insignificant during the peak growing season.
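The phenocam greenness used here is the green chromatic coordinate (Gcc = G/(R + G + B)) averaged over a fixed region of interest, the standard phenocam index. A minimal sketch with a placeholder image and ROI mask:

```python
# Sketch of the green chromatic coordinate (Gcc) over a region of interest.
# The image and ROI below are placeholders, not the Auchencorth Moss data.
import numpy as np

def gcc(rgb: np.ndarray, roi: np.ndarray) -> float:
    """Mean Gcc over ROI pixels; rgb is (H, W, 3), roi a boolean mask."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    total = np.maximum(r + g + b, 1e-9)   # guard against all-black pixels
    return float(np.mean((g / total)[roi]))

img = np.random.default_rng(0).integers(0, 256, (100, 100, 3))
roi = np.zeros((100, 100), dtype=bool)
roi[:50, :] = True                        # e.g., the vegetated top half of the scene
print(f"Gcc = {gcc(img, roi):.3f}")
```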
Figure 1. Study site overview. Panel (a) shows an aerial image of Auchencorth Moss alongside the UAV survey area (yellow box), with the location of the meteorological measurement mast (star), flux measurement tower and phenocam (circle), as well as its field of view (dotted lines), shown for reference. Panel (b) shows the employed species-level classification of the UAV survey area (see [70]). Map tile (a) by the Microsoft® Bing™ Maps Platform API (Microsoft Corporation, Redmond, WA, USA), under General Rights and Restrictions for Prints. Data by OpenStreetMap, under ODbL.
Figure 2. Example phenocam imagery from Auchencorth Moss captured at 12:00 UTC. The region of interest (ROI) selected for this study is outlined in yellow. The left-hand side from top to bottom shows imagery taken during the 2019 green-up (subplot (a)), peak season (b), and green-down (c). Displayed on the right-hand side are images that did not pass quality control screening (i.e., raindrops on the camera housing (subplot (d)), variable scene illumination (e), and fog (f)). Note: (i) the bottom half of the image below the ROI is inside the fenced measurement compound, and hence not considered representative of the natural vegetation; and (ii) the camera viewing angle, in combination with strong micro-topography and the presence of tall grasses, means it is not possible to see hollow microsites, although hummocks are clearly visible.
Figure 3. Overview of hydrometeorological conditions over the study period: 2018, 2019, 2020 and 2021 (from top to bottom). Shown are: daily cumulative precipitation (bars); daily water-table depth (WTD, lines; positive values indicate a water table below the ground surface); and "dry spells" (yellow shaded areas, defined as periods > 1 week with WTD > 5 cm below the surface).
Figure 4. Time-series of daily phenocam Gcc (black line and circles) averaged over the ROI, and UAV-derived VIs for the flight survey area (coloured markers). Displayed on the x-axis are the calendar months for each measurement year; note that the seasonal amplitude of UAV-derived VIs has been normalised between 0 and 1 to facilitate comparison of their temporal trends.
Figure 5. Correlation between UAV-derived VIs averaged across the scene and daily phenocam-derived greenness (Gcc) averaged across the ROI. Shown at the top of each subplot is the Spearman's rank correlation coefficient, rs, and its associated statistical significance. ** indicates p < 0.01, and n = 26 (i.e., the number of days on which UAV imagery was collected) for all eight subplots displayed.
Figure 6. UAV-derived reNDVI1 difference map. Shown is the percentage difference in reNDVI1 from 14 May to 3 August 2021. Negative values indicate a reduction in greenness from May to August, whereas positive values indicate an increase in greenness.
Figure 7. Time-series of species-level reNDVI1 over the growing season from the UAV surveys conducted in 2021. Shown are dry spells (yellow shaded areas), daily GPP90 from the EC data (grey), and reNDVI1 for each of the dominant species in the survey area (coloured lines and markers).
Figure 8. Change in selected VIs over Dryspell-1 and Dryspell-2 for each dominant species. Shown is the percentage change with respect to the seasonal amplitude for each VI (i.e., the maximum minus the minimum species-averaged value during the year). Positive values (green) indicate an increase in greenness, whereas negative values (yellow) denote a reduction in greenness. Dryspell-1 plots (upper row) show the change in species-level VIs from the flights on DOY 168–153; Dryspell-2 plots (bottom row) show the VI change from flights on DOY 215–204.
Figure 9. Correlation between daily phenocam Gcc and GPP90 for the years 2019, 2020 and 2021 (top to bottom). Plots from left to right show data for: the complete growing season; green-up; peak growing season; and green-down. Shown at the bottom of each plot is the Spearman's rank correlation coefficient, rs (Gcc, GPP90), and its statistical significance: ** indicates p < 0.01; * indicates p < 0.05; else p ≥ 0.05.
27 pages, 24351 KiB  
Article
UAV-Based Multiple Sensors for Enhanced Data Fusion and Nitrogen Monitoring in Winter Wheat Across Growth Seasons
by Jingjing Wang, Wentao Wang, Suyi Liu, Xin Hui, Haohui Zhang, Haijun Yan and Wouter H. Maes
Remote Sens. 2025, 17(3), 498; https://doi.org/10.3390/rs17030498 - 31 Jan 2025
Viewed by 469
Abstract
Unmanned aerial vehicles (UAVs) equipped with multi-sensor remote sensing technologies provide an efficient approach for mapping spatial and temporal variations in vegetation traits, enabling advancements in precision monitoring and modeling. The objective of this study was to analyze the performance of multiple UAV sensors in monitoring winter wheat chlorophyll content (SPAD), plant nitrogen accumulation (PNA), and N nutrition index (NNI). A two-year field experiment with five N fertilizer treatments was carried out. Color indices (CIs, from RGB sensors), vegetation indices (VIs, from multispectral sensors), and temperature indices (TIs, from thermal sensors) were derived from the collected images. XGBoost (extreme gradient boosting) was applied to develop the models, using 2021 data for training and 2022 data for testing. The excess green minus excess red index, red green ratio index, and hue (from CIs), and the green normalized difference vegetation index, normalized difference red-edge index, and normalized difference vegetation index (from VIs), showed high correlations with all three N indicators. At the pre-heading stage, the best performing CIs correlated better than the VIs; this was reversed in the post-heading stage. CIs outperformed VIs for SPAD (CIs: R2 (coefficient of determination) = 0.66; VIs: R2 = 0.61), PNA (CIs: R2 = 0.68; VIs: R2 = 0.64), and NNI (CIs: R2 = 0.64; VIs: R2 = 0.60) in the pre-heading stage, whereas VI-based models achieved slightly higher accuracies in the post-heading and all-stage models. Models built with CIs + VIs performed significantly better than single-sensor models. Adding TIs to CIs and to CIs + VIs further improved performance slightly, especially at the post-heading stage, yielding the best model performance with three sensors. These findings highlight the effectiveness of UAV systems in estimating wheat N and establish a framework for integrating RGB, multispectral, and thermal sensors to enhance model accuracy in precision vegetation monitoring.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
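The nitrogen nutrition index targeted here is conventionally defined as the measured plant N concentration divided by the critical concentration from an N dilution curve, Nc = a·W^(−b), for aboveground biomass W. A minimal sketch follows; the a and b values are illustrative coefficients, and the paper fits its own dilution curve for winter wheat (its Figure 7).

```python
# Sketch of the nitrogen nutrition index (NNI) from a critical N dilution
# curve Nc = a * W**(-b). The coefficients below are illustrative, not the
# paper's fitted values.
def critical_n(biomass_t_ha: float, a: float = 4.15, b: float = 0.38) -> float:
    """Critical N concentration (%); the curve is capped at a for W <= 1 t/ha."""
    return a if biomass_t_ha <= 1.0 else a * biomass_t_ha ** (-b)

def nni(measured_n_pct: float, biomass_t_ha: float) -> float:
    return measured_n_pct / critical_n(biomass_t_ha)

# NNI < 1 indicates N deficit; NNI > 1 indicates N surplus.
print(f"NNI = {nni(measured_n_pct=2.4, biomass_t_ha=5.0):.2f}")
```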
Figure 1. Treatments of the experiment (N0, N1, N2, N3, and N4 represent the nitrogen fertilization levels).
Figure 2. Unmanned aerial vehicle remote sensing platform and sensors.
Figure 3. The data processing flow of remote sensing images.
Figure 4. The general workflow of the study. CIs, color indices; TIs, thermal indices; VIs, vegetation indices; PCs, principal components; PNA, plant nitrogen accumulation; NNI, nitrogen nutrient index; XGBoost, extreme gradient boosting algorithm.
Figure 5. SPAD of winter wheat in the experiment. Note: Means followed by different letters among N treatments differ significantly at p < 0.05.
Figure 6. The plant N accumulation of winter wheat in the experiment. Note: Means followed by different letters among N treatments differ significantly at p < 0.05.
Figure 7. The critical nitrogen dilution curve of winter wheat.
Figure 8. The N nutrition index of winter wheat in the experiment.
Figure 9. Relationship between winter wheat RY and NNI. RY and NNI represent relative yield and nitrogen nutrition index, respectively. The gray zones represent the 95% confidence interval of the fitted quadratic function.
Figure 10. Correlation between CIs, VIs, TIs, and SPAD of winter wheat. CIs, VIs, and TIs stand for color indices, vegetation indices, and temperature indices. Asterisks (*) indicate the indices that were within the top 5 best performing CIs and VIs, combining 2021 and 2022 across all stages.
Figure 11. Correlation between CIs, VIs, TIs, and PNA of winter wheat. PNA represents plant nitrogen accumulation. CIs, VIs, and TIs stand for color indices, vegetation indices, and temperature indices. Asterisks (*) indicate the indices that were within the top 5 best performing CIs and VIs, combining 2021 and 2022 across all stages.
Figure 12. Correlation between CIs, VIs, TIs, and NNI of winter wheat. NNI represents the nitrogen nutrient index. CIs, VIs, and TIs stand for color indices, vegetation indices, and temperature indices. Asterisks (*) indicate the indices that were within the top 5 best performing CIs and VIs, combining 2021 and 2022 across all stages.
Figure 13. Principal component axes 1 and 2 of the PCA performed on the color indices (CIs), vegetation indices (VIs), and thermal indices (TIs), combining 2021 and 2022.
Figure 14. Relationships between the predicted and measured values. SPAD (a–c), PNA (d–f), NNI (g–i). PNA and NNI represent plant nitrogen accumulation and nitrogen nutrient index, respectively. CI, TI, and VI represent color index, thermal index, and vegetation index, respectively.
Figure 15. The NNI map of winter wheat based on the all-stage estimation model using CIs, VIs, and TIs. NNI represents the nitrogen nutrient index. CIs, VIs, and TIs represent color indices, vegetation indices, and thermal indices, respectively.
Figure A1. The SPAD map of winter wheat based on the all-stage estimation model using CIs, VIs, and TIs.
Figure A2. The PNA (g m⁻²) map of winter wheat based on the all-stage estimation model using CIs, VIs, and TIs.
20 pages, 7947 KiB  
Article
Towards an Efficient Remote Sensing Image Compression Network with Visual State Space Model
by Yongqiang Wang, Feng Liang, Shang Wang, Hang Chen, Qi Cao, Haisheng Fu and Zhenjiao Chen
Remote Sens. 2025, 17(3), 425; https://doi.org/10.3390/rs17030425 - 26 Jan 2025
Viewed by 568
Abstract
In the past few years, deep learning has achieved remarkable advancements in the area of image compression. Remote sensing image compression networks focus on enhancing the similarity between the input and reconstructed images, effectively reducing the storage and bandwidth requirements for high-resolution remote sensing images. As the network's effective receptive field (ERF) expands, it can capture more feature information across the remote sensing images, thereby reducing spatial redundancy and improving compression efficiency. However, the majority of these learned image compression (LIC) techniques are CNN-based or transformer-based, and often fail to balance a global ERF against computational complexity. To alleviate this issue, we propose a learned remote sensing image compression network with a visual state space model, named VMIC, to achieve a better trade-off between computational complexity and performance. Specifically, instead of stacking small convolution kernels or heavy self-attention mechanisms, we employ a 2D-bidirectional selective scan mechanism. Every element within the feature map aggregates data from multiple spatial positions, establishing a global effective receptive field with linear computational complexity. We extend this to an omni-selective scan for the global-spatial correlations within our Channel and Global Context Entropy Model (CGCM), enabling the integration of spatial and channel priors to minimize redundancy across slices. Experimental results demonstrate that the proposed method achieves a superior trade-off between rate-distortion performance and complexity. Furthermore, in comparison to traditional codecs and learned image compression algorithms, our model achieves BD-rate reductions of −4.48% and −9.80% over the state-of-the-art VTM on the AID and NWPU VHR-10 datasets, respectively, as well as −6.73% and −7.93% on the panchromatic and multispectral images of the WorldView-3 remote sensing dataset.
Show Figures

Figure 1
Comparison of the effective receptive field (ERF) [19] on the AID remote sensing dataset for Ballé (CNN-based) [20], TIC [18] and S2LIC [13] (transformer-based), TCM [14] (hybrid CNN–transformer), and our method (Mamba-based). Yellower areas indicate a larger effective receptive field.

Figure 2
The network architecture of the Remote Sensing Image Compression network with Visual State Space Model (VMIC). Downsampling ↓ and Upsampling ↑ denote convolution operations with a stride of 2. Quantization is represented by Q, while AE and AD refer to the arithmetic encoder and decoder, respectively. N and M denote the numbers of channels, which are 128 and 320.

Figure 3
The framework of the cross-selective scan block (CSSB). PE and PU are patch embedding and patch unembedding, respectively. LN indicates layer normalization. σ is the SiLU activation function.

Figure 4
The channel-wise module g_ch that extracts the channel context Φ_ch from each slice ŷ^i in the Channel and Global Context entropy model (CGCM).

Figure 5
Feature visualization of an airport image in different slices of the NWPU VHR-10 dataset.

Figure 6
The proposed omni-selective scan module (OSSM) for the global-spatial context g_sc in the Channel and Global Context entropy model (CGCM).

Figure 7
Examples from a subset of categories in the AID dataset, including (a) square, (b) mountain, (c) viaduct, (d) stadium, (e) storage tanks, (f) river, (g) forest, (h) dense residential, (i) bareland, (j) commercial.

Figure 8
Examples from all categories in the NWPU VHR-10 dataset, including (a) airplane, (b) vehicle, (c) bridge, (d) ground_track_field, (e) tennis_court, (f) basketball_court, (g) baseball_diamond, (h) harbor, (i) storage_tank, (j) ship.

Figure 9
Rate-distortion performance comparison on AID test images, evaluated using the PSNR and MS-SSIM metrics.

Figure 10
Rate-distortion performance comparison on NWPU VHR-10 test images, evaluated using the PSNR and MS-SSIM metrics.

Figure 11
Rate-distortion performance comparison on WorldView-3 test images: panchromatic images (left) and multispectral images (right).

Figure 12
Feature visualization of the center and bridge images in the AID dataset, including the compact latent representation y after the transformation module, and the anchor ŷ_a and non-anchor ŷ_na latent features in the checkerboard context entropy model.

Figure 13
Reconstructed visual comparison of the BaseballField image from NWPU VHR-10 across various models. The evaluation metrics are presented as bpp ↓, PSNR ↑, MS-SSIM ↑.

Figure 14
Reconstructed visual comparison of the school image from AID across various models. The evaluation metrics are presented as bpp ↓, PSNR ↑, MS-SSIM ↑.

Figure 15
Reconstructed visual comparison of the panchromatic image from WorldView-3 across various models. The evaluation metrics are presented as bpp ↓, PSNR ↑, MS-SSIM ↑.

Figure 16
Rate-distortion performance comparison in the ablation study of different designs of the global-spatial context.
25 pages, 6632 KiB  
Article
Estimating Winter Wheat Canopy Chlorophyll Content Through the Integration of Unmanned Aerial Vehicle Spectral and Textural Insights
by Huiling Miao, Rui Zhang, Zhenghua Song and Qingrui Chang
Remote Sens. 2025, 17(3), 406; https://doi.org/10.3390/rs17030406 - 24 Jan 2025
Viewed by 562
Abstract
Chlorophyll content is an essential parameter for evaluating the growth condition of winter wheat, and its accurate monitoring through remote sensing is of great significance for early warnings about winter wheat growth. In order to investigate the capability of unmanned aerial vehicle (UAV) multispectral technology to estimate the chlorophyll content of winter wheat, this study proposes a method for estimating the relative canopy chlorophyll content (RCCC) of winter wheat based on UAV multispectral images. Concretely, an M350RTK UAV with an MS600 Pro multispectral camera was utilized to collect data, immediately followed by ground chlorophyll measurements with a Dualex handheld instrument. Then, the band information and texture features were extracted by image preprocessing to calculate the vegetation indices (VIs) and the texture indices (TIs). Univariate and multivariate regression models were constructed using random forest (RF), backpropagation neural network (BPNN), kernel extreme learning machine (KELM), and convolutional neural network (CNN), respectively. Finally, the optimal model was utilized for spatial mapping. The results provided the following indications: (1) Red-edge vegetation indices (RIs) and TIs were key to estimating RCCC. Univariate regression models were acceptable during the flowering and filling stages, while the superior multivariate models, incorporating multiple features, revealed more complex relationships, improving R² by 0.35% to 69.55% over the optimal univariate models. (2) The RF model showed notable performance in both univariate and multivariate regressions, with the RF model incorporating RIs and TIs during the flowering stage achieving the best results (R²_train = 0.93, RMSE_train = 1.36, RPD_train = 3.74, R²_test = 0.79, RMSE_test = 3.01, RPD_test = 2.20). With more variables, the BPNN, KELM, and CNN models effectively leveraged the advantages of neural networks, improving training performance. (3) Compared to using single-feature indices for RCCC estimation, the combination of vegetation indices and texture indices increased the R² values of some models by 0.16% to 40.70%. Integrating UAV multispectral spectral and texture data allows effective RCCC estimation for winter wheat, aiding wheatland management, though further work is needed to extend the applicability of the developed estimation models. Full article
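The R², RMSE, and RPD scores reported above follow a standard regression-evaluation recipe. A minimal sketch, assuming synthetic data in place of the UAV-derived indices (the 18 features and RCCC values below are invented for illustration), of fitting a random forest and computing the three metrics, where RPD is the standard deviation of the observations divided by the RMSE:

```python
# Random forest regression of RCCC on spectral/texture indices, with R2,
# RMSE, and RPD (std. dev. of observed values / RMSE) on a held-out split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((120, 18))                          # 18 indices (VIs, RIs, TIs) per plot
y = 30 + 15 * X[:, 7] + rng.normal(0, 1.5, 120)    # synthetic RCCC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2={r2_score(y_te, pred):.2f}  RMSE={rmse:.2f}  RPD={np.std(y_te)/rmse:.2f}")
```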
Show Figures

Figure 1
An overview of the experimental area: (a) the geographic location of the study area; (b) the UAV image and sampling points; (c) the planting varieties and fertilization conditions in the experimental field. Note: In (a), the gray area represents Xianyang City, the blue area represents Qian County, and the red points indicate the study area. In (b), the yellow points represent sampling locations. In (c), the blue area represents the ‘Xinmai 40’ variety, the green area the ‘Xinong 889’ variety, and the yellow area the ‘Xiaoyan 22’ variety. N0, N1, N2, N3, N4, and N5 represent six nitrogen fertilization gradients: 0, 60, 90, 120, 160, and 240 kg/ha, respectively.

Figure 2
The correlation heatmap of the texture features constituting the TIs at the heading stage. Note: (a–c) show the correlation heatmaps of the texture features that constitute the DTI, RTI, and NDTI, respectively. In each image, mean, var, hom, con, dis, ent, sm, and corr denote the texture features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation, respectively. The numbers 1–6 denote the blue band (centered at 450 nm), the green band (555 nm), the red band (660 nm), the red-edge bands (720 nm and 750 nm), and the near-infrared band (840 nm), respectively.

Figure 3
The univariate regression models of RCCC at (a) the heading stage, (b) the flowering stage, and (c) the filling stage. Note: In each figure, the green and brown bars show the R² values for the training and testing sets; the solid and hollow yellow pentagrams show the RPD values for the training and testing sets; and the solid and hollow red squares show the RMSE values for the training and testing sets. Numbers 1 to 18 successively represent the input parameters: NPCI, VDVI, NDVI, GNDVI, GCI, SR, MSR, RESR720, RESR750, LCI720, LCI750, NDRE720, NDRE750, RECI720, RECI750, DTI, RTI, and NDTI.

Figure 4
The multivariate regression models of RCCC based on VIs, RIs, and TIs at (a) the heading stage, (b) the flowering stage, and (c) the filling stage. Notation as in Figure 3.

Figure 5
The multivariate regression models of RCCC based on VIs+TIs and RIs+TIs at (a) the heading stage, (b) the flowering stage, and (c) the filling stage. Notation as in Figure 3.

Figure 6
The spatial distribution maps of RCCC at (a) the heading stage, (b) the flowering stage, and (c) the filling stage.

Figure A1
The correlation heatmaps of the texture features constituting the TIs at the flowering stage. Notation as in Figure 2.

Figure A2
The correlation heatmaps of the texture features constituting the TIs at the filling stage. Notation as in Figure 2.
18 pages, 6072 KiB  
Article
Application of UAV Photogrammetry and Multispectral Image Analysis for Identifying Land Use and Vegetation Cover Succession in Former Mining Areas
by Volker Reinprecht and Daniel Scott Kieffer
Remote Sens. 2025, 17(3), 405; https://doi.org/10.3390/rs17030405 - 24 Jan 2025
Viewed by 650
Abstract
Variations in vegetation indices derived from multispectral images and digital terrain models from satellite imagery have been successfully used for reclamation and hazard management in former mining areas. However, low spatial resolution and the lack of sufficiently detailed information on surface morphology have restricted such studies to large sites. This study investigates the application of small, unmanned aerial vehicles (UAVs) equipped with multispectral sensors for land cover classification and vegetation monitoring. The application of UAVs bridges the gap between large-scale satellite remote sensing techniques and terrestrial surveys. Photogrammetric terrain models and orthoimages (RGB and multispectral) obtained from repeated mapping flights between November 2023 and May 2024 were combined with an ALS-based reference terrain model for object-based image classification. The collected data enabled differentiation between natural forests and areas affected by former mining activities, as well as the identification of variations in vegetation density and growth rates on former mining areas. The results confirm that small UAVs provide a versatile and efficient platform for classifying and monitoring mining areas and forested landslides. Full article
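The vegetation indices used for monitoring in this study are simple band ratios. A minimal sketch, assuming co-registered reflectance arrays for the red, red-edge, and near-infrared bands (the 2×2 patches below are hypothetical values), of computing the NDVI and NDRE indices that appear in the results:

```python
# NDVI = (NIR - Red)/(NIR + Red); NDRE = (NIR - RedEdge)/(NIR + RedEdge).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Clip the denominator to avoid division by zero over dark pixels.
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    return (nir - red_edge) / np.clip(nir + red_edge, 1e-6, None)

# Hypothetical 2x2 reflectance patches.
nir = np.array([[0.45, 0.50], [0.40, 0.48]])
red = np.array([[0.08, 0.10], [0.12, 0.09]])
red_edge = np.array([[0.20, 0.22], [0.25, 0.21]])
print(ndvi(nir, red))
print(ndre(nir, red_edge))
```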
Show Figures

Figure 1
(A) Overview of the study site (“Trassbruch Gossendorf”) based on the digital elevation model; (B) oblique photograph. Former mining and mine dump areas, access roads, and the landslide area are highlighted in (A).

Figure 2
(A) Study site with the boundaries of the former mining, mine dump, and landslide-affected areas. (B) Subset of the southern slope, visualizing the segmentation and the effect of the 0.5 m buffer around the sampling points relative to the typical tree crown diameter (~2–3 m).

Figure 3
Python-based OBIA workflow, including a summary of each processing step.

Figure 4
Classified map datasets for all four classification periods. (A) November 2023 (sunny, oblique flight); (B) December 2023 (overcast, nadir flight); (C) April 2024 (overcast, nadir flight); (D) May 2024 (sunny, nadir flight). [X] = area prone to misclassification (Zone A2); [Y] = old mine dump (Zone B1) that was only partially cleared for operation.

Figure 5
(A) Parameter variation during the cross-validation process (global and per-class performance metrics). (B) Classification metrics for all flight epochs, including the combined confusion matrices. (C) Confusion matrices derived from the holdout dataset. The confusion matrices were standardized in the horizontal direction, with the corresponding sample numbers given in square brackets.

Figure 6
Time series of the mean NDVI, NDRE, height above the rDTM (dDTM), and height above the rDSM (dDSM) extracted from the former mining zones (mine dump, mine), the landslide area, and the natural forest.
27 pages, 15736 KiB  
Article
Predicting Manganese Mineralization Using Multi-Source Remote Sensing and Machine Learning: A Case Study from the Malkansu Manganese Belt, Western Kunlun
by Jiahua Zhao, Li He, Jiansheng Gong, Zhengwei He, Ziwen Feng, Jintai Pang, Wanting Zeng, Yujun Yan and Yan Yuan
Minerals 2025, 15(2), 113; https://doi.org/10.3390/min15020113 - 24 Jan 2025
Viewed by 536
Abstract
This study employs multi-source remote sensing information and machine learning methods to comprehensively assess the geological background, structural features, alteration anomalies, and spectral characteristics of the Malkansu Manganese Ore Belt in Xinjiang. Manganese mineralization is predicted, and areas with high mineralization potential are delineated. The results of the feature factor weight analysis indicate that structural density and lithological characteristics contribute most significantly to manganese mineralization. Notably, linear structures are aligned with the direction of the manganese belt, and areas exhibiting high controlling structural density are closely associated with the locations of mineral deposits, suggesting that structure plays a crucial role in manganese production in this region. The Area Under the Curve (AUC) values for the Random Forest (RF), Naïve Bayes (NB), and eXtreme Gradient Boosting (XGBoost) models were 0.975, 0.983, and 0.916, respectively, indicating that all three models achieved a high level of performance and interpretability. Among these, the NB model demonstrated the highest performance. By algebraically overlaying the predictions from these three machine learning models, a comprehensive mineralization favorability map was generated, identifying 11 prospective mineralization zones. The performance metrics of the machine learning models validate their robustness, while regional tectonics and stratigraphic lithology provide valuable characteristic factors for this approach. This study integrates multi-source remote sensing information with machine learning methods to enhance the effectiveness of manganese prediction, thereby offering new research perspectives for manganese forecasting in the Malkansu Manganese Ore Belt. Full article
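The composite favorability map described above is produced by overlaying the probability outputs of the three classifiers. A minimal illustration, assuming the xgboost package is available and using synthetic evidential-layer features in place of the real remote sensing factors, of training RF, NB, and XGBoost, checking their AUCs, and algebraically overlaying the predictions (here by simple averaging):

```python
# Train three prospectivity classifiers, report AUC, and overlay their
# probability surfaces into a single favorability score per sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=400, n_features=16, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=1),
    "NB": GaussianNB(),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="logloss"),
}
probs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    probs[name] = m.predict_proba(X_te)[:, 1]
    print(name, "AUC =", round(roc_auc_score(y_te, probs[name]), 3))

# "Algebraic overlay": here simply the mean of the three probability maps.
favorability = np.mean(list(probs.values()), axis=0)
print("overlay AUC =", round(roc_auc_score(y_te, favorability), 3))
```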
Show Figures

Figure 1
(a) Study area (image from Google Maps); (b) schematic tectonic map of Xinjiang, China; (c) regional geological map and locations of manganese deposits in the Malkansu area [7].

Figure 2
(a) Malkantu syncline; (b) stratigraphic division in the field; (c) interbedded carbonaceous argillaceous limestone and tuff; (d) rhodochrosite ore.

Figure 3
Flow chart of the study.

Figure 4
Random forest flow chart [56].

Figure 5
Enhancement of stratigraphic lithology information using Landsat 8 data. (a) B7(R), B5(G), B2(B); (b) PC3(R), PC2(G), PC1(B); (c) PC5(R), PC3(G), PC1(B); (d) IC6(R), IC3(G), B1(B); (e) MNF1(R), MNF2(G), MNF3(B); (f) IC1(R), IC2(G), IC3(B). Notes: Q: Quaternary; N1a: Miocene Anju’an Formation; N1k: Miocene Keziluoyi Formation; K2y: Upper Cretaceous Yingjisha Group; D2t: Middle Devonian Togmaiti Formation; S2-3t: Middle-Upper Silurian Tartakuli Formation; S: Silurian.

Figure 6
Remote sensing interpretation of stratigraphy. 1—Quaternary; 2—Pliocene Atushi Fm; 3—Miocene Pakabulake Fm; 4—Miocene Anju’an Fm; 5—Miocene Keziluoyi Fm; 6—Paleogene Kashi Group; 7—Upper Cretaceous Yingjisha Group; 8—Lower Cretaceous Kezilesu Group; 9—Upper Permian Kungaytao Fm; 10—Lower Permian Maerkanquekusaishan Fm; 11—Upper Carboniferous Kalaatehe Fm; 12—Lower Carboniferous Wuluate Fm; 13—Middle Devonian Togmaiti Fm; 14—Lower Devonian Sawayaerdun Fm; 15—Middle-Upper Silurian Tartakuli Fm; 16—Silurian; 17—Paleoproterozoic Bulunkuoler Group; 18—Snow; 19—River; 20—National border; 21—Manganese deposit.

Figure 7
Mineralization factors: (a) normalization of favorable mineralization strata; (b) structural density; (c) aspect; (d) slope; (e) terrain ruggedness; (f) NDWI; (g) NDRI; (h) band ratios; (i–p) normalization of the rhodochrosite, argillization, hydroxyl, iron staining, silicification, sericitization, pyrolusite, and chloritization anomaly information, respectively.

Figure 8
(a–d) Filtering in the 0°, 45°, 90°, and 135° directions; (e) interpretation of linear structures via remote sensing.

Figure 9
Alteration information extraction results based on PCA: (a) hydroxyl anomaly information; (b) iron staining anomaly information; (c) argillic anomaly information; (d) rhodochrosite anomaly information.

Figure 10
Alteration information extraction results based on SAM: (a) sericitization anomaly information; (b) chloritization anomaly information; (c) silicification anomaly information; (d) pyrolusite anomaly information.

Figure 11
Sample point distribution map.

Figure 12
(a) Lollipop chart of predictor factor importance. (b) Pearson correlation heat map of the predictor factors.

Figure 13
(a) The ROC curves for the mineral prediction algorithms. (b) The 3D line plot of accuracy metrics across models.

Figure 14
(a) RF mineralization prediction map. (b) NB mineralization prediction map. (c) XGBoost mineralization prediction map. (d) Composite model mineralization prediction map.

Figure 15
Delineation of target areas.
20 pages, 32621 KiB  
Article
A Novel Rapeseed Mapping Framework Integrating Image Fusion, Automated Sample Generation, and Deep Learning in Southwest China
by Ruolan Jiang, Xingyin Duan, Song Liao, Ziyi Tang and Hao Li
Land 2025, 14(1), 200; https://doi.org/10.3390/land14010200 - 19 Jan 2025
Viewed by 772
Abstract
Rapeseed mapping is crucial for refined agricultural management and food security. However, existing remote sensing-based methods for rapeseed mapping in Southwest China are severely limited by insufficient training samples and persistent cloud cover. To address the above challenges, this study presents an automatic rapeseed mapping framework that integrates multi-source remote sensing data fusion, automated sample generation, and deep learning models. The framework was applied in Santai County, Sichuan Province, Southwest China, which has typical topographical and climatic characteristics. First, MODIS and Landsat data were used to fill the gaps in Sentinel-2 imagery, creating time-series images through the object-level processing version of the spatial and temporal adaptive reflectance fusion model (OL-STARFM). In addition, a novel spectral phenology approach was developed to automatically generate training samples, which were then input into the improved TS-ConvNeXt ECAPA-TDNN (NeXt-TDNN) deep learning model for accurate rapeseed mapping. The results demonstrated that the OL-STARFM approach was effective in rapeseed mapping. The proposed automated sample generation method proved effective in producing reliable rapeseed samples, achieving a low Dynamic Time Warping (DTW) distance (<0.81) when compared to field samples. The NeXt-TDNN model showed an overall accuracy (OA) of 90.12% and a mean Intersection over Union (mIoU) of 81.96% in Santai County, outperforming other models such as random forest, XGBoost, and UNet-LSTM. These results highlight the effectiveness of the proposed automatic rapeseed mapping framework in accurately identifying rapeseed. This framework offers a valuable reference for monitoring other crops in similar environments. Full article
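The DTW comparison used above to validate automatically generated samples against field samples can be reproduced with the classic dynamic-programming recurrence. A minimal sketch (not the authors' implementation; the eight-point phenology curves below are hypothetical NDVI-like values):

```python
# Dynamic Time Warping distance between two 1D time series: fill a cost
# matrix where each cell adds the local distance to the cheapest of the
# three admissible predecessor alignments.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Hypothetical monthly curves (Oct-May) for a field sample and an
# automatically generated sample.
field = np.array([0.20, 0.30, 0.45, 0.60, 0.75, 0.70, 0.50, 0.30])
auto  = np.array([0.22, 0.31, 0.43, 0.62, 0.72, 0.68, 0.52, 0.28])
print(dtw_distance(field, auto))   # small value => similar phenology
```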
Show Figures

Figure 1
(a) The location of the study area in China; (b) the spatial distribution of the study area.

Figure 2
Phenological calendar of three typical crops in Santai County. “E”, “M”, and “L” denote the early, middle, and late periods of each month, respectively.

Figure 3
Framework of the proposed three-stage rapeseed mapping.

Figure 4
(a) The number of valid observations based on the monthly synthesis of Sentinel-2 during the rapeseed growth period in Santai; (b) the framework of the image fusion.

Figure 5
The temporal features of rapeseed and other crops. The dashed line shows the gap between rapeseed and the other land-cover types.

Figure 6
Framework of NeXt-TDNN.

Figure 7
(a) The potential samples generated by the Rapeseed Sample_pheno rule. (b–e) True-color (R: red; G: green; B: blue) Sentinel-2 images acquired in March. Blue and yellow outlines mark potential sample objects proposed according to the rule, and the yellow and blue dots represent positive and negative sample points randomly selected from the potential objects.

Figure 8
Comparison of the statistical distributions of the DTW distances between all samples obtained from the rapeseed rule and the field surveys in the study area. “Rapeseed” denotes the DTW distance between rapeseed samples, while “Non-Rapeseed” denotes the DTW distance between non-rapeseed samples.

Figure 9
Comparison of time-series curves between the automatically generated rapeseed samples and the field-collected rapeseed samples.

Figure 10
Rapeseed distribution map of Santai County in 2024 (a), with detailed maps of the distribution in different areas (b–d).

Figure 11
The recognition results of the four classifiers.

Figure 12
A comparison of the recognition results of the different classifiers. The locations of (a–d) are shown in Figure 11. Blue circles mark better recognition results, and red circles indicate erroneous or missing recognition.