Search Results (525)

Search Parameters:
Keywords = mixed pixels

17 pages, 5497 KiB  
Article
High Spatiotemporal Resolution Monitoring of Water Body Dynamics in the Tibetan Plateau: An Innovative Method Based on Mixed Pixel Decomposition
by Yuhang Jing and Zhenguo Niu
Sensors 2025, 25(4), 1246; https://doi.org/10.3390/s25041246 - 18 Feb 2025
Abstract
The Tibetan Plateau, known as the “Third Pole” and the “Water Tower of Asia”, has experienced significant changes in its surface water due to global warming. Accurately understanding and monitoring the spatiotemporal distribution of surface water is crucial for ecological conservation and the sustainable use of water resources. Among existing satellite data, the MODIS sensor stands out for its long time series and high temporal resolution, which make it advantageous for large-scale water body monitoring. However, its spatial resolution limitations hinder detailed monitoring. To address this, the present study proposes a dynamic endmember selection method based on phenological features, combined with mixed pixel decomposition techniques, to generate monthly water abundance maps of the Tibetan Plateau from 2000 to 2023. These maps precisely depict the interannual and seasonal variations in surface water, with an average accuracy of 95.3%. Compared to existing data products, the water abundance maps developed in this study provide better detail of surface water, while also benefiting from higher temporal resolution, enabling effective capture of dynamic water information. The dynamic monitoring of surface water on the Tibetan Plateau shows a year-on-year increase in water area, with an increasing fluctuation range. The surface water abundance products presented in this study not only provide more detailed information for the fine characterization of surface water but also offer a new technical approach and scientific basis for timely and accurate monitoring of surface water changes on the Tibetan Plateau.
(This article belongs to the Special Issue Feature Papers in Remote Sensors 2024)
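The decomposition step described above is, at its core, linear spectral unmixing. Below is a minimal sketch of fully constrained least-squares (FCLS) unmixing, assuming a fixed endmember matrix and using SciPy's non-negative least squares; the two-band toy spectra and the `delta` sum-to-one weighting are illustrative stand-ins for the paper's MODIS bands and phenology-driven dynamic endmember selection.

```python
# FCLS unmixing sketch: abundances are non-negative and (softly) sum to one.
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, delta=1e3):
    """pixel: (bands,) reflectance; endmembers: (bands, n_em), one spectrum per column."""
    # Append a heavily weighted row of ones to enforce sum-to-one;
    # non-negativity comes from NNLS itself.
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    x = np.append(pixel, delta)
    abundances, _ = nnls(E, x)
    return abundances

# Toy pixel that is 40% water and 60% land (two bands, two endmembers).
water = np.array([0.02, 0.01])
land = np.array([0.10, 0.30])
mixed = 0.4 * water + 0.6 * land
print(fcls_unmix(mixed, np.column_stack([water, land])))  # ~[0.4, 0.6]
```

The per-pixel water abundance is then simply the coefficient recovered for the water endmember.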
Figures:
Figure 1. Study Area Overview.
Figure 2. Workflow of Water Body Abundance Inversion on the Tibetan Plateau.
Figure 3. Abundance maps and validation results: (a) Abundance results for July 2017; (b) Distribution of classification accuracy, commission rate, and omission rate; (c) Scatter plot of RMSE and ME distribution.
Figure 4. Analysis of Area Trend Over the Year.
Figure 5. Comparison with Other Datasets: (a) Comparison of Area with Other Datasets; (b) Correlation of Abundance Map with Other Datasets.
Figure 6. Interannual Area Change Diagram.
Figure 7. Correlation Analysis with JRC and GSWED Datasets.
Figure 8. Identification Results of Small Water Bodies.
Figure 9. Identification of Linear Water Bodies.
Figure 10. Potential of Abundance Maps in Wetland Classification.
20 pages, 5687 KiB  
Article
Mapping of Dominant Tree Species in Yunnan Province Based on Sentinel-2 Time-Series Data and Assessment of the Influence of Understory Background on Mapping Accuracy
by Yihao Sun, Jingyuan Zhu, Ben Yang and Haodong Liu
Forests 2025, 16(2), 272; https://doi.org/10.3390/f16020272 - 5 Feb 2025
Abstract
Accurate information on the location of dominant tree species is essential for scientific forest management. However, factors like changes in forest phenology, stand conditions, and mixed understory backgrounds introduce uncertainties in remote sensing-based species mapping. To address these challenges, this study maps dominant tree species using time-series Sentinel-2 data combined with environmental context data. To quantify the impact of understory background on mapping accuracy, the study applied a random forest inversion model to estimate canopy cover across the study area. Binary contour plots and Pearson’s correlation coefficient were used to quantify the relationship between canopy cover and classification uncertainty at both the grid and pixel scales. A 10 m resolution map of dominant tree species in Yunnan Province, featuring eight species, was produced with an overall accuracy of 83.52% and a Kappa coefficient of 0.8115. The R2 value between the predicted and actual tree area proportions was greater than 0.93, with RMSEs consistently below 2.6. In addition, we observed strong negative correlations between canopy cover and classification uncertainty within each cover class: −0.67 for low-cover areas, −0.40 for medium-cover areas, and −0.73 for high-cover areas. Our mapping framework enables the accurate identification of regional dominant species, and the established relationship between understory context and classification uncertainty provides valuable insights for analyzing potential mapping errors.
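Two generic steps from this abstract, random forest inversion of canopy cover and a Pearson correlation against classification uncertainty, can be sketched as follows; scikit-learn and the synthetic feature arrays are assumptions, since the listing does not specify an implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X_train = rng.random((500, 10))   # per-sample spectral/temporal features
y_train = rng.random(500)         # field-measured canopy cover in [0, 1]

# Step 1: invert canopy cover from features with a random forest regressor.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

X_pixels = rng.random((2000, 10))   # features for map pixels
canopy_cover = rf.predict(X_pixels)
uncertainty = rng.random(2000)      # e.g., 1 - top class probability per pixel

# Step 2: quantify the cover-uncertainty relationship.
r, p = pearsonr(canopy_cover, uncertainty)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```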
Figures:
Figure 1. Overview map of the study area. (a) Location of the study area in China. (b) Distribution of forests in the study area.
Figure 2. Sentinel-2 image usability within the study area. (a) Overall image usability. (b) Pixel-by-pixel image usability.
Figure 3. Map of sampling distribution in the study area. (a) Spatial distribution of samples from different sources. (b) Sampling density in the study area.
Figure 4. Predicted dominant tree species in the study area. (a) Spatial distribution of overall dominant tree species. (b,c) Detailed tree species distribution within individual areas (A and B).
Figure 5. Producer accuracy (PA), user accuracy (UA), and confusion matrix results for each tree species. (a) PA and UA of each tree species reported by the classification model; the abbreviations on the vertical axis are defined in Table 2. (b) Confusion matrix.
Figure 6. Linear fit between NFI data and area share of dominant tree species in the predicted tree map for Zhenxiong (a) and Puer (b). The abbreviations are defined in Table 2.
Figure 7. Forest classification uncertainty results for the study area (a). (b1,b2) Google images of the example plots (A and B). (c1,c2) Classification uncertainty of the example plots.
Figure 8. Inversion results of forest canopy cover in the study area (a). (b1,b2) Google images of the example plots (A and B). (c1,c2) Canopy cover of the example plots.
Figure 9. Canopy cover fitting accuracy obtained with the random forest inversion algorithm.
Figure 10. Binary contour plots obtained from reclassification of canopy cover (Cc) and classification uncertainty (Cu). (a) Spatial distribution of canopy cover classes at the grid scale. (b) Spatial distribution of classification uncertainty classes at the grid scale. (c) Binary equivalent results. (d) Percentage share statistics for each equivalent result.
Figure 11. Correlation between canopy cover and classification uncertainty. (a) Validation points within the low-canopy-cover region. (b) Validation points within the medium-canopy-cover region. (c) Validation points within the high-canopy-cover region. (d) All validation points.
20 pages, 11615 KiB  
Article
Analysis of the Spatiotemporal Evolution Patterns and Driving Factors of Various Planting Structures in Henan Province Based on Mixed-Pixel Decomposition Methods
by Kun Han, Jingyu Yang and Chao Liu
Sustainability 2025, 17(3), 1227; https://doi.org/10.3390/su17031227 - 3 Feb 2025
Abstract
Understanding the spatiotemporal evolution patterns and drivers of cropping structures is crucial for adjusting cropping structure policies, ensuring the sustainability of land resources, and safeguarding food security. However, existing research lacks sub-pixel-scale data on planting structure, as planted area data are mainly derived from manual counting. In this study, remote sensing technology was combined with geostatistical methods to characterize the spatiotemporal evolution of crop planting structure at the sub-pixel scale. First, the spatial distribution of the multiple cropping structure in Henan Province was extracted based on a mixed-pixel decomposition model, and the spatiotemporal evolution of the crop planting structure was analyzed using a combination of Sen’s slope estimator and Mann–Kendall trend analysis, as well as centroid migration. Then, Pearson correlation coefficients were calculated to explore the contribution of driving factors. The results indicate the following: (1) from 2001 to 2022, the cropping structure in Henan Province shows a slight but discernible increase. (2) The centroids of the different cropping structures migrate toward the main production areas overall. (3) Among the driving factors, the cropping index was positively correlated with the labor force and negatively correlated with the urbanization rate. This study provides new insights into the evolution of large-scale crop planting structures and offers significant theoretical and practical value for sustainable agricultural development and the optimization of agricultural planting structures.
(This article belongs to the Special Issue Land Management and Sustainable Agricultural Production: 2nd Edition)
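The trend component pairs Sen's slope with a Mann–Kendall test; a minimal per-series sketch using SciPy's Theil–Sen estimator and Kendall's tau (whose p-value serves as the Mann–Kendall-style significance check) is shown below. The synthetic cropping-index series is illustrative.

```python
import numpy as np
from scipy.stats import theilslopes, kendalltau

years = np.arange(2001, 2023)
rng = np.random.default_rng(1)
cropping_index = 1.2 + 0.005 * (years - 2001) + rng.normal(0, 0.02, years.size)

slope, intercept, lo, hi = theilslopes(cropping_index, years)  # Sen's slope
tau, p_value = kendalltau(years, cropping_index)               # MK-style test

trend = "significant" if p_value < 0.05 else "not significant"
print(f"Sen's slope = {slope:.4f}/yr, tau = {tau:.2f}, {trend} (p = {p_value:.3f})")
```

Applied per pixel, the slope and p-value yield trend and significance maps like those in Figure 8.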
Figures:
Figure 1. Regional context and overview of Henan Province, China. (a) Location and administrative map. (b) Digital elevation model map. (c) Land cover classification map.
Figure 2. Overall study framework for analyzing the spatiotemporal evolution patterns and driving factors. NDVI: normalized difference vegetation index; FCLS: fully constrained least squares.
Figure 3. Paddy rice abundance distribution in Henan Province over multiple years.
Figure 4. Rapeseed–cotton abundance distribution in Henan Province over multiple years.
Figure 5. Winter wheat–summer maize abundance distribution in Henan Province over multiple years.
Figure 6. Winter wheat–small oilseeds abundance distribution in Henan Province over multiple years.
Figure 7. Centroid migration for different planting structures from 2001 to 2022.
Figure 8. Spatial distributions of the trend and significance of the cropping index from 2001 to 2022.
Figure 9. Pearson’s correlation coefficients between the cropping index and 10 driving factors. The color and shape of the elliptical glyphs denote the magnitude and direction of the relationship: the shorter the short axis, the closer the correlation coefficient is to 1, and the bluer the color, the stronger the positive correlation. Asterisks indicate the significance level (* p < 0.05; ** p < 0.01; *** p < 0.001). CI, cropping index; AAT, annual average temperature; ACP, annual cumulative precipitation; UR, urbanization rate; SR, sex ratio; NGR, natural growth rate; RRP, resident rural population; CCF, consumption of chemical fertilizers; TPAM, total power of agricultural machinery; GDP, gross domestic product; DIR, disposable income of rural residents.
Figure 10. Natural driving factors from 2001 to 2022. (a) Annual average temperature. (b) Annual cumulative precipitation.
Figure 11. Economic driving factors from 2001 to 2022. (a) Gross domestic product. (b) Disposable income of rural residents.
Figure 12. Population driving factors from 2001 to 2022. (a) Urbanization rate. (b) Sex ratio. (c) Natural growth rate. (d) Population.
Figure 13. Agricultural production process driving factors from 2001 to 2022. (a) Consumption of chemical fertilizers. (b) Total power of agricultural machinery.
19 pages, 7643 KiB  
Article
A 64 × 1 Multi-Mode Linear Single-Photon Avalanche Detector with Storage and Shift Reuse in Histogram
by Hankun Lv, Jingyi Wang, Bu Chen and Zhangcheng Huang
Electronics 2025, 14(3), 509; https://doi.org/10.3390/electronics14030509 - 26 Jan 2025
Abstract
Single-photon avalanche detectors (SPADs) have significant applications in fields such as autonomous driving. However, processing massive amounts of background data requires substantial storage and computational resources. This paper presents a linear SPAD sensor capable of three detection modes: 2D intensity detection, 3D synchronous detection, and 3D asynchronous detection. A configurable coincidence circuit is used to effectively suppress background light. To reduce the significant resource demands for storage and computation, this paper designs a histogram circuit that provides both data storage and shifting capabilities. This circuit can not only perform statistical counting on time data but also shift data to quickly complete computational analysis. The chip is fabricated using a 0.13 μm mixed-signal CMOS process, with a pixel scale of 64 elements, a time resolution of 132 ps, and a power consumption of 12.9 mW. Test results indicate that the chip has good detection capability and good background light suppression. When the background light intensity is 6000 lux, the maximum background data are suppressed by 95.4%, and the average suppression rate increases to 86% as the coincidence threshold is raised from 0 to 1.
(This article belongs to the Special Issue Advances in Solid-State Single Photon Detection Devices and Circuits)
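The effect of the coincidence threshold on background suppression can be illustrated with a toy Monte Carlo: signal photons cluster at one time-of-flight bin while background photons arrive uniformly, and only events with at least `threshold` coincident neighbor firings are histogrammed. All rates and the event model here are invented for illustration; only the 132 ps bin width echoes the chip.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cycles, bin_ps, n_bins, tof_ps = 5000, 132, 256, 13200

def histogram(threshold):
    hist = np.zeros(n_bins, int)
    for _ in range(n_cycles):
        t = rng.uniform(0, bin_ps * n_bins)   # background photon arrival
        n_coinc = rng.poisson(0.3)            # neighboring pixels firing nearby
        if rng.random() < 0.5:                # a laser return is present
            t = tof_ps + rng.normal(0, bin_ps)
            n_coinc += 2                      # returns tend to fire neighbors too
        if n_coinc >= threshold:              # coincidence filter
            hist[int(t // bin_ps) % n_bins] += 1
    return hist

for thr in (0, 1, 2):
    h = histogram(thr)
    bg = np.delete(h, h.argmax()).mean()
    print(f"threshold={thr}: peak bin={h.argmax()}, mean background counts={bg:.2f}")
```

Raising the threshold empties the background bins much faster than the signal bin, which is the qualitative behavior behind the measured 95.4% suppression.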
Figures:
Figure 1. The structure of the proposed SPAD sensor.
Figure 2. Circuit diagram of the multi-mode front-end circuit.
Figure 3. Structural diagram of a CMOS single-photon avalanche device.
Figure 4. Circuit diagram of the quench circuit.
Figure 5. Circuit diagram of the coincidence circuit.
Figure 6. Circuit diagram of the TDC circuit.
Figure 7. Circuit diagram of the TDC’s ring oscillator.
Figure 8. Circuit diagrams of the TDC register (a) and the ripple counter (b).
Figure 9. Circuit diagram of the dual-mode output circuit. Red line: asynchronous readout mode. Blue line: synchronous readout mode.
Figure 10. Block diagram of the SEL circuit.
Figure 11. Diagram of the histogram circuit.
Figure 12. Circuit diagram of the histogram counter unit.
Figure 13. Circuit diagram of the PISO circuit.
Figure 14. Micrograph of the proposed chip.
Figure 15. Photograph of the test system, including the FPGA, optical lens, and laser (a), and the illuminated object (b).
Figure 16. Structure of the test system.
Figure 17. Measurement results of the output data versus trigger time in 64 pixels (a) and comparison of the output from the 64 pixels triggered at 400 ns before and after calibration (b).
Figure 18. Histograms generated under different coincidence detection threshold values: 0 (a), 1 (b), and 2 (c).
Figure 19. Histograms generated under different laser illumination intensities: small (a), moderate (b), and strong (c) laser power.
Figure 20. Depth measurement results of the SPAD sensor chip with the field of view depicted in Figure 15b.
17 pages, 4965 KiB  
Article
Neural Network for Underwater Fish Image Segmentation Using an Enhanced Feature Pyramid Convolutional Architecture
by Guang Yang, Junyi Yang, Wenyao Fan and Donghe Yang
J. Mar. Sci. Eng. 2025, 13(2), 238; https://doi.org/10.3390/jmse13020238 - 26 Jan 2025
Abstract
Underwater fish image segmentation is a crucial technique in marine fish monitoring. However, typical underwater fish images often suffer from issues such as color distortion, low contrast, and blurriness, primarily due to the complex and dynamic nature of the marine environment. To enhance the accuracy of underwater fish image segmentation, this paper introduces an innovative neural network model that combines the attention mechanism with a feature pyramid module. After the backbone network processes the input image through convolution, the data pass through the enhanced feature pyramid module, where they are iteratively processed by multiple weighted branches. Unlike conventional methods, the multi-scale feature extraction module we designed not only improves the extraction of high-level semantic features but also optimizes the distribution of low-level shape feature weights through the synergistic interactions of the branches, all while preserving the inherent properties of the image. This novel architecture significantly boosts segmentation accuracy, offering a new solution for fish image segmentation tasks. To further enhance the model’s robustness, the Mix-up and CutMix data augmentation techniques were employed. The model was validated using the Fish4Knowledge dataset, and the experimental results demonstrate that it achieves a Mean Intersection over Union (MIoU) of 95.1%, with improvements of 1.3%, 1.5%, and 1.7% in the MIoU, Mean Pixel Accuracy (PA), and F1 score, respectively, compared to traditional segmentation methods. Additionally, a real fish image dataset captured in deep-sea environments was constructed to verify the practical applicability of the proposed algorithm.
(This article belongs to the Section Ocean Engineering)
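The headline metric here, Mean Intersection over Union, is easy to state concretely: per class, IoU = TP / (TP + FP + FN), averaged over classes. A minimal sketch with a synthetic binary fish/background mask follows.

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    # Confusion matrix via index accumulation.
    cm = np.zeros((n_classes, n_classes), int)
    np.add.at(cm, (y_true, y_pred), 1)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp  # TP + FP + FN per class
    return float(np.mean(tp / np.maximum(union, 1)))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10000)                               # ground-truth labels
y_pred = np.where(rng.random(10000) < 0.95, y_true, 1 - y_true)  # 5% label errors
print(f"MIoU = {mean_iou(y_true, y_pred, 2):.3f}")
```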
Figures:
Figure 1. Network Architecture.
Figure 2. PAFE.
Figure 3. Fish dataset after data augmentation.
Figure 4. Data Comparison Chart.
Figure 5. Results Display.
Figure 6. Black Sea Sprat.
Figure 7. Ablation Experiment III.
Figure 8. Presentation of Experimental Results for Deep-Sea Fish Images.
Figure 9. Image enhancement effect.
19 pages, 30519 KiB  
Article
Analyzing Vegetation Heterogeneity Trends in an Urban-Agricultural Landscape in Iran Using Continuous Metrics and NDVI
by Ehsan Rahimi and Chuleui Jung
Land 2025, 14(2), 244; https://doi.org/10.3390/land14020244 - 24 Jan 2025
Abstract
Understanding vegetation heterogeneity dynamics is crucial for assessing ecosystem resilience, biodiversity patterns, and the impacts of environmental changes on landscape functions. While previous studies primarily focused on NDVI pixel trends, shifts in landscape heterogeneity have often been overlooked. To address this gap, our study evaluated the effectiveness of continuous metrics in capturing vegetation dynamics over time, emphasizing their utility in short-term trend analysis. The study area, located in Iran, encompasses a mix of urban and agricultural landscapes dominated by farming-related vegetation. Using 11 Landsat 8 OLI images from 2013 to 2023, we calculated NDVI to analyze vegetation trends and heterogeneity dynamics. We applied three categories of continuous metrics: texture-based metrics (dissimilarity, entropy, and homogeneity), spatial autocorrelation indices (Getis and Moran), and surface metrics (Sa, Sku, and Ssk) to assess vegetation heterogeneity. By generating slope maps through linear regression, we identified significant trends in NDVI and correlated them with the slope maps of the continuous metrics to determine their effectiveness in capturing vegetation dynamics. Our findings revealed that Moran’s Index exhibited the highest positive correlation (0.63) with NDVI trends, followed by Getis (0.49), indicating strong spatial clustering in areas with increasing NDVI. Texture-based metrics, particularly dissimilarity (0.45) and entropy (0.28), also correlated positively with NDVI dynamics, reflecting increased variability and heterogeneity in vegetation composition. In contrast, negative correlations were observed with metrics such as homogeneity (−0.41), Sku (−0.12), and Ssk (−0.24), indicating that increasing NDVI trends were associated with reduced uniformity and surface dominance. Our analysis underscores the complementary roles of these metrics, with spatial autocorrelation metrics excelling in capturing clustering patterns and texture-based metrics highlighting value variability within clusters. By demonstrating the utility of spatial autocorrelation and texture-based metrics in capturing heterogeneity trends, our findings offer valuable tools for land management and conservation planning.
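The slope maps at the heart of this analysis come from fitting a least-squares trend per pixel across the annual NDVI stack and then correlating that slope with the slope of each heterogeneity metric. A minimal vectorized sketch follows; the arrays are synthetic, and the metric slope stands in for, e.g., a Moran's Index trend map.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
years = np.arange(2013, 2024, dtype=float)   # 11 annual observations
ndvi = rng.random((years.size, 50, 50))      # (time, rows, cols) stack

# Closed-form OLS slope per pixel: sum((t - mean(t)) * (y - mean(y))) / sum((t - mean(t))^2).
t = years - years.mean()
slope_ndvi = (t[:, None, None] * (ndvi - ndvi.mean(axis=0))).sum(axis=0) / (t**2).sum()

metric_slope = rng.normal(size=(50, 50))     # stand-in for a metric's slope map
r, _ = pearsonr(slope_ndvi.ravel(), metric_slope.ravel())
print(f"correlation between NDVI trend and metric trend: r = {r:.2f}")
```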
Figures:
Figure 1. Study area location in Iran. (a) NDVI and (b) color composite of bands 5, 4, 3 in August 2023 (Landsat 8 OLI).
Figure 2. Methodology flowchart for comparing continuous metrics for vegetation heterogeneity analysis.
Figure 3. (a) Slope coefficient, (b) p-value, (c) negative and positive trends, and (d) significant p-values of NDVI pixels between 2013 and 2023.
Figure 4. Slope coefficients of (a) dissimilarity, (b) entropy, (c) Sa, and (d) Moran of NDVI.
Figure 5. Negative and positive trends and significant p-values of (a) dissimilarity, (b) entropy, (c) homogeneity, (d) Getis, (e) Moran, (f) Sa, (g) Sku, and (h) Ssk of NDVI.
Figure 6. Correlation values between the slope of NDVI and the slopes of the other continuous metrics, illustrating the relationship between NDVI trends and vegetation heterogeneity or clustering trends.
16 pages, 2567 KiB  
Article
Mixing Data Cube Architecture and Geo-Object-Oriented Time Series Segmentation for Mapping Heterogeneous Landscapes
by Michel E. D. Chaves, Lívia G. D. Soares, Gustavo H. V. Barros, Ana Letícia F. Pessoa, Ronaldo O. Elias, Ana Claudia Golzio, Katyanne V. Conceição and Flávio J. O. Morais
AgriEngineering 2025, 7(1), 19; https://doi.org/10.3390/agriengineering7010019 - 17 Jan 2025
Abstract
The conflict between environmental conservation and agricultural production highlights the need for precise land use and land cover (LULC) mapping to support agro-environmental policies. Satellite image time series from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor are essential for current LULC mapping efforts. However, most approaches focus on pixel data, and studies exploring object-based spatiotemporal heterogeneity and correlation features in such time series are limited. The objective of this study is to combine the data cube architecture (analysis-ready data, ARD) with geo-object-oriented time series segmentation via Geographic Object-Based Image Analysis (GEOBIA) to assess its performance in identifying natural vegetation and double-cropping practices over a crop season. The study area was the state of Mato Grosso, Brazil. By combining GEOBIA and time series analysis (materialized by the multiresolution segmentation algorithm applied to the MODIS data cube to derive spatiotemporal geo-objects), representative training data collected after a quality control process, and a Support Vector Machine to classify the ARD, the overall accuracy was 0.95, and all users’ and producers’ accuracies were higher than 0.88. Given the heterogeneity of Mato Grosso’s landscape, the results indicate the potential of the approach to provide accurate mapping.
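The classification stage, an SVM over per-object features from the NDVI time series, can be sketched as follows; scikit-learn, the RBF kernel settings, and the synthetic geo-object features are all assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((600, 23))     # e.g., 23 NDVI time steps per geo-object
y = rng.integers(0, 3, 600)   # e.g., natural vegetation / single / double crop

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

In the paper's workflow, the rows would be segment-level statistics produced by multiresolution segmentation of the MODIS data cube rather than raw pixels.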
Figures:
Figure 1. Geographical location of Mato Grosso in the Brazilian territory and its division into mesoregions.
Figure 2. Workflow of the methodological procedure adopted in the study.
Figure 3. LULC classification of Mato Grosso, Brazil, for the 2016/2017 crop season, derived from a MODIS data cube segmented via GEOBIA and classified via the SVM algorithm.
Figure 4. LULC classification in different regions of Mato Grosso, comparing the reference MODIS-NDVI data cube with the geo-object-oriented LULC classification.
26 pages, 394 KiB  
Review
Monitoring Yield and Quality of Forages and Grassland in the View of Precision Agriculture Applications—A Review
by Abid Ali and Hans-Peter Kaul
Remote Sens. 2025, 17(2), 279; https://doi.org/10.3390/rs17020279 - 15 Jan 2025
Abstract
The potential of precision agriculture (PA) in forage and grassland management should be more extensively exploited to meet the increasing global food demand on a sustainable basis. Monitoring biomass yield and quality traits directly impacts fertilization and irrigation practices and the frequency of utilization (cuts) in grasslands. Therefore, the main goal of this review is to examine techniques for using PA applications to monitor productivity and quality in forages and grasslands. To achieve this, the authors discuss several monitoring technologies for biomass and plant stand characteristics (including quality) that make it possible to adopt digital farming in forage and grassland management. The review provides an overview of mass flow and impact sensors, moisture sensors, remote sensing-based approaches, near-infrared (NIR) spectroscopy, and the mapping of field heterogeneity, and promotes decision support systems (DSSs) in this field. At a small scale, advanced sensors such as optical, thermal, and radar sensors mountable on drones, LiDAR (Light Detection and Ranging), and hyperspectral imaging techniques can be used for assessing plant and soil characteristics. At a larger scale, the review discusses coupling remote sensing with weather data (synergistic grassland yield modelling), Sentinel-2 data with radiative transfer modelling (RTM), Sentinel-1 backscatter, and CatBoost machine learning methods for digital mapping in terms of precision harvesting and site-specific farming decisions. Delineating sward heterogeneity is more difficult in mixed grasslands due to spectral similarity among species; Diversity-Interactions models make it possible to jointly assess the interactions of various species in mixed grasslands. Further, understanding such complex sward heterogeneity might be feasible by integrating spectral unmixing techniques such as super-pixel segmentation, multi-level fusion procedures, and NIR spectroscopy combined with neural network models. This review offers a digital option for enhancing yield monitoring systems and implementing PA applications in forage and grassland management. The authors recommend a future research direction that includes the costs and economic returns of digital technologies for precision grassland and fodder production.
Graphical abstract
22 pages, 33216 KiB  
Article
Characterizing Sparse Spectral Diversity Within a Homogenous Background: Hydrocarbon Production Infrastructure in Arctic Tundra near Prudhoe Bay, Alaska
by Daniel Sousa, Latha Baskaran, Kimberley Miner and Elizabeth Josephine Bushnell
Remote Sens. 2025, 17(2), 244; https://doi.org/10.3390/rs17020244 - 11 Jan 2025
Abstract
We explore a new approach for the parsimonious, generalizable, efficient, and potentially automatable characterization of the spectral diversity of sparse targets in spectroscopic imagery. The approach focuses on pixels which are not well modeled by linear subpixel mixing of the Substrate, Vegetation, and Dark (S, V, and D) endmember spectra which dominate spectral variance for most of Earth’s land surface. We illustrate the approach using AVIRIS-3 imagery of anthropogenic surfaces (primarily hydrocarbon extraction infrastructure) embedded in a background of Arctic tundra near Prudhoe Bay, Alaska. Computational experiments further explore sensitivity to spatial and spectral resolution. The analysis involves two stages: first, computing the mixture residual of a generalized linear spectral mixture model; and second, nonlinear dimensionality reduction via manifold learning. Anthropogenic targets and lakeshore sediments are successfully isolated from the Arctic tundra background. Dependence on spatial resolution is observed, with substantial degradation of manifold topology as images are blurred from the 5 m native ground sampling distance to the simulated 30 m ground-projected instantaneous field of view of a hypothetical spaceborne sensor. Degrading spectral resolution to mimic the Sentinel-2A MultiSpectral Imager (MSI) also results in loss of information but is less severe than spatial blurring. These results inform the spectroscopic characterization of sparse targets using spectroscopic images of varying spatial and spectral resolution.
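The two analysis stages, a linear mixture residual followed by manifold learning, can be sketched compactly. Random spectra stand in for AVIRIS-3 reflectance and the S/V/D endmembers, the least-squares fit is unconstrained for brevity, and the umap-learn package is an assumed implementation choice.

```python
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
bands, n_pixels = 200, 2000
E = rng.random((bands, 3))              # S, V, D endmember spectra (columns)
pixels = rng.random((n_pixels, bands))  # image spectra, one row per pixel

# Stage 1: fit the linear mixture model per pixel and keep the residual.
A, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)  # abundances (3, n_pixels)
residual = pixels - (E @ A).T                     # what the S/V/D model misses

# Stage 2: nonlinear dimensionality reduction of the residuals.
embedding = umap.UMAP(n_neighbors=30, n_components=2).fit_transform(residual)
print(embedding.shape)  # (2000, 2): one 2D manifold coordinate per pixel
```

Pixels that the S/V/D continuum explains well have near-zero residuals and collapse together, so sparse targets stand out in the embedding.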
Figures:
Figure 1. Location. The Brooks Range runs over 1000 km from the Chukchi Sea, across northern Alaska, into the Yukon Territory. Its north slope, running to the Arctic Ocean, hosts the Prudhoe Bay Oil Field. Discovered on 12 March 1968 by ARCO and Exxon, Prudhoe Bay field is commonly recognized as the largest in North America. Estimates of original oil volume are 25 billion barrels. Production at Prudhoe Bay came onstream in the late 1970s and continues to present.
Figure 2. AVIRIS-3 flight line overview. The first component of the analysis uses an example flight line (AV320230806t205810) acquired on 6 August 2023. These data were collected at approximately 4100 m (13,450 ft) flying in a north–south orientation. True color (left) and false color (right) images show Level-2A ISOFIT-corrected reflectance in north-up orientation. A linear 2% stretch is applied to both images. Ground sampling distance is 4.1 m. Hydrocarbon extraction infrastructure is clearly visible amongst a typical north slope landscape mosaic of permafrost and thermokarst lakes. Yellow squares show the three 400 × 400 pixel subsets mosaicked and used for subsequent analysis.
Figure 3. Principal Component based spectral feature space of the AVIRIS-3 dataset. The natural permafrost + thermokarst background forms a mixing space between Dark (D), green vegetation (V), and nonphotosynthetic vegetation (N) endmembers. This mixing continuum dominates image variance. Anthropogenic substrates (S1–S6) and processes, like Flaring (F), are spatially sparse but spectrally highly distinct, forming excursions from the mixing space. Some human materials are characterized by consistent absorption features, like the SWIR absorption at 2.2 microns and longer wavelengths characteristic of many polymers. Anthropogenic materials are prominent in the PC transform, but full characterization is made challenging by the depth of the embedding space (many PCs to explore) and redundancy (the same spectra identified by multiple pairs of PC dimensions). Here we show examples through PC5, but no obvious stopping point is evident.
Figure 4. 2D UMAP spectral feature space of AVIRIS-3 reflectance. Additional Dark endmembers (D1–D3) are readily identified using UMAP but not using PCs 1 through 5, since their contribution to overall image variance is small. Mixing continua (e.g., from D2 to the V and N endmembers) are also clearly visible. The full complexity of the anthropogenic substrates (S) collapses onto a single submanifold which can be easily identified and isolated for targeted analysis. The primary UMAP hyperparameter, the number of Nearest Neighbors (NN), has a clear impact on manifold structure. Low values of NN (left) result in identification of a very large number of internally consistent but globally incoherent clusters which are effectively fitting to sensor/atmospheric correction noise. High values of NN (right) capture global manifold topology but can lose important low-variance details. An intermediate value (center) was selected for subsequent work. Spectra for D1 and D3 are shown in Figure 5.
Figure 5. 3D UMAP feature space of AVIRIS-3 reflectance. Complexity is even more evident in the shallow water (Dark 1–3, or D1–D3) endmembers in the 3D embedding than in the 2D embedding. Nonphotosynthetic vegetation (N), green vegetation (V), and anthropogenic substrate (S) endmembers are present but less clearly identifiable. Inspection of the embedding image (Figure 6) shows that the UMAP clusters capture spatially coherent features.
Figure 6. 2D UMAP feature space of the AVIRIS-3 spectral mixture residual. After removing the underlying mixing continuum, anthropogenic surfaces and processes are cleanly separable from the rest of the image (red polygon on feature space, red region of interest on image mosaic). Base image is a true color composite. A broad diversity of spectra (lower left) is contained within the statistically separable submanifold, corresponding to a wide range of infrastructure, including roads, pipelines, and extraction facilities. Some natural littoral features are also captured.
Figure 7. Spectral diversity within the anthropogenic pixels identified from the UMAP(MR) space, as shown by both PCs (left) and targeted UMAP (right). Spaces are sparser because most of the image is now excluded. UMAP clearly shows the albedo continuum, culminating in multiple tendrils near the bottom of the space, each of which corresponds to a distinct reflectance spectrum. Mixed pixels are clearly identifiable from each edge of the manifold. In contrast, the PC space manifests as a spiky hyperball, with individual mixing lines emanating from the body of the point cloud. In some cases, by not requiring statistical connectivity, the PCs are better able to identify compositional gradients among small numbers of pixels.
Figure 8. Effect of spatial resolution on the characterization approach. Here we convolve the 4.1 m AVIRIS-3 reflectance mosaic with a low-pass Gaussian blurring kernel to simulate the point spread function of a hypothetical 30 m resolution spaceborne imaging spectrometer. The resulting feature spaces are obviously much sparser. Broad spectral gradients corresponding to land cover types are evident, but clearly insufficient pixel density is present for either dimensionality reduction approach to resolve many important individual materials. Letter labels refer to the same endmember materials as Figures 3–5.
Figure 9. Effect of spectral resolution on the characterization approach. Here we convolve the AVIRIS-3 reflectance mosaic with the spectral response function of the Sentinel-2A multispectral imager to simulate the effect of spectral resolution, while holding spatial resolution constant. Shallow water still dominates the UMAP manifold structure. Anthropogenic materials still form an appreciable submanifold, but it is not fully separable from the main manifold even after pretreatment with the spectral mixture residual. Interested readers may benefit from comparison to Figure 6 of [17]. Letter labels refer to the same endmember materials as Figures 3–5.
Figure 10. Map view of key results. Top row: true color (left) and false color (right) reflectance. Middle row: true color (left) and false color (right) spectral mixture residual images. Bottom row: dimensionality reduction results from UMAP dimensions (left) and PCs (right).
Figure A1. Sensitivity of the 3D covariance-based Principal Component spectral feature space of AVIRIS-3 reflectance to coarsening spatial resolution. The body of the point cloud is compressed towards the corner of the scatterplots to accommodate the wide range of spatially sparse high-variance pixels associated with emission spectra from methane flaring and spurious BRDF effects. A clear step change occurs between 16 and 32 m resolution, surprisingly consistent with [61].
Figure A2. Sensitivity of the 2D UMAP spectral feature space of AVIRIS-3 reflectance to coarsening spatial resolution. Manifold connectivity is greatest with fine spatial resolution (above) and degrades as resolution coarsens (below). For this dataset, 8 m resolution data (top row) captures similar manifold topology to the full 4 m resolution reflectance data (compare to Figure 4), while 64 m resolution data (bottom row) retains only the most generic manifold properties.
Figure A3. Sensitivity of the 3D covariance-based Principal Component spectral feature space of the AVIRIS-3 mixture residual to coarsening spatial resolution. The body of the point cloud is compressed towards the corner of the scatterplots to accommodate the wide range of spatially sparse high-variance pixels associated with emission spectra from methane flaring and spurious BRDF effects. As with Figure A1, a clear step change occurs between 16 and 32 m resolution.
Figure A4. Sensitivity of the 2D UMAP spectral feature space of the AVIRIS-3 mixture residual to coarsening spatial resolution. As with UMAP on reflectance, manifold connectivity is greatest with fine spatial resolution (above) and degrades as resolution coarsens (below).
23 pages, 6926 KiB  
Article
Characterising the Thematic Content of Image Pixels with Topologically Structured Clustering
by Giles M. Foody
Remote Sens. 2025, 17(1), 130; https://doi.org/10.3390/rs17010130 - 2 Jan 2025
Abstract
The location of a pixel in feature space is a function of its thematic composition. The latter is central to an image classification analysis, notably as an input (e.g., training data for a supervised classifier) and/or an output (e.g., predicted class label). Whether as an input to or output from a classification, little if any information beyond a class label is typically available for a pixel. The Kohonen self-organising feature map (SOFM) neural network, however, offers a means both to cluster together spectrally similar pixels that can be allocated suitable class labels and to indicate the relative thematic similarity of the clusters generated. Here, the thematic composition of pixels allocated to clusters represented by individual SOFM output units was explored with two remotely sensed data sets. It is shown that much of the spectral information of the input image data is maintained in the production of the SOFM output. This output provides a topologically structured representation of the image data, allowing spectrally similar pixels to be grouped together and the similarity of different clusters to be assessed. In particular, it is shown that the thematic composition of both pure and mixed pixels can be characterised by a SOFM. The location of the output unit in the output layer of the SOFM associated with a pixel conveys information on its thematic composition. Pixels in spatially close output units are more similar spectrally and thematically than those in more distant units. This situation also enables specific sub-areas of interest in the SOFM output space and/or feature space to be identified. This may, for example, provide a means to target efforts in training data acquisition for supervised classification, as the most useful training cases may tend to lie within specific sub-areas of feature space.
(This article belongs to the Section Environmental Remote Sensing)
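A SOFM of the kind described can be sketched with the MiniSom package (an implementation assumption; the paper does not name one). The 4 × 4 output layer mirrors the 16-unit map used with the ATM data, and random three-band spectra stand in for the image.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(0)
pixels = rng.random((5000, 3))  # 3-band spectra, scaled to [0, 1]

som = MiniSom(4, 4, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(pixels, num_iteration=10000)

# Each pixel maps to the (row, col) of its winning output unit; spatially
# close units hold spectrally (and thematically) similar pixels.
units = np.array([som.winner(p) for p in pixels])
counts = np.zeros((4, 4), int)
np.add.at(counts, (units[:, 0], units[:, 1]), 1)
print(counts)  # number of pixels allocated to each output unit (cf. Figure 4)
```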
Figures:
Figure 1. A map of the test site, indicated by the black dashed box, used in the analyses of the Sentinel-2 data. This map also represents the ground reference data used in the analyses of the Sentinel-2 data. The map is based on UKCEH Land Cover® Plus: Crops © 2018 UKCEH and acquired via the Digimap service.
Figure 2. Sentinel-2 MSI imagery. (a) True colour composite with fields planted with oil seed rape showing in bright light green, (b) image in band 8 with oil seed rape fields shown in very light tone, and (c) coarse spatial resolution image in band 8 as a guide to the data used in the SOFM.
Figure 3. A SOFM. The network shown is that used with the ATM data set, which had 3 input units (triangles) and an output layer containing 16 output units (circles) arranged in a square. The weighted connections between the input and output units are shown with black solid lines, while the connections between the output units are shown with dashed red lines. Each output unit is identified by its position in the layer and defined by the numerical identifier in red.
Figure 4. The number of pixels allocated to each of the SOFM output units for the ATM data set; the layout of the output layer matches that shown in Figure 3.
Figure 5. Class composition of the pixels in the SOFM output units. Each pie chart shows the average composition of the pixels associated with the specific output unit. The classes are trees in blue, grass in orange, and asphalt in grey. The layout of the output layer matches that shown in Figure 3.
Figure 6. The number of pixels allocated to each of the SOFM output units for the Sentinel-2 MSI data set; the layout of the output layer matches that shown in Figure 3.
Figure 7. The locations of the cases allocated to each SOFM output unit in the input spectral feature space.
Figure 8. Locations of cases allocated to all 16 SOFM output units in the ATM data feature space. (a) Two-dimensional feature space defined by b1 and b2, (b) 2D feature space defined by b1 and b3, (c) 2D feature space defined by b2 and b3, and (d) 3D feature space defined by b1, b2, and b3.
Figure 9. The average squared Mahalanobis distance for the pixels associated with each SOFM unit to each class. (a) Trees, (b) grass, and (c) asphalt.
Figure 10. Cases associated with SOFM output unit 16 (shown in cyan) overlaid on the original resolution image acquired in band 8 (i.e., Figure 2b).
Figure 11. Cases belonging to units neighbouring unit 16 overlain on the original resolution image in band 8. (a) Unit 11, (b) unit 12, and (c) unit 15.
Figure 12. Cases belonging to units that are near neighbours of unit 16 overlain on the original resolution image in band 8. (a) Unit 7 and (b) unit 14.
Figure 13. Cases belonging to unit 8 overlain on the original resolution image in band 8.
Figure 14. The locations of the cases allocated to each output unit in the SOFM with a 10 × 10 unit output layer. A legend is not shown as there are 100 classes and the central aim is to compare them visually against Figure 7.
22 pages, 8140 KiB  
Article
Improving Satellite-Based Retrieval of Maize Leaf Chlorophyll Content by Joint Observation with UAV Hyperspectral Data
by Siqi Yang, Ran Kang, Tianhe Xu, Jian Guo, Caiyun Deng, Li Zhang, Lulu Si and Hermann Josef Kaufmann
Drones 2024, 8(12), 783; https://doi.org/10.3390/drones8120783 - 23 Dec 2024
Abstract
While satellite-based remote sensing offers a promising avenue for large-scale estimation of leaf chlorophyll content (LCC), the accuracy of such evaluations is often decreased by mixed pixels, attributable to distinct farming practices and diverse soil conditions. To overcome these challenges and to account for maize intercropping with soybeans at different growth stages combined with varying soil backgrounds, a hyperspectral database for maize was set up using a random linear mixed model applied to hyperspectral data recorded by an unmanned aerial vehicle (UAV). Four methods, namely Euclidean distance, Minkowski distance, Manhattan distance, and Cosine similarity, were used to compare vegetation spectra from Sentinel-2A with the newly constructed database. In a next step, widely used vegetation indices such as the NDVI, NAOC, and CAI were tested to find the optimum method for LCC retrieval, validated by field measurements. The results show that the NAOC had the strongest correlation with ground sampling information (R2 = 0.83, RMSE = 0.94 μg/cm2, and MAE = 0.67 μg/cm2). Additional field measurements sampled at other farming areas were used to validate the method’s transferability and generalization. Here too, validation results showed a highly precise LCC estimation (R2 = 0.93, RMSE = 1.10 μg/cm2, and MAE = 1.09 μg/cm2), demonstrating that integrating UAV hyperspectral data with a random linear mixed model significantly improves satellite-based LCC retrievals.
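The retrieval chain pairs spectral matching against the database with a red-edge index. A minimal sketch of Cosine-similarity matching plus the NAOC follows; the NAOC form used here, 1 - (area under the reflectance curve) / (maximum reflectance x interval width) over 643-795 nm, is a common formulation and an assumption about the paper's exact implementation, as are the synthetic spectra.

```python
import numpy as np

def cosine_match(pixel, library):
    """Index of the library spectrum most similar to `pixel` (rows = spectra)."""
    sims = library @ pixel / (np.linalg.norm(library, axis=1) * np.linalg.norm(pixel))
    return int(np.argmax(sims))

def naoc(wavelengths, reflectance, lo=643.0, hi=795.0):
    m = (wavelengths >= lo) & (wavelengths <= hi)
    area = np.trapz(reflectance[m], wavelengths[m])      # integral over the window
    return 1.0 - area / (reflectance[m].max() * (hi - lo))

rng = np.random.default_rng(0)
wl = np.linspace(400, 1000, 270)                    # hyperspectral band centers
library = rng.random((100, wl.size))                # simulated mixed-maize spectra
pixel = library[42] + rng.normal(0, 0.01, wl.size)  # noisy observed spectrum

best = cosine_match(pixel, library)
print(f"matched spectrum {best}, NAOC = {naoc(wl, library[best]):.3f}")
```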
Figures:
Figure 1. Sketch map of the study area.
Figure 2. (a) Headwall Nano-Hyperspec VNIR imaging sensor mounted on the Matrice 600 Pro. (b) Schematic diagram of the principal plane guiding flight passes of aircraft and UAV during recordings.
Figure 3. UAV data collection at five dates during the three growth stages of maize. V6–V10 are the sixth-leaf to tenth-leaf stages of maize, VT marks the tasseling stage, and R3 represents the milking stage.
Figure 4. Distribution of field sampling locations and the respective UAV flight plans.
Figure 5. Flowchart illustrating the creation of the hyperspectral database for maize. In the target pool, the green dots are the endmembers (EMs) of maize and the yellow dots are the EMs of soybeans; in the background pool, different colors represent different soil types. Linear mixing spectra combine data from both the target pool and the background pool.
Figure 6. (a) Illustration of the NAOC with an original (green) and a smoothed (orange) hyperspectral reflectance curve of vegetation. The green area corresponds to the AOC, and the blue area is the integral from 643 nm to 795 nm. (b) Diagram of the CAI spectral index. The dark blue area is the integral from 600 nm to 735 nm, and the brown-gray area is the integral of the spectral envelope.
Figure 7. (a) Randomly selected intercropping areas of the five UAV hyperspectral recordings. (b) Corresponding results of the NDVI threshold classification, identifying different crop areas: blue = maize; green = soybeans. (c) Spectral characteristics of maize and soybeans measured at the cursor position.
Figure 8. Variations of maize spectra with increasing soil background fractions.
Figure 9. (a) Number of pixels corresponding to distinct coverage differences based on four spectral matching methods. (b) Distribution of calculated Cosine similarity values.
Figure 10. Correlations between the training datasets of measured and retrieved LCCs for (a) NDVI, (b) CAI, and (c) NAOC.
Figure 11. Accuracy of LCC estimation for different growth stages of maize based on the proposed method. (a) Jointing stage. (b) Tasseling stage. (c) Milking stage.
Figure 12. Relationship between LCC and six widely used FVIs: (a) normalized area over reflectance curve (NAOC); (b) Modified Chlorophyll Absorption Ratio Index (MCARI); (c) Difference Vegetation Index (DVI); (d) Red Edge Chlorophyll Index (CIred-edge); (e) Chlorophyll Vegetation Index (CVI); (f) Soil-Adjusted Vegetation Index (SAVI). Linear models were constructed using the full dataset of 60 ground samples from the study area. Correlation coefficients and RMSE values are displayed in each scatterplot, with the distributions of the FVI and LCC values presented as histograms along the top and side of each graph.
28 pages, 16088 KiB  
Article
A Hierarchical Machine Learning-Based Strategy for Mapping Grassland in Manitoba’s Diverse Ecoregions
by Mirmajid Mousavi, James Kobina Mensah Biney, Barbara Kishchuk, Ali Youssef, Marcos R. C. Cordeiro, Glenn Friesen, Douglas Cattani, Mustapha Namous and Nasem Badreldin
Remote Sens. 2024, 16(24), 4730; https://doi.org/10.3390/rs16244730 - 18 Dec 2024
Abstract
Accurate and reliable knowledge about grassland distribution is essential for farmers, stakeholders, and government to effectively manage grassland resources from agro-economical and ecological perspectives. This study developed a novel pixel-based grassland classification approach using three supervised machine learning (ML) algorithms, which were assessed in the province of Manitoba, Canada. The grassland classification process involved three stages: (1) distinguishing between vegetation and non-vegetation covers, (2) differentiating grassland from non-grassland landscapes, and (3) identifying three specific grassland classes (tame, native, and mixed grasses). Initially, this study investigated different satellite data, such as Sentinel-1 (S1), Sentinel-2 (S2), and Landsat 8 and 9, individually and combined, using the random forest (RF) method, with the best performance at the first two steps achieved by a combination of S1 and S2. This combination was then used to conduct the first two classification steps with support vector machine (SVM) and gradient tree boosting (GTB). In step 3, after filtering out non-grassland pixels, the performance of the RF, SVM, and GTB classifiers was evaluated with combined S1 and S2 data to distinguish different grassland types. Eighty-nine multitemporal raster-based variables, including spectral bands, SAR backscatters, and digital elevation models (DEM), served as inputs to the ML models. RF achieved the highest classification accuracy, with an overall accuracy (OA) of 69.96% and a Kappa value of 0.55. After feature selection, the variables were reduced to 61, increasing OA to 72.62% with a Kappa value of 0.58. GTB ranked second, with its OA and Kappa values improving from 67.69% and 0.50 to 72.18% and 0.58 after feature selection. The impact of raster data quality on grassland classification accuracy was assessed through multisensor image fusion. Grassland classification using the Hue, Saturation, and Value (HSV) fused images showed a higher OA (59.18%) and Kappa value (0.36) than the Brovey Transform (BT) and non-fused images. Finally, a web map was created to show the grassland results within the Soil Landscapes of Canada (SLC) polygons, relating soil landscapes to grassland distribution and providing valuable information for decision-makers and researchers. Future work may extend the current methodology by considering other influential variables, such as meteorological parameters or soil properties, to create a comprehensive grassland inventory across the whole Prairie ecozone of Canada. Full article
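The three-stage design described in the abstract can be read as a cascade of masks, with each classifier seeing only the pixels passed on by the previous stage. A schematic scikit-learn sketch; the helper names, label codes, and 500-tree setting are our assumptions, not the authors' configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_stage(X_train, y_train):
    """Fit one stage of the cascade (500 trees is an illustrative setting)."""
    return RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X_train, y_train)

def classify_hierarchically(X_all, stage1, stage2, stage3):
    """Apply three fitted stages to an (n_pixels, n_features) stack.

    Codes: 0 = non-vegetation, 1 = non-grassland vegetation,
    2/3/4 = tame/native/mixed grass (illustrative coding).
    """
    labels = np.zeros(len(X_all), dtype=np.uint8)
    veg = stage1.predict(X_all) == 1                  # step 1: vegetation mask
    grass = np.zeros_like(veg)
    grass[veg] = stage2.predict(X_all[veg]) == 1      # step 2: grassland mask
    labels[veg & ~grass] = 1
    labels[grass] = stage3.predict(X_all[grass])      # step 3: grass type (2-4)
    return labels
```

Masking between stages keeps stage 3 from ever seeing non-grassland pixels, mirroring the filtering step the abstract describes.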
Graphical abstract
Figure 1: Geographical location of the study area. Manitoba's PE, with different ecoregions, is located in the province of Manitoba, Canada.
Figure 2: The spatial distribution of the ground-truthing sampling for all LULC classes included in the classification of Manitoba's PE grasslands.
Figure 3: General overview of the major steps and workflow of the novel strategy for grassland classification, developed to improve ecological monitoring using multisource RS data and advanced ML techniques. This workflow integrates data from the S1, S2, L8, and L9 satellites. It involves major stages of image preprocessing, multitemporal composition, image fusion using HSV and BT, and advanced ML classifiers, including RF, SVM, and GTB. The classification is performed in three steps to achieve fine-scale identification of native, tame, and mixed grasses, starting from basic vegetation classification (Step 1) to detailed grassland class differentiation (Step 3); # represents the generated grassland map from each ML process. Ancillary field data, topographic features, and LULC information were incorporated as inputs to generate the final grassland maps for web-based visualization.
Figure 4: Atmospheric effects on RS imagery, demonstrating how atmospheric conditions degrade the quality of RS data, specifically how clouds and shadows can occlude pixels and distort the reflected signal received by MSS sensors. (a) A pixel occluded by a shadow: a shadow cast by an obstacle such as a cloud occludes the pixel, distorting the signal received by the sensor. (b) A pixel occluded by a cloud: the cloud directly occludes the pixel, producing inaccurate data because it interferes with the reflected sunlight reaching the sensor.
Figure 5: The steps of the HSV method, modified from Al-Wassai et al. [85]. After transforming the RGB image to HSV format, its V channel is replaced with the HR channel, and the result is converted back to RGB mode.
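The caption above describes the classic HSV substitution: convert the upsampled RGB composite to HSV, swap the Value channel for the high-resolution band, and convert back. A minimal scikit-image sketch, assuming co-registered float inputs scaled to [0, 1]; the function name is ours:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.transform import resize

def hsv_fuse(rgb_lowres, pan_highres):
    """HSV image fusion: replace V with the high-resolution band.

    rgb_lowres: (h, w, 3) image; pan_highres: (H, W) band at the
    finer resolution, both radiometrically scaled to [0, 1].
    """
    H, W = pan_highres.shape
    rgb_up = resize(rgb_lowres, (H, W), order=1)   # bilinear upsampling
    hsv = rgb2hsv(rgb_up)
    rng = np.ptp(pan_highres)
    hsv[..., 2] = (pan_highres - pan_highres.min()) / (rng + 1e-8)
    return hsv2rgb(hsv)
```

The Brovey Transform alternative instead rescales each band by the ratio of the high-resolution band to the band sum, rather than swapping a channel.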
Figure 6: Comparison of step 2 grassland classification results using different ML models: (a) RF, (b) SVM, and (c) GTB.
Figure 7: Classification accuracy varies with different input features, ranked based on ANOVA, for the RF, SVM, and GTB models.
Figure 8: Sampled areas before and after image fusion for image quality improvement: (a) Landsat 30 m, (b) HSV fused image, and (c) BT fused image.
Figure 9: Scatter plots of fused and non-fused bands using the BT and HSV approaches. Except for band 2 (B2), HSV had a higher r-squared. Around 1900 points were selected to build the scatter plots, and the color bar represents the point density, spread from low density (blue) to high density (red).
Figure 10: The detailed grassland classification of Manitoba's PE using the RF supervised ML classification model and the S1 + S2 data combination.
Figure 11: Distribution of mixed, tame, and native grasslands across three ecoregions, highlighting the areas covered by each grassland class. The percentage listed for each ecoregion shows its proportion of the total grassland area, with Southwest Manitoba Uplands at 2.42%, Lake Manitoba Plain at 41.83%, and Aspen Parkland at 55.75%. The relative dominance of each grassland type across the Aspen Parkland, Lake Manitoba Plain, and Southwest Manitoba Uplands illustrates regional differences in land use and ecological composition.
Figure A1: The list of all features with their scores. The numbers following spectral band, VI, and backscatter variables indicate multiple composite images created during the growing season. Red points show the features that were excluded from the classification models to achieve their highest OA and Kappa coefficient; (a) ANOVA F-value of RF; (b) ANOVA F-value of SVM; (c) ANOVA F-value of GTB.
Figure A2: The classification maps of pixel-level fusion with the RF approach using (a) the multispectral image, (b) the HSV fused image, and (c) the BT fused image.
Figure A3: Out-of-bag (OOB) error calculated for different numbers of trees and numbers of variables per split. The numbers of variables per split tested are the square root of the total number of variables (SQRT), the total number of variables (ALL), and the natural logarithm of the total number of variables (Ln).
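The tuning loop behind Figure A3 is straightforward to reproduce: train forests at several sizes and variables-per-split settings and track the out-of-bag error. A hedged sketch; scikit-learn's "log2" stands in for the natural-log option, and X, y are hypothetical training arrays:

```python
from sklearn.ensemble import RandomForestClassifier

def oob_error_grid(X, y, tree_counts=(100, 300, 500)):
    """OOB error for each (max_features, n_estimators) combination."""
    errors = {}
    for max_features in ("sqrt", None, "log2"):   # SQRT / ALL / ~Ln analogues
        for n in tree_counts:
            rf = RandomForestClassifier(n_estimators=n,
                                        max_features=max_features,
                                        oob_score=True,
                                        n_jobs=-1).fit(X, y)
            errors[(max_features, n)] = 1.0 - rf.oob_score_  # OOB error
    return errors
```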
Figure A4: (a) OA of classification for different kernel types in the SVM model. (b) Grid search to find the best value of the cost/regularization parameter for the linear kernel in SVM.
Figure A5: The effect of the number of trees on OA in GTB classification.
19 pages, 12370 KiB  
Article
Enhancing Cropland Mapping with Spatial Super-Resolution Reconstruction by Optimizing Training Samples for Image Super-Resolution Models
by Xiaofeng Jia, Xinyan Li, Zirui Wang, Zhen Hao, Dong Ren, Hui Liu, Yun Du and Feng Ling
Remote Sens. 2024, 16(24), 4678; https://doi.org/10.3390/rs16244678 - 15 Dec 2024
Abstract
Mixed pixels often hinder accurate cropland mapping from remote sensing images with coarse spatial resolution. Image spatial super-resolution reconstruction technology is widely applied to address this issue, typically transforming coarse-resolution remote sensing images into fine spatial resolution images, which are then used to generate fine-resolution land cover maps using classification techniques. Deep learning has been widely used for image spatial super-resolution reconstruction; however, collecting training samples is often difficult for cropland mapping. Given that the quality of spatial super-resolution reconstruction directly impacts classification accuracy, this study aims to assess the impact of different types of training samples on image spatial super-resolution reconstruction and cropland mapping results by employing a Residual Channel Attention Network (RCAN) model combined with a spatial attention mechanism. Four types of samples were used for spatial super-resolution reconstruction model training, namely fine-resolution images and their corresponding coarse-resolution images, including original Sentinel-2 and degraded Sentinel-2 images, original GF-2 and degraded GF-2 images, histogram-matched GF-2 and degraded GF-2 images, and registered original GF-2 and Sentinel-2 images. The results indicate that the samples acquired from the histogram-matched GF-2 and degraded GF-2 images can resolve spectral band mismatches when simulating training samples from fine spatial resolution imagery, while the other three methods cannot fully address spectral and spatial mismatches. The histogram-matched method yielded the best image quality, with PSNR, SSIM, and QNR values of 42.2813, 0.9778, and 0.9872, respectively, and produced the best mapping results, achieving an overall accuracy of 0.9306. By assessing the impact of training samples on image spatial super-resolution reconstruction and classification, this study addresses data limitations and contributes to improving the accuracy of cropland mapping, which is crucial for agricultural management and decision-making. Full article
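The winning histogram-matched pairs can be simulated in a few lines: histogram-match a fine GF-2 patch to Sentinel-2 so the bands share its radiometry, then degrade the matched patch to make the coarse input. A sketch with scikit-image; the pairing function and scale factor are illustrative assumptions:

```python
import numpy as np
from skimage.exposure import match_histograms
from skimage.transform import downscale_local_mean

def make_training_pair(gf2_patch, s2_patch, scale=4):
    """Return one (coarse input, fine target) pair for SR training.

    gf2_patch: (H, W, bands) fine image; s2_patch: coarse reference
    resampled to the same grid for band-wise histogram matching.
    """
    target = match_histograms(gf2_patch, s2_patch, channel_axis=-1)
    coarse = downscale_local_mean(target, (scale, scale, 1))  # degrade
    return coarse.astype(np.float32), target.astype(np.float32)
```

Degrading after matching keeps the coarse and fine members of each pair radiometrically consistent, which is the property the abstract credits for the HMDG samples' advantage.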
Figure 1: Overview of the study area.
Figure 2: Overall flowchart of the proposed method.
Figure 3: RCAN model with spatial attention.
Figure 4: Example of different sample types.
Figure 5: Results of histogram matching for GF-2. The acquisition times of (a1) and (b1) are both 9 December 2016.
Figure 6: Image super-resolution reconstruction results. Panels a(1)–a(12), b(1)–b(12), c(1)–c(12), and d(1)–d(12) show the reconstruction results of the RCAN, ESRGAN, and EDSR models using the HMDG, GDG, SDS, and RGS training samples in areas a, b, c, and d, respectively.
Figure 7: Super-resolution imagery and cropland mapping results using random forest classification. Panels a(1)–a(5), b(1)–b(5), c(1)–c(5), and d(1)–d(5) show the original Sentinel-2 images for regions a, b, c, and d, together with the super-resolution reconstruction results obtained using the GDG, SDS, RGS, and HMDG training samples; panels a(6)–a(10), b(6)–b(10), c(6)–c(10), and d(6)–d(10) show the cropland mapping results obtained from the corresponding super-resolution images.
Figure 8: The super-resolution prediction results of models trained with HMDG and GDG on different input data. (a) The histogram-matched GF-2 reference data, (b) the Sentinel-2 data, (c) the histogram-matched degraded GF-2 data, and (d) the degraded GF-2 data. (e) The super-resolution prediction result of (b) using the model trained with HMDG, (f) the prediction result of (b) using the model trained with GDG, (g) the prediction result of (c) using the model trained with HMDG, and (h) the prediction result of (d) using the model trained with GDG.
32 pages, 10548 KiB  
Article
An Unsupervised Remote Sensing Image Change Detection Method Based on RVMamba and Posterior Probability Space Change Vector
by Jiaxin Song, Shuwen Yang, Yikun Li and Xiaojun Li
Remote Sens. 2024, 16(24), 4656; https://doi.org/10.3390/rs16244656 - 12 Dec 2024
Abstract
Change vector analysis in posterior probability space (CVAPS) is an effective change detection (CD) framework that does not require sound radiometric correction and is robust against accumulated classification errors. Based on training samples within target images, CVAPS can generate a uniformly scaled change-magnitude map that is suitable for a global threshold. However, considerable user intervention is required to achieve optimal performance. Therefore, to eliminate user intervention while retaining the merit of CVAPS, this study proposes an unsupervised CVAPS (UCVAPS) CD method, RFCC, which does not require rigorous user training. In the RFCC, we propose an unsupervised remote sensing image segmentation algorithm based on the Mamba model, i.e., RVMamba differentiable feature clustering, which introduces two loss functions as constraints to ensure that RVMamba achieves accurate segmentation results and supplies the CSBN module with high-quality training samples. In the CD module, the fuzzy C-means (FCM) clustering algorithm decomposes mixed pixels into multiple signal classes, thereby alleviating cumulative clustering errors. Then, a context-sensitive Bayesian network (CSBN) model is introduced to incorporate spatial information at the pixel level to estimate the corresponding posterior probability vector, making it suitable for high-resolution remote sensing (HRRS) imagery. Finally, the UCVAPS framework can generate a uniformly scaled change-magnitude map that is suitable for a global threshold and can produce accurate CD results. Experimental results on seven change detection datasets confirmed that the proposed method outperforms five state-of-the-art competitive CD methods. Full article
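The core of CVAPS is a change magnitude computed directly in posterior probability space, which is why the map is uniformly scaled regardless of sensor radiometry: for probability vectors the Euclidean distance is bounded by sqrt(2). A minimal sketch, assuming per-date posterior rasters from any classifier; Otsu thresholding stands in here for whichever global threshold is actually chosen:

```python
import numpy as np
from skimage.filters import threshold_otsu

def cvaps_change_map(p1, p2):
    """Change vector analysis in posterior probability space.

    p1, p2: (h, w, k) posterior probability vectors for two dates.
    The change magnitude is the Euclidean norm of their difference,
    bounded by sqrt(2), so one global threshold fits the whole map.
    """
    magnitude = np.linalg.norm(p2 - p1, axis=-1)
    t = threshold_otsu(magnitude)      # unsupervised global threshold
    return magnitude, magnitude > t
```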
Figure 1: Unsupervised change detection process based on RVMamba and posterior probability.
Figure 2: Feature extraction network for visual state space modeling. (a) The overarching design of RVMamba. (b) The VSS block; SS2D is the core operation in the VSS block.
Figure 3: Data flow of SS2D. The inputs are expanded in four directions according to their serial number, scanned one by one through S6, and then merged.
Figure 4: Context-sensitive Bayesian network model.
Figure 5: Experimental datasets and ground truth.
Figure 6: Segmentation accuracies of RVMamba, UNet, and KMeans.
Figure 7: Change maps obtained by different state-of-the-art methods on dataset DS1. (a) ASEA, (b) PCANet, (c) KPCAMNet, (d) DeepCVA, (e) GMCD, (f) RFCC. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 8: Change maps obtained by different state-of-the-art methods on dataset DS2. (a) ASEA, (b) PCANet, (c) KPCAMNet, (d) DeepCVA, (e) GMCD, (f) RFCC. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 9: Change maps obtained by different state-of-the-art methods on dataset DS3. (a) ASEA, (b) PCANet, (c) KPCAMNet, (d) DeepCVA, (e) GMCD, (f) RFCC. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 10: Change maps obtained with different algorithms tested on dataset DS1. (a) RFCC, (b) UNet-FCM-CSBN-CVAPS, (c) RVMamba-FCM-SBN-CVAPS, (d) RVMamba-SVM-CVAPS. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 11: Change maps obtained with different algorithms tested on dataset DS2. (a) RFCC, (b) UNet-FCM-CSBN-CVAPS, (c) RVMamba-FCM-SBN-CVAPS, (d) RVMamba-SVM-CVAPS. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 12: Change maps obtained with different algorithms tested on dataset DS3. (a) RFCC, (b) UNet-FCM-CSBN-CVAPS, (c) RVMamba-FCM-SBN-CVAPS, (d) RVMamba-SVM-CVAPS. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 13: Evaluation of change magnitude and entropy in bitemporal simulated posterior probability vectors. (a) Low uncertainty. (b) Appropriately reduced certainty. (c) High uncertainty.
Figure 14: Effect of the fuzziness q on the algorithm results.
Figure 15: Effect of the window size on the algorithm results.
Figure 16: Effect of the number of segmentation labels on Kappa and algorithm timeliness.
Figure 17: Change maps generated by different techniques in the adaptive experiments. (Black is TN, white is TP, red is FA, and green is MD.)
Figure 18: Change detection with unsupervised segmentation.
Figure 19: Change detection based on RVMamba, K-means, and fuzzy C-means unsupervised segmentation.
17 pages, 8026 KiB  
Article
Estimation of Non-Photosynthetic Vegetation Cover Using the NDVI–DFI Model in a Typical Dry–Hot Valley, Southwest China
by Caiyi Fan, Guokun Chen, Ronghua Zhong, Yan Huang, Qiyan Duan and Ying Wang
ISPRS Int. J. Geo-Inf. 2024, 13(12), 440; https://doi.org/10.3390/ijgi13120440 - 7 Dec 2024
Abstract
Non-photosynthetic vegetation (NPV) significantly impacts ecosystem degradation, drought, and wildfire risk due to its flammable and persistent litter. Yet, the accurate estimation of NPV in heterogeneous landscapes, such as dry–hot valleys, has been limited. This study utilized multi-source time-series remote sensing data from Sentinel-2 and GF-2, along with field surveys, to develop an NDVI-DFI ternary linear mixed model for quantifying NPV coverage (fNPV) in a typical dry–hot valley region in 2023. The results indicated the following: (1) The NDVI-DFI ternary linear mixed model effectively estimates photosynthetic vegetation coverage (fPV) and fNPV, aligning well with the conceptual framework and meeting key assumptions, demonstrating its applicability and reliability. (2) The RGB color composite image derived using the minimum inclusion endmember feature method (MVE) exhibited darker tones, suggesting that MVE tends to overestimate the vegetation fraction when distinguishing vegetation types from bare soil. On the other hand, the pure pixel index (PPI) method showed higher accuracy in estimation due to its higher spectral purity and better recognition of endmembers, making it more suitable for studying dry–hot valley areas. (3) Estimates based on the NDVI-DFI ternary linear mixed model revealed significant seasonal shifts between PV and NPV, especially in valleys and lowlands. From the rainy to the dry season, the proportion of NPV increased from 23.37% to 35.52%, covering an additional 502.96 km². In summary, these findings underscore the substantial seasonal variations in fPV and fNPV, particularly in low-altitude regions along the valley, highlighting the dynamic nature of vegetation in dry–hot environments. Full article
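The ternary model in the abstract amounts to a 3x3 linear system per pixel: the observed (NDVI, DFI) pair is modeled as a convex combination of the three endmembers' values, closed by a sum-to-one constraint. A numpy sketch with purely illustrative endmember values (the study derives its endmembers from the NDVI-DFI feature space shown in Figure 6):

```python
import numpy as np

def unmix_ndvi_dfi(ndvi, dfi, em_ndvi, em_dfi):
    """Solve f = (fPV, fNPV, fBS) per pixel from NDVI and DFI.

    Rows of the 3x3 system: NDVI mixing, DFI mixing, sum-to-one.
    em_ndvi / em_dfi list the endmember values for (PV, NPV, BS).
    """
    M = np.array([em_ndvi, em_dfi, [1.0, 1.0, 1.0]])
    b = np.stack([np.ravel(ndvi), np.ravel(dfi),
                  np.ones(np.size(ndvi))])
    f = np.linalg.solve(M, b)                   # exact 3x3 solution per pixel
    f = np.clip(f, 0.0, 1.0)                    # enforce physical range
    f /= f.sum(axis=0, keepdims=True) + 1e-12   # renormalize to one
    return f.reshape(3, *np.shape(ndvi))

# illustrative endmember values, not the paper's
fractions = unmix_ndvi_dfi(np.array([0.5]), np.array([0.10]),
                           em_ndvi=[0.85, 0.15, 0.08],
                           em_dfi=[0.05, 0.25, 0.02])
```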
Figure 1: Maps of the study area showing (a,b) the geographic location overview, (c) imagery, and (d) DEM and imagery of the study area.
Figure 2: Rainfall variations in the study area. (a) The historical monthly accumulated rainfall and the monthly accumulated rainfall in 2023; (b) annual rainfall accumulation from 2004 to 2023.
Figure 3: Maps showing (a) NDVI derived from GF-2 imagery, (b) the distribution of validation samples, and (c) field investigation images of typical examples in the study area.
Figure 4: The main technical workflow of the study.
Figure 5: Spatial diagram of the ternary linear mixed model.
Figure 6: Feature space diagrams of the NDVI-DFI ternary linear mixed model in different months. The red, green, and blue circles show the locations of the NPV, PV, and BS endmembers in each projection, respectively.
Figure 7: A ternary diagram based on the abundance proportions of the three endmembers.
Figure 8: RGB color composite images of fPV, fNPV, and fBS based on different endmember selection methods. Blue represents the bare soil fraction fBS, green represents the photosynthetic vegetation fraction fPV, and red represents the non-photosynthetic vegetation fraction fNPV. White areas indicate invalid values, such as masked regions for cities, water bodies, and other excluded areas.
Figure 9: Mean–standard deviation histograms for different endmember selection methods across seasons. (a) Results based on the Minimum-Volume Enclosing endmember (MVE) method; (b) results based on the Pixel Purity Index (PPI) method.
Figure 10: Comparison between the estimation results of the NDVI-DFI ternary linear mixed model and the high-resolution GF-2 satellite imagery for typical scenes. Blue represents the bare soil fraction fBS, green the photosynthetic vegetation fraction fPV, and red the non-photosynthetic vegetation fraction fNPV; orange and purple represent areas with a high abundance of fBS and fNPV, respectively.
Figure 11: Comparison between the NDVI-DFI ternary linear mixed model estimation results and the GF-2 high-resolution image calculation results. The color gradient represents the density of validation samples, ranging from blue (low density) to red (high density). Deeper red indicates higher sample density, while deeper blue indicates sparser sample distribution.
Figure 12: Spatial distribution of fPV and fNPV estimated from Sentinel-2A images of Xinping in 2023. (a) Seasonal spatial distribution of fPV; (b) seasonal spatial distribution of fNPV.