
Search Results (81)

Search Parameters:
Keywords = STARFM

20 pages, 32621 KiB  
Article
A Novel Rapeseed Mapping Framework Integrating Image Fusion, Automated Sample Generation, and Deep Learning in Southwest China
by Ruolan Jiang, Xingyin Duan, Song Liao, Ziyi Tang and Hao Li
Land 2025, 14(1), 200; https://doi.org/10.3390/land14010200 - 19 Jan 2025
Viewed by 598
Abstract
Rapeseed mapping is crucial for refined agricultural management and food security. However, existing remote sensing-based methods for rapeseed mapping in Southwest China are severely limited by insufficient training samples and persistent cloud cover. To address the above challenges, this study presents an automatic rapeseed mapping framework that integrates multi-source remote sensing data fusion, automated sample generation, and deep learning models. The framework was applied in Santai County, Sichuan Province, Southwest China, which has typical topographical and climatic characteristics. First, MODIS and Landsat data were used to fill the gaps in Sentinel-2 imagery, creating time-series images through the object-level processing version of the spatial and temporal adaptive reflectance fusion model (OL-STARFM). In addition, a novel spectral phenology approach was developed to automatically generate training samples, which were then input into the improved TS-ConvNeXt ECAPA-TDNN (NeXt-TDNN) deep learning model for accurate rapeseed mapping. The results demonstrated that the OL-STARFM approach was effective in rapeseed mapping. The proposed automated sample generation method proved effective in producing reliable rapeseed samples, achieving a low Dynamic Time Warping (DTW) distance (<0.81) when compared to field samples. The NeXt-TDNN model showed an overall accuracy (OA) of 90.12% and a mean Intersection over Union (mIoU) of 81.96% in Santai County, outperforming other models such as random forest, XGBoost, and UNet-LSTM. These results highlight the effectiveness of the proposed automatic rapeseed mapping framework in accurately identifying rapeseed. This framework offers a valuable reference for monitoring other crops in similar environments.
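The sample-validation step above relies on the Dynamic Time Warping (DTW) distance between automatically generated and field-measured time series. As an illustrative sketch only (not the authors' implementation), the classic DTW recurrence with an absolute-difference local cost can be written as:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series
    (e.g. NDVI time series), using absolute difference as local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical seasonal profiles give zero distance; a small phase shift
# keeps the distance small, which is the property exploited above.
t = np.linspace(0, 1, 12)
rapeseed = np.sin(np.pi * t)            # idealized seasonal NDVI curve
shifted = np.sin(np.pi * (t - 0.05))    # slightly delayed phenology
print(dtw_distance(rapeseed, rapeseed))  # 0.0
```

A low DTW distance between a candidate sample's curve and a reference field curve indicates matching phenology even when the two series are slightly misaligned in time.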
Figure 1
<p>(<b>a</b>) The location of the study area in China; (<b>b</b>) the spatial distribution of the study area.</p>
Figure 2
<p>Phenological calendar of three typical crops in Santai County. “E”, “M”, and “L” represent the early, middle, and late periods of the month, respectively.</p>
Figure 3
<p>Framework of the proposed three-stage rapeseed mapping.</p>
Figure 4
<p>(<b>a</b>) The number of valid observations based on the monthly synthesis of Sentinel-2 during the rapeseed growth period in Santai; (<b>b</b>) the framework of image fusion.</p>
Figure 5
<p>The temporal features of rapeseed and other crops. The dashed line shows the gap between rapeseed and other types of features.</p>
Figure 6
<p>Framework of NeXt-TDNN.</p>
Figure 7
<p>(<b>a</b>) The potential samples generated by the Rapeseed Sample<sub>pheno</sub>. (<b>b</b>–<b>e</b>) The presented Sentinel-2 images are true-color (R: red; G: green; B: blue) images acquired in March. Blue and yellow represent potential sample objects proposed according to the rule, and the yellow and blue dots represent positive and negative sample points randomly selected from the potential objects.</p>
Figure 8
<p>Comparisons of the statistical distribution patterns of the DTW distance between all samples obtained from the rapeseed rule and field surveys in the study area. Rapeseed represents the DTW distance between rapeseed samples, while Non-Rapeseed represents the DTW distance between non-rapeseed samples.</p>
Figure 9
<p>Comparison of time-series curves between automatically generated rapeseed samples and field-collected rapeseed samples.</p>
Figure 10
<p>Rapeseed distribution map of Santai County (<b>a</b>), with detailed map of distribution in different areas (<b>b</b>–<b>d</b>), 2024.</p>
Figure 11
<p>The recognition results of the four classifiers.</p>
Figure 12
<p>A comparison of the recognition results of different classifiers. The specific positions of (<b>a</b>–<b>d</b>) are shown in <a href="#land-14-00200-f011" class="html-fig">Figure 11</a>. The blue circles mark better recognition results, and the red circles indicate erroneous or missing recognition.</p>
22 pages, 10004 KiB  
Article
High-Resolution Dynamic Monitoring of Rocky Desertification of Agricultural Land Based on Spatio-Temporal Fusion
by Xin Zhao, Zhongfa Zhou, Guijie Wu, Yangyang Long, Jiancheng Luo, Xingxin Huang, Jing Chen and Tianjun Wu
Land 2024, 13(12), 2173; https://doi.org/10.3390/land13122173 - 13 Dec 2024
Viewed by 460
Abstract
The current research on rocky desertification primarily prioritizes large-scale surveillance, with minimal attention given to internal agricultural areas. This study offers a comprehensive framework for bedrock extraction in agricultural areas, employing spatial constraints and spatio-temporal fusion methodologies. Utilizing the high resolution and capabilities of Gaofen-2 imagery, we first delineate agricultural land, use these boundaries as spatial constraints to compute the Agricultural Land Bedrock Response Index (ABRI), and apply the spatial and temporal adaptive reflectance fusion model (STARFM) to achieve spatio-temporal fusion of Gaofen-2 imagery and Sentinel-2 imagery from multiple time periods, resulting in a high-spatio-temporal-resolution bedrock discrimination index (ABRI*) for analysis. This work demonstrates the pronounced rocky desertification phenomenon in the agricultural land in the study area. The ABRI* effectively captures this phenomenon, with the classification accuracy for the bedrock, based on the ABRI* derived from Gaofen-2 imagery, reaching 0.86. The bedrock exposure area in the farmland showed a decreasing trend from 2019 to 2021, a significant increase from 2021 to 2022, and a gradual decline from 2022 to 2024. Cultivation activities have a significant impact on rocky desertification within agricultural land. The ABRI significantly enhances the capabilities for the dynamic monitoring of rocky desertification in agricultural areas, providing data support for the management of specialized farmland. For vulnerable areas, timely adjustments to planting schemes and the prioritization of intervention measures such as soil conservation, vegetation restoration, and water resource management could help to improve the resilience and stability of agriculture, particularly in karst regions.
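STARFM itself weights spectrally similar neighboring pixels within a moving window; stripped to its temporal core, the prediction adds the coarse-sensor change between the base and target dates to the fine-resolution base image. A deliberately simplified, hypothetical sketch of that core idea (not the full STARFM algorithm or the paper's implementation) looks like this:

```python
import numpy as np

def naive_temporal_fusion(fine_t0, coarse_t0, coarse_t1):
    """Core idea behind STARFM-style fusion, without the neighborhood
    weighting: predict the fine image at t1 by adding the temporal
    change observed at coarse resolution to the fine image at t0.
    All inputs are arrays on the same grid (the coarse bands are
    assumed to be resampled to the fine grid beforehand)."""
    return fine_t0 + (coarse_t1 - coarse_t0)

# Toy 2x2 scene: the coarse sensor sees a uniform +0.1 reflectance
# change between the two dates, while fine spatial detail is preserved.
fine_t0 = np.array([[0.2, 0.3], [0.4, 0.5]])
coarse_t0 = np.full((2, 2), 0.35)
coarse_t1 = np.full((2, 2), 0.45)
pred = naive_temporal_fusion(fine_t0, coarse_t0, coarse_t1)
print(pred)  # fine_t0 shifted by +0.1 everywhere
```

Real STARFM additionally screens candidate pixels by spectral and temporal similarity and weights them by distance, which suppresses artifacts at land-cover boundaries that this naive version would produce.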
Figure 1
<p>Mapping of the study area. (Areas (<b>A</b>) and (<b>B</b>) show unmanned aerial vehicle (UAV) images. Sweet potatoes are mainly planted near rocks in Area (<b>A</b>), while corn is mainly planted near rocks in Area (<b>B</b>)).</p>
Figure 2
<p>Cloud cover distribution of Sentinel-2 data in the study area (2019–2024).</p>
Figure 3
<p>Technical workflow diagram (subgraphs (<b>A</b>–<b>D</b>) represent the four steps: data collection, data preprocessing, cropland selection, and index construction).</p>
Figure 4
<p>Cultivated area selection process (where a, b, c, d represent the specific steps of cultivated area selection described in the text).</p>
Figure 5
<p>Sample selection examples for rocky desertification and non-rocky desertification.</p>
Figure 6
<p>Spectral reflectance curves of vegetation, bare soil, and rock types from S2 and GF2 data.</p>
Figure 7
<p>Analysis of extraction results for cropland areas (MC denotes mean center of distribution; DDB denotes distribution’s standard deviation ellipse).</p>
Figure 8
<p>Performance of Agricultural Land Bedrock Response Index ((<b>A</b>–<b>F</b>) represent different sub-sampling areas).</p>
Figure 9
<p>Spatio-temporal fusion accuracy validation results. (Subplot 1 shows the extraction result of <math display="inline"><semantics> <msubsup> <mi>ABRI</mi> <mrow> <mi mathvariant="normal">S</mi> <mn>2</mn> </mrow> <mo>*</mo> </msubsup> </semantics></math> and the distribution of 500 validation points. Subplot 2 presents the quadratic function fitting correlation analysis between <math display="inline"><semantics> <msubsup> <mi>ABRI</mi> <mrow> <mi mathvariant="normal">S</mi> <mn>2</mn> </mrow> <mo>*</mo> </msubsup> </semantics></math> and <math display="inline"><semantics> <msub> <mi>ABRI</mi> <mrow> <mi>GF</mi> <mn>2</mn> </mrow> </msub> </semantics></math>. Subplot 3 displays the histogram distribution of the results from 500 sample points for <math display="inline"><semantics> <msubsup> <mi>ABRI</mi> <mrow> <mi mathvariant="normal">S</mi> <mn>2</mn> </mrow> <mo>*</mo> </msubsup> </semantics></math> and <math display="inline"><semantics> <msub> <mi>ABRI</mi> <mrow> <mi>GF</mi> <mn>2</mn> </mrow> </msub> </semantics></math>. Subplot 4 presents further analysis results of the 500 sample points. Subplot 5 shows the results of accuracy calculations for the 500 sample points. ABRI represents the Cropland Bedrock Response Index, which is normalized to the 0–1 range. <math display="inline"><semantics> <msubsup> <mi>ABRI</mi> <mrow> <mi mathvariant="normal">S</mi> <mn>2</mn> </mrow> <mo>*</mo> </msubsup> </semantics></math> and <math display="inline"><semantics> <msub> <mi>ABRI</mi> <mrow> <mi>GF</mi> <mn>2</mn> </mrow> </msub> </semantics></math> represent the fitted result of S2’s ABRI through the STARFM model and the ABRI result from GF2, respectively. <math display="inline"><semantics> <msub> <mi mathvariant="normal">r</mi> <mi>pearson</mi> </msub> </semantics></math> refers to the Pearson correlation coefficient. RMSE represents the root mean square error. MAE represents the mean absolute error. Bias represents the mean bias. d represents the concordance index.)</p>
Figure 10
<p>Comparison of mean and variance in <math display="inline"><semantics> <msup> <mi>ABRI</mi> <mo>*</mo> </msup> </semantics></math> calculation results for multiple periods in the study area.</p>
Figure 11
<p><math display="inline"><semantics> <msubsup> <mi>ABRI</mi> <mrow> <mi mathvariant="normal">S</mi> <mn>2</mn> </mrow> <mo>*</mo> </msubsup> </semantics></math> changes in rocky desertification areas (regions F).</p>
Figure 12
<p>Distribution of rocky desertification in the study area (where _P represents the peak growing period and _D represents the non-peak growing period).</p>
Figure 13
<p>Comparative analysis of accuracy between traditional rocky exposure indices and ABRI.</p>
Figure 14
<p>Distribution of rocky desertification change trends in the study area.</p>
Figure 15
<p>Comparison of rock desertification degree results with actual ground bedrock exposure results (where A and B represent two different regions; T1 and T2 represent 12 July 2021 and 18 January 2021, respectively; <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>M</mi> <mi>A</mi> <mi>G</mi> <msub> <mi>E</mi> <mrow> <mi>U</mi> <mi>A</mi> <mi>V</mi> </mrow> </msub> </mrow> </semantics></math> represents UAV imagery; RD_KBRI, RD_NDRI, and RD_SRI2 represent rock desertification degrees derived from different rock indices; ABRI_S2 and ABRI_S2* represent the 10 m resolution ABRI calculated from S2 and the 1 m resolution ABRI derived from spatio-temporal fusion, respectively; and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>O</mi> <mi>C</mi> <msub> <mi>K</mi> <mrow> <mi>A</mi> <mi>B</mi> <mi>R</mi> <mi>I</mi> <mo>∗</mo> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>O</mi> <mi>C</mi> <msub> <mi>K</mi> <mrow> <mi>U</mi> <mi>A</mi> <mi>V</mi> </mrow> </msub> </mrow> </semantics></math> represent the rock distribution obtained from ABRI* and UAV imagery, respectively).</p>
20 pages, 10820 KiB  
Article
Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods
by Xiaochun Zhang, Hongsi Gao, Liangsheng Shi, Xiaolong Hu, Liao Zhong and Jiang Bian
Remote Sens. 2024, 16(13), 2414; https://doi.org/10.3390/rs16132414 - 1 Jul 2024
Viewed by 803
Abstract
The demand for freshwater is increasing with population growth and rapid socio-economic development. Crop evapotranspiration (ET) data with a high spatiotemporal resolution are increasingly important for refined irrigation water management in agricultural regions. We propose the unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction to produce daily ET maps with a high spatial resolution. The Landsat-ET and MODIS-ET datasets for the UWET fusion data are retrieved from Landsat and MODIS images based on the surface energy balance model. The UWET model considers the effects of crop phenology, precipitation, and land cover in the process of the ET image fusion. The UWET results are evaluated against ET values measured by eddy covariance at the Luancheng station, with an average MAE of 0.57 mm/day. The image results of UWET show fine spatial details and capture the dynamic ET changes. The seasonal ET values of winter wheat from the ET map mainly range from 350 to 660 mm in 2019–2020 and from 300 to 620 mm in 2020–2021. The average seasonal ET in 2019–2020 is 499.89 mm, and in 2020–2021, it is 459.44 mm. The performance of UWET is compared with two other fusion models: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Spatial and Temporal Reflectance Unmixing Model (STRUM). UWET performs better in the spatial details than the STARFM and is better in the temporal characteristics than the STRUM. The results indicate that UWET is suitable for generating ET products with a high spatial–temporal resolution in agricultural regions.
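The unmixing half of UWET rests on the idea that a coarse pixel's ET is approximately the area-fraction-weighted sum of per-class ET values, so the per-class values can be recovered by least squares from several coarse pixels. A toy sketch with hypothetical fractions and ET values (not the paper's code):

```python
import numpy as np

# Each row: per-class area fractions inside one coarse MODIS pixel
# (columns: wheat, maize, bare soil). Fractions are hypothetical.
F = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.5, 0.4, 0.1],
])
true_class_et = np.array([5.0, 4.0, 1.0])  # mm/day, toy values
coarse_et = F @ true_class_et              # simulated coarse-pixel ET

# Unmixing step: recover one ET value per land-cover class by solving
# the overdetermined linear system F @ x = coarse_et in least squares.
class_et, *_ = np.linalg.lstsq(F, coarse_et, rcond=None)
print(class_et)  # recovers the per-class ET values
```

In practice the fractions come from a classified high-resolution map aggregated to the coarse grid, and the recovered per-class ET values are assigned back to fine pixels to produce the downscaled map.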
Figure 1
<p>Location of the study area and ground test station.</p>
Figure 2
<p>The flowchart for the extraction of land types.</p>
Figure 3
<p>The spatial distribution of sampling points.</p>
Figure 4
<p>The NDVI curve of winter wheat and corresponding feature points from 2019 to 2020.</p>
Figure 5
<p>Land cover map. (<b>a</b>) 2019–2020; (<b>b</b>) 2020–2021.</p>
Figure 6
<p>The UWET framework.</p>
Figure 7
<p>Unmixing-based spatial downscaling of Landsat-ET and MODIS-ET coarse pixels.</p>
Figure 8
<p>The process of matching base dates and prediction dates for LS-MS ET image pairs.</p>
Figure 9
<p>The framework of the weight-based temporal prediction process.</p>
Figure 10
<p>The variation of daily UWET.</p>
Figure 11
<p>The comparison of measured ET, Landsat-ET, and UWET at the Luancheng station.</p>
Figure 12
<p>The spatial pattern comparison between UWET and Landsat-ET. (<b>a</b>) Land cover map of Field 1; (<b>b</b>–<b>g</b>) Landsat-ET and UWET maps on different dates in Field 1; (<b>h</b>) land cover map of Field 2; (<b>i</b>–<b>n</b>) Landsat-ET and UWET maps on different dates in Field 2.</p>
Figure 13
<p>Spatial distribution of wheat ET. (<b>a</b>) Accumulated ET between 2019 and 2020; (<b>b</b>) accumulated ET between 2020 and 2021.</p>
Figure 14
<p>Validation of crop ET. (<b>a</b>) MODIS-ET in 2019–2020; (<b>b</b>) MODIS-ET in 2020–2021; (<b>c</b>) UWET in 2019–2020; (<b>d</b>) UWET in 2020–2021.</p>
Figure 15
<p>The spatial characteristics comparison of three fusion models on 22 May 2020. (<b>a</b>) Land cover map; (<b>b</b>) MODIS-ET map; (<b>c</b>) Landsat-ET map; (<b>d</b>) STARFM-ET map; (<b>e</b>) STRUM-ET map; (<b>f</b>) UWET map.</p>
Figure 16
<p>Daily ET of the three models during the growing season. (<b>a</b>) 2019–2020; (<b>b</b>) 2020–2021.</p>
24 pages, 17475 KiB  
Article
Spatio-Temporal Land-Use/Cover Change Dynamics Using Spatiotemporal Data Fusion Model and Google Earth Engine in Jilin Province, China
by Zhuxin Liu, Yang Han, Ruifei Zhu, Chunmei Qu, Peng Zhang, Yaping Xu, Jiani Zhang, Lijuan Zhuang, Feiyu Wang and Fang Huang
Land 2024, 13(7), 924; https://doi.org/10.3390/land13070924 - 25 Jun 2024
Cited by 1 | Viewed by 1355
Abstract
Jilin Province, located in northeast China, has fragile ecosystems and a vulnerable environment. Large-scale, long-time-series, high-precision land-use/cover change (LU/CC) data are important for spatial planning and environmental protection in areas with high surface heterogeneity. In this paper, based on the high temporal and spatial fusion data of Landsat and MODIS and the Google Earth Engine (GEE), long-time-series LU/CC mapping and spatio-temporal analysis for the period 2000–2023 were realized using the random forest remote sensing image classification method, which integrates remote sensing indices. The prediction results using the OL-STARFM method were very close to the real images and better contained the spatial image information, allowing their application to the subsequent classification. The average overall accuracy and kappa coefficient of the random forest classification products obtained using the fused remote sensing index were 95.11% and 0.9394, respectively. During the study period, the area of cultivated land and unused land decreased as a whole. The area of grassland, forest, and water fluctuated, while building land increased to 13,442.27 km2 in 2023. In terms of land transfer, cultivated land was the most important source of transfers, and its total area share decreased from 42.98% to 38.39%. Cultivated land was mainly transferred to grassland, forest land, and building land, with transfer areas of 7682.48 km2, 8374.11 km2, and 7244.52 km2, respectively. Grassland was the largest source of land transfer into cultivated land, and the land transfer among other feature types was relatively small, at less than 3300 km2. This study provides data support for the scientific management of land resources in Jilin Province, and the resulting LU/CC dataset is of great significance for regional sustainable development.
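The land-transfer statistics reported above come from cross-tabulating classified maps of two dates. A minimal sketch of such a transition (transfer) matrix, using hypothetical class codes and a toy-sized raster:

```python
import numpy as np

def transfer_matrix(lc_from, lc_to, n_classes):
    """Cross-tabulate two classified maps: entry (i, j) counts the
    pixels that changed from class i to class j between the dates."""
    idx = lc_from.ravel() * n_classes + lc_to.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Toy 3-class maps (0 = cultivated, 1 = grassland, 2 = building land).
lc_2000 = np.array([[0, 0, 1], [0, 2, 1], [0, 0, 2]])
lc_2023 = np.array([[0, 1, 1], [2, 2, 1], [0, 1, 2]])
M = transfer_matrix(lc_2000, lc_2023, 3)
print(M)
# Row 0 shows where cultivated pixels went: 2 stayed cultivated,
# 2 moved to grassland, and 1 moved to building land.
```

Multiplying the counts by the pixel area converts the matrix into transfer areas (km2), which is how figures such as the 7682.48 km2 cultivated-to-grassland transfer are derived from the classification products.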
Figure 1
<p>Location map of Jilin Province.</p>
Figure 2
<p>Correlation of different bands in the real and fusion images.</p>
Figure 3
<p>Correlation between different remote sensing indices of the real and fusion images: (<b>a</b>) correlation between the NDVI of the real and fusion images; (<b>b</b>) correlation between the MNDWI of the real and fusion images; (<b>c</b>) correlation between the NDBI of the real and fusion images.</p>
Figure 4
<p>Classification of LU/CC in Jilin Province, 2000–2023.</p>
Figure 5
<p>Period-by-period normalized confusion matrix; (<b>a</b>) 2000 normalized confusion matrix; (<b>b</b>) 2005 normalized confusion matrix; (<b>c</b>) 2010 normalized confusion matrix; (<b>d</b>) 2015 normalized confusion matrix; (<b>e</b>) 2020 normalized confusion matrix; (<b>f</b>) 2023 normalized confusion matrix.</p>
Figure 6
<p>Changes in the area of cultivated land in Jilin Province, 2000–2023.</p>
Figure 7
<p>Changes in the area of grassland in Jilin Province, 2000–2023.</p>
Figure 8
<p>Changes in the area of forest in Jilin Province, 2000–2023.</p>
Figure 9
<p>Changes in the area of water in Jilin Province, 2000–2023.</p>
Figure 10
<p>Changes in the area of building land in Jilin Province, 2000–2023.</p>
Figure 11
<p>Changes in the area of unused land in Jilin Province, 2000–2023.</p>
Figure 12
<p>Map of land-use transfers out and in at different phases. (<b>a</b>) map of land-use transfers out; (<b>b</b>) map of land-use transfers in. The “<b>↓</b>” indicates: “transfer to”.</p>
Figure 13
<p>Land transfer chord diagrams at different phases.</p>
Figure 14
<p>Frequency map of land-use changes.</p>
Figure 15
<p>Comparison of frequency change area in different cities (autonomous prefecture) in Jilin Province. Baicheng City (BC); Baishan City (BS); Changchun City (CC); Jilin City (JL); Liaoyuan City (LY); Siping City (SP); Songyuan City (SY); Tonghua City (TH); Yanbian Korean Autonomous Prefecture (YB).</p>
17 pages, 13688 KiB  
Technical Note
Fast Fusion of Sentinel-2 and Sentinel-3 Time Series over Rangelands
by Paul Senty, Radoslaw Guzinski, Kenneth Grogan, Robert Buitenwerf, Jonas Ardö, Lars Eklundh, Alkiviadis Koukos, Torbern Tagesson and Michael Munk
Remote Sens. 2024, 16(11), 1833; https://doi.org/10.3390/rs16111833 - 21 May 2024
Cited by 2 | Viewed by 1497
Abstract
Monitoring ecosystems at regional or continental scales is paramount for biodiversity conservation, climate change mitigation, and sustainable land management. Effective monitoring requires satellite imagery with both high spatial resolution and high temporal resolution. However, there is currently no single, freely available data source that fulfills these needs. A seamless fusion of data from the Sentinel-3 and Sentinel-2 optical sensors could meet these monitoring requirements as Sentinel-2 observes at the required spatial resolution (10 m) while Sentinel-3 observes at the required temporal resolution (daily). We introduce the Efficient Fusion Algorithm across Spatio-Temporal scales (EFAST), which interpolates Sentinel-2 data into smooth time series (both spatially and temporally). This interpolation is informed by Sentinel-3’s temporal profile such that the phenological changes occurring between two Sentinel-2 acquisitions at a 10 m resolution are assumed to mirror those observed at Sentinel-3’s resolution. The EFAST consists of a weighted sum of Sentinel-2 images (weighted by a distance-to-clouds score) coupled with a phenological correction derived from Sentinel-3. We validate the capacity of our method to reconstruct the phenological profile at a 10 m resolution over one rangeland area and one irrigated cropland area. The EFAST outperforms classical interpolation techniques over both rangeland (−72% in the mean absolute error, MAE) and agricultural areas (−43% MAE); it presents a performance comparable to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) (+5% MAE in both test areas) while being 140 times faster. The computational efficiency of our approach and its temporal smoothing enable the creation of seamless and high-resolution phenology products on a regional to continental scale.
(This article belongs to the Section Ecological Remote Sensing)
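EFAST's prediction combines temporally weighted Sentinel-2 acquisitions, each shifted by the Sentinel-3 change between its date and the target date (the S2(t*) + S3(t) − S3(t*) correction of Figure 3, with Gaussian temporal weights as in Figure A2). A simplified sketch under toy assumptions (a synthetic linear greening trend, no cloud masking), not the published implementation:

```python
import numpy as np

def efast_predict(t, s2_times, s2_stack, s3_series, s=10.0, cloud_score=None):
    """Sketch of the EFAST idea: a Gaussian-in-time weighted sum of
    Sentinel-2 acquisitions, each corrected by the Sentinel-3 change
    between its date t_k and the target date t: S2(t_k) + S3(t) - S3(t_k).
    s2_stack: (k, H, W) array of Sentinel-2 images at days s2_times;
    s3_series: callable day -> (H, W) Sentinel-3 array on the fine grid;
    cloud_score: optional per-acquisition weights in [0, 1]."""
    w = np.exp(-((t - s2_times) ** 2) / (2.0 * s ** 2))
    if cloud_score is not None:
        w = w * cloud_score
    corrected = np.stack([s2_stack[k] + s3_series(t) - s3_series(tk)
                          for k, tk in enumerate(s2_times)])
    return np.tensordot(w / w.sum(), corrected, axes=1)

# Toy example: Sentinel-3 sees a linear greening trend, and the
# Sentinel-2 pixels share the trend plus a fine-scale +0.05 offset.
s3 = lambda day: np.full((2, 2), 0.3 + 0.01 * day)
times = np.array([0.0, 20.0])
s2_stack = np.stack([s3(0) + 0.05, s3(20) + 0.05])
pred = efast_predict(10.0, times, s2_stack, s3)
print(pred)  # trend value at day 10 plus the fine-scale offset
```

Because each term is corrected to the target date before averaging, the interpolation tracks the coarse-sensor phenology between Sentinel-2 acquisitions instead of simply blending the two nearest images.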
Figure 1
<p>The test areas are shown in white squares. The rangeland true-color image and NDVI data (<b>top</b>) were acquired by Sentinel-2 on 6 October 2021, and the cropland images (<b>bottom</b>) date from 13 December 2022. The cropland area is surrounded by grasslands along Lake Guiers. The rangeland area is situated a few kilometers northeast of the town of Dahra. The Dahra field site (represented by a small white dot on the top-right corner of the rangeland area) includes a hemispherical NDVI sensor that we use to evaluate the reliability of our method.</p>
Figure 2
<p>Example showing Sentinel-2 and Sentinel-3 NDVI data around the wet seasons (months 7–12) of 2019, 2020, and 2021 in Senegal. Atmospheric effects lead to underestimations of Sentinel-2 and Sentinel-3 data which are especially apparent around September in 2020 and 2021. The timing of vegetation growth varies from July to August. Higher cloud cover during the wet season leads to fewer acquisitions, with an especially long time without Sentinel-2 data in 2020.</p>
Figure 3
<p>The fusion principle—the EFAST creates synthetic high-resolution data through a simple transformation of Sentinel-2 and Sentinel-3 images as follows: <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> <mfenced separators="|"> <mrow> <msup> <mrow> <mi>t</mi> </mrow> <mrow> <mi>*</mi> </mrow> </msup> </mrow> </mfenced> <mo>+</mo> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> <mfenced separators="|"> <mrow> <mi>t</mi> </mrow> </mfenced> <mo>−</mo> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> <mo>(</mo> <msup> <mrow> <mi>t</mi> </mrow> <mrow> <mi>*</mi> </mrow> </msup> <mo>)</mo> </mrow> </semantics></math>.</p>
Figure 4
<p>Validation was performed on all cloud-free Sentinel-2 images acquired during the wet season (black points); the rest of the Sentinel-2 cloud-free acquisitions (gray points) and all the Sentinel-3 observations were used for interpolation.</p>
Figure 5
<p>STARFM, EFAST, and Whittaker filter compared to in situ data at Dahra field site.</p>
Figure 6
<p>Example of a Sentinel-2 image used for validation, dating from 17 September 2019 (<b>a</b>), and the corresponding prediction by the EFAST (<b>b</b>). The absolute difference between these two images is one of the 12 terms (one for each validation image) of the mean absolute error map (<a href="#remotesensing-16-01833-f007" class="html-fig">Figure 7</a>c).</p>
Figure 7
<p>Mean absolute error maps of the reconstructed NDVI profiles using the Whittaker filter (<b>a</b>), STARFM (<b>b</b>), and EFAST (<b>c</b>). The depicted dots represent specific points for which the corresponding time series are illustrated in <a href="#remotesensing-16-01833-f008" class="html-fig">Figure 8</a>.</p>
Figure 8
<p>The predicted time series of the three interpolation methods (the Whittaker filter, STARFM, and EFAST) for the three points of the rangeland area (<a href="#remotesensing-16-01833-f007" class="html-fig">Figure 7</a>). Black points represent Sentinel-2 validation points, and gray points represent the Sentinel-2 data used for interpolation. Orange areas correspond to the time frames in which the reconstruction of the NDVI profile is assessed.</p>
Figure 9
<p>The mean absolute error using the Whittaker filter (<b>a</b>), STARFM (<b>b</b>) and EFAST (<b>c</b>). The dots correspond to the points for which time series are displayed in <a href="#remotesensing-16-01833-f010" class="html-fig">Figure 10</a>.</p>
Figure 10
<p>The predicted time series of the three interpolation methods (the Whittaker filter, STARFM, and EFAST) for three points in the cropland area (<a href="#remotesensing-16-01833-f009" class="html-fig">Figure 9</a>). Black points represent Sentinel-2 validation points, and gray points represent the Sentinel-2 data used for interpolation. Orange areas correspond to time frames in which the reconstruction of the NDVI profile is assessed.</p>
Figure 11
<p>Pearson correlation coefficient between Sentinel-2 and Sentinel-3 time series. Small-scale features stand out as having a low correlation because of Sentinel-3’s limited spatial resolution. Conversely, large crop parcels and homogeneous grasslands present a high correlation. The white box corresponds to the area in <a href="#remotesensing-16-01833-f009" class="html-fig">Figure 9</a>.</p>
Figure A1
<p>Maximum time, in days, without Sentinel-2 data over the African continent between August 2021 and January 2023. Extracted using Google Earth Engine. The brighter stripes correspond to areas of overlap between two orbits.</p>
Figure A2
<p>Smoothing parameters s and D. (<b>a</b>) Distance to closest masked cloud in km for the parameter. Distance-to-clouds score is equal to 0.5 two kilometers from the cloud mask and to 1 from D = 4 km. (<b>b</b>) Impact of temporal smoothing parameter s (in days) on temporal weights <math display="inline"><semantics> <mrow> <mi>e</mi> <mi>x</mi> <mi>p</mi> <mfenced open="[" close="]" separators="|"> <mrow> <mo>−</mo> <mfrac> <mrow> <msup> <mrow> <mfenced separators="|"> <mrow> <mi>t</mi> <mo>−</mo> <msub> <mrow> <mi>t</mi> </mrow> <mrow> <mo>∗</mo> </mrow> </msub> </mrow> </mfenced> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> <mrow> <mn>2</mn> <msup> <mrow> <mi>s</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msup> </mrow> </mfrac> </mrow> </mfenced> </mrow> </semantics></math>, displayed as bars, when there is one cloud-free Sentinel-2 acquisition every five days. Lines represent Gaussian distributions for s = 10 and 30 days.</p>
24 pages, 33650 KiB  
Article
Abandoned Farmland Extraction and Feature Analysis Based on Multi-Sensor Fused Normalized Difference Vegetation Index Time Series—A Case Study in Western Mianchi County
by Jiqiu Deng, Yiwei Guo, Xiaoyan Chen, Liang Liu and Wenyi Liu
Appl. Sci. 2024, 14(5), 2102; https://doi.org/10.3390/app14052102 - 2 Mar 2024
Cited by 4 | Viewed by 1775
Abstract
Farmland abandonment monitoring is one of the key aspects of land use and land cover research, as well as being an important prerequisite for ecological environmental protection and food security. A Normalized Difference Vegetation Index (NDVI) time series analysis is a common method [...] Read more.
Farmland abandonment monitoring is one of the key aspects of land use and land cover research, as well as being an important prerequisite for ecological environmental protection and food security. A Normalized Difference Vegetation Index (NDVI) time series analysis is a common method used for farmland abandonment data extraction; however, extracting this information using high-resolution data is still difficult due to the limitations caused by cloud influence and data of low temporal resolution. To address this problem, this study used STARFM for GF-6 and Landsat 8 data fusion to enhance the continuity of high-resolution and cloudless images. A dataset was constructed by combining the phenological cycle of crops in the study area and then extracting abandoned farmland data based on an NDVI time series analysis. The overall accuracy of the results based on the NDVI time series analysis using the STARFM-fused dataset was 93.42%, which was 15.5% higher than the accuracy of the results obtained using only GF-6 data and 28.52% higher than those obtained using only Landsat data. Improvements in accuracy were also achieved when using SVM for time series analysis based on the fused dataset, indicating that the method used in this study can effectively improve the accuracy of the results. Then, we analyzed the spatial distribution pattern of abandoned farmland based on the extraction results and concluded that the abandonment rate increased with the increase in the road network density and decreased with the increase in the distance to residential areas. This study can provide decision-making guidance and scientific and technological support for the monitoring of farmland abandonment and can facilitate the analysis of abandonment mechanisms in the study area, which is conducive to the sustainable development of farmland. Full article
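As a toy illustration of the NDVI-threshold logic behind time-series abandonment mapping, the sketch below flags a farmland pixel whose annual NDVI maximum never reaches a crop-like level. The threshold and series values are illustrative assumptions, not the rule derived in the paper.

```python
import numpy as np

# Hypothetical threshold -- the paper derives its rule from local crop phenology.
NDVI_CROP_THRESHOLD = 0.4

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)

def flag_abandoned(ndvi_series_by_year, threshold=NDVI_CROP_THRESHOLD):
    """Flag a farmland pixel as abandoned when its annual NDVI maximum
    stays below `threshold` in every year of the series (a simplified rule)."""
    annual_max = np.max(ndvi_series_by_year, axis=1)   # (years, dates) -> (years,)
    return bool(np.all(annual_max < threshold))

# Two years of 6-date NDVI for one pixel: weak vegetation signal in both years
series = [[0.15, 0.22, 0.30, 0.28, 0.20, 0.12],
          [0.14, 0.18, 0.25, 0.27, 0.19, 0.11]]
print(flag_abandoned(series))  # True under the illustrative threshold
```

A pixel with a normal crop cycle (annual NDVI peak above the threshold in any year) would not be flagged.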
(This article belongs to the Section Earth Sciences)
Show Figures

Figure 1
<p>Location of the study area.</p>
Full article ">Figure 2
<p>Phenological cycle of main crops in the study area.</p>
Full article ">Figure 3
<p>Steps for manual vectorization. (<b>a</b>) Farmland area in GlobeLand30 raster, (<b>b</b>) farmland area vector converted by GlobeLand30 raster, and (<b>c</b>) manually adjusted farmland area vector.</p>
Full article ">Figure 4
<p>Distribution of training samples for 2021 and 2022.</p>
Full article ">Figure 5
<p>Workflow of abandoned farmland data extraction used in this study.</p>
Full article ">Figure 6
<p>Landsat 8 NDVI and STARFM-fused NDVI for corresponding dates.</p>
Full article ">Figure 7
<p>Correlation graphs between Landsat 8 NDVI and STARFM-fused NDVI: (<b>a</b>) correlation graph of data 30 April 2021, (<b>b</b>) correlation graph of data 21 September 2021, and (<b>c</b>) correlation graph of data 10 October 2022.</p>
Full article ">Figure 8
<p>Annual NDVI difference for 2021 and 2022. (<b>a</b>) NDVI difference for 2021 and (<b>b</b>) NDVI difference for 2022.</p>
Full article ">Figure 9
<p>Annual abandoned samples’ proportional change curves: (<b>a</b>) proportional change curve for 2021 and (<b>b</b>) proportional change curve for 2022.</p>
Full article ">Figure 10
<p>Abandoned farmland data extraction results: (<b>a</b>) abandoned farmland data extraction results for 2021, (<b>b</b>) abandoned farmland data extraction results for 2022, and (<b>c</b>) permanently abandoned farmland data extraction results for 2021–2022.</p>
Full article ">Figure 11
<p>Distribution of validation samples.</p>
Full article ">Figure 12
<p>Abandoned farmland data extraction results based on different datasets and methods. (<b>a</b>) Extraction results based on NDVI time series analysis using GF-6 data, (<b>b</b>) extraction results based on NDVI time series analysis using STARFM-fused data, (<b>c</b>) extraction results based on NDVI time series analysis using Landsat 8 data, and (<b>d</b>) extraction results based on SVM time series analysis using STARFM-fused data.</p>
Full article ">Figure 13
<p>Kernel density map and high-density sample areas. (<b>a</b>) Sample area a in the high-density region, (<b>b</b>) sample area b in the high-density region, (<b>c</b>) sample area c in the high-density region.</p>
Full article ">Figure 14
<p>Proportion of farmland to all farmland and abandonment rates at different ranges of different features. (<b>a</b>) Slope, (<b>b</b>) elevation, (<b>c</b>) road density, and (<b>d</b>) distance to residential area.</p>
Full article ">
28 pages, 21321 KiB  
Article
The Improved U-STFM: A Deep Learning-Based Nonlinear Spatial-Temporal Fusion Model for Land Surface Temperature Downscaling
by Shanxin Guo, Min Li, Yuanqing Li, Jinsong Chen, Hankui K. Zhang, Luyi Sun, Jingwen Wang, Ruxin Wang and Yan Yang
Remote Sens. 2024, 16(2), 322; https://doi.org/10.3390/rs16020322 - 12 Jan 2024
Cited by 4 | Viewed by 1914
Abstract
The thermal band of a satellite platform enables the measurement of land surface temperature (LST), which captures the spatial-temporal distribution of energy exchange between the Earth and the atmosphere. LST plays a critical role in simulation models, enhancing our understanding of physical and biochemical processes in nature. However, the limitations in swath width and orbit altitude prevent a single sensor from providing LST data with both high spatial and high temporal resolution. To tackle this challenge, the unmixing-based spatiotemporal fusion model (STFM) offers a promising solution by integrating data from multiple sensors. In these models, the surface reflectance is decomposed from coarse pixels to fine pixels using the linear unmixing function combined with fractional coverage. However, when downscaling LST through STFM, the linear mixing hypothesis fails to adequately represent the nonlinear energy mixing process of LST. Additionally, the original weighting function is sensitive to noise, leading to unreliable predictions of the final LST due to small errors in the unmixing function. To overcome these issues, we selected the U-STFM as the baseline model and introduced an updated version called the nonlinear U-STFM. This new model incorporates two deep learning components: the Dynamic Net (DyNet) and the Change Ratio Net (RatioNet). The utilization of these components enables easy training with a small dataset while maintaining a high generalization capability over time. The MODIS Terra daytime LST products were downscaled from 1000 m to 30 m and compared with the Landsat7 LST products. Our results demonstrate that the new model surpasses STARFM, ESTARFM, and the original U-STFM in terms of prediction accuracy and anti-noise capability. To further enhance other STFMs, these two deep-learning components can replace the linear unmixing and weighting functions with minor modifications. As a deep learning-based model, it can be pretrained and deployed for online prediction. Full article
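The linear unmixing step that the paper's deep-learning components replace can be written as an ordinary least-squares problem: each coarse pixel's LST change is a coverage-fraction-weighted sum of the changes of the homogeneous change regions (HCRs) it covers. A small synthetic sketch, with made-up fractions and HCR changes:

```python
import numpy as np

# Each coarse pixel's LST change is a fraction-weighted sum of HCR-level changes.
rng = np.random.default_rng(0)
n_coarse, n_hcr = 50, 5

F = rng.random((n_coarse, n_hcr))
F /= F.sum(axis=1, keepdims=True)          # coverage fractions sum to 1 per coarse pixel

true_hcr_change = np.array([1.5, -0.8, 0.3, 2.1, -1.2])   # per-HCR LST change (K)
coarse_change = F @ true_hcr_change                        # observed coarse-pixel change

# Linear unmixing: recover HCR-level change by least squares
est_hcr_change, *_ = np.linalg.lstsq(F, coarse_change, rcond=None)
print(np.allclose(est_hcr_change, true_hcr_change))        # True in this noise-free sketch
```

In the noise-free case the recovery is exact; the paper's point is that small errors in F or in the coarse observations destabilize this inversion, which motivates replacing it with a learned component.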
(This article belongs to the Special Issue Remote Sensing for Land Surface Temperature and Related Applications)
Show Figures

Figure 1
<p>The study area in Shenzhen and Dongguan within the GBA, China.</p>
Full article ">Figure 2
<p>The problem with the unmixing function. The red region represents the HCRs, and the black square represents the MODIS pixels. The green region in the figure on the left demonstrates the case in which HCRs span multiple MODIS pixels, and the green region in the figure on the right demonstrates the case in which an HCR is covered by only one MODIS pixel, which makes the coverage fraction matrix sparser.</p>
Full article ">Figure 3
<p>The problem with the original weighting function. The red region represents the more sensitive region of the error in <math display="inline"><semantics> <mrow> <mi>α</mi> </mrow> </semantics></math>; the blue region represents the less sensitive region.</p>
Full article ">Figure 4
<p>The basic idea of the nonlinear U-STFM.</p>
Full article ">Figure 5
<p>Overall workflow of this study.</p>
Full article ">Figure 6
<p>The workflow for training the unmixing model with DyNet.</p>
Full article ">Figure 7
<p>DyNet training process.</p>
Full article ">Figure 8
<p>Data transformation for training the RatioNet.</p>
Full article ">Figure 9
<p>The training process of the RatioNet.</p>
Full article ">Figure 10
<p>Nonlinear U-STFM prediction workflow.</p>
Full article ">Figure 11
<p>The loss value during the training process.</p>
Full article ">Figure 12
<p>The change ratio prediction for each HCR by DyNet: the red cross mark represents the ground truth, and the median value of the multiple predictions by the different batches was used as the final prediction of the change ratio of each HCR.</p>
Full article ">Figure 13
<p>1:1 plot for predicting LST on 1 November 2000 with three different date pairs (<b>upper</b>) and the final combined prediction (the median value at the pixel level).</p>
Full article ">Figure 14
<p>The final prediction (1 November 2000) based on combining multiple date triplets. (<b>a</b>) the original MODIS LST on 1 November 2000; (<b>b</b>) the prediction of our model; (<b>c</b>) the Landsat LST; (<b>d</b>) the 1:1 plot between our model prediction and the Landsat LST; (<b>e</b>) the RMSE map between our model prediction and the Landsat LST. (1)–(3) are subareas shown in <a href="#remotesensing-16-00322-f015" class="html-fig">Figure 15</a>.</p>
Full article ">Figure 15
<p>Subarea of <a href="#remotesensing-16-00322-f014" class="html-fig">Figure 14</a>.</p>
Full article ">Figure 16
<p>Prediction for 17 September 2001. (<b>a</b>) the original MODIS LST; (<b>b</b>) the prediction of our model; (<b>c</b>) the Landsat LST; (<b>d</b>) the 1:1 plot between our model prediction and the Landsat LST; (<b>e</b>) the RMSE map between our model prediction and the Landsat LST.</p>
Full article ">Figure 17
<p>Comparison of the prediction for 17 September 2001, with or without data for 1 November 2000. Partial cloud coverage marked by the red circle.</p>
Full article ">Figure 18
<p>The 1:1 plot for multiple date predictions.</p>
Full article ">Figure 19
<p>Comparison with U-STFM on 1 November 2000 with multiple HCR setups; (<b>a</b>) the results under 45 HCRs group; (<b>b</b>) the result under 145 HCRs group; (<b>c</b>) the result under 245 HCRs group; (<b>d</b>) the RMSE boxplot under 45, 145 and 245 HCRs group.</p>
Full article ">Figure 20
<p>Comparison with U-STFM on 17 September 2001, with multiple HCR setups. (<b>a</b>) the results under 45 HCRs group; (<b>b</b>) the result under 145 HCRs group; (<b>c</b>) the result under 245 HCRs group; (<b>d</b>) the RMSE boxplot under 45, 145 and 245 HCRs group.</p>
Full article ">Figure 21
<p>Prediction with the different SNRs for 1 November 2000.</p>
Full article ">Figure 22
<p>Boxplot of prediction with the different SNRs.</p>
Full article ">Figure 23
<p>Comparison with the prediction RMSE with STARFM, ESTARFM, and U-STFM.</p>
Full article ">Figure 24
<p>The truncation error between the change ratio at HCR level and the pixel level.</p>
Full article ">Figure 25
<p>The theoretical graph of the weighting function.</p>
Full article ">
19 pages, 7992 KiB  
Article
Improving the STARFM Fusion Method for Downscaling the SSEBOP Evapotranspiration Product from 1 km to 30 m in an Arid Area in China
by Jingjing Sun, Wen Wang, Xiaogang Wang and Luca Brocca
Remote Sens. 2023, 15(22), 5411; https://doi.org/10.3390/rs15225411 - 18 Nov 2023
Viewed by 2032
Abstract
Continuous evapotranspiration (ET) data with high spatial resolution are crucial for water resources management in irrigated agricultural areas in arid regions. Many global ET products are available now but with a coarse spatial resolution. Spatial-temporal fusion methods, such as the spatial and temporal adaptive reflectance fusion model (STARFM), can help to downscale coarse spatial resolution ET products. In this paper, the STARFM model is improved by incorporating the temperature vegetation dryness index (TVDI) into the data fusion process, and we propose a spatial and temporal adaptive evapotranspiration downscaling method (STAEDM). The modified method STAEDM was applied to the 1 km SSEBOP ET product to derive a downscaled 30 m ET for irrigated agricultural fields of Northwest China. The STAEDM exhibits a significant improvement compared to the original STARFM method for downscaling SSEBOP ET on Landsat-unavailable dates, with an increase in the squared correlation coefficients (r2) from 0.68 to 0.77 and a decrease in the root mean square error (RMSE) from 10.28 mm/10 d to 8.48 mm/10 d. The ET based on the STAEDM additionally preserves more spatial details than STARFM for heterogeneous agricultural fields and can better capture the ET seasonal dynamics. The STAEDM ET can better capture the temporal variation of 10-day ET during the whole crop growing season than SSEBOP. Full article
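The TVDI that the STAEDM incorporates is defined in the Ts-NDVI feature space as TVDI = (Ts - Ts_min) / (Ts_max - Ts_min), where the dry edge Ts_max is fitted as a linear function of NDVI and Ts_min is the wet edge. A minimal sketch with invented edge coefficients (the actual edges are fitted from the image's Ts-NDVI scatter):

```python
import numpy as np

def tvdi(ts, ndvi, dry_a, dry_b, ts_wet):
    """Temperature Vegetation Dryness Index: (Ts - Ts_min) / (Ts_max - Ts_min),
    with dry edge Ts_max = dry_a + dry_b * NDVI and a constant wet edge Ts_min.
    Edge parameters here are illustrative, not fitted values from the paper."""
    ts_max = dry_a + dry_b * np.asarray(ndvi, float)
    return (np.asarray(ts, float) - ts_wet) / (ts_max - ts_wet)

ndvi = np.array([0.2, 0.5, 0.8])
ts = np.array([318.0, 310.0, 302.0])          # land surface temperature, K
vals = tvdi(ts, ndvi, dry_a=325.0, dry_b=-15.0, ts_wet=295.0)
print(vals.round(3))
```

Values near 1 indicate dry, water-stressed pixels and values near 0 indicate well-watered ones, which is the moisture signal the fusion step exploits.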
Show Figures

Figure 1
<p>Land cover of the study area and observation sites.</p>
Full article ">Figure 2
<p>Schematic overview of the inputs and processing in the STAEDM framework. (Note: <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">t</mi> <mn>0</mn> </msub> </mrow> </semantics></math> is a 10-day period including the Landsat instantaneous over-pass time, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">t</mi> <mi mathvariant="normal">k</mi> </msub> </mrow> </semantics></math> is a 10-day period excluding the Landsat instantaneous over-pass time).</p>
Full article ">Figure 3
<p>Comparison of remote sensing estimated ET data and observed ET at the five sites with difference land covers during 2013 to 2018. (<b>a</b>) SSEBOP ET and (<b>b</b>) Landsat ET.</p>
Full article ">Figure 4
<p>Spatial pattern of maps in the 3rd dekad of June 2016 from (<b>a</b>) SSEBOP ET, (<b>b</b>) Landsat ET, (<b>c</b>) downscaling ET by NDVI, (<b>d</b>) downscaling ET by NDVI and LST, and (<b>e</b>) downscaling ET by TVDI. The frequency distribution curves of the maps from (<b>f</b>) SSEBOP ET, (<b>h</b>) downscaling ET by NDVI, (<b>i</b>) downscaling ET by NDVI and LST, and (<b>j</b>) downscaling ET by TVDI are colored in blue. The frequency distribution curve of the map from (<b>g</b>) Landsat ET is colored in orange. The orange curves shown in (<b>f</b>), (<b>h</b>), (<b>i</b>), and (<b>j</b>) are the same as the one shown in (<b>g</b>). The black rectangle in (<b>b</b>) is magnified in Figure 8a.</p>
Full article ">Figure 5
<p>Spatial pattern of maps in the 2nd dekad of September 2016 from (<b>a</b>) SSEBOP ET, (<b>b</b>) Landsat ET, (<b>c</b>) downscaling ET by NDVI, (<b>d</b>) downscaling ET by NDVI and LST, and (<b>e</b>) downscaling ET by TVDI. The frequency distribution curves of the maps from (<b>f</b>) SSEBOP ET, (<b>h</b>) downscaling ET by NDVI, (<b>i</b>) downscaling ET by NDVI and LST, and (<b>j</b>) downscaling ET by TVDI are colored in blue. The frequency distribution curve of the map from (<b>g</b>) Landsat ET is colored in orange. The orange curves shown in (<b>f</b>), (<b>h</b>), (<b>i</b>), and (<b>j</b>) are the same as the one shown in (<b>g</b>). The black rectangle in (<b>b</b>) is magnified in Figure 8d.</p>
Full article ">Figure 6
<p>Spatial pattern of ET maps in the 3rd dekad of June 2016 from (<b>a</b>) resampled STARFM and (<b>b</b>) TVDI-based STARFM models. The frequency distribution curves of the ET maps from (<b>c</b>) resampled STARFM and (<b>d</b>) TVDI-based STARFM models are colored in blue. The frequency distribution curves of the ET maps from Landsat are colored in orange. The ET is predicted by the Landsat/SSEBOP ET pairs in the 1st dekad of July 2016. The black rectangle in (<b>a</b>) is magnified in Figure 8b, and the black rectangle in (<b>b</b>) is magnified in Figure 8c.</p>
Full article ">Figure 7
<p>Spatial pattern of ET maps in the 2nd dekad of September 2016 from (<b>a</b>) resampled STARFM and (<b>b</b>) TVDI-based STARFM models. The frequency distribution curves of the ET maps from (<b>c</b>) resampled STARFM and (<b>d</b>) TVDI-based STARFM models are colored in blue. The frequency distribution curves of the ET maps from Landsat are colored in orange. The ET is predicted by the Landsat/SSEBOP ET pairs in the 1st dekad of October 2016. The black rectangle in (<b>a</b>) is magnified in Figure 8e, and the black rectangle in (<b>b</b>) is magnified in Figure 8f.</p>
Full article ">Figure 8
<p>Magnified view of ET in different dates. (<b>a</b>,<b>d</b>) from Landsat ET, (<b>b</b>,<b>e</b>) from the resampled STARFM model, and (<b>c</b>,<b>f</b>) from the TVDI-based STARFM model. (<b>a</b>–<b>c</b>) in the 3rd dekad of June and (<b>d</b>–<b>f</b>) on the 2nd dekad of September 2016.</p>
Full article ">Figure 9
<p>Scatterplot of the observed ET and downscaled ET by TVDI-based STARFM (in orange) and resampled STARFM (in blue) on Landsat-unavailable dates for the wetland (<b>a</b>), corn (<b>b</b>), desert steppe (<b>c</b>), Gobi (<b>d</b>), and desert (<b>e</b>). r<sup>2</sup> values of resampled and TVDI-based STARFM ET in the plot are colored by blue and orange, respectively.</p>
Full article ">Figure 10
<p>The spatial pattern of SSEBOP ET and STAEDM ET in the 2nd dekad from March to November 2016. The STAEDM ET includes 4 TVDI-based STARFM ET images on May, June, July, and October on Landsat-unavailable dates and 5 Landsat ET images on Landsat over-pass dates.</p>
Full article ">Figure 11
<p>Time series of observed ET (black line), SSEBOP ET (blue line with dots), STAEDM ET (orange line with dots), and rainfall (grey bars) from 2013 to 2018.</p>
Full article ">
24 pages, 6832 KiB  
Article
Developing Spatial and Temporal Continuous Fractional Vegetation Cover Based on Landsat and Sentinel-2 Data with a Deep Learning Approach
by Zihao Wang, Dan-Xia Song, Tao He, Jun Lu, Caiqun Wang and Dantong Zhong
Remote Sens. 2023, 15(11), 2948; https://doi.org/10.3390/rs15112948 - 5 Jun 2023
Cited by 8 | Viewed by 2717
Abstract
Fractional vegetation cover (FVC) has a significant role in indicating changes in ecosystems and is useful for simulating growth processes and modeling land surfaces. The fine-resolution FVC products represent detailed vegetation cover information within fine grids. However, the long revisit cycle of satellites with fine-resolution sensors and cloud contamination has resulted in poor spatial and temporal continuity. In this study, we propose to derive a spatially and temporally continuous FVC dataset by comparing multiple methods, including the data-fusion method (STARFM), curve-fitting reconstruction (S-G filtering), and deep learning prediction (Bi-LSTM). By combining Landsat and Sentinel-2 data, the integrated FVC was used to construct the initial input of fine-resolution FVC with gaps. The results showed that the FVC of gaps were estimated and time-series FVC was reconstructed. The Bi-LSTM method was the most effective and achieved the highest accuracy (R2 = 0.857), followed by the data-fusion method (R2 = 0.709) and curve-fitting method (R2 = 0.705), and the optimal time step was 3. The inclusion of relevant variables in the Bi-LSTM model, including LAI, albedo, and FAPAR derived from coarse-resolution products, further reduced the RMSE from 5.022 to 2.797. By applying the optimized Bi-LSTM model to Hubei Province, a time series 30 m FVC dataset was generated, characterized by a spatial and temporal continuity. In terms of the major vegetation types in Hubei (e.g., evergreen and deciduous forests, grass, and cropland), the seasonal trends as well as the spatial details were captured by the reconstructed 30 m FVC. It was concluded that the proposed method was applicable to reconstruct the time-series FVC over a large spatial scale, and the produced fine-resolution dataset can support the data needed by many Earth system science studies. Full article
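For sequence models such as the Bi-LSTM used here, the FVC series must first be windowed into (inputs, target) samples. The sketch below builds such samples with the reported optimal time step of 3, skipping windows that contain gaps; the series values are illustrative.

```python
import numpy as np

def make_sequences(series, n_steps=3):
    """Build (inputs, target) pairs for a sequence model from an FVC time series,
    skipping windows that contain gaps (NaN). n_steps = 3 mirrors the optimal
    time step reported above; this is a generic windowing sketch, not the
    paper's preprocessing code."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        window = series[i : i + n_steps + 1]
        if np.isnan(window).any():
            continue                      # gap inside window -> unusable sample
        X.append(window[:-1])
        y.append(window[-1])
    return np.array(X), np.array(y)

fvc = np.array([0.31, 0.35, np.nan, 0.48, 0.55, 0.61, 0.60, np.nan, 0.52])
X, y = make_sequences(fvc, n_steps=3)
print(X.shape, y.shape)   # each row of X holds 3 consecutive FVC values
```

With a bidirectional model, a second pass over the reversed series supplies the backward context; the windowing itself is unchanged.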
Show Figures

Figure 1
<p>The study areas and the major land cover types of Hubei province.</p>
Full article ">Figure 2
<p>The framework of spatio-temporal reconstruction of 30 m FVC using different methods.</p>
Full article ">Figure 3
<p>The structure of the LSTM/Bi-LSTM network.</p>
Full article ">Figure 4
<p>Comparison between the Landsat and Sentinel-2 FVC acquired for the same date over (<b>a</b>) forest, (<b>b</b>) grassland, and (<b>c</b>) cropland.</p>
Full article ">Figure 5
<p>Comparisons between real FVC and predicted FVC on 24 December 2017: (<b>a</b>) is the real FVC with gaps; (<b>b</b>) is the FVC predicted by the STARFM method; (<b>c</b>) is the FVC predicted by the S-G filter; and (<b>d</b>) is the FVC predicted by LSTM.</p>
Full article ">Figure 6
<p>The scatter plot comparisons of real FVC and predicted FVC on 24 December 2017: the (<b>left</b>) plot is the real FVC compared with STARFM-predicted FVC; the (<b>middle</b>) plot is the real FVC compared with S-G-filter-predicted FVC; the (<b>right</b>) plot is the real FVC compared with LSTM-predicted FVC.</p>
Full article ">Figure 7
<p>Time-series curves from real data and from other time reconstruction methods.</p>
Full article ">Figure 8
<p>Comparison of the validation results of the LSTM model (blue) and Bi-LSTM model (red).</p>
Full article ">Figure 9
<p>Model validation results with different time steps (blue line is the result of three steps, red line is the result of one step).</p>
Full article ">Figure 10
<p>Phik (φk) correlation coefficients for different GLASS products and FVC.</p>
Full article ">Figure 11
<p>The image pairs of 30 m FVC before and after reconstructions using the optimized Bi-LSTM method with multiple variables.</p>
Full article ">Figure 12
<p>Comparison of the reconstructed FVC in 2017 using the optimized multivariate Bi-LSTM model to three reference FVC products for different vegetation types, including grassland, cropland, and forest. The 30 m FVC has been aggregated to 500 m for the purpose of comparison.</p>
Full article ">Figure 13
<p>The reconstructed 30 m/16 days FVC of Hubei province in 2017 using the optimized Bi-LSTM method. The year and date-of-year are labeled below each mosaic in the format of YEARDOY.</p>
Full article ">Figure 14
<p>Comparison between the reconstructed FVC in January and coarse-resolution products for major vegetation types in Hubei. For illustration purpose, all FVC pixels have been aggregated to 1 km resolution.</p>
Full article ">Figure 15
<p>Comparison between the reconstructed FVC in July and coarse-resolution products for major vegetation types in Hubei. For illustration purposes, all FVC pixels have been aggregated to 1 km resolution.</p>
Full article ">
23 pages, 23000 KiB  
Article
A High Spatiotemporal Enhancement Method of Forest Vegetation Leaf Area Index Based on Landsat8 OLI and GF-1 WFV Data
by Xin Luo, Lili Jin, Xin Tian, Shuxin Chen and Haiyi Wang
Remote Sens. 2023, 15(11), 2812; https://doi.org/10.3390/rs15112812 - 29 May 2023
Cited by 1 | Viewed by 1771
Abstract
The leaf area index (LAI) is a crucial parameter for analyzing terrestrial ecosystem carbon cycles and global climate change. Obtaining high spatiotemporal resolution forest stand vegetation LAI products over large areas is essential for an accurate understanding of forest ecosystems. This study takes the northwestern part of the Inner Mongolia Autonomous Region (the northern section of the Greater Khingan Mountains) in northern China as the research area and generates an 8-day, 30 m LAI time-series product of forest stand vegetation for the growing periods of 2013 to 2017 (from the 121st to the 305th day of each year). The Simulated Annealing-Back Propagation Neural Network (SA-BPNN) model was used to estimate LAI from Landsat8 OLI and multi-period GaoFen-1 Wide-Field-View (GF-1 WFV) satellite images, and the spatiotemporal adaptive reflectance fusion model (STARFM) was used to predict high spatiotemporal resolution LAI by combining the inverted LAI with Global LAnd Surface Satellite-derived vegetation LAI (GLASS LAI) products. The results showed the following: (1) The SA-BPNN estimation model has relatively high accuracy, with R2 = 0.75 and RMSE = 0.38 for the 2013 LAI estimation model, and R2 = 0.74 and RMSE = 0.17 for the 2016 LAI estimation model. (2) The fused 30 m LAI product correlates well with the measured plot LAI (R2 = 0.8775). (3) The fused 30 m LAI product is highly similar to the GLASS LAI product and, compared with the GLASS LAI interannual trend line, follows the seasonal growth trend of the vegetation. This study provides a theoretical and technical reference for spatiotemporal fusion research on forest stand vegetation LAI during the growing period based on high-resolution data, and has an important role in exploring vegetation primary productivity and carbon cycle changes in the future. Full article
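The core STARFM idea used in fusion studies like this one (add the coarse-resolution temporal change to the fine-resolution base image, weighting neighboring pixels by spectral similarity) can be caricatured in a few lines. This toy single-band version omits the similar-pixel screening and spatial-distance weighting of the full algorithm:

```python
import numpy as np

def starfm_predict(fine_t0, coarse_t0, coarse_tp, win=3):
    """Minimal STARFM-style prediction: for each fine pixel, average
    fine_t0 + (coarse_tp - coarse_t0) over a moving window, weighting
    neighbors by inverse spectral difference from the center pixel.
    A toy sketch of the weighting idea, not the full STARFM algorithm."""
    pad = win // 2
    f0 = np.pad(fine_t0, pad, mode="edge")
    c0 = np.pad(coarse_t0, pad, mode="edge")
    cp = np.pad(coarse_tp, pad, mode="edge")
    out = np.empty_like(fine_t0, dtype=float)
    rows, cols = fine_t0.shape
    for i in range(rows):
        for j in range(cols):
            fw = f0[i : i + win, j : j + win]
            candidates = fw + (cp[i : i + win, j : j + win] - c0[i : i + win, j : j + win])
            w = 1.0 / (np.abs(fw - fine_t0[i, j]) + 1e-6)   # spectral similarity weight
            out[i, j] = np.sum(w * candidates) / np.sum(w)
    return out

fine_t0 = np.array([[0.2, 0.3], [0.4, 0.5]])
coarse_t0 = np.full((2, 2), 0.35)
coarse_tp = np.full((2, 2), 0.45)      # uniform +0.1 change at coarse scale
pred = starfm_predict(fine_t0, coarse_t0, coarse_tp)
print(pred)
```

With a spatially uniform coarse-scale change, the prediction reduces to the fine base image shifted by that change, which is the expected degenerate behavior.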
(This article belongs to the Special Issue Quantitative Remote Sensing Product and Validation Technology)
Show Figures

Figure 1
<p>Location of the study area and the actual observation point of the plot. (<b>a</b>) Inner Mongolia Autonomous Region of China; (<b>b</b>) land use data in the research area; (<b>c</b>) GLASS LAI local area; (<b>d</b>) Landsat8 OLI remote sensing image; and (<b>e</b>) GF-1 WFV remote sensing image.</p>
Full article ">Figure 2
<p>The technical framework of this study.</p>
Full article ">Figure 3
<p>Based on SA-BPNN LAI measured value and inversion value correlation: (<b>a</b>) observations of LAI-2000 plots in 2013; (<b>b</b>) observations of LAI-2000 plots in 2016.</p>
Full article ">Figure 4
<p>Estimated LAI results of the SA-BPNN model from 2013 to 2017.</p>
Full article ">Figure 5
<p>8-day time-series curves of the mean stand LAI from the fused images during the 2013–2017 growing seasons and their comparison with the mean GLASS LAI.</p>
Full article ">Figure 6
<p>The relationships between (<b>a</b>) Fusion LAI, GLASS LAI and LAINet LAI, and the comparison of (<b>b</b>) Fusion LAI, and (<b>c</b>) GLASS LAI and LAINet LAI.</p>
Full article ">Figure 7
<p>The relationships between Fusion LAI, GLASS LAI and TRAC LAI, LAI-2200 LAI, and the comparison of (<b>a</b>) Fusion LAI, (<b>b</b>) GLASS LAI and TRAC LAI; and (<b>c</b>) Fusion LAI, (<b>d</b>) GLASS LAI and LAI-2200 LAI.</p>
Full article ">Figure 8
<p>Spatiotemporal distributions of time-series LAI assimilation of forest stand vegetation in research area during 2013–2017. (<b>a</b>) Fusion LAI results in the GF-1 WFV growing season 2013; (<b>b</b>) Fusion LAI results in the Landsat8 OLI growing season 2014; (<b>c</b>) Fusion LAI results in the Landsat8 OLI growing season 2015; (<b>d</b>) Fusion LAI results in the GF-1 WFV growing season 2016; and (<b>e</b>) Fusion LAI results in the GF-1 WFV growing season 2017.</p>
Full article ">Figure 9
<p>Statistical histogram of 2013–2017 fusion LAI values.</p>
Full article ">
17 pages, 3425 KiB  
Article
Adaptability Evaluation of the Spatiotemporal Fusion Model of Sentinel-2 and MODIS Data in a Typical Area of the Three-River Headwater Region
by Mengyao Fan, Dawei Ma, Xianglin Huang and Ru An
Sustainability 2023, 15(11), 8697; https://doi.org/10.3390/su15118697 - 27 May 2023
Cited by 4 | Viewed by 1641
Abstract
The study of surface vegetation monitoring in the “Three-River Headwaters” Region (TRHR) relies on satellite data with high spatial and temporal resolutions. The spatial and temporal fusion method for multiple data sources can effectively overcome the limitations of weather, the satellite return period, and funding on research data to obtain data with higher spatial and temporal resolutions. This paper explores the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), and the flexible spatiotemporal data fusion (FSDAF) method applied to Sentinel-2 and MODIS data in a typical area of the TRHR. In this study, the control variable method was used to analyze the parameter sensitivity of the models and explore the adaptation parameters of the Sentinel-2 and MODIS data in the study area. Since the spatiotemporal fusion model was directly used in the product data of the vegetation index, this study used NDVI fusion as an example and set up a comparison experiment (experiment I first performed the band spatiotemporal fusion and then calculated the vegetation index; experiment II calculated the vegetation index first and then performed the spatiotemporal fusion) to explore the feasibility and applicability of the two methods for the vegetation index fusion. The results showed the following. (1) The three spatiotemporal fusion models generated high spatial resolution and high temporal resolution data based on the fusion of Sentinel-2 and MODIS data, the STARFM and FSDAF model had a higher fusion accuracy, and the R2 values after fusion were higher than 0.8, showing greater applicability. (2) The fusion accuracy of each model was affected by the model parameters. The errors between the STARFM, ESTARFM, and FSDAF fusion results and the validation data all showed a decreasing trend with an increase in the size of the sliding window or the number of similar pixels, which stabilized after the sliding window became larger than 50 and the similar pixels became larger than 80. (3) The comparative experimental results showed that the spatiotemporal fusion models can be applied directly to vegetation index products, and higher quality vegetation index data can be obtained by calculating the vegetation index first and then performing the spatiotemporal fusion. The high spatial and temporal resolution data obtained using a suitable spatial and temporal fusion model are important for the identification and monitoring of surface cover types in the TRHR. Full article
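The comparison between experiment I (fuse bands, then compute NDVI) and experiment II (compute NDVI, then fuse) matters because NDVI is a nonlinear ratio of the bands, so the two orders are not interchangeable. A toy demonstration, using a 50/50 linear blend as a stand-in for the fusion operator and invented reflectance values:

```python
# NDVI is a nonlinear (ratio) function of the bands, so fusing bands first and
# computing NDVI afterwards is not equivalent to fusing NDVI values directly.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Illustrative reflectances for the two input dates
nir_a, red_a = 0.40, 0.10
nir_b, red_b = 0.60, 0.05

blend = lambda a, b: 0.5 * a + 0.5 * b                          # stand-in for fusion
ndvi_of_blend = ndvi(blend(nir_a, nir_b), blend(red_a, red_b))  # experiment I order
blend_of_ndvi = blend(ndvi(nir_a, red_a), ndvi(nir_b, red_b))   # experiment II order
print(round(ndvi_of_blend, 4), round(blend_of_ndvi, 4))
```

The two results differ, which is why the paper tests both orders empirically rather than assuming they agree.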
Figure 1
<p>Location of the study area in the TRHR (red area in the map).</p>
Figure 2
<p>Technology roadmap.</p>
Figure 3
<p>The accuracy of the fusion results of each model using different sliding window sizes. The letter (<b>a</b>) indicates the accuracy evaluation indicator for the STARFM; (<b>b</b>) indicates the accuracy evaluation indicator for the ESTARFM; (<b>c</b>) indicates the accuracy evaluation indicator for the FSDAF model.</p>
Figure 4
<p>Accuracy metric values for the ESTARFM fusion results with different numbers of similar image elements.</p>
Figure 5
<p>The NDVI predicted by the STARFM and FSDAF model in experiments I and II compared to the real NDVI. (<b>a</b>) indicates the NDVI image obtained by fusion with the STARFM; (<b>b</b>) indicates the NDVI image obtained by fusion with the FSDAF model. a(I) and a(II) indicate the NDVI images obtained by fusion with the STARFM in experiments I and II, respectively; b(I) and b(II) indicate the NDVI images obtained by fusion with the FSDAF model in experiments I and II, respectively.</p>
Figure 6
<p>Scatterplots of the different fusion experiments using the STARFM and FSDAF model. (<b>a</b>) denotes the correlation of the STARFM fusion results; (<b>b</b>) denotes the correlation of the FSDAF model. a(I) and a(II) denote the correlations of the STARFM fusion results in experiments I and II, respectively; b(I) and b(II) denote the correlations of the FSDAF fusion results in experiments I and II, respectively.</p>
35 pages, 12709 KiB  
Article
Evaluation of MODIS, Landsat 8 and Sentinel-2 Data for Accurate Crop Yield Predictions: A Case Study Using STARFM NDVI in Bavaria, Germany
by Maninder Singh Dhillon, Carina Kübert-Flock, Thorsten Dahms, Thomas Rummler, Joel Arnault, Ingolf Steffan-Dewenter and Tobias Ullmann
Remote Sens. 2023, 15(7), 1830; https://doi.org/10.3390/rs15071830 - 29 Mar 2023
Cited by 13 | Viewed by 6035
Abstract
The increasing availability and variety of global satellite products and the rapid development of new algorithms have provided great potential to generate a new level of data with different spatial, temporal, and spectral resolutions. However, the ability of these synthetic spatiotemporal datasets to accurately map and monitor our planet on a field or regional scale remains underexplored. This study aimed to support future research efforts in estimating crop yields by identifying the optimal spatial (10 m, 30 m, or 250 m) and temporal (8 or 16 days) resolutions on a regional scale. The current study explored and discussed the suitability of four different synthetic (Landsat (L)-MOD13Q1 (30 m, 8 and 16 days) and Sentinel-2 (S)-MOD13Q1 (10 m, 8 and 16 days)) and two real (MOD13Q1 (250 m, 8 and 16 days)) NDVI products, combined separately with two widely used crop growth models (CGMs) (World Food Studies (WOFOST) and the semi-empiric Light Use Efficiency approach (LUE)), for winter wheat (WW) and oil seed rape (OSR) yield forecasts in Bavaria (70,550 km2) for the year 2019. For WW and OSR, the high spatial and temporal resolution of the synthetic products resulted in higher yield accuracies using LUE and WOFOST. The observations of the high-temporal-resolution (8-day) products of both S-MOD13Q1 and L-MOD13Q1 played a significant role in accurately measuring the yield of WW and OSR. For example, L- and S-MOD13Q1 resulted in an R2 = 0.82 and 0.85 and RMSE = 5.46 and 5.01 dt/ha for WW, and an R2 = 0.89 and 0.82 and RMSE = 2.23 and 2.11 dt/ha for OSR using the LUE model, respectively. Similarly, across the 8- and 16-day products, the simple LUE model (R2 = 0.77 and relative RMSE (RRMSE) = 8.17%) required fewer input parameters to simulate crop yield and was more accurate, reliable, and precise than the complex WOFOST model (R2 = 0.66 and RRMSE = 11.35%) with its larger number of input parameters.
Conclusively, both S-MOD13Q1 and L-MOD13Q1, in combination with LUE, were more suitable for predicting crop yields on a regional scale than the 16-day products; however, L-MOD13Q1 was advantageous for generating and exploring long-term yield time series owing to the availability of Landsat data since 1982 at a maximum resolution of 30 m. The study therefore recommends applying these findings to implement and validate long-term crop yield time series in other regions of the world. Full article
(This article belongs to the Special Issue Monitoring Crops and Rangelands Using Remote Sensing)
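The yield comparisons in this abstract rest on a handful of accuracy statistics (R2, RMSE, RRMSE, and mean error). A minimal sketch of these metrics, using their standard textbook definitions rather than the paper's exact implementation (in particular, R2 is computed here as 1 − SSres/SStot, whereas some studies report the squared Pearson correlation), with invented district-level yields:

```python
import numpy as np

def yield_metrics(observed, predicted):
    """Standard accuracy metrics for yield validation (illustrative sketch)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid ** 2))          # same units as yield (dt/ha)
    rrmse = 100.0 * rmse / obs.mean()            # relative RMSE, % of mean yield
    me = resid.mean()                            # mean error (bias)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    return {"R2": r2, "RMSE": rmse, "RRMSE": rrmse, "ME": me}

# Hypothetical referenced vs. modelled winter-wheat yields (dt/ha)
ref = [72.1, 68.4, 75.0, 63.2, 70.8]
mod = [70.5, 69.9, 73.2, 65.0, 69.1]
metrics = yield_metrics(ref, mod)
print(metrics)
```

A positive ME indicates the model underestimates yield on average, matching the over/underestimation framing used later in the figure captions.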
Figure 1
<p>The conceptual framework of this study is divided into two parts: Part 1 states the data fusion for 2019 to investigate the synthetic NDVI time series product (this section was completed in our previous study [<a href="#B4-remotesensing-15-01830" class="html-bibr">4</a>]) and Part 2 estimates and validates the crop yield for Bavaria by inputting the fused L-MOD13Q1 time series and climate elements to a semi-empiric Light Use Efficiency (LUE) model. STARFM = Spatial and Temporal Adaptive Reflectance Fusion Model; NDVI = Normalised Difference Vegetation Index; L-MOD09GQ = Landsat-MOD09GQ; L-MOD09Q1 = Landsat-MOD09Q1; L-MCD43A4 = Landsat-MCD43A4; L-MOD13Q1 = Landsat-MOD13Q1; S-MOD09GQ = Sentinel-2-MOD09GQ; S-MOD09Q1 = Sentinel-2-MOD09Q1; S-MCD43A4 = Sentinel-2-MCD43A4; S-MOD13Q1 = Sentinel-2-MOD13Q1; LfStat = the Bayerisches Landesamt für Statistik (LfStat).</p>
Figure 2
<p>An overview of the study region. The LC map of Bavaria is obtained by combining multiple inputs of Landcover maps, such as Amtliche Topographisch-Kartographisches Informationssystem, Integrated Administration Control System (provides the crop field information), and Corine LC, into one map. Agriculture (peach green) dominates mainly in the northwest and southeast of Bavaria, while forest and grassland classes (dark green and yellow, respectively) dominate in the northeast and south. The district map of Bavaria overlays the LC map. The enlargement (displayed with a dark red box on the top right map) shows the urban area of the town Volkach, including the oil seed rape (OSR) fields (dark orange) and the winter wheat (WW) fields (dark green). A brief description of the regions of Bavaria is shown in <a href="#remotesensing-15-01830-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 3
<p>The cloud-free scenes are available for Landsat (in red box) and Sentinel-2 (in blue box) during the seasons of OSR and WW. Four cloud-free scenes were collected for the Landsat data and six were collected for the Sentinel-2 data. The maps show the NDVI values from −1 to 1 for Bavaria during 2019. The negative NDVI values indicate non-vegetated areas such as water bodies or barren land.</p>
Figure 4
<p>Field-wise comparison of STARFM and real-time NDVI values of (<b>a</b>) MOD13Q1, (<b>b</b>) Landsat 8, (<b>c</b>) L-MOD13Q1, (<b>d</b>) Sentinel-2, and (<b>e</b>) S-MOD13Q1 on DOY 145 (25 May 2019) on WW fields. The image in (<b>f</b>) shows the spatial location of 10,000 random points in Bavaria used to draw line and bar plots in <a href="#remotesensing-15-01830-f005" class="html-fig">Figure 5</a> for comparing the mean NDVI values on a DOY basis for the real and synthetic NDVI products.</p>
Figure 5
<p>The (<b>a</b>) line and (<b>b</b>) bar plots show the DOY-based and interquartile-range-based comparison of STARFM-generated NDVI values with their respective high-resolution input (Landsat (L) or Sentinel-2 (S)) and low-resolution input MOD13Q1, respectively. The comparison is based on the mean values extracted for 10,000 random points (whose spatial location is shown in <a href="#remotesensing-15-01830-f004" class="html-fig">Figure 4</a>f) taken for the entire Bavaria.</p>
Figure 6
<p>The scatter plots (<b>a</b>–<b>l</b>) compare the accuracies of LUE- and WOFOST-modelled yields (inputting the 8- and 16-day MOD13Q1, L-MOD13Q1 and S-MOD13Q1) with the referenced yield of WW. The green dots represent WW. Every plot contains a solid line to visualise the correlation of pixels between the referenced and modelled yield values.</p>
Figure 7
<p>The scatter plots (<b>a</b>–<b>l</b>) compare the accuracies of LUE- and WOFOST-modelled yields (inputting the 8- and 16-day MOD13Q1, L-MOD13Q1, and S-MOD13Q1) with the referenced yield of OSR. The orange dots represent OSR. Every plot contains a solid line to visualise the correlation of pixels between the referenced and modelled yield values.</p>
Figure 8
<p>The violin plots compare the crop yields of referenced (at 95% confidence interval) and modelled yields obtained from multi-source data (MOD13Q1, L-MOD13Q1, and S-MOD13Q1) at the 8- and 16-day temporal scales for (<b>a</b>,<b>b</b>) WW and (<b>c</b>,<b>d</b>) OSR using the (<b>a</b>,<b>c</b>) LUE and (<b>b</b>,<b>d</b>) WOFOST models in 2019. The green-coloured text represents WW and the orange-coloured text represents OSR. The text values represent the median yield values of every product.</p>
Figure 9
<p>The box plots compare the accuracies (<b>a</b>,<b>c</b>) R<sup>2</sup> and (<b>b</b>,<b>d</b>) RMSE of referenced (at 95% confidence interval) and modelled yields obtained from multi-source data: MOD13Q1, L-MOD13Q1, and S-MOD13Q1 at temporal scales of 8 and 16 days.</p>
Figure 10
<p>Spatial distribution of referenced yields and predicted yields for WW using MOD13Q1 (8 and 16 days), L-MOD13Q1 (8 and 16 days), and S-MOD13Q1 (8 and 16 days) with LUE and WOFOST models for the state of Bavaria. The white colour represents no data available. A detailed map of the administrative regions of Bavaria is shown in <a href="#remotesensing-15-01830-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 11
<p>The dot plots show the region-wise distribution of referenced yields and modelled yields obtained from multi-source data (MOD13Q1 (8 and 16 days), L-MOD13Q1 (8 and 16 days), and S-MOD13Q1 (8 and 16 days)) for WW using (<b>a</b>) LUE and (<b>b</b>) WOFOST in 2019. The regional referenced yields are displayed in red dots.</p>
Figure 12
<p>Spatial distribution of referenced yields and predicted yields for OSR using MOD13Q1 (8 and 16 days), L-MOD13Q1 (8 and 16 days), and S-MOD13Q1 (8 and 16 days) with LUE and WOFOST models for the state of Bavaria. The white colour represents no data available. A detailed map of the administrative regions of Bavaria is shown in <a href="#remotesensing-15-01830-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 13
<p>The dot plots show the region-wise distribution of referenced yields and modelled yields obtained from multi-source data (MOD13Q1 (8 and 16 days), L-MOD13Q1 (8 and 16 days), and S-MOD13Q1 (8 and 16 days)) for OSR using (<b>a</b>) LUE and (<b>b</b>) WOFOST in 2019. The regional referenced yields are displayed in red dots.</p>
Figure 14
<p>The box plots show the comparison of accuracies (<b>a</b>,<b>c</b>) R<sup>2</sup> values and (<b>b</b>,<b>d</b>) RMSE values obtained from the referenced yields (at 95% confidence interval), with LUE (<b>a</b>,<b>b</b>) and WOFOST (<b>c</b>,<b>d</b>) modelled yields including climate stress factors (dark blue and pink) and the modelled yields excluding the climate stress factors (sensitivity analysis) (light blue and pink).</p>
Figure 15
<p>The dot plots show the comparison of accuracies for (<b>a</b>) R<sup>2</sup>, (<b>b</b>) RMSE, (<b>c</b>) RRMSE, and (<b>d</b>) ME values obtained from the referenced yields (at 95% confidence interval) for LUE (dark blue) and WOFOST (dark pink) models.</p>
Figure 16
<p>The box plots compare the accuracies for (<b>a</b>) R<sup>2</sup> and (<b>b</b>) RRMSE of referenced (at 95% confidence interval) and modelled yields obtained from multi-source data using LUE and WOFOST models in 2019.</p>
Figure 17
<p>Visualisation of field level biomass of L-MOD13Q1 and S-MOD13Q1 with 8 days, 16 days, and the difference (16 − 8 days) obtained using the LUE model for (<b>a</b>) WW and (<b>b</b>) OSR.</p>
Figure A1
<p>Detailed map of administrative regions of Bavaria (Landkreise und Kreisfreie Städte in Bayern). The names of the districts are translated from German to English: Unterfranken as Lower Franconia, Mittelfranken as Middle Franconia, Oberfranken as Upper Franconia, Oberpfalz as Upper Palatinate, Oberbayern as Upper Bavaria, and Niederbayern as Lower Bavaria. (Source: <a href="https://www.gifex.com/" target="_blank">https://www.gifex.com/</a>, accessed on 12 January 2023).</p>
Figure A2
<p>Flowchart of the WOFOST model. (Source: [<a href="#B5-remotesensing-15-01830" class="html-bibr">5</a>]).</p>
35 pages, 13244 KiB  
Article
Impact of STARFM on Crop Yield Predictions: Fusing MODIS with Landsat 5, 7, and 8 NDVIs in Bavaria, Germany
by Maninder Singh Dhillon, Thorsten Dahms, Carina Kübert-Flock, Adomas Liepa, Thomas Rummler, Joel Arnault, Ingolf Steffan-Dewenter and Tobias Ullmann
Remote Sens. 2023, 15(6), 1651; https://doi.org/10.3390/rs15061651 - 18 Mar 2023
Cited by 6 | Viewed by 4710
Abstract
Rapid and accurate yield estimates at both field and regional levels remain the goal of sustainable agriculture and food security, and the identification of consistent and reliable methodologies providing accurate yield predictions is one of the hot topics in agricultural research. This study investigated the impact of spatiotemporal fusion modelling using STARFM on crop yield prediction for winter wheat (WW) and oil-seed rape (OSR) using a semi-empirical light use efficiency (LUE) model for the Free State of Bavaria (70,550 km2), Germany, from 2001 to 2019. A synthetic normalised difference vegetation index (NDVI) time series was generated and validated by fusing the high-spatial-resolution (30 m, 16 days) Landsat 5 Thematic Mapper (TM) (2001 to 2012), Landsat 7 Enhanced Thematic Mapper Plus (ETM+) (2012), and Landsat 8 Operational Land Imager (OLI) (2013 to 2019) with the coarse-resolution MOD13Q1 (250 m, 16 days) from 2001 to 2019. Except for some temporal periods (i.e., 2001, 2002, and 2012), the study obtained an R2 of more than 0.65 and an RMSE of less than 0.11, which shows that the Landsat 8 OLI fused products are of higher accuracy than the Landsat 5 TM products. Moreover, the accuracies of the NDVI fusion data were found to correlate with the total number of available Landsat scenes every year (N), with a correlation coefficient (R) of +0.83 (between the R2 of the yearly synthetic NDVIs and N) and −0.84 (between the RMSEs and N). For crop yield prediction, the synthetic NDVI time series and climate elements (such as minimum temperature, maximum temperature, relative humidity, evaporation, transpiration, and solar radiation) were input to the LUE model, resulting in an average R2 of 0.75 (WW) and 0.73 (OSR), and RMSEs of 4.33 dt/ha and 2.19 dt/ha. The yield prediction results prove the consistency and stability of the LUE model for yield estimation.
Using the LUE model, accurate crop yield predictions were obtained for WW (R2 = 0.88) and OSR (R2 = 0.74). Lastly, the study observed high positive correlations of R = 0.81 and R = 0.77 between the yearly R2 of the synthetic NDVI accuracy and the modelled yield accuracy for WW and OSR, respectively. Full article
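The semi-empirical LUE approach used in this abstract follows the Monteith logic: biomass accumulates as maximum light use efficiency, down-regulated by climate stress scalars, times the PAR absorbed by the canopy, with FPAR derived from NDVI. The sketch below is a simplified illustration of that accumulation; the FPAR-NDVI coefficients, LUE_max, and all time-series values are invented, not the calibrated values from the study.

```python
import numpy as np

def lue_biomass(ndvi, par, t_stress, w_stress, lue_max=3.0):
    """Monteith-type light-use-efficiency accumulation (simplified sketch).

    biomass = sum_t [ LUE_max * Ts(t) * Ws(t) * FPAR(t) * PAR(t) ],
    with FPAR approximated as a linear function of NDVI. Coefficients
    here are hypothetical placeholders.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    fpar = np.clip(1.24 * ndvi - 0.168, 0.0, 1.0)        # hypothetical FPAR ~ NDVI
    apar = fpar * np.asarray(par, dtype=float)           # absorbed PAR (MJ/m^2)
    eps = lue_max * np.asarray(t_stress) * np.asarray(w_stress)  # realised LUE (g/MJ)
    return float(np.sum(eps * apar))                     # dry biomass (g/m^2)

# Hypothetical 8-day composites across part of a growing season
ndvi = [0.35, 0.55, 0.72, 0.80, 0.76, 0.60]
par  = [90, 110, 130, 140, 135, 120]    # MJ/m^2 per 8-day period
ts   = [0.7, 0.8, 0.9, 1.0, 1.0, 0.9]   # temperature stress scalar (0-1)
ws   = [0.9, 0.9, 0.8, 0.8, 0.7, 0.7]   # water stress scalar (0-1)
biomass = lue_biomass(ndvi, par, ts, ws)
print(biomass)
```

Dropping the stress scalars (setting them to 1) mimics the sensitivity analysis reported in the figures, where modelled yields excluding climate stress factors are compared against the full model.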
Graphical abstract
Figure 1
<p>The conceptual framework of the study is divided into three parts: Part 1 states the data fusion for 2019 to investigate the best synthetic NDVI time series product (this section was already completed in our previous study [<a href="#B59-remotesensing-15-01651" class="html-bibr">59</a>]); Part 2 generates and validates the synthetic NDVI time series from 2001 to 2019 for the product L-MOD13Q1; and Part 3 performs the comparative analysis to compare the performance of fused (L-MOD13Q1) and non-fused (MOD13Q1) NDVI time series in crop yield prediction for 2019 and then estimates and validates the crop yield for Bavaria by inputting the L-MOD13Q1 time series and climate elements to a semi-empiric Light Use Efficiency (LUE) model; STARFM = Spatial and Temporal Adaptive Reflectance Fusion Model; NDVI = Normalised Difference Vegetation Index; L-MOD09GQ = Landsat-MOD09GQ; L-MOD09Q1 = Landsat-MOD09Q1; L-MCD43A4 = Landsat-MCD43A4; L-MOD13Q1 = Landsat-MOD13Q1; S-MOD09GQ = Sentinel-2-MOD09GQ; S-MOD09Q1 = Sentinel-2-MOD09Q1; S-MCD43A4 = Sentinel-2-MCD43A4; S-MOD13Q1 = Sentinel-2-MOD13Q1; PAR is photosynthetically active radiation, and FPAR is the fraction of PAR absorbed by the canopy. APAR = Absorbed Photosynthetically Active Radiation.</p>
Figure 2
<p>Overview of the study region. The LC map of Bavaria is obtained by combining multiple inputs of landcover maps, such as the Amtliche Topographisch-Kartographische Informations System, Integrated Administration Control System (which provides the crop field information), and the Corine LC, into one map. Agriculture (peach green) dominates mainly in the northwest and southeast of Bavaria, while forest and grassland classes (dark green and yellow, respectively) dominate in the northeast and south. The LC map is overlayed by the district map of Bavaria. The enlargement (displayed with a dark red box on the top right map) shows the urban area of the city of Würzburg, with the oil-seed rape (OSR) fields (dark orange) and the winter wheat (WW) fields (dark green) in 2019. A brief description of the regions of Bavaria is shown in <a href="#remotesensing-15-01651-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 3
<p>The scatter plots (<b>a</b>–<b>s</b>) compare the accuracies of Landsat (referenced NDVI) with L-MOD13Q1 (synthetic NDVI) for 2001 to 2019. The values of the statistical parameters such as R<sup>2</sup> and RMSE and the total number of Landsat scenes available every year (N) are displayed at the top of each plot. Every plot contains a solid line (1:1 line) that is used to visualise the correlation of pixels between the referenced and synthetic NDVI values. The dashed line represents the regression line. The colour of scatter plots depicts the density of points (yellow: low, blue: high).</p>
Figure 4
<p>The correlation plots between the total number of Landsat scenes per year (N) and (<b>a</b>) R<sup>2</sup> values and (<b>b</b>) RMSE values obtained during the accuracy assessment of referenced and synthetic NDVI products from 2001 to 2019. The correlation coefficient refers to R (see Equation (5)).</p>
Figure 5
<p>The day of the year (DOY)-based comparison of correlation coefficients between (<b>a</b>) R<sup>2</sup> values and (<b>b</b>) RMSE values obtained during the accuracy assessment of referenced and synthetic NDVI products from 2001 to 2019. The correlation coefficient refers to R (see Equation (5)).</p>
Figure 6
<p>The dot plots compare the accuracies (<b>a</b>) R<sup>2</sup>, (<b>b</b>) RMSE, and (<b>c</b>) ME of referenced data (at 95% confidence intervals) and modelled yields obtained from multi-source data: MOD13Q1 and L-MOD13Q1 in 2019.</p>
Figure 7
<p>The scatter plots compare the accuracies of the modelled and referenced yields (at a 95% confidence interval) of (<b>a</b>) WW and (<b>b</b>) OSR for 19 years together (i.e., from 2001 to 2019). The values of the statistical parameters such as R<sup>2</sup>, RMSE (dt/ha), ME (dt/ha), and total number of points (n) are displayed at the top of each plot. Every plot contains a solid line (1:1 line) that is used to visualise the correlation of pixels between the modelled and referenced yield values. The dashed line represents the regression line. Different colours of the points display different years.</p>
Figure 8
<p>The scatter plots correlating the modelled yield and regional mean elevation for (<b>a</b>) WW and (<b>b</b>) OSR. The dashed line represents the regression line. Different colours of the points display different crop types (green for WW and orange for OSR). The correlation coefficient refers to R (see Equation (5)).</p>
Figure 9
<p>The bar plots show the yearly comparison of accuracies (<b>a</b>) R<sup>2</sup> values and (<b>b</b>) RMSE values obtained from the referenced yields (at a 95% confidence interval), with LUE-modelled yields including climate stress factors (dark blue) and LUE-modelled yields excluding the climate stress factors (sensitivity analysis) (light blue). The scatter plots compare the accuracies of the modelled and referenced yields (at a 95% confidence interval) of (<b>c</b>) WW and (<b>d</b>) OSR for 19 years together (i.e., from 2001 to 2019). The values of the statistical parameters such as R<sup>2</sup>, RMSE (dt/ha), ME (dt/ha), and total number of points (n) are displayed at the top of each plot. Every plot contains a solid line (1:1 line) that is used to visualise the correlation of pixels between the modelled and referenced yield values. The dashed line represents the regression line. Different colours of the points display different years.</p>
Figure 10
<p>Spatial distribution of mean referenced yield (2001–2019) and the year-wise predicted yield for WW from 2001 to 2019 using the LUE model for the state of Bavaria. The white colour represents no available data. A detailed map of the administrative regions of Bavaria is shown in <a href="#remotesensing-15-01651-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 11
<p>Spatial distribution of mean referenced yield (2001–2019) and the year-wise predicted yield for OSR from 2001 to 2019 using the LUE model for the state of Bavaria. The white colour represents no available data. A detailed map of the administrative regions of Bavaria is shown in <a href="#remotesensing-15-01651-f0A1" class="html-fig">Figure A1</a>.</p>
Figure 12
<p>The dot plots show the district-wise distribution of modelled yield for (<b>a</b>) WW and (<b>b</b>) OSR, from 2001 to 2019. The green colour depicts the modelled yield of WW, the orange colour depicts the modelled yield of OSR, and the grey colour depicts both referenced yields of WW and OSR.</p>
Figure 13
<p>The line plots compare the accuracies with the mean yield percent difference (as calculated in Equation (9)) for WW and OSR for 19 years (i.e., from 2001 to 2019). The accuracies of WW and OSR are analysed in six categories (less than −4, −4 to −2, −2 to 0, 0 to 2, 2 to 4, and more than 4%) of yield percent difference. The negative range shows the overestimation, and the positive range shows the underestimation of the modelled yield values by the LUE compared to the referenced yield values. The green colour depicts WW, and the orange colour depicts OSR.</p>
Figure 14
<p>The bar plots compare the yearly (<b>a</b>) R<sup>2</sup> and (<b>b</b>) RMSE values of estimated OSR yield (orange), WW yield (green), and synthetic NDVI (purple) from 2001 to 2019. The units of the RMSE values of both WW and OSR yields are dt/ha.</p>
Figure 15
<p>The correlation plots between R<sup>2</sup> of synthetic NDVI time series and R<sup>2</sup> of modelled yield time series for (<b>a</b>) WW (green) and (<b>b</b>) OSR (orange), from 2001 to 2019. The correlation coefficient refers to R (see Equation (5)).</p>
Figure 16
<p>The side-by-side visualisation of synthetic NDVI products obtained on 18 June 2005, 2013 and 2019 (<b>left</b>) with the WW biomass obtained from the LUE modelled for the years of 2005, 2013 and 2019 (<b>right</b>).</p>
Figure A1
<p>Detailed map of administrative regions of Bavaria (Landkreise und kreisfreie Städte in Bayern). The names of the districts are translated from German to English as: Unterfranken as Lower Franconia, Mittelfranken as Middle Franconia, Oberfranken as Upper Franconia, Oberpfalz as Upper Palatinate, Oberbayern as Upper Bavaria, and Niederbayern as Lower Bavaria. (Source: <a href="https://www.gifex.com/" target="_blank">https://www.gifex.com/</a>, accessed on 12 January 2023).</p>
Figure A2
<p>The digital elevation map of Bavaria. The map is generated from shuttle radar topography mission (SRTM) digital elevation data. The elevation ranges from 93 m to 2943 m.</p>
Figure A3
<p>The scatter plots (<b>a</b>–<b>s</b>) compare the accuracies of modelled and referenced yields of WW for 2001 to 2019. The values of the statistical parameters such as R<sup>2</sup>, RMSE (dt/ha), and ME (dt/ha) are displayed at the top of each plot. Every plot contains a solid line (1:1 line) that is used to visualise the correlation of pixels between the modelled and referenced yield values. The green colour of scatter plots represents WW.</p>
Figure A4
<p>The scatter plots (<b>a</b>–<b>s</b>) compare the accuracies of modelled and referenced yields of OSR for 2001 to 2019. The values of the statistical parameters such as R<sup>2</sup>, RMSE (dt/ha), and ME (dt/ha) are displayed at the top of each plot. Every plot contains a solid line (1:1 line) that is used to visualise the correlation of pixels between the modelled and referenced yield values. The orange colour of scatter plots represents OSR.</p>
Figure A5
<p>The regional scale average yield percent difference between the referenced and the modelled yield from 2001 to 2019 (<b>a</b>) WW, (<b>b</b>) OSR. The yield percent difference is calculated in Equation (9).</p>
18 pages, 5818 KiB  
Article
Stability Analysis of Unmixing-Based Spatiotemporal Fusion Model: A Case of Land Surface Temperature Product Downscaling
by Min Li, Shanxin Guo, Jinsong Chen, Yuguang Chang, Luyi Sun, Longlong Zhao, Xiaoli Li and Hongming Yao
Remote Sens. 2023, 15(4), 901; https://doi.org/10.3390/rs15040901 - 6 Feb 2023
Cited by 6 | Viewed by 2173
Abstract
The unmixing-based spatiotemporal fusion model is one of the effective ways to overcome the tradeoff between temporal and spatial resolution in a single satellite sensor. By fusing data from different satellite platforms, high resolution in both the temporal and spatial domains can be produced. However, due to the ill-posed nature of the unmixing function, model performance may vary with different model setups. Which key factors affect model stability most, and how to set up the unmixing strategy for data downscaling, remain unknown. In this study, we use multisource land surface temperature (LST) as the case and focus on three major factors to analyze the stability of the unmixing-based fusion model: (1) the definition of the homogeneous change regions (HCRs), (2) the unmixing levels, and (3) the number of HCRs. The spatiotemporal data fusion model U-STFM was used as the baseline model. The results show: (1) Clustering-based algorithms are more suitable for detecting HCRs for unmixing. Compared with the multi-resolution segmentation algorithm and the k-means algorithm, the ISODATA clustering algorithm can more accurately describe the temporal and spatial changes of LST on HCRs. (2) For the U-STFM model, applying the unmixing processing at the change-ratio level can significantly reduce the additive and multiplicative noise of the prediction. (3) There is a tradeoff between the number of HCRs and the solvability of the linear unmixing function: the larger the number of HCRs (up to the number of available MODIS pixels), the more stable the model is. (4) For the fusion of a daily 30 m scale LST product, the modified U-STFM (iso_USTFM) achieved higher prediction accuracy and a lower error than STARFM and ESTARFM (R2: 0.87 and RMSE: 1.09 K). With the findings of this study, daily fine-scale LST products can be predicted based on the unmixing-based spatiotemporal model with lower uncertainty and stable prediction. Full article
(This article belongs to the Special Issue Monitoring Environmental Changes by Remote Sensing)
Figure 1
<p>The study area is located in Shenzhen and Dongguan within the GBA of China.</p>
Figure 2">
Figure 2
<p>Experimental design of this study.</p>
Figure 3
<p>Comparison of model fusion results based on the multiresolution segmentation, k-means, and ISODATA algorithms. (<b>a</b>) PSNR; (<b>b</b>) CC; (<b>c</b>) RMSE; (<b>d</b>) MAE.</p>
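The figure panels compare fused images with the reference using four standard metrics. A minimal NumPy sketch of how they might be computed; using the reference image's dynamic range as the PSNR peak is an assumption, as implementations differ:

```python
import numpy as np

def fusion_metrics(pred, ref):
    """PSNR, correlation coefficient, RMSE, and MAE between a fused
    image and the reference image (arrays of the same shape/units)."""
    pred = np.asarray(pred, dtype=float).ravel()
    ref = np.asarray(ref, dtype=float).ravel()
    err = pred - ref
    rmse = np.sqrt(np.mean(err ** 2))          # root-mean-square error
    mae = np.mean(np.abs(err))                 # mean absolute error
    cc = np.corrcoef(pred, ref)[0, 1]          # Pearson correlation
    peak = ref.max() - ref.min()               # dynamic range as peak
    psnr = 20.0 * np.log10(peak / rmse)        # peak signal-to-noise, dB
    return psnr, cc, rmse, mae

# Demo on a synthetic LST-like scene with 1 K of additive noise.
rng = np.random.default_rng(0)
ref = rng.uniform(280.0, 320.0, size=(32, 32))
pred = ref + rng.normal(0.0, 1.0, size=ref.shape)
psnr, cc, rmse, mae = fusion_metrics(pred, ref)
```

Higher PSNR and CC, and lower RMSE and MAE, indicate a better fusion result.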
Figure 4">
Figure 4
<p>Comparison of the absolute error distribution of model fusion results based on multi-scale segmentation (<b>a</b>), k-means (<b>b</b>), and ISODATA (<b>c</b>) on 20 November 2001. Missing pixels are due to cloud cover.</p>
Figure 5
<p>The different levels of unmixing.</p>
Figure 6
<p>Comparison of model fusion results based on unmixing at the single-date level, the differential level, and the change ratio level. (<b>a</b>) PSNR; (<b>b</b>) CC; (<b>c</b>) RMSE; (<b>d</b>) MAE.</p>
Figure 7
<p>Comparison of the absolute error distribution of model fusion results based on unmixing at the single-date level (<b>a</b>), the differential level (<b>b</b>), and the change ratio level (<b>c</b>) on 20 November 2001.</p>
Figure 8
<p>Comparison of model fusion results based on different numbers of HCRs. (<b>a</b>) PSNR; (<b>b</b>) CC; (<b>c</b>) RMSE; (<b>d</b>) MAE. Each point is the median of 56 predictions (about 2,027,671 pixels per prediction) across 6 predicted dates (from 1 November 2000 to 7 November 2002).</p>
Figure 9
<p>Comparison of the absolute error distribution of model fusion results based on different numbers of HCRs on 20 November 2001.</p>
Figure 10
<p>Comparison of the absolute error distribution of model fusion results on 20 November 2001: (<b>a</b>) STARFM, (<b>b</b>) ESTARFM, (<b>c</b>) USTFM, (<b>d</b>) iso_USTFM, in three sub-regions: (1) city, (2) forest, (3) lakes. The true-color composite Landsat 7 image is from Google Earth, 31 December 2001.</p>
Figure 11
<p>(<b>a</b>) Comparison of the absolute error distribution of model fusion results for land cover categories in 2001. (<b>b</b>) The 0–8 K range of (<b>a</b>).</p>
Figure 12
<p>Scatter plot of predicted LST from the three models against Landsat 7 LST on 20 November 2001.</p>
24 pages, 11742 KiB  
Article
Tree Species Classification over Cloudy Mountainous Regions by Spatiotemporal Fusion and Ensemble Classifier
by Liang Cui, Shengbo Chen, Yongling Mu, Xitong Xu, Bin Zhang and Xiuying Zhao
Forests 2023, 14(1), 107; https://doi.org/10.3390/f14010107 - 5 Jan 2023
Cited by 6 | Viewed by 1883
Abstract
Accurate mapping of tree species is critical for the sustainable development of the forestry industry. However, the lack of cloud-free optical images makes it challenging to map tree species accurately in cloudy mountainous regions. To improve tree species identification in this context, a classification method using spatiotemporal fusion and an ensemble classifier is proposed. The applicability of three spatiotemporal fusion methods, i.e., the spatial and temporal adaptive reflectance fusion model (STARFM), the flexible spatiotemporal data fusion (FSDAF) method, and the spatial and temporal nonlocal filter-based fusion model (STNLFFM), in fusing MODIS and Landsat 8 images was investigated. The fusion results in Helong City show that the STNLFFM algorithm generated the best fused images: the correlation coefficients between the fused and actual Landsat images on May 28 and October 19 were 0.9746 and 0.9226, respectively, with an average of 0.9486. Dense Landsat-like time series at 8-day intervals were generated with this method, and the time-series imagery and topography-derived features were used as predictor variables. Four machine learning methods, i.e., K-nearest neighbors (KNN), random forest (RF), artificial neural networks (ANNs), and the light gradient boosting machine (LightGBM), were selected for tree species classification in Helong City, Jilin Province, and an ensemble classifier combining them was constructed to further improve accuracy. The ensemble classifier achieved the highest accuracy in almost all classification scenarios, with a maximum overall accuracy improvement of approximately 3.4% over the best base classifier. Compared with using a single-date image, the dense time series plus the ensemble classifier improved classification accuracy by about 20%, reaching an overall accuracy of 84.32%. In conclusion, spatiotemporal fusion and the ensemble classifier can significantly enhance tree species identification in cloudy mountainous areas with poor data availability. Full article
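The ensemble described above can be sketched with scikit-learn's VotingClassifier. This is an illustrative setup under assumptions, not the paper's implementation: GradientBoostingClassifier stands in for LightGBM, soft voting is assumed as the combination rule, and synthetic data stands in for the time-series and topographic features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in data: rows are pixels, columns are time-series band features,
# classes are tree species.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("ann", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=0)),
        # GradientBoosting stands in for LightGBM here.
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Soft voting lets confident base classifiers outweigh uncertain ones, which is one plausible way an ensemble can edge out its best member.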
(This article belongs to the Special Issue Mapping Forest Vegetation via Remote Sensing Tools)
Figure 1
<p>The study area. The forest area is overlaid with Shuttle Radar Topography Mission elevation data.</p>
Figure 2">
Figure 2
<p>Flow diagram of tree species classification. The area bordered by the short-dashed line shows the dominant tree species to be classified; the long-dashed box contains the machine learning methods compared.</p>
Figure 3">
Figure 3
<p>MCD43A4 images of the base date (May 19) and two predicted dates (May 28 and October 19).</p>
Figure 4
<p>Forest inventory data of the study area in 2016.</p>
Figure 5
<p>Distribution of sampling point data used in this study.</p>
Figure 6
<p>Real Landsat images on the base date and predicted dates.</p>
Figure 7
<p>RGB composition of the fusion results. The first and second rows are the MCD43A4, Landsat 8 OLI imagery, and the fused images of the three different methods (STARFM, FSDAF, and STNLFFM) on May 28 and October 19, respectively.</p>
Figure 8
<p>Correlation between the red and near-infrared bands on May 28 and October 19. (<b>a</b>) Correlation coefficient of the red band on May 28. (<b>b</b>) Correlation coefficient of the NIR band on May 28. (<b>c</b>) Correlation coefficient of the red band on October 19. (<b>d</b>) Correlation coefficient of the NIR band on October 19.</p>
Figure 9
<p>Overall accuracy and kappa coefficient of Experiments 1 and 2. (<b>a</b>) Experiment 1 (spectral features only). (<b>b</b>) Experiment 2 (spectral plus topographic features).</p>
Figure 9 Cont.">
Figure 10">
Figure 10
<p>Confusion matrix for various classification methods: (<b>a</b>) KNN; (<b>b</b>) RF; (<b>c</b>) ANN; (<b>d</b>) LightGBM; (<b>e</b>) ensemble.</p>
Figure 11
<p>Classification results of forest tree species in Helong City based on the ensemble classifier and the optimal feature combination.</p>