Search Results (57)

Search Parameters:
Keywords = spatial-spectral dimensional forest

25 pages, 11764 KiB  
Article
Vegetation Classification in a Mountain–Plain Transition Zone in the Sichuan Basin, China
by Wenqian Bai, Zhengwei He, Yan Tan, Guy M. Robinson, Tingyu Zhang, Xueman Wang, Li He, Linlong Li and Shuang Wu
Land 2025, 14(1), 184; https://doi.org/10.3390/land14010184 - 17 Jan 2025
Viewed by 323
Abstract
Developing an effective vegetation classification method for mountain–plain transition zones is critical for understanding ecological patterns, evaluating ecosystem services, and guiding conservation efforts. Existing methods perform well in mountainous and plain areas but lack verification in mountain–plain transition zones. This study utilized terrain data and Sentinel-1 and Sentinel-2 imagery to extract topographic, spectral, texture, and SAR features as well as the vegetation index. By combining feature sets and applying feature elimination algorithms, the classification performance of one-dimensional convolutional neural networks (1D-CNNs), Random Forest (RF), and Multilayer Perceptron (MLP) was evaluated to determine the optimal feature combinations and methods. The results show the following: (1) multi-feature combinations, especially spectral and topographic features, significantly improved classification accuracy; (2) Recursive Feature Elimination based on Random Forest (RF-RFE) outperformed ReliefF in feature selection, identifying more representative features; (3) all three algorithms performed well, with consistent spatial results. The MLP algorithm achieved the best overall accuracy (OA: 81.65%, Kappa: 77.75%), demonstrating robustness and lower dependence on feature quantity. This study presents an efficient and robust vegetation classification workflow, verifies its applicability in mountain–plain transition zones, and provides valuable insights for small-region vegetation classification under similar topographic conditions globally.
(This article belongs to the Special Issue Vegetation Cover Changes Monitoring Using Remote Sensing Data)
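The RF-RFE step this abstract describes maps naturally onto scikit-learn's recursive feature elimination. Below is a minimal sketch, not the authors' code: the array shapes, the 40-feature stack, and the six class labels are placeholders standing in for the real spectral, topographic, texture, SAR, and index feature sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))    # pixels x stacked features (spectral, topographic, texture, SAR, index)
y = rng.integers(0, 6, size=500)  # six vegetation classes (placeholder labels)

# Recursive feature elimination driven by random-forest importances (RF-RFE)
selector = RFECV(RandomForestClassifier(n_estimators=200, random_state=0),
                 step=1, cv=5, scoring="accuracy")
X_sel = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

# Evaluate an MLP on the reduced feature set, mirroring the model comparison
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
print("MLP CV accuracy:", cross_val_score(mlp, X_sel, y, cv=5).mean())
```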
Figure 1. Location of study area.
Figure 2. Methodology of study (all factors and abbreviations are defined in the text).
Figure 3. Architecture of the 1D-CNN algorithm. Note: EBF stands for evergreen broadleaf forest, DBF for deciduous broadleaf forest, and CF for coniferous forest.
Figure 4. Architecture of the MLP algorithm.
Figure 5. Architecture of the RF algorithm.
Figure 6. Comparison of accuracy of machine learning algorithms in vegetation classification under different feature combinations.
Figure 7. Vegetation mapping of Mianzhu City based on 1D-CNN, MLP, and RF.
Figure 8. Area and percentage of each vegetation type based on the 1D-CNN, MLP, and RF algorithms.
Figure 9. Misclassification between different forest types using three models: (a) 1D-CNN, (b) MLP, and (c) RF. (Note: "1" stands for shrubland; "2" for evergreen broadleaf forest (EBF); "3" for deciduous broadleaf forest (DBF); "4" for coniferous forest (CF); "5" for grassland; and "6" for cropland.)
Figure 10. Comparison of localized vegetation classification results using the 1D-CNN, MLP, and RF algorithms.
21 pages, 4884 KiB  
Article
Evaluation of Machine Learning Algorithms for Classification of Visual Stimulation-Induced EEG Signals in 2D and 3D VR Videos
by Mingliang Zuo, Xiaoyu Chen and Li Sui
Brain Sci. 2025, 15(1), 75; https://doi.org/10.3390/brainsci15010075 - 16 Jan 2025
Viewed by 581
Abstract
Background: Virtual reality (VR) has become a transformative technology with applications in gaming, education, healthcare, and psychotherapy. The subjective experiences in VR vary based on the virtual environment's characteristics, and electroencephalography (EEG) is instrumental in assessing these differences. By analyzing EEG signals, researchers can explore the neural mechanisms underlying cognitive and emotional responses to VR stimuli. However, distinguishing EEG signals recorded under two-dimensional (2D) versus three-dimensional (3D) VR environments remains underexplored. Current research primarily utilizes power spectral density (PSD) features to differentiate between 2D and 3D VR conditions, but the potential of other feature parameters for enhanced discrimination is unclear. Additionally, the use of machine learning techniques to classify EEG signals from 2D and 3D VR using alternative features has not been thoroughly investigated, highlighting the need for further research to identify robust EEG features and effective classification methods. Methods: This study recorded EEG signals from participants exposed to 2D and 3D VR video stimuli to investigate the neural differences between these conditions. Key features extracted from the EEG data included PSD and common spatial patterns (CSPs), which capture frequency-domain and spatial-domain information, respectively. To evaluate classification performance, several classical machine learning algorithms were employed: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), naive Bayes, decision tree, AdaBoost, and a voting classifier. The study systematically compared the classification performance of PSD and CSP features across these algorithms, providing a comprehensive analysis of their effectiveness in distinguishing EEG signals in response to 2D and 3D VR stimuli. Results: The study demonstrated that machine learning algorithms can effectively classify EEG signals recorded while watching 2D and 3D VR videos. CSP features outperformed PSD in classification accuracy, indicating their superior ability to capture EEG signal differences between the VR conditions. Among the machine learning algorithms, the RF classifier achieved the highest accuracy at 95.02%, followed by KNN with 93.16% and SVM with 91.39%. The combination of CSP features with RF, KNN, and SVM consistently showed superior performance compared to other feature-algorithm combinations, underscoring the effectiveness of CSP and these algorithms in distinguishing EEG responses to different VR experiences. Conclusions: This study demonstrates that EEG signals recorded while watching 2D and 3D VR videos can be effectively classified using machine learning algorithms with extracted feature parameters. The findings highlight the superiority of CSP features over PSD in distinguishing EEG signals under different VR conditions, emphasizing CSP's value in VR-induced EEG analysis. These results expand the application of feature-based machine learning methods in EEG studies and provide a foundation for future research into the cortical activity underlying VR experiences, supporting the broader use of machine learning in EEG-based analyses.
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)
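As a rough illustration of the CSP-plus-classifier pipeline compared here, the sketch below uses MNE-Python's CSP transformer with scikit-learn classifiers. The epoch array and the 2D/3D labels are synthetic stand-ins, and the channel and sample counts are assumptions, not the study's recording setup.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32, 500))  # epochs x channels x time samples (synthetic EEG)
y = rng.integers(0, 2, size=120)     # 0 = 2D VR video, 1 = 3D VR video

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=300, random_state=0))]:
    # CSP log-variance features capture the spatial-domain information
    pipe = make_pipeline(CSP(n_components=6, log=True), StandardScaler(), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```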
Figure 1. Electrode lead position layout.
Figure 2. A space VR scene in a spacecraft, showing a participant in the lower-left corner of the image wearing an EEG cap and VR glasses while watching VR videos in an isolated room.
Figure 3. EEG signals from the O2 channel, recorded during the first 2 s of both 2D and 3D VR video viewing, displayed with the 2D signals at the top and the 3D signals at the bottom.
Figure 4. Power spectral maps: 2D (left) and 3D (right).
Figure 5. The contribution of various EEG channels in classification using CSP feature extraction.
Figure 6. Architectural structure of the RF algorithm.
Figure 7. The classification pipeline, including PSD and CSP measures, normalization, and machine learning classification.
Figure 8. t-SNE visualization of features extracted using PSD (a) and CSP (b) methods.
Figure 9. The mean confusion matrix components for all subjects using machine learning approaches.
Figure 10. The ROC curves using machine learning approaches.
Figure 11. The accuracy with standard deviation curves using machine learning approaches.
18 pages, 12205 KiB  
Article
An Open-Pit Mines Land Use Classification Method Based on Random Forest Using UAV: A Case Study of a Ceramic Clay Mine
by Yuanrong He, Yangfeng Lai, Bingning Chen, Yuhang Chen, Zhiying Xie, Xiaolin Yu and Min Luo
Minerals 2024, 14(12), 1282; https://doi.org/10.3390/min14121282 - 17 Dec 2024
Viewed by 584
Abstract
Timely and accurate land use information in open-pit mines is essential for environmental monitoring, ecological restoration planning, and promoting sustainable progress in mining regions. This study used high-resolution unmanned aerial vehicle (UAV) imagery, combined with object-oriented methods, optimal segmentation algorithms, and machine learning algorithms, to develop an efficient and practical method for classifying land use in open-pit mines. First, six land use categories were identified: stope, restoration area, building, vegetation area, arterial road, and waters. To achieve optimal scale segmentation, an image segmentation quality evaluation index was developed, emphasizing both high intra-object homogeneity and high inter-object heterogeneity. Second, spectral, index, texture, and spatial features were identified through the out-of-bag (OOB) error of random forest and recursive feature elimination (RFE) to create an optimal multi-feature fusion combination. Finally, the classification of open-pit mines was executed by leveraging the optimal feature combination, employing the random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN) classifiers in a comparative analysis. The experimental results indicated that classification at an appropriate segmentation scale extracts more accurate land use information. Feature selection effectively reduced model redundancy and improved classification accuracy, with spectral features having the most significant effect. The RF algorithm outperformed SVM and KNN, demonstrating superior handling of high-dimensional feature combinations: it achieved the highest overall accuracy (OA) of 90.77%, with the lowest misclassification and omission errors. The disaggregated data facilitate effective monitoring of ecological changes in open-pit mining areas, support the development of mining plans, and help predict the quality and heterogeneity of raw clay in some areas.
(This article belongs to the Special Issue Application of UAV and GIS for Geosciences, 2nd Edition)
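The OOB-error-guided feature elimination named above can be prototyped in a few lines of scikit-learn. This is a hedged sketch with synthetic segment features, not the paper's implementation; the feature counts and class labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))    # image objects x (spectral, index, texture, spatial) features
y = rng.integers(0, 6, size=400)  # stope, restoration, building, vegetation, road, waters

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]  # importance-ranked feature indices

# Recursively drop low-importance features and track out-of-bag (OOB) error
for k in (30, 20, 10, 5):
    rf_k = RandomForestClassifier(n_estimators=500, oob_score=True,
                                  random_state=0).fit(X[:, order[:k]], y)
    print(f"top-{k} features: OOB error = {1 - rf_k.oob_score_:.3f}")
```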
Figure 1. Study area: (a–c) location of the study area in China, Fujian province, and Fuzhou city, respectively; the base maps of (b) and (c) are digital elevation models. (d) The orthophoto and (e) the digital surface model of the study area.
Figure 2. Flowchart of this research.
Figure 3. Original image and visible vegetation index images: (a) original image, (b) RGRI, (c) NGRDI, (d) EXG, (e) RGBVI, (f) VDVI. (Images (b)–(f) were calculated using ENVI software, version 5.6.)
Figure 4. Random forest algorithm.
Figure 5. Support vector machine algorithm.
Figure 6. K-nearest neighbor algorithm.
Figure 7. The three upper images, from left to right, show segmentation scales 100, 285, and 400. The lower figure presents the image segmentation quality evaluation index chart.
Figure 8. Confusion matrices of the classification results: (a) the confusion matrix generated by the RF algorithm after multi-scale segmentation at a scale of 360; (b–d) the confusion matrices generated by the RF, SVM, and KNN algorithms, respectively, after optimal scale segmentation.
Figure 9. Feature importance ranking.
Figure 10. Correlation between the number of features and OOB error.
Figure 11. Classification results of the (a) RF, (b) SVM, and (c) KNN algorithms under the optimal segmentation scale and optimal feature combination.
Figure 12. (a) Misclassification error and (b) omission error of different classification algorithms.
19 pages, 24741 KiB  
Article
Estimation of Soil Salinity by Combining Spectral and Texture Information from UAV Multispectral Images in the Tarim River Basin, China
by Jiaxiang Zhai, Nan Wang, Bifeng Hu, Jianwen Han, Chunhui Feng, Jie Peng, Defang Luo and Zhou Shi
Remote Sens. 2024, 16(19), 3671; https://doi.org/10.3390/rs16193671 - 1 Oct 2024
Viewed by 1228
Abstract
Texture features have been consistently overlooked in digital soil mapping, especially in soil salinization mapping. This study aims to clarify how to leverage texture information for monitoring soil salinization through remote sensing techniques. We propose a novel method for estimating soil salinity content (SSC) that combines spectral and texture information from unmanned aerial vehicle (UAV) images. Reflectance, spectral indices, and one-dimensional (OD) texture features were extracted from UAV images. Building on the one-dimensional texture features, we constructed two-dimensional (TD) and three-dimensional (THD) texture indices. Recursive Feature Elimination (RFE) was used for feature selection. Models for soil salinity estimation were built using three distinct methodologies: Random Forest (RF), Partial Least Squares Regression (PLSR), and Convolutional Neural Network (CNN). Spatial distribution maps of soil salinity were then generated for each model. The effectiveness of the proposed method was confirmed using 240 surface soil samples gathered from a sparsely vegetated arid region in Xinjiang, northwest China. Among all texture indices, TDTeI1 had the highest correlation with SSC (|r| = 0.86). After adding multidimensional texture information, the R2 of the RF model increased from 0.76 to 0.90, an improvement of 18%. Among the three models, the RF model outperformed PLSR and CNN. The RF model combining spectral and texture information (SOTT) achieved an R2 of 0.90, an RMSE of 5.13 g kg−1, and an RPD of 3.12. Texture information contributed 44.8% to the soil salinity prediction, with the TD and THD texture indices contributing 19.3% and 20.2%, respectively. This study confirms the great potential of introducing texture information for monitoring soil salinity in arid and semi-arid regions.
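The abstract does not spell out the TD/THD index formulas, so the sketch below only illustrates the general idea under an assumed construction: extract one-dimensional GLCM texture features with scikit-image and combine two of them into a normalized-difference-style index. Band values and the pairing of features are placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
band = (rng.random((64, 64)) * 255).astype(np.uint8)  # one quantized UAV band (placeholder)

# One-dimensional (OD) texture features from a grey-level co-occurrence matrix
glcm = graycomatrix(band, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]

# Assumed two-dimensional (TD) index: a normalized difference of two OD features
td_index = (contrast - homogeneity) / (contrast + homogeneity)
print("TD texture index:", round(float(td_index), 3))
```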
Figure 1. Flowchart of the study.
Figure 2. The survey area: (a) location of Xinjiang Province in China, (b) orthophoto map of the UAV and the locations of sampling points, (c) DJI Phantom 4 Pro Multispectral Edition, (d) calibrated reflectance panel captured by a multispectral camera, (e) the soil type, and (f) the main vegetation types.
Figure 3. Feature window size.
Figure 4. The estimation accuracy of RF at different feature window sizes.
Figure 5. Correlation plot between spectral information and SSC.
Figure 6. Correlation plots between OD texture features and SSC for (a) the blue band, (b) the green band, (c) the red band, (d) the red-edge band, and (e) the near-infrared band.
Figure 7. Correlation plot between the TD texture index and SSC.
Figure 8. Optimal correlation plot between the THD texture index and SSC.
Figure 9. Evaluation of all feature-selected datasets and the accuracy of the three models. SPI, OD, TD, and THD represent the spectral information, one-dimensional texture index, two-dimensional texture index, and three-dimensional texture index, respectively. SO, SOT, and SOTT represent the spectral information + OD texture index, + OD + TD texture indices, and + OD + TD + THD texture indices, respectively.
Figure 10. SSC maps derived from the RF, CNN, and PLSR models.
Figure 11. Variable importance of feature variables using the RF model: (a) importance of individual variables; (b) importance of variable types.
23 pages, 39394 KiB  
Article
Fine-Scale Mangrove Species Classification Based on UAV Multispectral and Hyperspectral Remote Sensing Using Machine Learning
by Yuanzheng Yang, Zhouju Meng, Jiaxing Zu, Wenhua Cai, Jiali Wang, Hongxin Su and Jian Yang
Remote Sens. 2024, 16(16), 3093; https://doi.org/10.3390/rs16163093 - 22 Aug 2024
Cited by 2 | Viewed by 2207
Abstract
Mangrove ecosystems play an irreplaceable role in coastal environments by providing essential ecosystem services. Diverse mangrove species have different functions due to their morphological and physiological characteristics. A precise spatial distribution map of mangrove species is therefore crucial for biodiversity maintenance and environmental conservation of coastal ecosystems. Traditional satellite data are limited in fine-scale mangrove species classification due to low spatial resolution and limited spectral information. This study employed unmanned aerial vehicle (UAV) technology to acquire high-resolution multispectral and hyperspectral imagery of mangrove forests in Guangxi, China. We leveraged advanced algorithms, including RFE-RF for feature selection and machine learning models (Adaptive Boosting (AdaBoost), eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM)), to achieve mangrove species mapping with high classification accuracy. The study assessed the classification performance of these four machine learning models for two types of image data (UAV multispectral and hyperspectral imagery). The results demonstrated that hyperspectral imagery was superior to multispectral data, offering enhanced noise reduction and classification performance. Hyperspectral imagery produced mangrove species classification with overall accuracy (OA) higher than 91% across the four machine learning models. LightGBM achieved the highest OA of 97.15% and kappa coefficient (Kappa) of 0.97 based on hyperspectral imagery. Dimensionality reduction and feature extraction techniques were effectively applied to the UAV data, with vegetation indices proving particularly valuable for species classification. The present research underscores the effectiveness of UAV hyperspectral images with machine learning models for fine-scale mangrove species classification. This approach has the potential to significantly improve ecological management and conservation strategies, providing a robust framework for monitoring and safeguarding these essential coastal habitats.
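A hedged sketch of the RFE-RF selection followed by a LightGBM classifier, as summarized above; this is not the study's code, and the band count, species count, and sample sizes are illustrative assumptions.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 120))   # pixels x hyperspectral bands / vegetation indices
y = rng.integers(0, 5, size=600)  # mangrove species labels (placeholder count)

# RFE-RF: recursive feature elimination guided by random-forest importances
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
          n_features_to_select=20, step=5)
X_sel = rfe.fit_transform(X, y)

# LightGBM on the selected features, as in the best-performing model above
lgbm = LGBMClassifier(n_estimators=300, random_state=0)
print("OA (5-fold CV):", cross_val_score(lgbm, X_sel, y, cv=5).mean())
```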
Figure 1. Study area and UAV-based visible images ((A) Yingluo Bay, (B) Pearl Bay).
Figure 2. Workflow diagram illustrating the methodology of this study.
Figure 3. Mangrove species classification comparison of user's and producer's accuracies obtained by four learning models based on multi- and hyper-spectral images in Yingluo Bay.
Figure 4. Mangrove species classification comparison of user's and producer's accuracies obtained by the LightGBM learning model based on the multi- and hyper-spectral images in Pearl Bay.
Figure 5. The mangrove species classification maps using four learning models (LightGBM, RF, XGBoost, and AdaBoost) based on UAV multispectral images (a–d) and hyperspectral images (e–h), respectively, in Yingluo Bay.
Figure 6. The UAV visual image covering Yingluo Bay and three subsets (A–C) of the UAV multispectral and hyperspectral image classification results based on the LightGBM learning model.
Figure 7. The mangrove species classification maps using the LightGBM learning model based on the UAV multispectral image (a) and hyperspectral image (b) in Pearl Bay.
Figure 8. The UAV visual image covering Pearl Bay and three subsets (A–C) of the UAV multispectral and hyperspectral image classification results using the LightGBM learning model.
Figure A1. Normalized confusion matrices of mangrove species classification using four learning models (AdaBoost, XGBoost, RF, and LightGBM) based on UAV multi- and hyper-spectral images in Yingluo Bay.
17 pages, 13631 KiB  
Article
Ensemble Machine Learning on the Fusion of Sentinel Time Series Imagery with High-Resolution Orthoimagery for Improved Land Use/Land Cover Mapping
by Mukti Ram Subedi, Carlos Portillo-Quintero, Nancy E. McIntyre, Samantha S. Kahl, Robert D. Cox, Gad Perry and Xiaopeng Song
Remote Sens. 2024, 16(15), 2778; https://doi.org/10.3390/rs16152778 - 30 Jul 2024
Cited by 1 | Viewed by 2266
Abstract
In the United States, several land use and land cover (LULC) data sets are available based on satellite data, but these data sets often fail to accurately represent features on the ground. Alternatively, detailed mapping of heterogeneous landscapes for informed decision-making is possible using high-spatial-resolution orthoimagery from the National Agricultural Imagery Program (NAIP). However, large-area mapping at this resolution remains challenging due to radiometric differences among scenes, landscape heterogeneity, and computational limitations. Various machine learning (ML) techniques have shown promise in improving LULC maps. The primary purposes of this study were to evaluate bagging (Random Forest, RF), boosting (Gradient Boosting Machines [GBM] and extreme gradient boosting [XGB]), and stacking ensemble ML models. We used these techniques on a time series of Sentinel 2A data and NAIP orthoimagery to create a LULC map of a portion of Irion and Tom Green counties in Texas (USA). We created several spectral indices, structural variables, and geometry-based variables, reducing the dimensionality of features generated on the Sentinel and NAIP data. We then compared accuracy based on random cross-validation, which does not account for spatial autocorrelation, and target-oriented cross-validation, which accounts for the spatial structure of the training data set. Comparison of the two showed that autocorrelation in the training data led to accuracy overestimation ranging from 2% to 3.5%. The XGB-stacked ensemble on base learners (RF, XGB, and GBM) improved model performance over the individual base learners. We show that meta-learners are just as sensitive to overfitting as base models, as these algorithms are not designed to account for spatial information. Finally, we show that the fusion of Sentinel 2A data with NAIP data improves land use/land cover classification using geographic object-based image analysis.
(This article belongs to the Special Issue Mapping Essential Elements of Agricultural Land Using Remote Sensing)
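The stacking setup named above (RF, GBM, and XGB base learners under an XGB meta-learner) can be sketched directly with scikit-learn and xgboost; the feature stack and class labels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 25))    # object features from the Sentinel-2A/NAIP stack (synthetic)
y = rng.integers(0, 4, size=500)  # LULC classes (placeholder)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0)),
                ("xgb", XGBClassifier(n_estimators=200, random_state=0))],
    final_estimator=XGBClassifier(n_estimators=100, random_state=0),  # XGB meta-learner
    cv=5)

# Note: plain random CV ignores spatial autocorrelation and, per the abstract,
# can overestimate accuracy by 2-3.5% relative to target-oriented CV.
print("stacked OA (random CV):", cross_val_score(stack, X, y, cv=5).mean())
```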
Figure 1. Study area in Irion and Tom Green counties in Texas, with a red, green, and blue composite of a Sentinel 2A image from June 2018.
Figure 2. A schematic overview of stacking ensemble machine learning using bagging and boosting algorithms.
Figure 3. Confusion matrices produced on the holdout (20%) of the total training data using RF (A), GBM (B), XGB (C), and stacking (XGB) (D) classifiers.
Figure 4. Box plot of overall accuracy across folds in the base-learner and meta-learner models using random cross-validation (random, in red) and target-oriented cross-validation (LLO, in blue). The horizontal black line in each box plot indicates the median and the crosshairs indicate the mean.
Figure 5. Permutation-based feature importance for RF (A), GBM (B), XGB (C), and stacked (D) models in target-oriented cross-validation.
Figure 6. Classified map of the study area based on the stacking model (meta-learner) using target-oriented cross-validation and a geographic object-based image analysis (GEOBIA) approach.
30 pages, 4346 KiB  
Article
Exploiting Soil and Remote Sensing Data Archives for 3D Mapping of Multiple Soil Properties at the Swiss National Scale
by Felix Stumpf, Thorsten Behrens, Karsten Schmidt and Armin Keller
Remote Sens. 2024, 16(15), 2712; https://doi.org/10.3390/rs16152712 - 24 Jul 2024
Cited by 1 | Viewed by 1387
Abstract
Soils play a central role in ecosystem functioning, and thus, mapped soil property information is indispensable for supporting sustainable land management. Digital Soil Mapping (DSM) provides a framework to spatially estimate soil properties. However, broad-scale DSM remains challenging because of non-purposively sampled soil data, large data volumes for processing extensive soil covariates, and high model complexities due to spatially varying soil–landscape relationships. This study presents a three-dimensional DSM framework for Switzerland, targeting the soil properties of clay content (Clay), organic carbon content (SOC), pH value (pH), and potential cation exchange capacity (CECpot). The DSM approach is based on machine learning and a comprehensive exploitation of soil and remote sensing data archives. Quantile Regression Forest was applied to link the soil sample data from a national soil database with covariates derived from a LiDAR-based elevation model, from climate raster data, and from multispectral raster time series based on satellite imagery. The covariate set comprises spatially multiscale terrain attributes, climate patterns and their temporal variation, temporally multiscale land use features, and spectral bare soil signatures. Soil data and predictions were evaluated with respect to different landcovers and depth intervals. All reference soil data sets were found to be spatially clustered towards croplands, showing an increasing sample density from lower to upper depth intervals. According to the R2 value derived from independent data, the overall model accuracy amounts to 0.69 for Clay, 0.64 for SOC, 0.76 for pH, and 0.72 for CECpot. Reduced model accuracies were found to be accompanied by soil data sets showing limited sample sizes (e.g., CECpot), uneven statistical distributions (e.g., SOC), and low spatial sample densities (e.g., woodland subsoils). Multiscale terrain covariates were highly influential for all models; climate covariates were particularly important for the Clay model; multiscale land use covariates showed enhanced importance for modeling pH; and bare soil reflectance was a major driver in the SOC and CECpot models.
(This article belongs to the Special Issue Recent Advances in Remote Sensing of Soil Science)
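Scikit-learn has no built-in Quantile Regression Forest, so the sketch below only approximates QRF-style prediction intervals by taking empirical quantiles over per-tree predictions of an ordinary random forest; true QRF instead weights training targets by leaf membership. All data here are synthetic, and the covariate layout is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 15))                # covariates: terrain, climate, bare soil, land use
y = 20 + 5 * X[:, 0] + rng.normal(0, 3, 800)  # e.g. clay content in % (synthetic)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Empirical quantiles over per-tree predictions as a crude stand-in for QRF
per_tree = np.stack([tree.predict(X[:5]) for tree in rf.estimators_])
q05, q50, q95 = np.percentile(per_tree, [5, 50, 95], axis=0)
print("median prediction:", q50.round(1))
print("90% interval width:", (q95 - q05).round(1))
```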
Figure 1. Switzerland with respect to landcover and topography, as well as to precipitation and temperature means across 1980 to 2022.
Figure 2. Frequency distributions of the soil sample data ((A) Clay in %; (B) SOC in %; (C) pH in CaCl2; (D) CECpot in mmc kg−1), stratified by depth intervals ('Topsoil', 'Subsoil', and 'Deep Subsoil') and landcover classes (brown: 'Cropland'; light green: 'Grassland'; dark green: 'Woodland'; n: count of samples per stratum).
Figure 3. Clay model characteristics. (A) A density scatterplot of observed vs. predicted values based on five independent submodels and independent validation data (dashed line indicates the 1:1 line; blue-yellow-red color scale indicates low to high density); (B) a barplot of the importance of covariates related to terrain, climate, bare soil, and land use; (C) a topsoil prediction map at the national scale; and (D) a zoomed-in map including RPI uncertainty (dots indicate calibration data).
Figure 4. Boxplots of Clay predictions by landcover classes ('Total area', 'Cropland', 'Grassland', and 'Woodland') and depth intervals ('Topsoil', 'Subsoil', and 'Deep Subsoil').
Figure 5. SOC model characteristics; panels (A)–(D) as described for Figure 3.
Figure 6. Boxplots of SOC predictions by landcover classes and depth intervals, as in Figure 4.
Figure 7. pH model characteristics; panels (A)–(D) as described for Figure 3.
Figure 8. Boxplots of pH predictions by landcover classes and depth intervals, as in Figure 4.
Figure 9. CECpot model characteristics; panels (A)–(D) as described for Figure 3.
Figure 10. Boxplots of CECpot predictions by landcover classes and depth intervals, as in Figure 4.
27 pages, 4028 KiB  
Article
Evaluation and Selection of Multi-Spectral Indices to Classify Vegetation Using Multivariate Functional Principal Component Analysis
by Simone Pesaresi, Adriano Mancini, Giacomo Quattrini and Simona Casavecchia
Remote Sens. 2024, 16(7), 1224; https://doi.org/10.3390/rs16071224 - 30 Mar 2024
Cited by 3 | Viewed by 1882
Abstract
The identification, classification and mapping of different plant communities and habitats is of fundamental importance for defining biodiversity monitoring and conservation strategies. Today, the availability of high temporal, spatial and spectral data from remote sensing platforms provides dense time series over different spectral bands. In supervised mapping, time series based on classical vegetation indices (e.g., NDVI, GNDVI, …) are usually used as input features, but the selection of the best index or set of indices (which guarantees the best performance) is still based on human experience and is also influenced by the study area. In this work, several different time series, based on Sentinel-2 images, were created, exploring new combinations of bands that extend the classic basic formulas such as the normalized difference index. Multivariate Functional Principal Component Analysis (MFPCA) was used to simultaneously decompose the multiple time series. The principal multivariate seasonal spectral variations identified (MFPCA scores) were classified using a Random Forest (RF) model. The MFPCA and RF classifications were nested in a forward selection strategy to identify the proper and minimal set of indices' (dense) time series that produced the most accurate supervised classification of plant communities and habitats. The results can be summarized as follows: (i) the selection of the best set of time series is specific to the study area and the habitats involved; (ii) well-known and widely used indices such as the NDVI are not selected as the best-performing indices; instead, time series based on original indices (in terms of formula or combination of bands) or underused indices (such as those derivable from the visible bands) are selected; (iii) MFPCA efficiently reduces the dimensionality of the data (multiple dense time series), providing ecologically interpretable results; it represents an important tool for habitat modelling, outperforming conventional approaches that consider only discrete time series.
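As a loose, assumption-heavy proxy for this pipeline (MFPCA scores classified by RF inside a forward selection loop), the sketch below substitutes ordinary PCA on concatenated index time series for a true MFPCA on smoothed curves; index names, array shapes, and class labels are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_plots, n_weeks = 172, 52
series = {name: rng.normal(size=(n_plots, n_weeks))  # weekly index curves per plot
          for name in ("NDVI", "GNDVI", "NDWI")}     # candidate indices (placeholders)
y = rng.integers(0, 4, size=n_plots)                 # vegetation types (placeholder)

def cv_accuracy(names):
    X = np.hstack([series[n] for n in names])        # joint decomposition of the set
    scores = PCA(n_components=10).fit_transform(X)   # proxy for MFPCA scores
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    return cross_val_score(rf, scores, y, cv=5).mean()

# Greedy forward selection over candidate index time series
selected, remaining = [], set(series)
while remaining:
    best = max(remaining, key=lambda n: cv_accuracy(selected + [n]))
    if selected and cv_accuracy(selected + [best]) <= cv_accuracy(selected):
        break  # stop once adding another index no longer helps
    selected.append(best)
    remaining.remove(best)
print("selected indices:", selected)
```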
Figure 1. Spectral variations in remotely sensed images over time. (a) Finite discrete time series: a typical representation of remotely sensed data captured at discrete points in time (raw data); each point on the graph represents data from a specific moment. (b,c) Spectral variations in pixels as functions of time (smoothed representation of variations), showing how individual pixel spectral characteristics evolve over time. In detail, (b) defines a univariate functional space describing the spectral variations in pixels characterized by a single band or index, such as NDVI, while (c) shows spectral variations in pixels characterized by multiple bands or indices, such as NDVI, GNDVI and NDWI, defining a multivariate functional space that allows studying how different aspects of vegetation change together over time.
Figure 2. Starting from a set of Sentinel-2 images, a processing pipeline extracts the most relevant vegetation indices that could be used to characterize the study area.
Figure 3. The two study areas: (a) national and (b) regional overview; S1 is the Frasassi Gorge, and S2 is Mount Conero. (c) Panoramic image of the Frasassi Gorge area. (d) Panoramic image of the Mount Conero area. (e) Reference data on the Digital Elevation Model with the boundary of the Frasassi Gorge Special Area of Conservation (SAC IT5320003). (f) Reference data on the Digital Elevation Model with the boundary of the Mount Conero area of interest.
Figure 4. Example of derived time series considering mean weekly annual Sentinel-2 GNDVI variations (2017–2020) of the 172 plots of the Mount Conero study area: (a) the discrete mean weekly time series; (b) the weekly functional cyclic cubic spline representation of the spectral plot variations. The letters at the top correspond to the initials of the months of the year.
Figure 5. Comparison of Overall Accuracy (OA) among different model strategies for the two study areas. The dashed line represents the OA achieved by the baseline B model using a pure machine learning approach. M, mF and Ms are three hybrid model strategies combining Random Forest with Functional Data Analysis (hybrid statistical-functional machine learning approach). (a) Mount Conero area. (b) Frasassi Gorge area.
Figure 6. Principal Component biplot relating properties of accuracy and model complexity (black arrows) to the different supervised classification models (B, mF, M, Ms) applied to all distinct formulas. (a) Mount Conero area: PCA axis 1 accounts for 49.5% of the multivariate variation and axis 2 for 22.5%. (b) Frasassi Gorge area: PCA axis 1 accounts for 43.8% and axis 2 for 17.0%. Labels: OA, Overall Accuracy; sd, standard deviation; pr, number of input variables selected; mtry, final Random Forest mtry parameter; v1–v8 and c1–c4 are Producer Accuracy of vegetation types (listed in Table 1) for the Frasassi Gorge and Mount Conero areas, respectively.
18 pages, 4479 KiB  
Article
Forest Canopy Fuel Loads Mapping Using Unmanned Aerial Vehicle High-Resolution Red, Green, Blue and Multispectral Imagery
by Álvaro Agustín Chávez-Durán, Mariano García, Miguel Olvera-Vargas, Inmaculada Aguado, Blanca Lorena Figueroa-Rangel, Ramón Trucíos-Caciano and Ernesto Alonso Rubio-Camacho
Forests 2024, 15(2), 225; https://doi.org/10.3390/f15020225 - 24 Jan 2024
Cited by 6 | Viewed by 1602
Abstract
Canopy fuels determine the characteristics of the entire complex of forest fuels due to their constant changes triggered by the environment; therefore, the development of appropriate strategies for fire management and fire risk reduction requires an accurate description of canopy forest fuels. This paper presents a method for mapping the spatial distribution of canopy fuel loads (CFLs) in alignment with their natural variability and three-dimensional spatial distribution. The approach leverages an object-based machine learning framework with UAV multispectral data and photogrammetric point clouds. The proposed method was developed in the mixed forest of the natural protected area of "Sierra de Quila", Jalisco, Mexico. Structural variables derived from photogrammetric point clouds, along with spectral information, were used in an object-based Random Forest model to accurately estimate CFLs, yielding R2 = 0.75, RMSE = 1.78 Mg, and an average relative bias of 18.62%. Canopy volume was the most significant explanatory variable, achieving mean decrease in impurity values greater than 80%, while the combination of texture and vegetation indices presented importance values close to 20%. Our modelling approach enables the accurate estimation of CFLs, accounting for the ecological context that governs their dynamics and spatial variability. The high precision achieved, at a relatively low cost, encourages constant updating of forest fuel maps to enable researchers and forest managers to streamline decision-making on fuel and forest fire management.
(This article belongs to the Topic Application of Remote Sensing in Forest Fire)
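A minimal sketch (assumptions noted, not the authors' code) of the object-based RF regression described above: per-segment structural and spectral predictors against field-measured canopy fuel loads, with the mean-decrease-in-impurity importances the abstract cites. Feature names and values are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = ["canopy_volume", "NDVI", "EVI2", "red_texture", "red_edge_texture"]
X = rng.normal(size=(90, len(names)))       # one row per segmented crown object
y = 5 + 2 * X[:, 0] + rng.normal(0, 1, 90)  # canopy fuel load in Mg (synthetic)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Mean-decrease-in-impurity importances, the measure cited in the abstract
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:18s} {imp:.2f}")
```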
Figure 1. Map showing the study area location, sampling plots, and Homogeneous Response Area (HRA). Projected coordinate system: Universal Transverse Mercator Zone 13 North (UTM 13N).
Figure 2. Flowchart for the estimation of the spatial distribution of CFLs.
Figure 3. Field measurements and tree canopies overview. (A) Tree diameter measurement. (B) Tree height measurement. (C) Tree tagging. (D) UAV RGB image from plot P1. (E) UAV RGB image from plot P2. (F) UAV RGB image from plot P3.
Figure 4. Spatial distribution of trees and segments in the plots, where each segment represents a group of trees identified through TREETOP analysis.
Figure 5. Boxplot showing median and quartiles for canopy fuel loads (Mg); canopy volume (m³); Normalized Difference Vegetation Index (NDVI); 2-band Enhanced Vegetation Index (EVI2); red band texture; and red-edge band texture.
Figure 6. Model verification through linear regression between field-measured CFL and RF-estimated CFL.
Figure 7. CFL spatial distribution: (A–C) reference data; (D–F) estimates from the RF model; and (G–I) absolute bias.
19 pages, 5108 KiB  
Article
Individual Tree Species Identification and Crown Parameters Extraction Based on Mask R-CNN: Assessing the Applicability of Unmanned Aerial Vehicle Optical Images
by Zongqi Yao, Guoqi Chai, Lingting Lei, Xiang Jia and Xiaoli Zhang
Remote Sens. 2023, 15(21), 5164; https://doi.org/10.3390/rs15215164 - 29 Oct 2023
Cited by 4 | Viewed by 2078
Abstract
Automatic, efficient, and accurate individual tree species identification and crown parameter extraction are of great significance for biodiversity conservation and ecosystem function assessment. UAV multispectral data have the advantage of low cost and easy access, and hyperspectral data can finely characterize spatial and spectral features. As such, they have attracted extensive attention in the field of forest resource investigation, but their applicability for end-to-end individual tree species identification is unclear. Based on the Mask R-CNN instance segmentation model, this study utilized UAV hyperspectral images to generate spectral thinning data, spectral dimensionality reduction data, and simulated multispectral data, thereby evaluating the importance of high-resolution spectral information, the effectiveness of PCA dimensionality reduction of hyperspectral data, and the feasibility of multispectral data for individual tree identification. The results showed that the individual tree species identification accuracy of spectral thinning data was positively correlated with the number of bands, and full-band hyperspectral data outperformed the other spectral thinning data and PCA dimensionality reduction data, with Precision, Recall, and F1-score of 0.785, 0.825, and 0.802, respectively. The simulated multispectral data were also effective in identifying individual tree species, among which the best result was realized through the combination of Green, Red, and NIR bands, with Precision, Recall, and F1-score of 0.797, 0.836, and 0.814, respectively. Furthermore, using Green–Red–NIR data as input, the tree crown area and width were predicted with RMSEs of 3.16 m² and 0.51 m, respectively, along with rRMSEs of 0.26 and 0.12. This study indicates that the Mask R-CNN model with UAV optical images is a novel solution for identifying individual tree species and extracting crown parameters, which can provide practical technical support for sustainable forest management and ecological diversity monitoring.
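For readers wanting to experiment with Mask R-CNN on three-band imagery like the Green–Red–NIR combination above, the sketch below follows the standard torchvision fine-tuning pattern: load a pretrained model and swap its box and mask heads for a custom class count. The species count is an assumption, and this is not the authors' training code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 1 + 4  # background + an assumed four tree species
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box and mask heads for the custom class count
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])[0]  # one 3-band (e.g. Green-Red-NIR) tile
print(out["masks"].shape, out["labels"][:5])
```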
Figure 1. Overview of the study area. (a) Location of the study area; (b) location of hyperspectral data distribution; (c–g) true color display of hyperspectral data.
Figure 2. The framework of the Mask R-CNN model.
Figure 3. Schematic diagram of spectral thinning processing.
Figure 4. Spectral response curve of the MCA sensor.
Figure 5. Spectral reflectance curves of canopy with different spectral thinning data: (a) 1/1 bands; (b) 1/2 bands; (c) 1/4 bands; (d) 1/8 bands; (e) 1/16 bands.
Figure 6. Confusion matrices for individual tree species identification with different hyperspectral processing data. The number in each cell indicates the number of trees; the color of the cell refers to its ratio to the number of all ground truth samples (the sum of the row). (a–e) Experiments A1–A5; (f) Experiment B.
Figure 7. Confusion matrices for individual tree species identification with three multispectral data sets: (a–c) Experiments C1–C3.
Figure 8. Results of individual tree species identification in different typical regions with Green–Red–NIR data: (a1–a4) ground truth; (b1–b4) prediction results.
Figure 9. Prediction maps of individual tree species identification using Green–Red–NIR data.
Figure 10. Scatter plots of predicted and measured tree crown parameters, where the red line indicates the 1:1 line: (a) crown projection area; (b) crown width.
Figure 11. PCA dimensionality reduction results of hyperspectral data: (a) true color display of hyperspectral data; (b) false color image of PCA data; (c–e) grayscale maps of the first three principal components.
12 pages, 3827 KiB  
Communication
Three-Dimensional Mapping of Habitats Using Remote-Sensing Data and Machine-Learning Algorithms
by Meisam Amani, Fatemeh Foroughnia, Armin Moghimi, Sahel Mahdavi and Shuanggen Jin
Remote Sens. 2023, 15(17), 4135; https://doi.org/10.3390/rs15174135 - 23 Aug 2023
Cited by 5 | Viewed by 2338
Abstract
Progress toward habitat protection goals can be effectively assessed using satellite imagery and machine-learning (ML) models at various spatial and temporal scales. In this regard, habitat types and landscape structures can be discriminated using remote-sensing (RS) datasets. However, most existing research in three-dimensional (3D) habitat mapping primarily relies on same/cross-sensor features, such as those derived from multibeam Light Detection And Ranging (LiDAR), hydrographic LiDAR, and aerial images, often overlooking the potential benefits of multi-sensor data integration. To address this gap, this study introduced a novel approach to creating 3D habitat maps using high-resolution multispectral images and a LiDAR-derived Digital Surface Model (DSM) coupled with an object-based Random Forest (RF) algorithm. LiDAR-derived products were also used to improve the accuracy of the habitat classification, especially for habitat classes with similar spectral characteristics but different heights. Two study areas in the United Kingdom (UK) were chosen to explore the accuracy of the developed models. The overall accuracies for the two study areas were high (91% and 82%), indicative of the high potential of the developed RS method for 3D habitat mapping. Overall, a combination of high-resolution multispectral imagery and LiDAR data helped separate different habitat types and provided reliable 3D information.
(This article belongs to the Special Issue Advances in the Application of Lidar)
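A simplified sketch (not the study's workflow) of the core idea above: append a LiDAR-DSM-derived height feature to per-object spectral statistics so that height separates spectrally similar habitats. All arrays are synthetic stand-ins for segment-level statistics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectral = rng.normal(size=(300, 8))  # per-object means of 8 Worldview-2 bands (synthetic)
height = rng.normal(size=(300, 1))    # per-object height from the LiDAR DSM (synthetic)
y = rng.integers(0, 6, size=300)      # habitat classes (placeholder)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("spectral only:  ", cross_val_score(rf, spectral, y, cv=5).mean())
print("spectral + DSM: ", cross_val_score(rf, np.hstack([spectral, height]), y, cv=5).mean())
```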
Figure 1. The study areas and the distribution of in situ data.
Figure 2. Flowchart of the suggested approach for 3D habitat mapping.
Figure 3. (a) Worldview-2 multispectral imagery; (b) 2D habitat maps; and (c) 3D habitat maps obtained from the suggested approach for the Colt Crag and Grassholme study areas.
Figure 4. The percentage of the area of classified habitats in the (a) Colt Crag and (b) Grassholme reservoirs.
Figure 5. Producer and user accuracies for the habitat classes in the (a) Colt Crag and (b) Grassholme reservoirs using a combination of the Worldview-2 satellite image and LiDAR data.
Figure 6. Producer and user accuracies for the habitat classes in the Colt Crag reservoir using only the Worldview-2 satellite image.
28 pages, 9126 KiB  
Article
Early Detection of Wheat Yellow Rust Disease and Its Impact on Terminal Yield with Multi-Spectral UAV-Imagery
by Canh Nguyen, Vasit Sagan, Juan Skobalski and Juan Ignacio Severo
Remote Sens. 2023, 15(13), 3301; https://doi.org/10.3390/rs15133301 - 27 Jun 2023
Cited by 20 | Viewed by 4306
Abstract
The food production system is more vulnerable to diseases than ever, and the threat is increasing in an era of climate change that creates more favorable conditions for emerging diseases. Fortunately, scientists and engineers are making great strides to introduce farming innovations to tackle the challenge. Unmanned aerial vehicle (UAV) remote sensing is among these innovations and is widely applied for crop health monitoring and phenotyping. This study demonstrated the versatility of aerial remote sensing in diagnosing yellow rust infection in spring wheat in a timely manner and determining an intervenable period to prevent yield loss. A small UAV equipped with an aerial multispectral sensor periodically flew over, and collected remotely sensed images of, an experimental field in Chacabuco (−34.64; −60.46), Argentina during the 2021 growing season. Post-collection plot-level images underwent a thorough feature-engineering process, handcrafting disease-centric vegetation indices (VIs) from the spectral dimension and grey-level co-occurrence matrix (GLCM) texture features from the spatial dimension. A machine learning pipeline comprising a support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP) was constructed to identify locations of healthy, mildly infected, and severely infected plots in the field. A custom 3-dimensional convolutional neural network (3D-CNN) relying on a feature-learning mechanism was an alternative prediction method. The study found red-edge (690–740 nm) and near-infrared (NIR) (740–1000 nm) to be vital spectral bands for distinguishing healthy and severely infected wheat. The carotenoid reflectance index 2 (CRI2), soil-adjusted vegetation index 2 (SAVI2), and GLCM contrast texture at an optimal distance d = 5 and angular direction θ = 135° were the most correlated features. The 3D-CNN-based wheat disease monitoring achieved 60% detection accuracy as early as 40 days after sowing (DAS), when crops were tillering, increasing to 71% and 77% at the later booting and flowering stages (100–120 DAS), and reaching a peak accuracy of 79% for the spectral-spatio-temporal fused data model. The success of early disease diagnosis from low-cost multispectral UAVs not only sheds new light on crop breeding and pathology but also aids crop growers by informing them of a prevention period that could potentially preserve 3–7% of the yield at the 95% confidence level.
(This article belongs to the Section Biogeosciences Remote Sensing)
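The handcrafted GLCM features in this abstract are straightforward to reproduce. Below is a minimal sketch (not the authors' code) that computes the GLCM contrast texture at the reported optimum (distance d = 5, angle θ = 135°) with scikit-image; the 32-level quantization and the random input band are illustrative assumptions.

```python
# Minimal sketch: GLCM contrast at d=5, theta=135 deg, as described in the abstract.
# The single-band input (e.g., NIR reflectance in [0, 1]) and the 32-level
# quantization are illustrative assumptions, not the authors' settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(band: np.ndarray, levels: int = 32) -> float:
    # Quantize reflectance to the integer grey levels graycomatrix requires.
    q = np.clip((band * (levels - 1)).astype(np.uint8), 0, levels - 1)
    # Symmetric, normalized GLCM at distance 5 and angle 135 degrees (radians).
    glcm = graycomatrix(q, distances=[5], angles=[3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

nir = np.random.rand(64, 64)  # placeholder for a plot-level NIR band
print(glcm_contrast(nir))
```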
Show Figures

Figure 1: The study site with spring wheat grown in field plots. Inset maps show the location in Chacabuco (−34.64; −60.46), Argentina.
Figure 2: The field of interest (A) and the aerial remote sensing platform, including (B) the DJI P4 Multispectral (DJI Corporation, Shenzhen, China), (C) the sunlight irradiance sensor, and (D) the multispectral sensor mounted to the airframe by an automatic stabilizer.
Figure 3: Close-view pictures of wheat plots labeled in three categories: healthy (left), mild infection (middle), and severe infection (right).
Figure 4: Random chip images and their spectral profiles for healthy, mild, and severe infection over different dates (date format: MM/DD/YYYY).
Figure 5: Disease identification by the deep 3D convolutional neural network (CNN).
Figure 6: Pairwise plots of the five normalized spectral bands (Blue, Green, Red, Red-edge, NIR) by disease status (healthy, mild, and severe infection) over the temporal dimension of (a) 30 August, (b) 24 September, (c) 5 October, (d) 7 October, (e) 26 October, and (f) 17 November.
Figure 7: A dimension-reduction principal component analysis (PCA) transformed the data from a high-dimensional, temporal, multicollinear space to a low-dimensional, orthogonal space of 10 principal components (see the sketch after this figure list). (a) The 10 reduced features explained 91% of the original data variance. (b) A 3D scatter plot of three PCA components by disease category. (c) A heatmap of PCA components as influenced by the original features (the 10 with the most positive influence and the 10 with the most negative influence).
Figure 8: The overall accuracy (OA) and per-class accuracy of the support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and 3D convolutional neural network (3D-CNN) over the temporal datasets (date format: MM/DD/YYYY).
Figure 9: The F1 metrics of the four machine learning algorithms over the temporal datasets (date format: MM/DD/YYYY).
Figure 10: Spatial distribution maps of disease status. (a) The 700 plots included 592 manually labeled healthy, mild infection, and severe infection plots and another 108 unknown plots (Null). (b) The 592 plots were further spatially and randomly split 80–20 into training (473 plots) and testing (119 plots) sets during the modelling stage. (c) The trained SVM on temporal-spectral-texture fused data predicting the entire field. (d) The trained RF on temporal-spectral-texture fused data predicting the entire field. (e) The trained MLP on temporal-spectral-texture fused data predicting the entire field. (f) The trained 3D-CNN on temporal UAV imagery predicting the entire field.
Figure 11: The disease-induced impact on yield loss. (a) The harvest yield distribution by wheat disease degree. (b) Means and standard deviations of wheat yield by disease status. (c) Normalized harvest yield by disease status on a spatial map.
Figure 12: The actual labeling of severe infection plots (light red) in the test set and the correctly predicted labels (dark red). The test set was randomly assigned 119 plots, among which were 11 severe infection plots; those infected plots primarily caused permanent yield loss on the crop. UAVs could detect and identify yellow rust at early stages of the growing season: (a) the SVM correctly predicted 4 out of 11 plots on 30 August, (b) 3 out of 11 plots on 24 September, (c) 5 out of 11 plots on 5 October, (d) 5 out of 11 plots on 7 October, (e) 3 out of 11 plots on 26 October, and (f) 6 out of 11 plots on 17 November.
Figure 13: The quantitative estimate of potential yield preservation in kg/ha and in percent (%) if yellow rust can be detected early and properly intervened against (date format: MM/DD/YYYY).
Figure 14: Discrepancies in the resolution of UAV aerial images. Within the same collection date, the image resolution collected by one flight mission differed from that collected by another. This possibly explains the drop in prediction accuracy for models using the 5 October data compared with those using the 7 October data, especially for image-wise methods such as the 3D-CNN.
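As a companion to the PCA step summarized in Figure 7 above, here is a minimal, hypothetical scikit-learn sketch of the same idea: standardize correlated temporal features, project them onto 10 orthogonal components, and read off the retained variance. The array shapes and feature counts are placeholders, not the study's data.

```python
# Minimal PCA sketch (hypothetical data): reduce correlated temporal features
# to 10 orthogonal components, mirroring the step shown in Figure 7.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(592, 120)               # placeholder: 592 plots x 120 temporal features
X_std = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

pca = PCA(n_components=10).fit(X_std)
scores = pca.transform(X_std)              # low-dimensional, orthogonal representation
print(pca.explained_variance_ratio_.sum()) # fraction of the original variance retained
```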
19 pages, 5060 KiB  
Article
Deep Learning Application for Crop Classification via Multi-Temporal Remote Sensing Images
by Qianjing Li, Jia Tian and Qingjiu Tian
Agriculture 2023, 13(4), 906; https://doi.org/10.3390/agriculture13040906 - 20 Apr 2023
Cited by 18 | Viewed by 6097
Abstract
The combination of multi-temporal images and deep learning is an efficient way to obtain accurate crop distributions and so has drawn increasing attention. However, few studies have compared deep learning models with different architectures, so it remains unclear how a deep learning model should be selected for multi-temporal crop classification and what the best achievable accuracy is. To address this issue, the present work compares and analyzes crop classification based on deep learning models and different time-series data to explore the possibility of improving crop classification accuracy. Using multi-temporal Sentinel-2 images as source data, time-series classification datasets are constructed based on vegetation indexes (VIs) and spectral stacking, respectively, after which we compare and evaluate crop classification based on these time-series datasets and five deep learning architectures: (1) one-dimensional convolutional neural networks (1D-CNNs), (2) long short-term memory (LSTM), (3) two-dimensional CNNs (2D-CNNs), (4) three-dimensional CNNs (3D-CNNs), and (5) two-dimensional convolutional LSTM (ConvLSTM2D). The results show that the accuracy of both the 1D-CNN (92.5%) and LSTM (93.25%) is higher than that of random forest (~91%) when using a single temporal feature as input. The 2D-CNN model integrates temporal and spatial information and is slightly more accurate (94.76%), but fails to fully utilize the multi-spectral features. The accuracy of the 1D-CNN and LSTM models integrating temporal and multi-spectral features is 96.94% and 96.84%, respectively; however, neither model can extract spatial information. The accuracy of the 3D-CNN and ConvLSTM2D models is 97.43% and 97.25%, respectively. The experimental results show limited accuracy for crop classification based on single temporal features, whereas combining temporal features with multi-spectral or spatial information significantly improves classification accuracy. The 3D-CNN and ConvLSTM2D models are thus the best deep learning architectures for multi-temporal crop classification, although the ConvLSTM architecture, which combines recurrent neural networks and CNNs, merits further development for multi-temporal image crop classification. Full article
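To make the architecture comparison concrete, the sketch below shows a minimal 1D-CNN for per-pixel time-series classification in Keras. It is not the authors' implementation; the sequence length (24 dates), filter sizes, and six crop classes are illustrative assumptions.

```python
# Minimal 1D-CNN sketch for per-pixel time-series crop classification.
# Sequence length (24 dates), filter counts, and 6 crop classes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(24, 1)),              # 24 time steps, 1 VI value per step
    layers.Conv1D(64, 3, activation="relu"),  # temporal convolutions
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),          # collapse the temporal axis
    layers.Dense(64, activation="relu"),
    layers.Dense(6, activation="softmax"),    # one probability per crop class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```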
Show Figures

Figure 1: False-color image and the Cropland Data Layer (CDL) of the study areas.
Figure 2: General workflow of this study.
Figure 3: Time-series samples with different dimensions: (a) 1-D time series, (b) 2-D time series, (c) 3-D time series, (d) 4-D time series.
Figure 4: Architectures of (a) LSTM, (b) 1D-CNN, (c) 2D-CNN, (d) 3D-CNN, and (e) ConvLSTM2D.
Figure 5: Crop classification results based on the VI time series (see red boxes for detail).
Figure 6: Crop classification results based on the multi-spectral time series.
Figure 7: Crop classification results based on temporal, spectral, and spatial information.
Figure 8: Time-series spectral bands and vegetation indices aggregated over crop fields. The buffers indicate one standard deviation calculated from the fields.
Figure 9: The OA of the different deep learning models.
Figure 10: Correlation of crop-area ratios. Panels (a–d) correspond to the four experiments, as shown in the vertical labels; the scatter points show the fraction of each crop over the study area, and the red line reflects the consistency of crop area between the classification results and the CDL.
18 pages, 2931 KiB  
Article
Quantifying the Variation in Reflectance Spectra of Metrosideros polymorpha Canopies across Environmental Gradients
by Megan M. Seeley, Roberta E. Martin, Nicholas R. Vaughn, David R. Thompson, Jie Dai and Gregory P. Asner
Remote Sens. 2023, 15(6), 1614; https://doi.org/10.3390/rs15061614 - 16 Mar 2023
Cited by 8 | Viewed by 1969
Abstract
Imaging spectroscopy is a burgeoning tool for understanding ecosystem functioning on large spatial scales, yet the application of this technology to assess intra-specific trait variation across environmental gradients has been poorly tested. Selection of specific genotypes via environmental filtering plays an important role in driving trait variation and thus functional diversity across space and time, but the relative contributions of intra-specific trait variation and species turnover are still unclear. To address this issue, we quantified the variation in reflectance spectra within and between six uniform stands of Metrosideros polymorpha across elevation and soil substrate age gradients on Hawai‘i Island. Airborne imaging spectroscopy and light detection and ranging (LiDAR) data were merged to capture and isolate sunlit portions of canopies at the six M. polymorpha-dominated sites. Both intra-site and inter-site spectral variations were quantified using several analyses. A support vector machine (SVM) model revealed that each site was spectrally distinct, while Euclidean distances between site centroids in principal components (PC) space indicated that elevation and soil substrate age drive the separation of canopy spectra between sites. Coefficients of variation among spectra, as well as the intrinsic spectral dimensionality of the data, demonstrated the hierarchical effect of soil substrate age, followed by elevation, in determining intra-site variation. Assessments based on leaf trait data estimated from canopy reflectance resulted in similar patterns of separation among sites in the PC space and distinction among sites in the SVM model. Using a highly polymorphic species, we demonstrated that canopy reflectance follows known ecological principles of community turnover, and thus showed how spectral remote sensing can address forest community assembly on large spatial scales. Full article
(This article belongs to the Section Forest Remote Sensing)
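Two simple operations underpin the intra-site variation analysis described here: brightness normalization of each spectrum and a per-band coefficient of variation. A minimal NumPy sketch follows; the pixel and band counts are assumed for illustration.

```python
# Minimal sketch: brightness-normalize canopy spectra, then compute the
# per-band coefficient of variation (CV) across pixels. The pixel count
# and band count are illustrative assumptions.
import numpy as np

spectra = np.random.rand(5000, 214)  # placeholder: 5000 canopy pixels x 214 bands

# Brightness normalization: divide each spectrum by its vector norm so that
# differences reflect spectral shape rather than overall illumination.
bn = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

# CV per band: standard deviation relative to the mean across pixels.
cv = bn.std(axis=0) / bn.mean(axis=0)
print(cv.shape)  # one CV value per band
```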
Show Figures

Figure 1: Global Airborne Observatory (GAO) Digital Modular Aerial Camera (DiMAC) imagery of the six study sites in red, green, and blue (true-color) composites (left). Locations of the sites on Hawai‘i Island (right). The sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients. See Table 1 for a summary of site information. Note that the DiMAC imagery, shown here as a high-resolution reference to site conditions, was not used in the analyses; see Figure S1 for GAO imaging spectroscopy true-color composites of the sites.
Figure 2: Top-of-canopy height (TCH) histogram showing the structural diversity of the six sites. Sites were characterized by soil substrate age (young, medium, or old) and elevation (low or high). TCH was derived from the unfiltered light detection and ranging (LiDAR) data collected by the Global Airborne Observatory. Sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients.
Figure 3: Mean and standard deviation (gray fill) of (a) reflectance and (b) brightness-normalized reflectance for the six sites (see Figure S2 for the brightness-normalized reflectance of each site plotted individually). (c) Coefficient of variation (CV) of brightness-normalized reflectance for all sites. Site data were subset to control for the number of pixels included in each site (pixel-controlled method). Sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients.
Figure 4: Euclidean distances between the centroids of each site projected onto 2D space. The Euclidean distance was calculated using (a) the first 16 principal components (PCs) of the reflectance data and (b) the nine PCs of the leaf traits. See Table 2 for a list of the leaf traits included and Table S3 for the distances between sites. Sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients.
Figure 5: The intrinsic spectral dimensionality of each site calculated using both local (gray) and global (black) principal component analyses (PCA) applied to the filtered reflectance data. The areal extent of each site was controlled (area-controlled method) such that the area of each site used in the analysis was equivalent. The global intrinsic spectral dimensionality (calculated across all sites) is represented by the "All" category. Sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients.
Figure 6: Distributions of modeled canopy chemistry indices for (a) foliar nitrogen (%) and (b) leaf mass per area (LMA; g m⁻²) across the six sites. The total number of pixels used to create the site kernel density plots was equal to that of the smallest site and randomly selected from each site. Sites are located along soil substrate age (Y, young <200–750; M, medium 5000–64,000; O, old 260,000–500,000) and elevation (L, low 200–900 m; H, high 1100–1500 m) gradients. See Figure S3 and Table S4 in the supporting information for the distributions of all canopy chemistries.
25 pages, 4585 KiB  
Article
Unsupervised Diffusion and Volume Maximization-Based Clustering of Hyperspectral Images
by Sam L. Polk, Kangning Cui, Aland H. Y. Chan, David A. Coomes, Robert J. Plemmons and James M. Murphy
Remote Sens. 2023, 15(4), 1053; https://doi.org/10.3390/rs15041053 - 15 Feb 2023
Cited by 8 | Viewed by 3237
Abstract
Hyperspectral images taken from aircraft or satellites contain information from hundreds of spectral bands, within which lie latent lower-dimensional structures that can be exploited for classifying vegetation and other materials. A disadvantage of working with hyperspectral images is that, due to an inherent trade-off between spectral and spatial resolution, they have a relatively coarse spatial scale, meaning that single pixels may correspond to spatial regions containing multiple materials. This article introduces the Diffusion and Volume maximization-based Image Clustering (D-VIC) algorithm for unsupervised material clustering to address this problem. By directly incorporating pixel purity into its labeling procedure, D-VIC gives greater weight to pixels corresponding to a spatial region containing just a single material. D-VIC is shown to outperform comparable state-of-the-art methods in extensive experiments on a range of hyperspectral images, including land-use maps and highly mixed forest health surveys (in the context of ash dieback disease), implying that it is well-equipped for unsupervised material clustering of spectrally-mixed hyperspectral datasets. Full article
(This article belongs to the Special Issue Theory and Application of Machine Learning in Remote Sensing)
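D-VIC's distinguishing ingredient is a per-pixel purity score derived from spectral unmixing. The paper treats the unmixing step as modular, so the rough sketch below simply substitutes scikit-learn's NMF to illustrate the idea: purity is taken as the largest normalized endmember abundance at each pixel. The array shapes and the endmember count m are assumptions.

```python
# Rough sketch of the pixel-purity idea behind D-VIC: unmix spectra into m
# endmember abundances, then score each pixel by its largest normalized
# abundance. NMF stands in here for the paper's (AVMAX-based) unmixing step;
# shapes and m are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(10000, 200)        # placeholder: pixels x spectral bands
m = 4                                 # assumed number of endmembers

nmf = NMF(n_components=m, init="nndsvda", max_iter=500)
A = nmf.fit_transform(X)              # abundances: pixels x m
A = A / (A.sum(axis=1, keepdims=True) + 1e-12)  # abundances sum to 1 per pixel

purity = A.max(axis=1)                # ~1 for single-material pixels,
print(purity.min(), purity.max())     # ~1/m for fully mixed pixels
```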
Show Figures

Figure 1: Example of a toy dataset (n = 1000) with nonlinear cluster structure: two interleaving half-circles. The idealized clustering, visualized in (a), separates each half-circle. Due to the lack of a linear decision boundary, however, both K-Means (b) and GMM (c) were unable to extract the latent cluster structure from this simple nonlinear dataset. Both algorithms were run with 100 replicates. (A minimal sketch reproducing this failure mode follows this figure list.)
Figure 2: Diagram of the D-VIC clustering algorithm. The computational complexity of each step is colored in red. The scaling of D-VIC depends on n (no. pixels), D (no. spectral bands), m (no. endmembers), I (no. AVMAX maximizations), N (no. nearest neighbors), d (the doubling dimension of X [136]), and C, a constant independent of all other parameters [136]; see Section 3.1 for details. Note that all steps are quasilinear with respect to n, implying that D-VIC scales well to large HSI datasets. We remark that the spectral unmixing step (indicated with a blue box) is quite modular, and other approaches may be used in future work [12,90,93,94,130,133,134].
Figure 3: Ground truth labels and pixel purity of a synthetic dataset sampled from a triangle in R²; the K = 3 vertices of this triangle served as ground truth endmembers. Notice that the empirical density maximizers near the origin are also the lowest-purity data points.
Figure 4: Optimal LUND (OA = 0.739) and D-VIC (OA = 0.905) clusterings of the synthetic dataset (Figure 3). D-VIC explicitly incorporates data purity into its labeling procedure, resulting in better clustering performance than LUND in the high-density, low-purity region near the origin.
Figure 5: Ground truth labels and first principal component scores for the real benchmark HSIs analyzed in this article: Salinas A (a), Jasper Ridge (b), and Indian Pines (c).
Figure 6: Comparison of clusterings produced by D-VIC and related algorithms on the Salinas A HSI. Unlike any comparison method, D-VIC correctly groups pixels corresponding to 8-week-maturity romaine (indicated in yellow), resulting in near-perfect recovery of the ground truth labels.
Figure 7: Comparison of clusterings produced by D-VIC and related algorithms on the Jasper Ridge HSI. D-VIC outperforms all other algorithms, largely due to superior performance among pixels corresponding to the tree and soil classes (indicated in dark blue and green, respectively).
Figure 8: Visualization of D-VIC's median OA across 50 trials as the hyperparameters N and σ₀ are varied. D-VIC achieves high performance across a large set of hyperparameters.
Figure 9: Analysis of D-VIC's performance as t varies across [10, 200]. Values are the median OA across 100 implementations of D-VIC with the optimal N and σ₀ values. Generally, t appears to have little impact on the OA of D-VIC, and D-VIC achieves performances comparable to those reported in Table 2 across t ∈ [90, 200], uniformly across all datasets.
Figure 10: Visualizations of the Madingley HSI. The RF disease mapping is visualized in (a), and the Madingley HSI's first principal component scores are visualized in (b). In (a), yellow indicates severely infected ash, green indicates infected ash, and light blue indicates healthy ash.
Figure 11: Comparison of clusterings produced by D-VIC and related algorithms on the Madingley HSI. Labels were aligned so that yellow indicates dieback-infected ash and teal indicates healthy ash. Though the performance of many graph-based algorithms (SC, KNN-SSC, LUND, and D-VIC) is similar in Table 4, qualitative differences exist between these algorithms' clusterings.
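The failure mode in Figure 1 is easy to reproduce. The minimal sketch below uses scikit-learn's two-moons generator to compare K-Means and a Gaussian mixture model, which both miss the half-circle structure, against a graph-based method (spectral clustering) that recovers it; the noise level and neighbor count are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch reproducing the Figure 1 failure mode: centroid-based methods
# cannot separate two interleaving half-circles, while a graph-based method can.
# Noise level and neighbor count are illustrative assumptions.
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

X, y = make_moons(n_samples=1000, noise=0.05, random_state=0)

# 100 replicates (n_init) mirror the caption's setup for K-Means and GMM.
km = KMeans(n_clusters=2, n_init=100, random_state=0).fit_predict(X)
gmm = GaussianMixture(n_components=2, n_init=100, random_state=0).fit_predict(X)
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0).fit_predict(X)

# Agreement with the idealized clustering (1.0 = perfect recovery).
for name, labels in [("K-Means", km), ("GMM", gmm), ("Spectral", sc)]:
    print(name, adjusted_rand_score(y, labels))
```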