Search Results (158)

Search Parameters:
Keywords = GEOBIA

28 pages, 25203 KiB  
Article
Integrating Physical-Based Models and Structure-from-Motion Photogrammetry to Retrieve Fire Severity by Ecosystem Strata from Very High Resolution UAV Imagery
by José Manuel Fernández-Guisuraga, Leonor Calvo, Luis Alfonso Pérez-Rodríguez and Susana Suárez-Seoane
Fire 2024, 7(9), 304; https://doi.org/10.3390/fire7090304 - 27 Aug 2024
Viewed by 940
Abstract
We propose a novel mono-temporal framework with a physical basis and ecological consistency to retrieve fire severity at very high spatial resolution. First, we sampled the Composite Burn Index (CBI) in 108 field plots that were subsequently surveyed through unmanned aerial vehicle (UAV) flights. Then, we mimicked the field methodology for CBI assessment in the remote sensing framework. CBI strata were identified through individual tree segmentation and geographic object-based image analysis (GEOBIA). In each stratum, wildfire ecological effects were estimated through the following methods: (i) the vertical structural complexity of vegetation legacies was computed from 3D point clouds, as a proxy for biomass consumption; and (ii) vegetation biophysical variables were retrieved from multispectral data by inversion of the PROSAIL radiative transfer model, with a direct physical link to the vegetation legacies remaining after canopy scorch and torch. The CBI scores predicted from UAV ecologically related metrics at the strata level fitted the field-measured CBI scores closely (R² > 0.81 and RMSE < 0.26). Conversely, the conventional retrieval of fire effects using a battery of UAV structural and spectral predictors (point height distribution metrics and spectral indices) computed at the plot level performed considerably worse (R² = 0.677 and RMSE = 0.349).
(This article belongs to the Special Issue Drone Applications Supporting Fire Management)
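The PROSAIL inversion step is described only at a high level in the abstract. As a hedged illustration of the general lookup-table (LUT) approach to radiative transfer inversion (not the authors' actual pipeline), the sketch below assumes the open-source Python prosail package; the fixed leaf and canopy parameters, band positions, and LUT size are all illustrative placeholders.

```python
import numpy as np
import prosail  # pip install prosail (PROSPECT-5 + SAIL forward model)

rng = np.random.default_rng(42)

# Illustrative LUT: sample the free parameters and simulate 400-2500 nm reflectance.
n_sim = 300
lai = rng.uniform(0.1, 5.0, n_sim)        # leaf area index
cbrown = rng.uniform(0.0, 1.0, n_sim)     # brown pigment fraction
cw = rng.uniform(0.002, 0.05, n_sim)      # leaf water content (cm)

lut = np.array([
    prosail.run_prosail(1.5, 30.0, 8.0, cbrown[i], cw[i], 0.009, lai[i],
                        -0.35, 0.01, 30.0, 0.0, 0.0,
                        typelidf=1, lidfb=-0.15, rsoil=1.0, psoil=1.0)
    for i in range(n_sim)
])

# Assumed multispectral band centres (nm); the output grid starts at 400 nm, 1 nm step.
band_idx = np.array([560, 650, 730, 840]) - 400

def invert(observed: np.ndarray) -> dict:
    """Return the LUT entry whose simulated bands best match an observed spectrum."""
    rmse = np.sqrt(((lut[:, band_idx] - observed) ** 2).mean(axis=1))
    best = int(np.argmin(rmse))
    return {"lai": lai[best], "cbrown": cbrown[best], "cw": cw[best]}

print(invert(np.array([0.06, 0.05, 0.20, 0.35])))  # toy canopy reflectances
```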
Figure 1: Workflow summarizing the methodological approach followed in the present study.
Figure 2: Location of the Folledo (A) and Lavadoira (B) wildfires in the western Mediterranean Basin. We show the extent of the UAV surveys and the location of the Composite Burn Index (CBI) field plots within the wildfire perimeters. The background image is a Landsat-8 false color composite (R = band 7; G = band 5; B = band 4). Wildfire perimeters were obtained from the Copernicus Emergency Management Service (EMS).
Figure 3: Detailed view of a true color (RGB) composite of the multispectral orthomosaic (left) and the labeled map for the classes of interest (right) for the surroundings of two CBI plots (red square) with high land cover heterogeneity in the Folledo wildfire. Non-interest classes included canopy shadows and bare soil/litter.
Figure 4: Boxplots showing the relationship between the CBI scores and the strata height. One-way ANOVA and Tukey's HSD post hoc results are included. Lowercase letters denote significant differences in the CBI score between strata at the 0.05 level.
Figure 5: Normalized 3D-point cloud profiles and individually segmented trees for representative CBI plots of each fire severity category, dominated by Pinus pinaster with the same pre-fire structure.
Figure 6: Comparison of structural (canopy density) and spectral (fractional vegetation cover, FCOVER; brown pigment fraction, C_brown; and canopy water content, C_w) UAV-derived metrics with ecological sense describing fire effects across fire severity categories in the CBI field plots. The vegetation stratum higher than 20 m was not considered because it was present in only five CBI plots (ratio of observations to predictors close to 1:1 in further analyses). One-way ANOVA and Tukey's HSD post hoc results are included. Lowercase letters denote significant differences in the CBI score between strata at the 0.05 level.
Figure 7: Univariate relationships between structural (canopy density) and spectral (FCOVER, C_brown, and C_w) UAV-derived metrics and the CBI scores aggregated by strata. The vegetation stratum higher than 20 m was not considered because it was present in only five CBI plots. The solid red line represents the linear fit evaluated through the coefficient of determination (R²) in ordinary least squares (OLS) models.
Figure 8: Relationships between field-measured and predicted CBI scores aggregated by strata using structural (canopy density) and spectral (FCOVER, C_brown, and C_w) UAV-derived metrics. The vegetation stratum higher than 20 m was not considered because it was present in only five CBI plots. The solid red line represents the linear fit evaluated through the coefficient of determination (R²) in ordinary least squares (OLS) models.
Figure 9: Comparison of observed and predicted plot-level CBI values for the internal model validation (A), external model validation (B), benchmark #1 (C), and benchmark #2 (D) scenarios (see Section 2.6). Dashed black lines denote the CBI category thresholds.
17 pages, 13631 KiB  
Article
Ensemble Machine Learning on the Fusion of Sentinel Time Series Imagery with High-Resolution Orthoimagery for Improved Land Use/Land Cover Mapping
by Mukti Ram Subedi, Carlos Portillo-Quintero, Nancy E. McIntyre, Samantha S. Kahl, Robert D. Cox, Gad Perry and Xiaopeng Song
Remote Sens. 2024, 16(15), 2778; https://doi.org/10.3390/rs16152778 - 30 Jul 2024
Cited by 1 | Viewed by 1926
Abstract
In the United States, several land use and land cover (LULC) data sets are available based on satellite data, but these data sets often fail to accurately represent features on the ground. Alternatively, detailed mapping of heterogeneous landscapes for informed decision-making is possible using high spatial resolution orthoimagery from the National Agricultural Imagery Program (NAIP). However, large-area mapping at this resolution remains challenging due to radiometric differences among scenes, landscape heterogeneity, and computational limitations. Various machine learning (ML) techniques have shown promise in improving LULC maps. The primary purposes of this study were to evaluate bagging (Random Forest, RF), boosting (Gradient Boosting Machines [GBM] and extreme gradient boosting [XGB]), and stacking ensemble ML models. We used these techniques on a time series of Sentinel 2A data and NAIP orthoimagery to create a LULC map of a portion of Irion and Tom Green counties in Texas (USA). We created several spectral indices, structural variables, and geometry-based variables, reducing the dimensionality of the features generated from the Sentinel and NAIP data. We then compared accuracies based on random cross-validation, which does not account for spatial autocorrelation, and target-oriented cross-validation, which accounts for the spatial structure of the training data set. Comparison of the random and target-oriented cross-validation results showed that autocorrelation in the training data led to an overestimation of accuracy ranging from 2% to 3.5%. The XGB-based stacking ensemble built on the base learners (RF, XGB, and GBM) improved model performance over the individual base learners. We show that meta-learners are just as sensitive to overfitting as base models, as these algorithms are not designed to account for spatial information. Finally, we show that the fusion of Sentinel 2A data with NAIP data improves land use/land cover classification using geographic object-based image analysis.
(This article belongs to the Special Issue Mapping Essential Elements of Agricultural Land Using Remote Sensing)
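The stacking design and the random-versus-target-oriented validation contrast described above can be prototyped compactly in scikit-learn. This is a minimal sketch of the idea under stated assumptions (synthetic features, labels, and spatial block IDs stand in for the real per-object data), not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.model_selection import GroupKFold, cross_val_score
from xgboost import XGBClassifier  # pip install xgboost scikit-learn

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))            # stand-in per-object features
y = rng.integers(0, 4, size=600)          # stand-in LULC labels
blocks = rng.integers(0, 10, size=600)    # spatial block id of each sample

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
        ("xgb", XGBClassifier(n_estimators=100, eval_metric="mlogloss")),
    ],
    final_estimator=XGBClassifier(n_estimators=100, eval_metric="mlogloss"),
    cv=5,  # internal CV that generates out-of-fold meta-features
)

# Target-oriented ("leave-location-out") validation: folds never split a block.
scores = cross_val_score(stack, X, y, cv=GroupKFold(n_splits=5), groups=blocks)
print("block-held-out accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```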
Graphical abstract
Figure 1: Study area in Irion and Tom Green counties in Texas, with a red, green, and blue composite of a Sentinel 2A image from June 2018.
Figure 2: A schematic overview of stacking ensemble machine learning using bagging and boosting algorithms.
Figure 3: Confusion matrices produced on a holdout (20%) of the total training data using the RF (A), GBM (B), XGB (C), and stacking (XGB) (D) classifiers.
Figure 4: Box plot of overall accuracy across folds in the base-learner and meta-learner models using random cross-validation (random, in red) and target-oriented cross-validation (LLO, in blue). The horizontal black line in each box plot indicates the median and the crosshairs indicate the mean.
Figure 5: Permutation-based feature importance for the RF (A), GBM (B), XGB (C), and stacked (D) models in target-oriented cross-validation.
Figure 6: Classified map of the study area based on the stacking model (meta-learner) using target-oriented cross-validation and the geographic object-based image analysis (GEOBIA) approach.
22 pages, 3980 KiB  
Article
A Geographic Object-Based Image Approach Based on the Sentinel-2 Multispectral Instrument for Lake Aquatic Vegetation Mapping: A Complementary Tool to In Situ Monitoring
by Maria Tompoulidou, Elpida Karadimou, Antonis Apostolakis and Vasiliki Tsiaoussi
Remote Sens. 2024, 16(5), 916; https://doi.org/10.3390/rs16050916 - 5 Mar 2024
Viewed by 2061
Abstract
Aquatic vegetation is an essential component of lake ecosystems, used as a biological indicator for in situ monitoring within the Water Framework Directive. We developed a hierarchical object-based image classification model with multi-seasonal Sentinel-2 imagery and suitable spectral indices in order to map the aquatic vegetation in a Mediterranean oligotrophic/mesotrophic deep lake; we then applied the model to another lake with similar abiotic and biotic characteristics. Field data from a survey of aquatic macrophytes, undertaken on the same dates as the EO data acquisitions, were used in the accuracy assessment. The aquatic vegetation was discerned into three classes: emergent, floating, and submerged aquatic vegetation. Geographic object-based image analysis (GEOBIA) proved effective in discriminating the three classes in both study areas. Results showed the high effectiveness of the classification model in terms of overall accuracy, particularly for the emergent and floating classes. In the case of submerged aquatic vegetation, challenges in its classification prompted us to establish specific criteria for its accurate detection. Overall, the results showed that GEOBIA based on spectral indices is suitable for mapping aquatic vegetation in oligotrophic/mesotrophic deep lakes. EO data can contribute to large-scale coverage and high-frequency monitoring requirements as a complementary tool to in situ monitoring.
(This article belongs to the Special Issue Advances of Remote Sensing and GIS Technology in Surface Water Bodies)
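The published ruleset itself lives in the paper's Figure 3; as a hedged, much-simplified sketch of how seasonal index contrast can drive object labels, the function below uses NDAVI (NIR versus blue) differences between winter (ws) and summer (ss) scenes. All thresholds and the class logic are invented placeholders, not the published rules.

```python
import numpy as np

def ndavi(nir: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Normalized Difference Aquatic Vegetation Index."""
    return (nir - blue) / (nir + blue)

def classify_objects(nir_ws, blue_ws, nir_ss, blue_ss):
    """Toy rule-based labels per segment; thresholds are illustrative only."""
    seasonal = ndavi(nir_ws, blue_ws) - ndavi(nir_ss, blue_ss)  # (ws) - (ss)
    labels = np.full(seasonal.shape, "deep water", dtype=object)
    labels[seasonal > 0.05] = "submerged"                # seasonal NDAVI contrast
    labels[ndavi(nir_ss, blue_ss) > 0.4] = "floating"    # strong summer canopy signal
    labels[(ndavi(nir_ss, blue_ss) > 0.4) & (np.abs(seasonal) > 0.3)] = "emergent"
    return labels

summer_nir = np.array([0.45, 0.30, 0.08])
summer_blue = np.array([0.03, 0.05, 0.06])
winter_nir = np.array([0.12, 0.28, 0.09])
winter_blue = np.array([0.05, 0.04, 0.05])
print(classify_objects(winter_nir, winter_blue, summer_nir, summer_blue))
```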
Graphical abstract
Figure 1: Study areas of Trichonida Lake (a) and Feneos Lake (b) examined in this analysis. The red polygon boundary denotes the 200 m buffer zone around the respective shoreline that was determined by the mean water level (16 m above mean sea level for Trichonida Lake and 872 m above mean sea level for Feneos Lake). The buffer indicates the zone based on a 200 m distance from the shoreline, and it comprised the final area under investigation. Blue points denote the transects established for the in situ monitoring of aquatic vegetation for WFD purposes. The green points denote the additional plots where extra field vegetation recordings were available for Feneos Lake. These points comprise the final set of in situ recorded points used in the accuracy assessment. We used the World Topographic Map as a basemap, available in ESRI ArcGIS software v. 10.8.2.
Figure 2: Hierarchical classification flow with three levels of segmentation. The links between the classes denote the relationships between them.
Figure 3: The classification ruleset as developed in the study area of Trichonida Lake. Features derived from winter-season and summer-season images are denoted with (ws) and (ss), respectively. Prior to its application in the second study area of Feneos Lake, the ruleset was evaluated for its performance and minor adjustments were made to the respective ranges of feature values.
Figure 4: Examples of classification results in Trichonida Lake. The first two columns show the Sentinel 2A images for the summer and winter seasons, respectively; the third column shows the classification result. In case (A), patches of floating vegetation (pink) can be observed in the summer-season image, while in all cases the emergent vegetation (orange) demonstrates a characteristic spectral differentiation between seasons. In cases (B,C), the pictures show the distribution of submerged vegetation (purple) as derived from NDAVI ((ws) − (ss)) values combined with mean blue (ss) values.
Figure 5: Examples of classification results in Feneos Lake. The first two columns show the Sentinel 2A images for the summer and winter seasons, respectively; the third column shows the classification result. In cases (A,B), we observe the high discrimination of the emergent aquatic vegetation class, while the submerged aquatic vegetation was slightly less accurate. In case (C), the submerged aquatic vegetation class in the shallow part of the lake was very well discriminated, although this does not apply to the deeper part (almost near the "Deep Water" class (dark blue), where we observed, during our survey, patches of Vallisneria spiralis).
Figure 6: Examples of the classified objects for the "Submerged" class (purple). The yellow polygons, produced through digitization solely for demonstration purposes, define areas covered by submerged aquatic vegetation (based on in situ data and expert photointerpretation). In cases (c,d,f), the submerged communities dominated by Myriophyllum spicatum, Najas marina, and Potamogeton lucens, particularly in the deeper parts of the lake, were under-classified, mainly due to differences in the spectral characteristics of these communities at different depths. Several tests for their discrimination led to unwanted over-classifications. In the other cases (a,b,e,g–i), the submerged communities dominated by Vallisneria spiralis were systematically classified.
26 pages, 43921 KiB  
Article
Performance Comparison of Deep Learning (DL)-Based Tabular Models for Building Mapping Using High-Resolution Red, Green, and Blue Imagery and the Geographic Object-Based Image Analysis Framework
by Mohammad D. Hossain and Dongmei Chen
Remote Sens. 2024, 16(5), 878; https://doi.org/10.3390/rs16050878 - 1 Mar 2024
Cited by 1 | Viewed by 1204
Abstract
Identifying urban buildings in high-resolution RGB images presents challenges, mainly due to the absence of near-infrared bands in UAV and Google Earth imagery and the diversity of building attributes. Deep learning (DL) methods, especially Convolutional Neural Networks (CNNs), are widely used for building extraction but are primarily pixel-based. Geographic Object-Based Image Analysis (GEOBIA) has emerged as an essential approach for high-resolution imagery. However, integrating GEOBIA with DL models presents challenges, including adapting DL models to irregularly shaped segments and effectively merging DL outputs with object-based features. Recent developments include tabular DL models that align well with GEOBIA, since GEOBIA stores the features of image segments in tabular format; yet the effectiveness of these tabular DL models for building extraction remains to be explored. It is also unclear which features are crucial for distinguishing buildings from other land-cover types. Typically, GEOBIA employs shallow learning (SL) classifiers. Thus, this study evaluates SL and tabular DL classifiers for their ability to differentiate buildings from non-building features. Furthermore, these classifiers are assessed for their capacity to handle roof heterogeneity caused by sun exposure and roof materials. This study concludes that some SL classifiers perform similarly to their DL counterparts, and it identifies critical features for building extraction.
(This article belongs to the Special Issue Advances in Deep Learning Approaches in Remote Sensing)
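As a rough sketch of the shallow-versus-tabular-DL comparison (with a generic scikit-learn MLP standing in for the specialized tabular DL architectures the paper evaluates), the snippet below trains both model families on a synthetic per-segment feature table; all data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Stand-in GEOBIA table: one row per segment (mean RGB, brightness, area, ...).
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # building / non-building
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "shallow (RF)": RandomForestClassifier(n_estimators=300, random_state=1),
    "tabular NN (MLP)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```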
Figure 1: Typical workflow of CNNs.
Figure 2: A typical workflow of a DL-based tabular model.
Figure 3: Segmentation results illustrated for a section of the study area, with enhanced views for improved visualization.
Figure 4: Location of the study area (Kingston ON, Canada) and the UAV test image.
Figure 5: Sample types, shapes, and colors of the buildings available in the study area: gable roof (a–c), hip roof (d–f), complex roof (g), and flat roof (h).
Figure 6: Feature importance provided by RF (top) and XGB (bottom) for building versus shadow classification.
Figure 7: Performance comparison of classifiers in differentiating buildings from shadows. The red hatched area indicates shadow, the blue hatched area indicates non-shadow, and the yellow box indicates misclassification.
Figure 8: Feature importance provided by RF (top) and XGB (bottom) for building versus vegetation classification.
Figure 9: Performance comparison of classifiers in differentiating buildings from vegetation. The green hatched area indicates vegetation, the blue hatched area indicates non-vegetation, and the yellow box indicates misclassification.
Figure 10: Feature importance provided by RF (top) and XGB (bottom) for building versus soil classification.
Figure 11: Performance comparison of classifiers in differentiating buildings from soil. The yellow hatched area indicates soil, the blue hatched area indicates non-soil, and the light-green box indicates misclassification.
Figure 12: Feature importance provided by RF (top) and XGB (bottom) for building versus other impervious surfaces classification.
Figure 13: Performance comparison of classifiers in differentiating buildings from other impervious surfaces. The red hatched area indicates impervious surfaces, the blue hatched area indicates buildings and permeable surfaces, and the yellow box indicates misclassification.
Figure 14: Feature importance provided by RF (top) and XGB (bottom) for extracting heterogeneous roof types.
Figure 15: Performance comparison of classifiers in extracting heterogeneous roof types. The purple hatched area indicates a non-building, and the blue hatched area shows a building.
Figure 16: Over-segmentation-induced misclassification (as indicated in the yellow box). The green polygon designates a segment, the purple hatched area indicates a non-building, and the blue hatched area shows a building.
Figure 17: Under-segmentation-induced misclassification (as indicated in the yellow box). The green polygon designates a segment, the purple hatched area indicates a non-building, and the blue hatched area shows a building.
18 pages, 12795 KiB  
Article
Maize Crop Detection through Geo-Object-Oriented Analysis Using Orbital Multi-Sensors on the Google Earth Engine Platform
by Ismael Cavalcante Maciel Junior, Rivanildo Dallacort, Cácio Luiz Boechat, Paulo Eduardo Teodoro, Larissa Pereira Ribeiro Teodoro, Fernando Saragosa Rossi, José Francisco de Oliveira-Júnior, João Lucas Della-Silva, Fabio Henrique Rojo Baio, Mendelson Lima and Carlos Antonio da Silva Junior
AgriEngineering 2024, 6(1), 491-508; https://doi.org/10.3390/agriengineering6010030 - 22 Feb 2024
Viewed by 1441
Abstract
Mato Grosso state is the biggest maize producer in Brazil, with cultivation predominantly concentrated in the second harvest. Driven by the need for more accurate and efficient data, agricultural intelligence is adapting and embracing new technologies such as satellite remote sensing and geographic information systems. In this respect, this study aimed to map the second-harvest maize cultivation areas at Canarana-MT in the 2019/2020 crop year using geographic object-based image analysis (GEOBIA) with different spatial, spectral, and temporal resolutions. MSI/Sentinel-2, OLI/Landsat-8, MODIS-Terra and MODIS-Aqua, and PlanetScope imagery were used in this assessment. The maize crop mapping was based on the cartographic basis from IBGE (Brazilian Institute of Geography and Statistics) and the Google Earth Engine (GEE), followed by the steps of image filtering (gray-level co-occurrence matrix, GLCM), vegetation index calculation, segmentation by simple non-iterative clustering (SNIC), principal component (PC) analysis, and classification by the random forest (RF) algorithm, followed finally by confusion matrix analysis, kappa, overall accuracy (OA), and validation statistics. These methods yielded satisfactory results, with OA ranging from 86.41% to 88.65% and kappa from 81.26% to 84.61% among the imagery systems considered. The GEOBIA technique, combined with SNIC segmentation, GLCM spectral and texture feature discrimination, and the RF classifier, produced a map of the second-crop maize in the study area that improved and supported the performance of automated multispectral image classification processes.
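The GEE chain summarized above (GLCM texture, SNIC segmentation, RF classification) can be outlined with the Earth Engine Python API. This is a hedged sketch under assumptions: the AOI coordinates, date window, band choices, SNIC parameters, and the users/your_account/canarana_samples training asset are all hypothetical, not taken from the paper.

```python
import ee  # Google Earth Engine Python API; requires an authenticated account

ee.Initialize()

aoi = ee.Geometry.Point(-52.27, -13.55).buffer(20_000)  # illustrative AOI near Canarana
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(aoi)
      .filterDate("2020-03-01", "2020-07-31")
      .median()
      .select(["B2", "B3", "B4", "B8"]))

# GLCM texture needs an integer image; SNIC then segments the stacked bands.
glcm = s2.select("B8").multiply(1000).toInt().glcmTexture(size=3)
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=s2.addBands(glcm.select("B8_contrast")), size=10, compactness=1)

# Train an RF on labeled points with a 'class' property (hypothetical asset).
training_points = ee.FeatureCollection("users/your_account/canarana_samples")
samples = snic.sampleRegions(collection=training_points,
                             properties=["class"], scale=10)
rf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty="class", inputProperties=snic.bandNames())
classified = snic.classify(rf)
```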
Figure 1: Flowchart of the object-oriented classification methodology.
Figure 2: Location of the study area in Canarana municipality, Mato Grosso state, presented using the normalized difference vegetation index (NDVI).
Figure 3: Locations of the land-use and land-cover samples at Canarana-MT.
Figure 4: PC analysis mosaicking for (A) OLI/Landsat-8, (B) MODIS Terra, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 5: Accuracy test with different numbers of decision trees in the random forest classification process for each imagery system considered: (A) OLI/Landsat-8, (B) MODIS Terra, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 6: Land-use and land-cover classification based on GEOBIA and random forest for each considered sensor: (A) OLI/Landsat-8, (B) MODIS, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 7: Clips of the classified second-crop maize areas: (A) OLI/Landsat-8, (B) MODIS, (C) Planet NICFI, and (D) MSI/Sentinel-2.
Figure 8: Confusion matrix for OLI/Landsat-8 imagery.
Figure 9: Confusion matrix for MODIS imagery.
Figure 10: Confusion matrix for Planet NICFI imagery.
Figure 11: Confusion matrix for MSI/Sentinel-2 imagery.
19 pages, 14538 KiB  
Article
Evaluating the Performance of Geographic Object-Based Image Analysis in Mapping Archaeological Landscapes Previously Occupied by Farming Communities: A Case of Shashi–Limpopo Confluence Area
by Olaotse Lokwalo Thabeng, Elhadi Adam and Stefania Merlo
Remote Sens. 2023, 15(23), 5491; https://doi.org/10.3390/rs15235491 - 24 Nov 2023
Viewed by 1175
Abstract
The use of pixel-based remote sensing techniques in archaeology is usually limited by spectral confusion between archaeological material and the surrounding environment, because these techniques rely on the spectral contrast between features. To deal with this problem, we investigated the possibility of using geographic object-based image analysis (GEOBIA) to predict archaeological and non-archaeological features. The chosen study area was previously occupied by farming communities and is characterised by natural soils (non-sites), vitrified dung, non-vitrified dung, and savannah woody vegetation. The study uses a three-stage GEOBIA that comprises (1) image object segmentation, (2) feature selection, and (3) object classification. The spectral mean of each band and the areal extent of an object were selected as input variables for object classification with support vector machine (SVM) and random forest (RF) classifiers. The results of this study show that GEOBIA approaches have the potential to map archaeological landscapes. The SVM and RF classifiers achieved high classification accuracies of 96.58% and 94.87%, respectively. Visual inspection of the classified images demonstrated the importance of these models in mapping archaeological and non-archaeological features because of their ability to manage the spectral confusion between non-sites and vitrified dung sites. In summary, the results demonstrate that GEOBIA's ability to incorporate spatial elements into the classification model improves the chances of distinguishing materials with limited spectral differences.
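Figure 4 below refers to grid-searched SVM parameters (C and γ). A minimal scikit-learn analogue of that tuning step, on stand-in object features (per-band spectral means plus segment area), might look like the following; the grid values and data are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Stand-in object features: 8 spectral band means plus segment area.
X = rng.normal(size=(400, 9))
y = rng.integers(0, 4, size=400)  # non-site / vitrified / non-vitrified / vegetation

param_grid = {"svc__C": [0.1, 1, 10, 100, 1000],
              "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```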
Graphical abstract
Figure 1: Location of the study area in Southern Africa, with the true colour WorldView-2 image used in this study and major farming communities' sites.
Figure 2: Subsets of the WorldView-2 image of the study area: (a) before segmentation and (b) after MRS segmentation at a scale of 52. The greyish patch shown within the image is a non-vitrified dung site.
Figure 3: OOB errors of the RF parameters (mtry and ntree) optimised using a grid search procedure.
Figure 4: Cross-validation errors of the SVM parameters (C and γ) optimised using the grid search procedure. The cost and gamma values varied between −1000 and 1000.
Figure 5: Classification maps obtained using the RF (a) and SVM (b) algorithms.
Figure 6: Plan of site AA 14B overlaid on images classified using RF (a) and SVM (b). Dagga and byre were digitised from a site plan drawn by Huffman [53].
Figure 7: Plan of site AD4 overlaid on images classified using RF (a) and SVM (b). Byre, midden, grain bin, and possible hut floor were digitised from a site plan drawn by Calabrese [108].
18 pages, 17294 KiB  
Article
Spectral Patterns of Pixels and Objects of the Forest Phytophysiognomies in the Anauá National Forest, Roraima State, Brazil
by Tiago Monteiro Condé, Niro Higuchi, Adriano José Nogueira Lima, Moacir Alberto Assis Campos, Jackelin Dias Condé, André Camargo de Oliveira and Dirceu Lucio Carneiro de Miranda
Ecologies 2023, 4(4), 686-703; https://doi.org/10.3390/ecologies4040045 - 28 Oct 2023
Viewed by 1172
Abstract
Forest phytophysiognomies have specific spatial patterns that can be mapped or translated into spectral patterns of vegetation. Regions of spectral similarity can be classified by reference to color, tonality or intensity of brightness, reflectance, texture, size, shape, neighborhood influence, etc. We evaluated the accuracy of supervised classification via per-pixel (maximum likelihood) and geographic object-based image analysis (GEOBIA) algorithms for distinguishing spectral patterns of the vegetation in the northern Brazilian Amazon. A total of 280 training samples (70%) and 120 validation samples (30%) of each of the 11 vegetation cover and land-use classes (N = 4400) were classified based on differences in their visible (RGB), near-infrared (NIR), and medium infrared (SWIR 1 or MIR) Landsat 8 (OLI) bands. Per-pixel classification achieved a greater accuracy (Kappa = 0.75) than GEOBIA (Kappa = 0.72). GEOBIA, however, offers greater plasticity and the possibility of calibrating spectral rules associated with vegetation indices and spatial parameters. We conclude that both methods enabled precise spectral separation (0.45–1.65 μm), contributing to the distinction between forest phytophysiognomies and land uses, which are strategic factors in the planning and management of natural resources in protected areas of the Amazon region.
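The per-pixel baseline here is a maximum likelihood classifier. As a compact, hedged illustration of that classic algorithm (one Gaussian fitted per class over the band values), the sketch below uses toy data rather than the paper's samples.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_ml(X: np.ndarray, y: np.ndarray) -> dict:
    """Per-class Gaussian (mean, covariance) for maximum likelihood classification."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def predict_ml(models: dict, X: np.ndarray) -> np.ndarray:
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    logp = np.column_stack([
        multivariate_normal.logpdf(X, mean=m, cov=S, allow_singular=True)
        for m, S in models.values()])
    classes = np.array(list(models.keys()))
    return classes[np.argmax(logp, axis=1)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=mu, size=(120, 6))    # 6 toy Landsat-like bands
               for mu in (0.0, 0.5, 1.2)])
y = np.repeat([0, 1, 2], 120)                       # 3 toy land-cover classes
models = fit_ml(X, y)
print("training accuracy:", (predict_ml(models, X) == y).mean())
```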
Figure 1: The study area (102,750 km²) is centered on the Anauá National Forest in Roraima State, Brazil, covered by a mosaic of Landsat 8 (OLI) reflectance images (path 232, rows 059–060).
Figure 2: Color compositions of reflectance in the visible (RGB), near-infrared (NIR), and medium infrared (SWIR 1 or MIR) wavelengths of Landsat 8 (OLI) images of the Anauá National Forest, where: (1) Visible {R(4)G(3)B(2)}: R [Red b4] = 0.64–0.67 µm; G [Green b3] = 0.53–0.59 µm; B [Blue b2] = 0.45–0.51 µm; (2) Near infrared {R(5)G(4)B(3)}: R [NIR b5] = 0.85–0.88 µm; G [Red b4] = 0.64–0.67 µm; B [Green b3] = 0.53–0.59 µm; (3) Medium infrared {R(6)G(5)B(4)}: R [SWIR 1 or MIR b6] = 1.57–1.65 µm; G [NIR b5] = 0.85–0.88 µm; B [Red b4] = 0.64–0.67 µm.
Figure 3: Supervised classification by Geographic Object-Based Image Analysis (GEOBIA) in 5 steps, using reflectance values without a conversion factor (10,000).
Figure 4: Spectral correlations among the visible (RGB) and infrared (NIR and SWIR 1 or MIR) bands for the general (N = 4400) and per-class (n = 400) reflectance of ROI samples per pixel.
Figure 5: Spectral and spatial reflectance behaviors of vegetation (NDVI), water (NDWI), and fractional cover image (PV, NPV, and BARE) indices of the Anauá National Forest in Roraima, Brazil.
Figure 6: Pixel-based supervised classification in 3 color compositions (maximum likelihood) versus GEOBIA of the vegetation cover and land-use classes in the Anauá National Forest, Roraima State, Brazil, where area % expresses the percentage area per class as classified by GEOBIA.
Figure 7: Spectral and spatial patterns of vegetation cover and land-use classes in the Anauá National Forest, in which: (1) Descriptor 1 {NIR classified by reflectance (%) at the threshold of 0.2 (blue) to 0.4 (green)}; (2) green vegetation in the visible range (RGB); (3) Descriptor 2 {Haralick texture contrast (GLCM Contrast all dir.)} showing light tones in areas of high contrast (rough texture) and dark tones in areas of lower contrast (smooth texture); (4) red vegetation in the near-infrared range (NIR); and (5) fractional cover imaging, evidencing the high percentages of exposed soil (Bare, in pink) resulting from deforestation (black square), high concentrations of dead vegetation (NPV, in blue), and forest degradation (red rectangle) resulting from selective timber extraction.
Figure 8: Changes in land use and land cover provoke reductions in biomass and carbon stocks, with changes in the spectral behavior of vegetation reflectance in the Amazon. Images from Landsat 8 (OLI).
16 pages, 5353 KiB  
Article
Land Use and Land Cover Mapping with VHR and Multi-Temporal Sentinel-2 Imagery
by Suzanna Cuypers, Andrea Nascetti and Maarten Vergauwen
Remote Sens. 2023, 15(10), 2501; https://doi.org/10.3390/rs15102501 - 10 May 2023
Cited by 11 | Viewed by 7377
Abstract
Land Use/Land Cover (LULC) mapping is the first step in monitoring urban sprawl and its environmental, economic and societal impacts. While satellite imagery and vegetation indices are commonly used for LULC mapping, the limited resolution of these images can hamper object recognition for Geographic Object-Based Image Analysis (GEOBIA). In this study, we utilize very high-resolution (VHR) optical imagery with a resolution of 50 cm to improve object recognition for GEOBIA LULC classification. We focused on the city of Nice, France, and identified ten LULC classes using a Random Forest classifier in Google Earth Engine. We investigate the impact of adding Gray-Level Co-Occurrence Matrix (GLCM) texture information and spectral indices with their temporal components, such as maximum value, standard deviation, phase and amplitude, from the multi-spectral and multi-temporal Sentinel-2 imagery. This work focuses on identifying which input features yield the highest increase in accuracy. The results show that adding a single VHR image improves the classification accuracy from 62.62% to 67.05%, especially when the spectral indices and temporal analysis are not included. The impact of the GLCM is similar to, but smaller than, that of the VHR image. Overall, the inclusion of temporal analysis improves the classification accuracy to 74.30%. The blue band of the VHR image had the largest impact on the classification, followed by the amplitude of the green-red vegetation index and the phase of the normalized multi-band drought index.
(This article belongs to the Special Issue Multi-Source Data with Remote Sensing Techniques)
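The temporal components mentioned above (phase, amplitude, and mean of each index) come from harmonic regression on the index time series, as Figures 4 and 5 below illustrate. A minimal sketch of that fit, assuming a single annual harmonic and synthetic NDVI observations:

```python
import numpy as np

def harmonic_fit(t_years: np.ndarray, ndvi: np.ndarray) -> dict:
    """Least-squares fit of NDVI(t) = a + b*cos(2*pi*t) + c*sin(2*pi*t)."""
    A = np.column_stack([np.ones_like(t_years),
                         np.cos(2 * np.pi * t_years),
                         np.sin(2 * np.pi * t_years)])
    a, b, c = np.linalg.lstsq(A, ndvi, rcond=None)[0]
    return {"mean": float(a),
            "amplitude": float(np.hypot(b, c)),   # intra-annual variation
            "phase": float(np.arctan2(c, b))}     # peak timing, in radians

t = np.linspace(0.0, 2.0, 48)  # two years of roughly biweekly observations
ndvi = (0.5 + 0.2 * np.cos(2 * np.pi * (t - 0.45))
        + np.random.default_rng(4).normal(0.0, 0.02, t.size))
fit = harmonic_fit(t, ndvi)
print(fit, "peak at year fraction ~", round(fit["phase"] / (2 * np.pi), 2))
```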
Figure 1: The area of Nice is located on the Mediterranean coast of southeastern France. All available image patches are shown on the left map. Four examples of the orthoimages are shown on the right. Each tile encompasses 1 km².
Figure 2: The 12 available classes in the IEEE data fusion contest are reduced to ten. Training samples are acquired through stratified point sampling.
Figure 3: Poor data labeling by the Urban Atlas, reused in the IEEE data fusion contest. (Left) aerial imagery; (right) labeled images. The labels are as follows: 1: Urban fabric; 2: Industrial, commercial, public, military, private and transport units; 7: Pastures; 10: Forests; 11: Herbaceous vegetation associations; 12: Open spaces with little or no vegetation; and 14: Water.
Figure 4: The fitted harmonic trend of NDVI at four different locations. Land cover classes can be distinguished based on the fitted curves. To include this information in the image composite, the maximum value, standard deviation, phase, amplitude and mean value of each curve are added as bands.
Figure 5: The HSV image shows the phase (HUE), amplitude (SAT) and mean (VAL) of the NDVI index in a region on the coast of Nice. The color indicates at what time of year a crop matures. The saturation shows how much intra-annual variation is observed. Black areas have a low mean value and thus no variation is present.
Figure 6: The resulting LULC Random Forest predictions for two VHR aerial images using Sentinel-2 input data (S2), spectral indices (S2i) and the temporal analysis (S2+). In the third column, the VHR orthoimage is added. In the last column, the GLCM features are added. For the legend, see Figure 2.
Figure 7: The predictions are influenced by the clustering. (a) Clustering on S2 data, prediction on S2. (b) Clustering on VHR, prediction on S2 and VHR. (c) Clustering on S2+ and VHR, prediction on S2+ and VHR.
Figure 8: The relative importance histogram of S2+ with VHR image bands. The 12 S2 bands are shown on the left. The middle section shows the spectral indices and their temporal analysis bands. The VHR bands are on the outer right. In general, the VHR bands have high relevance, meaning that they have the highest Gini index in the random forest.
Figure 9: The confusion matrix of S2+ with VHR image bands.
24 pages, 2950 KiB  
Article
Can Plot-Level Photographs Accurately Estimate Tundra Vegetation Cover in Northern Alaska?
by Hana L. Sellers, Sergio A. Vargas Zesati, Sarah C. Elmendorf, Alexandra Locher, Steven F. Oberbauer, Craig E. Tweedie, Chandi Witharana and Robert D. Hollister
Remote Sens. 2023, 15(8), 1972; https://doi.org/10.3390/rs15081972 - 8 Apr 2023
Cited by 2 | Viewed by 2329
Abstract
Plot-level photography is an attractive time-saving alternative to field measurements for vegetation monitoring. However, widespread adoption of this technique relies on efficient workflows for post-processing images and on the accuracy of the resulting products. Here, we estimated relative vegetation cover using both traditional field sampling methods (point frame) and semi-automated classification of photographs (plot-level photography) across thirty 1 m² plots near Utqiaġvik, Alaska, from 2012 to 2021. Geographic object-based image analysis (GEOBIA) was applied to generate objects based on the three spectral bands (red, green, and blue) of the images. Five machine learning algorithms were then applied to classify the objects into vegetation groups, and random forest performed best (60.5% overall accuracy). Objects were reliably classified into the following classes: bryophytes, forbs, graminoids, litter, shadows, and standing dead. Deciduous shrubs and lichens were not reliably classified. Multinomial regression models were used to gauge whether the cover estimates from plot-level photography could accurately predict the cover estimates from the point frame across space or time. Plot-level photography yielded useful estimates of vegetation cover for graminoids. However, the predictive performance varied both by vegetation class and by whether it was used to predict cover in new locations or change over time in previously sampled plots. These results suggest that plot-level photography may maximize the efficient use of time, funding, and available technology to monitor vegetation cover in the Arctic, but the accuracy of current semi-automated image analysis is not sufficient to detect small changes in cover.
(This article belongs to the Special Issue Advanced Technologies in Wetland and Vegetation Ecological Monitoring)
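The panel insets of Figure 4 below report mean absolute error (MAE) and bias between photo-derived and point-frame cover. Those two agreement metrics are simple to reproduce; the toy cover values below are placeholders, not the study's data.

```python
import numpy as np

def mae_bias(field: np.ndarray, photo: np.ndarray) -> dict:
    """Agreement between point-frame (field) and photo-derived cover estimates (%)."""
    err = photo - field
    return {"MAE": float(np.abs(err).mean()), "bias": float(err.mean())}

rng = np.random.default_rng(5)
field = rng.uniform(0.0, 60.0, 30)           # toy graminoid cover for 30 plots
photo = field + rng.normal(1.5, 4.0, 30)     # photo estimate with a slight bias
print(mae_bias(field, photo))
```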
Figure 1: Map of the research site. (A) The site is stationed above the Arctic Circle (denoted by a white dashed line) on the Barrow Peninsula near the city of Utqiaġvik, Alaska. (B) The 30 vegetation plots in this analysis are represented by white squares. These plots are part of a larger collection of 98 plots (denoted by black squares), which are evenly distributed at a 100-m interval across the Arctic System Science (ARCSS) grid.
Figure 2: Schematic of the processing pipelines to estimate relative vegetation cover using (A) plot-level photography and (B) point frame field sampling methods. The steps to process the plot-level photographs were guided by semi-automated object-based image analysis: data acquisition, preprocessing images in ArcGIS Pro (orange), segmentation and preliminary classification in eCognition (light blue), and development and selection of a machine learning model in R (dark blue).
Figure 3: Example of the image segmentation and classification of a plot. (A) The extent of the plot image is 0.75 m², cropped according to the footprint of the point frame. Scale is increased to show (B) the vegetation in the plot, (C) the primitive image objects resulting from multi-resolution segmentation, and (D) the final classification of the image objects using the optimal random forest model.
Figure 4: Cover estimates derived from the point frame and plot-level photography. Each point shows the cover of a vegetation class in each plot for each year sampled. The y-axis relates to the measured point frame cover, while the x-axis relates to the estimates from plot-level photography. Histograms on each axis show the distribution of values. Insets within each panel illustrate multinomial model performance using mean absolute error (MAE) and bias. The 1:1 reference line is included as a visual aid.
24 pages, 7027 KiB  
Article
Urban Structure Changes in Three Areas of Detroit, Michigan (2014–2018) Utilizing Geographic Object-Based Classification
by Vera De Wit and K. Wayne Forsythe
Land 2023, 12(4), 763; https://doi.org/10.3390/land12040763 - 28 Mar 2023
Cited by 2 | Viewed by 1535
Abstract
This study utilized geographic object-based image analysis methods to detect pervious and impervious land cover with respect to residential structure changes. The datasets consist of freely available very high-resolution orthophotos acquired under the United States National Agriculture Imagery Program. Over the last several decades, cities in America's Rust Belt region have experienced population and economic declines, most notably the city of Detroit. With increased property vacancies, many residential structures are abandoned and left vulnerable to degradation. In many cases, one of the answers is to demolish the structure, leaving a physical, permanent change to the urban fabric. This study investigates the performance of object-based classification in segmenting and classifying orthophotos across three neighbourhoods (Crary/St. Mary, Core City, Pulaski) with different demolition rates within Detroit. The research successfully distinguished pervious from impervious land cover and linked the results to parcel lot administrative boundaries within the city of Detroit. Detection rates of residential parcels containing structures ranged from a low of 63.99% to a high of 92.64%; overall, the detection method performed better where there were more empty residential parcels. Pervious and impervious overall classification accuracy for the 2018 and 2014 imagery was 98.333% (kappa 0.966), with slight variation in the producer's and user's accuracy statistics for each year.
(This article belongs to the Section Land Systems and Global Change)
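The reported 98.333% overall accuracy and kappa of 0.966 follow directly from a confusion matrix. As a worked check, the 2x2 matrix below is invented to roughly reproduce those figures; it is not the paper's actual matrix.

```python
import numpy as np

def overall_accuracy_and_kappa(cm: np.ndarray) -> tuple:
    """Overall accuracy and Cohen's kappa from a confusion matrix (rows = reference)."""
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy pervious/impervious matrix shaped like the reported ~98.3% accuracy.
cm = np.array([[59, 1],
               [1, 59]])
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"OA = {oa:.3%}, kappa = {kappa:.3f}")
```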
Figure 1: Study location.
Figure 2: NAIP 2014 and 2018 coverage of Detroit.
Figure 3: OBIA workflow diagram.
Figure 4: Observed 2.59 km² (one square mile) extent, NAIP 2018.
Figure 5: NAIP imagery, opposing directions of shadows.
Figure 6: Residential demolition rates by neighbourhood, and the three areas of interest, 2014–2018 NAIP imagery.
Figure 7: NAIP 2018 2.59 km² (one square mile) tile extent: classification result.
Figure 8: NAIP 2014 2.59 km² (one square mile) tile extent: classification result.
Figure 9: Batch GEOBIA, low demolition rate, Crary/St. Mary, Detroit.
Figure 10: Batch GEOBIA, median demolition rate, Core City, Detroit.
Figure 11: Batch GEOBIA, high demolition rate, Pulaski, Detroit.
Figure 12: Three areas of interest, change detection: NAIP 2018 minus NAIP 2014.
Figure 13: NAIP 2014 impervious surfaces on parcel lots. The shaded areas (A, B and C) in the upper portion of the figure are shown separately at higher resolution in the lower part of the figure.
Figure 14: NAIP 2018 impervious surfaces on parcel lots. The shaded areas (A, B and C) in the upper portion of the figure are shown separately at higher resolution in the lower part of the figure.
Figure 15: Impervious surface overlap on parcel lots.
20 pages, 12825 KiB  
Article
A Comparison of Machine Learning Models for Mapping Tree Species Using WorldView-2 Imagery in the Agroforestry Landscape of West Africa
by Muhammad Usman, Mahnoor Ejaz, Janet E. Nichol, Muhammad Shahid Farid, Sawaid Abbas and Muhammad Hassan Khan
ISPRS Int. J. Geo-Inf. 2023, 12(4), 142; https://doi.org/10.3390/ijgi12040142 - 25 Mar 2023
Cited by 5 | Viewed by 3237
Abstract
Farmland trees are a vital part of the local economy, as trees are used by farmers for fuelwood as well as food, fodder, medicines, fibre, and building materials. As a result, mapping tree species is important for ecological, socio-economic, and natural resource management. This study evaluates very high-resolution remotely sensed WorldView-2 (WV-2) imagery for tree species classification in the agroforestry landscape of the Kano Close-Settled Zone (KCSZ), Northern Nigeria. Individual tree crowns extracted by geographic object-based image analysis (GEOBIA) were used to remotely identify nine dominant tree species (Faidherbia albida, Anogeissus leiocarpus, Azadirachta indica, Diospyros mespiliformis, Mangifera indica, Parkia biglobosa, Piliostigma reticulatum, Tamarindus indica, and Vitellaria paradoxa) at the object level. For every tree object in the reference datasets, the eight original spectral bands of the WV-2 image, their spectral statistics (minimum, maximum, mean, standard deviation, etc.), spatial, textural, and color-space (hue, saturation) features, and different spectral vegetation indices (VIs) were used as predictor variables for the classification of tree species. Nine different machine learning methods were used for object-level tree species classification: eXtreme Gradient Boosting (XGB), Gaussian Naïve Bayes (GNB), Gradient Boosting (GB), K-nearest neighbours (KNN), Light Gradient Boosting Machine (LGBM), Logistic Regression (LR), Multi-layered Perceptron (MLP), Random Forest (RF), and Support Vector Machines (SVM). The two top-performing models in terms of accuracy for individual tree species classification were SVM (overall accuracy = 82.1%, Cohen's kappa = 0.79) and MLP (overall accuracy = 81.7%, Cohen's kappa = 0.79), with the lowest numbers of misclassified trees compared to the other machine learning methods.
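A hedged sketch of how several of the listed classifiers can be benchmarked on one per-crown feature table with Cohen's kappa follows; the features and nine-class species labels are synthetic stand-ins, and only four of the nine models are shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(900, 20))   # per-crown predictors: band stats, texture, VIs
# Weakly learnable 9-class labels so the comparison is non-degenerate.
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=np.linspace(-3, 3, 8))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)

candidates = {
    "SVM": SVC(kernel="rbf", C=10),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=6),
    "KNN": KNeighborsClassifier(5),
    "LR": LogisticRegression(max_iter=1000),
}
for name, clf in candidates.items():
    model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
    print(name, "kappa:", round(cohen_kappa_score(y_te, model.predict(X_te)), 3))
```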
Figure 1: (a) Location of the study area; (b) WV-2 image of the study area with the locations of the sites sampled in the field; (c) zoomed-in view of a pansharpened WV-2 image.
Figure 2: A fallow agricultural field with different tree species in the Kano Close-Settled Zone, Northern Nigeria, during the dry season (January 2016).
Figure 3: A subset of the WorldView-2 imagery (dry season) acquired over the KCSZ, shown as a false-colour RGB composite of the NIR 1, red, and green bands, with tree crowns extracted using GEOBIA.
Figure 4: Comparison of the accuracy of different machine learning methods using different predictors. The y-axis represents the kappa coefficient.
Figure 5: Map of the spatial distribution of tree species in a small portion of the study area.
12 pages, 1747 KiB  
Article
Application of Precision Agriculture for the Sustainable Management of Fertilization in Olive Groves
by Eliseo Roma, Vito Armando Laudicina, Mariangela Vallone and Pietro Catania
Agronomy 2023, 13(2), 324; https://doi.org/10.3390/agronomy13020324 - 20 Jan 2023
Cited by 13 | Viewed by 2725
Abstract
Olive growing (Olea europaea L.) has increased considerably in recent decades, as has the consumption of extra virgin olive oil worldwide. Precision agriculture is increasingly being applied in olive orchards as a new way to manage agronomic variability, with the aim of providing individual plants with the right input amount while limiting waste or excess. The objective of this study was to develop a methodology on a GIS platform using GEOBIA algorithms to build prescription maps for variable rate technology (VRT) nitrogen fertilizer application in an olive orchard. The fertilization plan was determined for each tree by applying its own nitrogen balance, taking into account the variability of nitrogen in the soil and leaves, the production, and the actual biometric and spectral conditions. Each olive tree was georeferenced using the S7-G Stonex instrument with real-time kinematic (RTK) positioning correction, and the trunk cross-section area (TCSA) was measured. Soil and leaves were sampled to study nutrient variability, and the samples were analyzed for all major physical and chemical properties. Spectral data were obtained using a multispectral camera (DJI multispectral) carried by an unmanned aerial vehicle (UAV) platform (DJI Phantom 4). The biometric characteristics of the plants were extracted from the resulting normalized difference vegetation index (NDVI) map. The obtained prescription map can be used for variable rate fertilization with a tractor and fertilizer spreader connected via the ISOBUS system. Using the proposed methodology, the variable rate application of nitrogen fertilizer resulted in a 31% reduction in the amount applied in the olive orchard compared to the standard dose.
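The per-tree nitrogen balance is only summarized above; the sketch below shows the general shape of such a computation. Every coefficient and threshold here is an invented placeholder, not a value from the paper.

```python
# Illustrative per-tree nitrogen balance; all coefficients are assumptions.
def nitrogen_dose(yield_kg: float, leaf_n_pct: float, soil_n_ppm: float,
                  canopy_m2: float) -> float:
    """Nitrogen to apply per tree (g), from a simplified balance."""
    demand = yield_kg * 15.0                              # g N per kg expected yield
    leaf_credit = max(0.0, leaf_n_pct - 1.5) * 100.0      # credit above a 1.5% threshold
    soil_credit = soil_n_ppm * canopy_m2 * 0.2            # plant-available soil N
    return max(0.0, demand - leaf_credit - soil_credit)

trees = [  # (expected yield kg, leaf N %, soil N ppm, NDVI-derived canopy m2)
    (18.0, 1.4, 6.0, 9.5),
    (25.0, 1.8, 10.0, 14.0),
]
for i, t in enumerate(trees, 1):
    print(f"tree {i}: apply {nitrogen_dose(*t):.0f} g N")
```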
Figure 1: (a) Experimental site location at a scale of 1:1,200,000; (b) study area.
Figure 2: TOC (%) map of the study area.
Figure 3: Nitrogen concentration determined in 36 leaf samples uniformly collected from olive plants. The red line represents the average of all samples, while the green line represents the threshold.
Figure 4: Canopy area and NDVI value for each plant in a portion of the plot.
Figure 5: (a) Overlay between the production map (red-green scale) and the individual tree canopies colored according to NDVI value; (b) detail of the image.
Figure 6: Prescription map of nitrogen fertilization according to the different quantities of nitrogen per square meter.
25 pages, 17884 KiB  
Article
The Effects of Spatial Resolution and Resampling on the Classification Accuracy of Wetland Vegetation Species and Ground Objects: A Study Based on High Spatial Resolution UAV Images
by Jianjun Chen, Zizhen Chen, Renjie Huang, Haotian You, Xiaowen Han, Tao Yue and Guoqing Zhou
Drones 2023, 7(1), 61; https://doi.org/10.3390/drones7010061 - 15 Jan 2023
Cited by 28 | Viewed by 4191
Abstract
When employing remote sensing images, it is challenging to classify vegetation species and ground objects due to the abundance of wetland vegetation species and the high fragmentation of ground objects. Remote sensing images are characterized primarily by their spatial resolution, which significantly impacts the classification accuracy of vegetation species and ground objects. However, the effects of spatial resolution and resampling on classification results remain understudied. The study area of this paper was the core zone of the Huixian Karst National Wetland Park in Guilin, Guangxi, China. Aerial images (Am) with different spatial resolutions were obtained from a UAV platform, and resampled images (An) with different spatial resolutions were obtained by the pixel aggregation method. To evaluate the impact of spatial resolution and resampling on classification accuracy, the Am and An images were used to classify vegetation species and ground objects with the geographic object-based image analysis (GEOBIA) method and various machine learning classifiers. The results showed that: (1) in multi-scale images, both the optimal scale parameter (SP) and the processing time decreased as the spatial resolution diminished in the multi-resolution segmentation process, and at the same spatial resolution, the SP of the An was greater than that of the Am; (2) the appropriate feature variables differed between the Am and the An, with spectral and texture features being more significant in the An than in the Am; (3) the classification results of the various classifiers exhibited similar trends for the Am and the An at spatial resolutions from 1.2 to 5.9 cm, where the overall classification accuracy first increased and then decreased as spatial resolution decreased, and the classification accuracy of the Am was higher than that of the An; and (4) when vegetation species and ground objects were classified at different spatial scales, the classification accuracy differed between the Am and the An.
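The pixel aggregation method used to derive the An images amounts to block-mean downsampling. A minimal numpy version follows; the array sizes and the aggregation factor are illustrative, not the study's exact values.

```python
import numpy as np

def aggregate(image: np.ndarray, factor: int) -> np.ndarray:
    """Downsample by pixel aggregation: mean over factor x factor blocks."""
    h = (image.shape[0] // factor) * factor
    w = (image.shape[1] // factor) * factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(7)
a1 = rng.random((1200, 1200))   # stand-in 1.2 cm resolution band
a5 = aggregate(a1, 5)           # ~5.9 cm analogue via aggregation
print(a1.shape, "->", a5.shape)
```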
Show Figures

Figure 1: Overview of the study area.
Figure 2: Ground truth reference image.
Figure 3: Technical workflow of this study.
Figure 4: Spatial distribution of training samples.
Figure 5: Change in separability with the number of features and classes (1.2 cm spatial resolution image); the blue diamond (value 2.949) marks the maximum separation distance.
Figure 6: Results of the ESP2 scale analysis (1.2 cm spatial resolution image).
Figure 7: Segmentation results for vegetation species and ground objects (1.2 cm spatial resolution image).
Figure 8: Variation trends of the optimal SP and segmentation time in the Am and the An.
Figure 9: Evaluation results of the importance of each feature in the Am.
Figure 10: Evaluation results of the importance of each feature in the An.
Figure 11: Classification results of the Am under the RF classifier; the UAV-RGB image and ground truth reference image of the study area are shown in Figure 1 and Figure 2, respectively.
Figure 12: Identification accuracy of vegetation species and ground objects in the Am.
Figure 13: Classification results of the An under the RF classifier; the UAV-RGB image and ground truth reference image of the study area are shown in Figure 1 and Figure 2, respectively.
Figure 14: Identification accuracy of vegetation species and ground objects in the An.
18 pages, 82838 KiB  
Article
GEOBIA and Vegetation Indices in Extracting Olive Tree Canopies Based on Very High-Resolution UAV Multispectral Imagery
by Ante Šiljeg, Rajko Marinović, Fran Domazetović, Mladen Jurišić, Ivan Marić, Lovre Panđa, Dorijan Radočaj and Rina Milošević
Appl. Sci. 2023, 13(2), 739; https://doi.org/10.3390/app13020739 - 4 Jan 2023
Cited by 10 | Viewed by 2879
Abstract
In recent decades, precision agriculture and geospatial technologies have made it possible to ensure sustainability in the olive-growing sector. The main goal of this study is the extraction of olive tree canopies by comparing two approaches, the first based on geographic object-based image analysis (GEOBIA) and the second on the use of vegetation indices (VIs). The research area is a micro-location within the Lun olive gardens on the island of Pag. An unmanned aerial vehicle (UAV) with a multispectral (MS) sensor was used to generate a very high-resolution (VHR) UAVMS model, while a second mission was flown to create a VHR digital orthophoto (DOP). When implementing the GEOBIA approach for olive canopy extraction, user-defined parameters and the classification algorithms support vector machine (SVM), maximum likelihood classifier (MLC), and random trees classifier (RTC) were evaluated. The RTC algorithm achieved the highest overall accuracy (OA) of 0.7565 and kappa coefficient (KC) of 0.4615. The second approach included five different VI models (NDVI, NDRE, GNDVI, MCARI2, and RDVI2), which were optimized using the proposed VITO (VI Threshold Optimizer) tool. The NDRE index model was selected as the most accurate one according to the ROC accuracy measure, with an area under the curve (AUC) of 0.888. Full article
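Since the VI approach hinges on choosing a canopy/non-canopy threshold for each index, a threshold optimizer in the spirit of VITO can be sketched briefly: compute the index, sweep candidate thresholds against reference canopy labels, and keep the best one. This is a minimal sketch assuming per-pixel 0/1 reference labels and Youden's J (TPR - FPR) as the selection criterion; the internals of the actual VITO tool are not reproduced here, and all function names are hypothetical.

```python
import numpy as np

def ndre(nir: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """Normalized difference red-edge index: (NIR - RE) / (NIR + RE)."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

def best_threshold(index: np.ndarray, labels: np.ndarray, n_steps: int = 200) -> float:
    """Sweep candidate thresholds, keep the one maximizing Youden's J = TPR - FPR."""
    best_t, best_j = None, -np.inf
    for t in np.linspace(index.min(), index.max(), n_steps):
        pred = index >= t                       # canopy mask at this threshold
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        tpr = tp / (tp + fn) if (tp + fn) else 0.0
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        if tpr - fpr > best_j:
            best_t, best_j = t, tpr - fpr
    return best_t

# Example with flattened band arrays and 0/1 reference labels:
rng = np.random.default_rng(0)
nir, re_band = rng.random(10_000), rng.random(10_000)
labels = (rng.random(10_000) > 0.5).astype(int)
t = best_threshold(ndre(nir, re_band), labels)
```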
Show Figures

Graphical abstract
Figure 1: Study area: (a) test area in the Lun olive gardens; (b) island of Pag; (c) Croatia.
Figure 2: The research methodology framework.
Figure 3: Field work: (a) Trinity F90+ MS mission; (b) Trimble R8s CP and RP collection.
Figure 4: Differences between various segmented models: (a) spectral detail; (b) spatial detail; (c) minimum segment size.
Figure 5: Adding olive tree class samples.
Figure 6: Accuracy assessment steps: (a) RT vectorization; (b) FT vectorization; (c) fishnet; (d) attribute table.
Figure 7: VITO tool scheme.
Figure 8: Distribution of RT and FT.
Figure 9: One thousand randomly distributed accuracy assessment points.
Figure 10: MS model with the 10-9-8 band layout.
Figure 11: Classified models: (a) SVM; (b) MLC; (c) RTC.
Figure 12: VI models: (a) NDVI; (b) NDRE; (c) GNDVI; (d) MCARI2; (e) RDVI2.
Figure 13: Classified VI models: (a) NDVI; (b) NDRE; (c) GNDVI; (d) MCARI2; (e) RDVI2.
Figure 14: ROC curves and AUC values for the VI models.
Figure 15: ROC curves and AUC values for the RT and FT polygon comparison.
Figure 16: ROC curves and AUC values for the 1000-point comparison.
22 pages, 5998 KiB  
Article
On the Choice of the Most Suitable Period to Map Hill Lakes via Spectral Separability and Object-Based Image Analyses
by Antonino Maltese
Remote Sens. 2023, 15(1), 262; https://doi.org/10.3390/rs15010262 - 2 Jan 2023
Cited by 2 | Viewed by 1995
Abstract
Technological advances in Earth observation have made images with high spatial and temporal resolutions available, but these also capture the radiometric heterogeneity of small geographical entities, which often change over time. Among small geographical entities, hill lakes exhibit a widespread distribution, and their census is sometimes partial or based on unreliable data. High resolution and heterogeneity have boosted the development of geographic object-based image analysis algorithms. This research analyzes which period is the most suitable for acquiring satellite images to identify and delimit hill lakes. This is achieved by analyzing the spectral separability of the surface reflectance of hill lakes from surrounding bare or vegetated soils and by implementing a semiautomatic procedure to enhance the segmentation phase of a GEOBIA algorithm. The proposed procedure was applied to high spatial resolution satellite images acquired in two different climate periods (arid and temperate), corresponding to the dry and vegetative seasons. The segmentation parameters were tuned by minimizing an under- and oversegmentation metric on the surfaces and perimeters of hill lakes selected as the reference. The separability of hill lakes from their surroundings was evaluated using Euclidean and divergence metrics in both the arid and temperate periods. The classification accuracy was evaluated by calculating the error matrix and the normalized error matrix. Class reflectances in the image acquired in the arid period show the highest average separability (3–4 times higher than in the temperate one). The segmentation based on the reference areas performs better than that based on the reference perimeters (metric ≈ 20% lower). Both separability metrics and classification accuracies indicate that images acquired in the arid period are more suitable than those acquired in the temperate period for mapping hill lakes. Full article
(This article belongs to the Special Issue Remote Sensing in Geomatics)
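The two separability measures named in this abstract, Euclidean distance and divergence, can be sketched for two classes of sampled reflectances (e.g., lake versus surrounding bare-soil pixels). The per-band Gaussian assumption and the example values below are illustrative choices, not the paper's exact formulations.

```python
import numpy as np

def euclidean_separability(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between the mean reflectance vectors of two classes.
    a, b: arrays of shape (n_pixels, n_bands)."""
    return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

def gaussian_divergence(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Kullback-Leibler divergence summed over bands, assuming each
    band is normally distributed within a class."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0) + 1e-12, b.var(axis=0) + 1e-12
    # Per-band symmetric KL divergence between N(ma, va) and N(mb, vb);
    # the log-variance terms cancel in the symmetric sum.
    d = 0.5 * (va / vb + vb / va - 2.0) + 0.5 * (ma - mb) ** 2 * (1.0 / va + 1.0 / vb)
    return float(d.sum())

# Example: lake vs. bare-soil samples with 4 spectral bands each.
rng = np.random.default_rng(0)
lake = rng.normal(0.05, 0.01, size=(500, 4))   # water: low reflectance
soil = rng.normal(0.30, 0.05, size=(500, 4))   # bare soil: brighter
print(euclidean_separability(lake, soil), gaussian_divergence(lake, soil))
```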
Show Figures

Figure 1: Location map: (a) Italy (light green) in Europe (light grey); (b) the river basin (dashed polygon) in Sicily (light green, south Italy) with the pilot area (blue rectangle) superimposed. Coordinate Reference Systems (CRS) were chosen according to the Decree of 10 November 2011 entitled "Adoption of the National Geodetic Reference System" issued by the Presidency of the Council of Ministers of Italy (Official Gazette General Series n. 48 of 27 February 2012, Ordinary Suppl. n. 37).
Figure 2: Pilot area: CIR composition of the selected images for the arid (a) and temperate (b) periods.
Figure 3: Flowchart of the procedure used to test the proposed method.
Figure 4: On the primary y-axis, the frequency of occurrence of D, f_D, output by the segmentation calibration code (pilot areas representing the arid and temperate periods in black and blue histograms, respectively); on the secondary y-axis, the cumulative of f_D, F_D (dotted lines).
Figure 5: Segmentation parameters optimized on reference areas in the arid period (black dots represent values from the rough tuning; blue dots from the fine tuning): (a) D versus k; (b) D versus p; (c) D versus s; (d) D versus d dispersions.
Figure 6: Segmentation parameters optimized on reference areas in the temperate period: (a) D versus k; (b) D versus p; (c) D versus s; (d) D versus d dispersions.
Figure 7: Segmentation parameters optimized on perimeters for the arid period: (a) D versus k; (b) D versus p; (c) D versus s; (d) D versus d dispersions.
Figure 8: Segmentation parameters optimized on perimeters for the temperate period: (a) D versus k; (b) D versus p; (c) D versus s; (d) D versus d dispersions.
Figure 9: Performance in terms of surface estimation: surfaces output by the GEOBIA procedure, A_S, for the 30 reference lakes versus surfaces of the digitized lakes, A_R, for the images representing the arid (a) and temperate (b) periods.
Figure 10: Histogram of the separability distance for the pilot areas representing the arid (black) and temperate (blue) periods: (a) normalized Euclidean distance, d_E; (b) normalized divergence, d_D.
Figure 11: Spatial distribution of the hill lakes in the pilot area resulting from the segmentation-classification process. Cyan points represent lakes identified in both the arid- and temperate-period images; black points identify lakes classified only in the arid period; blue points identify lakes classified only in the temperate one.
Figure 12: Segmentation or classification issues (polygons bounded by yellow lines) superimposed on the CIR composition of the images representing the arid and temperate periods (upper and lower panels, respectively): (a) erroneous segmentation; (b,c) misclassification; (d) correct segmentation and classification.