Remote Sens., Volume 4, Issue 9 (September 2012) – 16 articles, Pages 2492-2889

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
7599 KiB  
Article
Mapping Vegetation Density in a Heterogeneous River Floodplain Ecosystem Using Pointable CHRIS/PROBA Data
by Jochem Verrelst, Erika Romijn and Lammert Kooistra
Remote Sens. 2012, 4(9), 2866-2889; https://doi.org/10.3390/rs4092866 - 24 Sep 2012
Cited by 113 | Viewed by 11861
Abstract
River floodplains in the Netherlands serve as water storage areas, while they also have the function of nature rehabilitation areas. Floodplain vegetation is therefore subject to natural processes of vegetation succession. At the same time, vegetation encroachment obstructs the water flow into the floodplains and increases the flood risk for the hinterland. Spaceborne pointable imaging spectroscopy has the potential to quantify vegetation density on the basis of leaf area index (LAI) from a desired view zenith angle. In this respect, hyperspectral pointable CHRIS data were linked to the ray tracing canopy reflectance model FLIGHT to retrieve vegetation density estimates over a heterogeneous river floodplain. FLIGHT enables simulating top-of-canopy reflectance of vegetated surfaces either in turbid (e.g., grasslands) or in 3D (e.g., forests) mode. By inverting FLIGHT against CHRIS data, LAI was computed for three main classified vegetation types, ‘herbaceous’, ‘shrubs’ and ‘forest’, and for the CHRIS view zenith angles in nadir, backward (−36°) and forward (+36°) scatter direction. The −36° direction showed most LAI variability within the vegetation types and was best validated, closely followed by the nadir direction. The +36° direction led to poorest LAI retrievals. The class-based inversion process has been implemented into a GUI toolbox which would enable the river manager to generate LAI maps in a semiautomatic way. Full article
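
The retrieval described above is essentially a lookup-table inversion: a canopy reflectance model is run forward for candidate LAI values, and the LAI whose simulated spectrum best matches the observed CHRIS spectrum (lowest RMSE) is kept. The Python sketch below illustrates only that idea; the `simulate_toc_refl` callable, the LAI grid and the per-class handling are placeholders standing in for the actual FLIGHT parameterization used by the authors.

```python
import numpy as np

def invert_lai(observed_refl, simulate_toc_refl, lai_grid=np.arange(0.1, 8.6, 0.1)):
    """Lookup-table inversion: return the LAI minimizing spectral RMSE.

    observed_refl     -- 1-D array of top-of-canopy reflectances for one pixel
    simulate_toc_refl -- callable(lai) -> simulated spectrum (stand-in for a
                         FLIGHT forward run in turbid or 3-D mode)
    """
    best_lai, best_rmse = None, np.inf
    for lai in lai_grid:
        simulated = simulate_toc_refl(lai)                      # forward model run
        rmse = float(np.sqrt(np.mean((observed_refl - simulated) ** 2)))
        if rmse < best_rmse:
            best_lai, best_rmse = lai, rmse
    return best_lai, best_rmse

# In a class-based inversion, each mapped vegetation type ('herbaceous', 'shrubs',
# 'forest') would supply its own forward-model configuration and LAI grid.
```
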
Show Figures

The study area, located in the east of the Netherlands, indicated on the CHRIS nadir image in true colour band composition (R: 675.2 nm, G: 551.7 nm, B: 490.5 nm). The red circle represents the river floodplains of Millingerwaard. The black outlined river area overlain on the CHRIS nadir image represents the nature reserve the Gelderse Poort.

Polar plot showing the actual positions of the five angular CHRIS images during acquisition on 6 September 2005. The solar zenith angle was 46°, the solar azimuth angle 170°.

Maximum likelihood classification result of the CHRIS nadir image of the (a) Gelderse Poort and (b) Millingerwaard (indicated with the black square) into major land cover types.

LAI maps (left) and derived histograms for LAI < 8.5 (right) of Millingerwaard for the backward scattering direction (−36° VZA) (top), the nadir direction (middle) and the forward scattering direction (+36° VZA) (bottom), derived with FLIGHT model inversion.

Mean validation results and standard deviation of the estimated LAI obtained with FLIGHT model inversion, plotted against the measured LAI values obtained with the hemispherical camera for the backward scattering direction (−36° VZA), the nadir direction and the forward scattering direction (+36° VZA).

Maps of minimum RMSEs for LAI retrievals (left) and derived histograms for < 8.5 (right) of Millingerwaard for the backward scattering direction (−36° VZA) (top), the nadir direction (middle) and the forward scattering direction (+36° VZA) (bottom), derived with FLIGHT model inversion.

LAI map and histogram for the backward scattering direction (−36° VZA), derived with FLIGHT model inversion applied to the Gelderse Poort area.
734 KiB  
Article
Multivariate Analysis of MODerate Resolution Imaging Spectroradiometer (MODIS) Aerosol Retrievals and the Statistical Hurricane Intensity Prediction Scheme (SHIPS) Parameters for Atlantic Hurricanes
by Mohammed M. Kamal, Ruixin Yang and John J. Qu
Remote Sens. 2012, 4(9), 2846-2865; https://doi.org/10.3390/rs4092846 - 24 Sep 2012
Cited by 1 | Viewed by 7228
Abstract
MODerate Resolution Imaging Spectroradiometer (MODIS) aerosol retrievals over the North Atlantic spanning seven hurricane seasons are combined with the Statistical Hurricane Intensity Prediction Scheme (SHIPS) parameters. The differences between current and future intensity were selected as response variables. For 24 major hurricanes (category 3, 4 and 5) between 2003 and 2009, eight lead-time response variables were determined for lead times between 6 and 48 h. By combining MODIS and SHIPS data, 56 variables were compiled and selected as predictors for this study. Variable reduction from 56 to 31 was performed in two steps: the first step was via correlation coefficients (cc), followed by Principal Component Analysis (PCA) extraction techniques. The PCA reduced the 31 variables to 20. Five categories were established based on the PCA group variables exhibiting similar physical phenomena. Average aerosol retrievals from MODIS Level 2 data in the vicinity of 1200 and 1800 UTC were mapped to the SHIPS parameters to perform Multiple Linear Regression (MLR) of each response variable against six sets of predictors of 31, 30, 28, 27, 23 and 20 variables. The deviation among the predictor sets' Root Mean Square Errors (RMSE) varied between 0.01 and 0.05, implying that reducing the number of variables did not change the core physical information. Even when the parameters were reduced from 56 to 20, the correlation values exhibited a strong relationship between the response and predictors. Therefore, the same phenomena can be explained with the reduced set of variables. Full article
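
As a rough illustration of the two-step variable reduction (correlation screening, then PCA) followed by multiple linear regression described above, the sketch below uses scikit-learn; the threshold, component count and data layout are assumptions for demonstration, not the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def reduce_and_regress(X, y, cc_threshold=0.9, n_components=20):
    """Two-step predictor reduction (correlation filter, then PCA) followed by MLR.

    X -- (n_samples, n_predictors) array of combined MODIS/SHIPS predictors
    y -- intensity-change response variable (e.g., one of the FD lead times)
    """
    # Step 1: drop one predictor from every highly correlated pair.
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < cc_threshold for k in keep):
            keep.append(j)
    X_cc = X[:, keep]

    # Step 2: PCA extraction of the remaining predictors.
    pca = PCA(n_components=min(n_components, X_cc.shape[1]))
    X_pc = pca.fit_transform(X_cc)

    # Multiple linear regression of the response on the retained components.
    mlr = LinearRegression().fit(X_pc, y)
    rmse = mean_squared_error(y, mlr.predict(X_pc)) ** 0.5
    return mlr, rmse
```
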
Show Figures

(a) MODIS data was collected within the annulus of the two concentric circles with radii r1 and r2. (b) Data collection following the motion of a hurricane (for example).

Average root mean square error (RMSE) for each future difference (FD).

Contribution factors of the MLR between FD48 and the original set of 55 variables. MSLP was removed from the analysis.

Contribution factors of the MLR between FD48 and Predictor_1, which has 31 variables.

Contribution factors of the MLR between FD48 and Predictor_6, which has 20 variables.

R² values and RMSE at eight lead time positions: 06, 12, 18, 24, 30, 36, 42 and 48 h.

Residual plots for FD06 and FD48.
9284 KiB  
Article
Modelling Forest α-Diversity and Floristic Composition — On the Added Value of LiDAR plus Hyperspectral Remote Sensing
by Benjamin F. Leutner, Björn Reineking, Jörg Müller, Martin Bachmann, Carl Beierkuhnlein, Stefan Dech and Martin Wegmann
Remote Sens. 2012, 4(9), 2818-2845; https://doi.org/10.3390/rs4092818 - 21 Sep 2012
Cited by 74 | Viewed by 12616
Abstract
The decline of biodiversity is one of the major current global issues. Still, there is a widespread lack of information about the spatial distribution of individual species and biodiversity as a whole. Remote sensing techniques are increasingly used for biodiversity monitoring and especially the combination of LiDAR and hyperspectral data is expected to deliver valuable information. In this study spatial patterns of vascular plant community composition and α-diversity of a temperate montane forest in Germany were analysed for different forest strata. The predictive power of LiDAR (LiD) and hyperspectral (MNF) datasets alone and combined (MNF+LiD) was compared using random forest regression in a ten-fold cross-validation scheme that included feature selection and model tuning. The final models were used for spatial predictions. Species richness could be predicted with varying accuracy (R2 = 0.26 to 0.55) depending on the forest layer. In contrast, community composition of the different layers, obtained by multivariate ordination, could in part be modelled with high accuracies for the first ordination axis (R2 = 0.39 to 0.78), but poor accuracies for the second axis (R2 ≤ 0.3). LiDAR variables were the best predictors for total species richness across all forest layers (R2 LiD = 0.3, R2 MNF = 0.08, R2 MNF+LiD = 0.2), while for community composition across all forest layers both hyperspectral and LiDAR predictors achieved similar performances (R2 LiD = 0.75, R2 MNF = 0.76, R2 MNF+LiD = 0.78). The improvement in R2 was small (≤0.07)—if any—when using both LiDAR and hyperspectral data as compared to using only the best single predictor set. This study shows the high potential of LiDAR and hyperspectral data for plant biodiversity modelling, but also calls for a critical evaluation of the added value of combining both with respect to acquisition costs. Full article
(This article belongs to the Special Issue Remote Sensing of Biological Diversity)
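
A compact way to picture the modelling step described in the abstract (random forest regression of a biodiversity response on LiDAR and/or MNF predictors, evaluated by ten-fold cross-validation with simple mtry tuning) is sketched below with scikit-learn; the tuning grid and the omission of the feature-selection step are simplifications, not the authors' protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

def cv_random_forest(X, y, mtry_grid=(2, 4, 8), n_splits=10, seed=0):
    """Ten-fold cross-validated R^2 of a random forest with simple mtry tuning.

    X, y -- NumPy arrays of predictors (e.g., LiD, MNF or MNF+LiD) and response
            (species richness or an NMDS axis score).
    Note: mtry values must not exceed the number of predictors in X.
    """
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    preds = np.empty_like(y, dtype=float)
    for train, test in cv.split(X):
        # Tune mtry (max_features) on the training fold via out-of-bag R^2.
        best = max(
            (RandomForestRegressor(n_estimators=500, max_features=m,
                                   oob_score=True, random_state=seed)
             .fit(X[train], y[train]) for m in mtry_grid),
            key=lambda rf: rf.oob_score_,
        )
        preds[test] = best.predict(X[test])
    return r2_score(y, preds)
```
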
Show Figures

Graphical abstract

Bavarian Forest national park with bounding boxes of the two hyperspectral scenes. N: northern scene, S: southern scene. Underlying digital elevation model from SRTM3 v2.1 [50]. Projection: left: WGS84 UTM 33N; right: WGS84.

Processing framework. Predictor variables were derived from LiDAR (LiD) and hyperspectral (MNF) data. Biodiversity indicators (SR, H′) and non-metric multidimensional scaling (NMDS) ordination scores of two NMDS axes were modelled as response variables (NMDS1, NMDS2). The procedure was repeated for herb, shrub and tree layer and their total, using LiD, MNF and MNF+LiD predictors both with and without feature selection.

Bi-plot of site and species scores in NMDS space for herb (HL), shrub (SL), tree layer (TL) and their total (ALL). Species acronyms consist of the first three letters of genus and species respectively (see supplementary materials, Table S1 for decoding). Only species occurring in more than 20% of the plots are shown. Colour mapping depicts species richness.

Kendall's τ rank correlation for inter-predictor correlation. The frequency histograms are based on the upper triangular correlation matrix (diagonal removed). The LiDAR and MNF panels are based on correlation between all predictors within a predictor set, whereas the MNF+LiD panel shows only the correlation coefficients between the ten LiDAR and the MNF variables.

Model assessment of ten-fold cross-validation of random forests with and without feature selection and with mtry optimization. Response variables: species richness (SR), Shannon index (H′) and ordination scores (NMDS1, NMDS2). Predictor variables: LiDAR (LiD) and hyperspectral (MNF) predictor sets and their combination (MNF+LiD). Bars: cross-validated R² (R²_CV) with error bars depicting the bootstrapped standard deviation of R²_CV. Points show the corresponding mean OOB estimates (R²_OOB) averaged over the ten cross-validation groups together with their standard deviation in error bars. Values of R² below zero are artifacts of the calculation, should be interpreted as zero and are hence displayed as such.

Predicted species richness of the shrub layer (SL) for the LiDAR (LiD) and hyperspectral (MNF) predictor sets and their combination (MNF+LiD). Upper panel: averaged predictions of ten cross-validation models and plot information used to train and evaluate the random forests. Background: color-infrared composite of the HyMap data. Lower panel: standard deviation of the predictions of the ten cross-validation models. Bar charts show the residuals between predicted and observed estimates in cross-validation. Positive residuals are due to an overestimation by the model and vice versa.

Spatial prediction of position on the first NMDS axis of all vegetation layers combined (ALL) for the LiDAR (LiD) and hyperspectral (MNF) predictor sets and their combination (MNF+LiD). Upper panel: averaged predictions of ten cross-validation models and plot information used to train and evaluate the random forests. Background: color-infrared composite of the HyMap data. Lower panel: standard deviation of the predictions of the ten cross-validation models. Bar charts show the residuals between predicted and observed estimates in cross-validation. Positive residuals are due to an overestimation by the model and vice versa.

Spatial prediction of position on the first NMDS axis of the herb layer (HL) for the LiDAR (LiD) and hyperspectral (MNF) predictor sets and their combination (MNF+LiD). Upper panel: averaged predictions of ten cross-validation models and plot information used to train and evaluate the random forests. Background: color-infrared composite of the HyMap data. Lower panel: standard deviation of the predictions of the ten cross-validation models. Bar charts show the residuals between predicted and observed estimates in cross-validation. Positive residuals are due to an overestimation by the model and vice versa.
5047 KiB  
Article
Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA). Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation
by Andrea Baraldi and Luigi Boschetti
Remote Sens. 2012, 4(9), 2768-2817; https://doi.org/10.3390/rs4092768 - 20 Sep 2012
Cited by 17 | Viewed by 10043
Abstract
According to the literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As proof of concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage, to accomplish multi-scale image segmentation and multi-granularity image pre-classification simultaneously, automatically and in near real-time. Full article
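
To make the idea of a deductive, per-pixel, spectral knowledge-based preliminary classifier concrete, the toy sketch below assigns a handful of spectral categories from static prior rules on calibrated reflectances. The band names, thresholds and category set are purely illustrative assumptions; they are not SIAM™'s actual decision rules, which comprise many more spectral categories per sensor.

```python
import numpy as np

def preclassify(toa_refl):
    """Toy per-pixel spectral-rule pre-classifier (illustrative only).

    toa_refl -- dict of top-of-atmosphere reflectance bands, e.g.
                {'red': ..., 'nir': ..., 'swir': ...}, each a 2-D array.
    Returns an integer map of spectral categories (semi-concepts), computed
    pixel-wise from static prior rules rather than from training data.
    """
    red, nir, swir = toa_refl['red'], toa_refl['nir'], toa_refl['swir']
    ndvi = (nir - red) / (nir + red + 1e-6)

    cats = np.zeros(red.shape, dtype=np.uint8)          # 0 = unclassified/other
    cats[ndvi > 0.6] = 1                                # strong vegetation
    cats[(ndvi > 0.3) & (ndvi <= 0.6)] = 2              # average vegetation
    cats[(ndvi < 0.0) & (nir < 0.1)] = 3                # water or shadow
    cats[(swir > 0.25) & (ndvi < 0.2)] = 4              # bare soil / built-up
    return cats
```
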
Show Figures

Graphical abstract

Data flow diagram (DFD) [40], showing processing blocks as rectangles and data derived products as circles, of the Shackelford and Davis stratified hierarchical fuzzy classification approach for high-resolution MS images of urban areas [28].

Data flow diagram (DFD) [40], showing processing blocks as rectangles and data derived products as circles, of the novel hybrid three-stage stratified (driven-by-knowledge, symbolic mask-conditioned) hierarchical RS-IUS architecture consisting of [15–23]: (i) Stage 0 (zero, in dark and light blue) is a RS image pre-processing (image enhancement) block comprising: (a) a compulsory radiometric calibration module (in dark blue) and (b) an optional battery (in light blue) of driven-by-knowledge image enhancement algorithms, e.g., stratified topographic correction [20], image mosaicking, image co-registration, etc.; (ii) Stage 1 (in green) is an application-independent, context-insensitive (per-pixel), spectral knowledge-based preliminary classifier (e.g., implemented as SIAM™ [15–23]); it provides the RS image pre-processing Stage 0 with a feedback loop to accomplish stratified RS image enhancement; (iii) Stage 2 (in red) is a battery of stratified, context-sensitive, application-, sensor- and class-specific feature extractors and one-class classification modules. In this figure, as input, a multi-spectral (MS) image featuring a fine spatial resolution ≤ 10 m is adopted together with its panchromatic (PAN) image brightness, so that man-made structures (e.g., roads, buildings) are expected to be visible, which affects the composition of the second-stage battery of stratified class-specific classifiers.

(a) Zoomed image extracted from a QuickBird-2 image of Campania, Italy (acquisition date: 2004-13-06, 09:58 GMT), depicted in false colors (R: band CH3, G: band CH4, B: band CH1), 2.44 m resolution, calibrated into TOARF values, pan-sharpened at 0.61 m resolution. (b) Preliminary classification map automatically generated by Q-SIAM™ from the image shown in (a). Output spectral categories are depicted in pseudo colors. Map legend: refer to Table 3. The image texture information is well preserved in the preliminary classification map, e.g., the mapped forest area is high-texture while the mapped grassland is low-texture.

(a) Subset of a 7-band Landsat-like image generated from a MODIS image, radiometrically calibrated into TOARF values, acquired on 5 January 2007 (depicted in false colors, with Red channel: MODIS band 6 (Medium Infra-Red, MIR), Green channel: MODIS band 2 (Near-IR, NIR), Blue channel: MODIS band 3 (visible Blue)). The image subset covers an area of approximately 500 km × 500 km in Northern Italy. Spatial resolution: 500 m. (b) Output map, consisting of 95 spectral categories, generated by L-SIAM™ from the radiometrically calibrated MODIS image shown in (a). Output spectral categories are depicted in pseudo colors. Map legend shown in Table 2. (c) Piecewise constant approximation of (a) based on segments extracted from the preliminary classification map shown in (b), such that each segment is replaced with its mean reflectance value in the radiometrically calibrated input image. It is noteworthy that small, but genuine image details appear well preserved, i.e., L-SIAM™ performs simultaneously as a preliminary classifier in the symbolic space and an edge-preserving smoothing filter in the sensory data domain.

(a) Transect extracted from the MODIS Band 2 (NIR) of Figure 4(a). (b) Transect extracted from the L-SIAM™ preliminary classification map featuring 95 spectral categories, indexed from 1 to 95, and shown in Figure 4(b). (c) Transect extracted from Band 2 (NIR) of the piecewise constant approximation of the 7-band MODIS input image shown in Figure 4(c). In comparison with the original transect shown in (a), small but genuine image details appear well preserved. In practice, L-SIAM™ performs simultaneously as a preliminary classifier in the symbolic space and an edge-preserving smoothing filter in the sensory data domain.

Previously shown in [23]. (a) Joint NASA and USGS Web-Enabled Landsat Data (WELD) Project (http://landsat.usgs.gov/WELD.php) [79], providing seamless consistent mosaics of fused Landsat-7 Enhanced TM Plus (ETM+) and MODIS data radiometrically calibrated into top-of-atmosphere reflectance (TOARF) values. Weekly, monthly, seasonal and annual composites are freely available to the user community. Each consists of 663 fixed location tiles. Spatial resolution: 30 m. Area coverage: Continental USA and Alaska. Period coverage: 7-year. (b) and (c) Preliminary classification map of Alaska and continental USA automatically generated by L-SIAM™ from the 2006 annual WELD mosaic. L-SIAM™ was run overnight on a standard desktop computer. To the best of these authors' knowledge, this is the first example of such a high-level product automatically generated at both the NASA and USGS. Map legend: refer to Table 2.

Adapted from K. Navulur (2007). SIAM™ implementation at three different granularity levels of classification (fine, intermediate and coarse): symbolic 2-D object-based parent-child relationships across spatial scales. Legend of the SIAM™ symbols. V: Vegetation, AV: Average Vegetation, SV: Strong Vegetation, NIR: Near Infra-Red, AVHNIR: AV with High NIR, AVLNIR: AV with Low NIR. For example, starting from the fine granularity classification level, the OR-combination of the "children" spectral categories AVLNIR, AV with Medium NIR (AVMNIR) (not shown) and AVHNIR generates the "parent" spectral category AV at the intermediate semantic granularity level. Starting from the intermediate granularity classification level, the OR-combination of the "children" spectral categories SV, AV and Dark Vegetation (DV) (not shown) generates the "parent" spectral category V at the coarse semantic granularity level, etc. It is noteworthy that the SIAM™ symbolic 2-D object-based parent-child multi-scale relationships are detected automatically and provided with semantic labels, unlike the parent-child relationships detected by traditional semi-automatic sub-symbolic multi-scale image segmentation algorithms (e.g., Definien's). In practice, SIAM™ accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously (refer to Section 3).

Previously shown in [23]. (a) 4-band GMES-IMAGE 2006 Coverage 1 mosaic, consisting of approximately two thousand 4-band IRS-P6 LISS-III, SPOT-4, and SPOT-5 images, mostly acquired during the year 2006, depicted in false colors: Red: Band 4 (Short Wave InfraRed, SWIR), Green: Band 3 (Near IR, NIR), Blue: Band 1 (Visible Green). Down-scaled spatial resolution: 25 m. (b) Preliminary classification map automatically generated by S-SIAM™ from the mosaic shown in (a). To the best of these authors' knowledge, this is the first example of such a high-level product automatically generated at the European Commission-Joint Research Center (EC-JRC). Output spectral categories are depicted in pseudo colors. Map legend: similar to Table 2.

(a) NOAA AVHRR (Sat. 17) image acquired on 2005-08-09 covering Turkey, the Balkans and part of Italy (R: band 3a, G: band 2, B: band 1), radiometrically calibrated into TOARF values. The image is in the original, highly non-linear swath projection with spatial resolution at nadir of 1.1 km. (b) Preliminary classification map automatically generated by AV-SIAM™ from the image shown in (a). Output spectral categories are depicted in pseudo colors. Map legend: similar to Table 2.
3706 KiB  
Article
Mapping of Ice Motion in Antarctica Using Synthetic-Aperture Radar Data
by Jeremie Mouginot, Bernd Scheuchl and Eric Rignot
Remote Sens. 2012, 4(9), 2753-2767; https://doi.org/10.3390/rs4092753 - 18 Sep 2012
Cited by 184 | Viewed by 13168
Abstract
Ice velocity is a fundamental parameter in studying the dynamics of ice sheets. Until recently, no complete mapping of Antarctic ice motion had been available due to calibration uncertainties and lack of basic data. Here, we present a method for calibrating and mosaicking an ensemble of InSAR satellite measurements of ice motion from six sensors: the Japanese ALOS PALSAR, the European Envisat ASAR, ERS-1 and ERS-2, and the Canadian RADARSAT-1 and RADARSAT-2. Ice motion calibration is made difficult by the sparsity of in-situ reference points and the sheer size of the study area. A sensor-dependent data stacking scheme is applied to reduce measurement uncertainties. The resulting ice velocity mosaic has errors in magnitude ranging from 1 m/yr in the interior regions to 17 m/yr in coastal sectors and errors in flow direction ranging from less than 0.5° in areas of fast flow to unconstrained direction in sectors of slow motion. It is important to understand how these mosaics are calibrated in order to appreciate the inner characteristics of the velocity products as well as to plan future InSAR acquisitions in the Antarctic. As an example, we show that in broad sectors devoid of ice-motion control, it is critical to operate ice motion mapping on a large scale to avoid pitfalls of calibration uncertainties that would make it difficult to obtain quality products and especially to construct the reliable time series of ice motion needed to detect temporal changes. Full article
(This article belongs to the Special Issue Remote Sensing by Synthetic Aperture Radar Technology)
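
One common way to calibrate such velocity mosaics is to fit a low-order error ramp to the measured offsets at sparse control points (zero-motion rock outcrops, balance velocities in slow interior sectors) and remove it, as sketched below. The linear-ramp error model and the interfaces are assumptions for illustration, not the authors' exact calibration scheme.

```python
import numpy as np

def calibrate_offsets(x, y, offsets, ctrl_idx, ctrl_values):
    """Remove a low-order ramp from InSAR offsets using sparse control points.

    x, y        -- pixel coordinates of the offset field (1-D NumPy arrays)
    offsets     -- measured azimuth or range offsets (1-D array, same length)
    ctrl_idx    -- indices of control points (zero-motion rock, balance velocities)
    ctrl_values -- reference offsets expected at those control points
    """
    # Fit offset_error = a + b*x + c*y at the control points (least squares).
    A = np.column_stack([np.ones_like(ctrl_idx, dtype=float), x[ctrl_idx], y[ctrl_idx]])
    residual = offsets[ctrl_idx] - ctrl_values
    coeff, *_ = np.linalg.lstsq(A, residual, rcond=None)

    # Subtract the fitted ramp everywhere to obtain calibrated offsets.
    ramp = coeff[0] + coeff[1] * x + coeff[2] * y
    return offsets - ramp
```
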
Show Figures

Maps of Antarctica with the footprint of processed tracks for each year employed in this study. More tracks are available, especially for RADARSAT-1, ERS-1/-2 and ALOS PALSAR. Maps are displayed in south polar stereographic projection.

From top to bottom, (a) azimuth initial offsets (δaz); (b) offsets after median filtering; (c) offsets after calibration; (d) initial (black line) and calibrated (red line) azimuth offsets versus azimuth; triangles denote zero-motion control points; diamonds indicate where balance velocity (blue line) is used as a reference; 1 azimuth pixel = 4.0 m; (e) Map of Antarctic balance velocity for speed < 10 m/yr [9] with topographic divides (red lines for slope < 0.1°, blue otherwise). The footprint of track A-B is in blue.

(a) Flow direction for the IPY map, black lines represent major topographic divides; (b) Antarctic ice speed; (c) error in flow direction; (d) error in velocity magnitude or speed; (e) set of ENVISAT reference tracks used for calibration, track outlines also shown in (b). Color coding in (b) and (c) is on a logarithmic scale. Maps (a–c) are overlaid on a MODIS mosaic of Antarctica (MOA) [14]. Maps are displayed in south polar stereographic projection.

Ice velocity of the Wilkes Land sector using polar stereographic projection and overlaid on MOA, from (a) RAMP in year 2000 [6]; (b) IPY [13]; (c) difference between RAMP and IPY. Control points of zero motion are in dashed yellow. Topographic divides are in blue and red as described in Figure 2. The solid black line in (a–c) is the InSAR grounding line [20]. The dashed black line in (b) indicates the position of the velocity line used in Table 2; white markers correspond to control point locations (Table 2).
2984 KiB  
Article
Radiometric and Geometric Analysis of Hyperspectral Imagery Acquired from an Unmanned Aerial Vehicle
by Ryan Hruska, Jessica Mitchell, Matthew Anderson and Nancy F. Glenn
Remote Sens. 2012, 4(9), 2736-2752; https://doi.org/10.3390/rs4092736 - 17 Sep 2012
Cited by 163 | Viewed by 17129
Abstract
In the summer of 2010, an Unmanned Aerial Vehicle (UAV) hyperspectral calibration and characterization experiment of the Resonon PIKA II imaging spectrometer was conducted at the US Department of Energy’s Idaho National Laboratory (INL) UAV Research Park. The purpose of the experiment was to validate the radiometric calibration of the spectrometer and determine the georegistration accuracy achievable from the on-board global positioning system (GPS) and inertial navigation sensors (INS) under operational conditions. In order for low-cost hyperspectral systems to compete with larger systems flown on manned aircraft, they must be able to collect data suitable for quantitative scientific analysis. The results of the in-flight calibration experiment indicate an absolute average agreement of 96.3%, 93.7% and 85.7% for calibration tarps of 56%, 24%, and 2.5% reflectivity, respectively. The achieved planimetric accuracy was 4.6 m (based on RMSE) with a flying height of 344 m above ground level (AGL). Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs) based Remote Sensing)
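
The two headline numbers in the abstract, radiometric agreement with MODTRAN-predicted radiance and planimetric RMSE of the georegistration, can be computed as in the short sketch below; the agreement metric shown is one common definition of mean absolute percentage agreement and is an assumption rather than the authors' exact formula.

```python
import numpy as np

def absolute_agreement(measured, predicted):
    """Mean absolute agreement (%) between sensor-measured and model-predicted radiance."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * (1.0 - np.mean(np.abs(measured - predicted) / predicted))

def planimetric_rmse(check_true_xy, check_image_xy):
    """RMSE (in coordinate units) of georegistration errors at check points."""
    d = np.asarray(check_true_xy, dtype=float) - np.asarray(check_image_xy, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```
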
Show Figures

Arcturus T-16 on the catapult launcher at the Idaho National Laboratory (INL) UAV Research Park.

Modeled design of payload mounting harness.

(A) INL UAV runway with example hyperspectral flightline from 2010. (B) Calibration tarps ranging from 2.5%, 24%, and 56% reflectivity placed north to south, respectively.

ASD measured radiance of Spectralon calibration panel. Mean and standard deviation (STD = ±1σ) are shown.

Examples of geometric errors (scan-line to scan-line and S-shape) apparent in the flightline imagery analyzed in this study. Note the scan-line to scan-line errors indicated by the linear runway markers.

(A) Pika II measured radiance of calibration tarps. Standard deviation (STD = ±1σ). (B) MODTRAN predicted radiance of calibration tarps. (C) Comparison of MODTRAN predicted and PIKA II measured radiance of calibration tarps. (D) The ratio of the PIKA II measured radiance over the MODTRAN predicted radiance of calibration tarps.

Average signal-to-noise ratio (SNR) estimated in-flight from the 56% tarp and as provided by Resonon for the PIKA II.

The percent deviation from the mean for the 12 overpasses of the 3 calibration tarps.
659 KiB  
Article
Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA). Part 1: Introduction
by Andrea Baraldi and Luigi Boschetti
Remote Sens. 2012, 4(9), 2694-2735; https://doi.org/10.3390/rs4092694 - 14 Sep 2012
Cited by 39 | Viewed by 11112
Abstract
According to the existing literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA is a superset of GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the degree of automation, accuracy, efficiency, robustness, scalability and timeliness of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. The present first paper provides a multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches that augments similar analyses proposed in recent years. In line with constraints stemming from human vision, this SWOT analysis promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification. Hence, a symbolic deductive pre-attentive vision first stage accomplishes image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the second part of this work a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design); (b) information/knowledge representation; (c) algorithm design; and (d) implementation. As proof of concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage, to accomplish multi-scale image segmentation and multi-granularity image pre-classification simultaneously, automatically and in near real-time. Full article
Show Figures

Graphical abstract

Previously shown in [23]. Inherently ill-posed image understanding problem (vision). There is a well-known information gap between sub-symbolic (2-D) image features (points, lines, polygons) as input and a symbolic description (e.g., in natural language) of the 3-D viewed-scene as output [23,25,55]. To fill this gap, a pre-attentive vision first stage is expected to provide as output an image preliminary classification (pre-classification, primal sketch [13]) consisting of symbolic semi-concepts (e.g., spectral categories, say, 'vegetation') [16–24]. The semantic meaning of a semi-concept is: (a) superior to zero, which is the semantic value of sub-symbolic image features; and (b) equal or inferior to the semantic meaning of the attentive vision concepts (e.g., land cover classes, say, 'needle-leaf forest'), belonging to a world model, equivalent to a 4-D spatio-temporal ontology of the physical world-through-time.

Previously shown in [18]. Data flow diagram (DFD) of a two-stage non-iterative geographic object-based image analysis (GEOBIA) architecture according to [6], based on the GEOBIA terminology introduced in [34,35]. In a DFD, processing blocks are shown as rectangles and sensor-derived data products as circles [87]. Pre-attentive vision image simplification is pursued by means of an inherently ill-posed driven-without-knowledge image segmentation approach that generates as output a sub-symbolic segmentation map, either single-scale or multi-scale, where each image-object is identified by a sub-symbolic (e.g., numerical) label (e.g., segment 1, segment 2, etc.) featuring no semantic meaning.

Sketch of the GEOOIA iterative procedure.

DFD of a three-stage iterative GEOOIA architecture derived from the sketch shown in Figure 3. In this DFD, processing blocks are shown as rectangles and sensor-derived data products as circles [87]. For more details about this RS-IUS scheme, refer to the text.
1286 KiB  
Article
Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data
by Markus Immitzer, Clement Atzberger and Tatjana Koukal
Remote Sens. 2012, 4(9), 2661-2693; https://doi.org/10.3390/rs4092661 - 14 Sep 2012
Cited by 666 | Viewed by 37047
Abstract
Tree species diversity is a key parameter to describe forest ecosystems. It is, for example, important for issues such as wildlife habitat modeling and close-to-nature forest management. We examined the suitability of 8-band WorldView-2 satellite data for the identification of 10 tree species in a temperate forest in Austria. We performed a Random Forest (RF) classification (object-based and pixel-based) using spectra of manually delineated sunlit regions of tree crowns. The overall accuracy for classifying 10 tree species was around 82% (8 bands, object-based). The class-specific producer’s accuracies ranged between 33% (European hornbeam) and 94% (European beech) and the user’s accuracies between 57% (European hornbeam) and 92% (Lawson’s cypress). The object-based approach outperformed the pixel-based approach. We could show that the 4 new WorldView-2 bands (Coastal, Yellow, Red Edge, and Near Infrared 2) have only limited impact on classification accuracy if only the 4 main tree species (Norway spruce, Scots pine, European beech, and English oak) are to be separated. However, classification accuracy increased significantly using the full spectral resolution if further tree species were included. Beside the impact on overall classification accuracy, the importance of the spectral bands was evaluated with two measures provided by RF. An in-depth analysis of the RF output was carried out to evaluate the impact of reference data quality and the resulting reliability of final class assignments. Finally, an extensive literature review on tree species classification comprising about 20 studies is presented. Full article
(This article belongs to the Special Issue Remote Sensing of Biological Diversity)
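
For readers wanting to reproduce the kind of accuracy reporting used above (overall, producer's and user's accuracies from a Random Forest classification of crown objects), the scikit-learn sketch below shows the basic bookkeeping; the feature layout, tree count and absence of the repeated-split protocol are simplifications, not the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

def rf_species_accuracies(X_train, y_train, X_test, y_test, n_trees=500):
    """Random Forest classification with per-class producer's and user's accuracies.

    X_* -- arrays of mean crown-object spectra (e.g., 8 WorldView-2 bands)
    y_* -- tree species labels for the reference polygons
    """
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True).fit(X_train, y_train)
    cm = confusion_matrix(y_test, rf.predict(X_test))   # rows: reference, cols: mapped
    producers = np.diag(cm) / cm.sum(axis=1)            # per reference class (omission)
    users = np.diag(cm) / cm.sum(axis=0)                # per mapped class (commission)
    overall = np.diag(cm).sum() / cm.sum()
    return rf, overall, producers, users
```
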
Show Figures

Graphical abstract

Study site, test area and location of reference samples (homogeneous stands).

Manual delineation of sunlit tree crowns exemplified for the test area: (a) selected tree crowns in the image with 0.5 m pixel size, and (b) derived tree crown polygons in the image with 2 m pixel size.

Mean spectral signatures of the 10 tree species derived from the 8 WorldView-2 bands using the reference polygons (for sample size see Table 1): (a) full spectrum, and (b) wavelength range of visible light (detail of the full spectrum).

Box-whisker-plots of median reflectance values of the 8 WorldView-2 bands for the 10 tree species derived from the reference polygons.

Effect of the number of trees and the number of random split variables at each node (mtry) on the overall accuracy for the object-based RF classification of 10 tree species using the 8 bands of WorldView-2 (mean overall accuracy from 20 repetitions).

Spider charts representing the user's accuracies for the following classification approaches (RF, 10 tree species): (a) 8 versus 4 bands (object-based), (b) object-based versus pixel-based (8 bands).

Histograms based on the classification unambiguity for (a) coniferous and (b) broadleaf trees ranging from 0 to 1 with an interval of 0.1 for the three cases (case a: correctly classified samples of the specified tree species, case b: samples of the specified tree species classified as another tree species, and case c: samples of other tree species classified as the specified tree species).

Classification of the test area (RF, 10 tree species, 8 bands): (a) tree species map, (b) reliability map derived from the species-specific user's accuracies and the individual unambiguities.
1774 KiB  
Article
Flux Measurements in Cairo. Part 2: On the Determination of the Spatial Radiation and Energy Balance Using ASTER Satellite Data
by Corinne Myrtha Frey and Eberhard Parlow
Remote Sens. 2012, 4(9), 2635-2660; https://doi.org/10.3390/rs4092635 - 13 Sep 2012
Cited by 21 | Viewed by 7290
Abstract
This study highlights the possibilities and constraints of determining instantaneous spatial surface radiation and land heat fluxes from satellite images in a heterogeneous urban area and its agricultural and natural surroundings. Net radiation was determined using ASTER satellite data and MODTRAN radiative transfer calculations. The soil heat flux was estimated with two empirical methods using radiative terms and vegetation indices. The turbulent heat fluxes were finally determined with the LUMPS (Local-Scale Urban Meteorological Parameterization Scheme) and the ARM (Aerodynamic Resistance Method) approaches. Results were compared to in situ measured ground data. The performance of the atmospheric correction was found to be crucial for the estimation of the radiation balance and thereafter the heat fluxes. The soil heat flux could be modeled satisfactorily by both of the applied approaches. The LUMPS method for the turbulent fluxes appeals through its simplicity; however, a correct spatial estimation of the associated parameters could not always be achieved. The ARM method showed the better spatial results for the turbulent heat fluxes. In comparison with the in situ measurements, however, the LUMPS approach rendered better results than the ARM method. Full article
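
As a reminder of how LUMPS partitions the available energy, the sketch below implements the commonly cited Grimmond-and-Oke form of the scheme as best recalled here; the expressions, the default β of 3 W m⁻² and the fixed s/γ value are assumptions to be checked against the paper rather than the authors' implementation.

```python
def lumps_fluxes(q_star, delta_qs, alpha, beta=3.0, s_over_gamma=1.3):
    """Turbulent heat fluxes from a LUMPS-style parameterization (sketch only).

    q_star       -- net radiation Q* (W m-2)
    delta_qs     -- storage/soil heat flux (W m-2)
    alpha, beta  -- empirical parameters (alpha varies with vegetation fraction)
    s_over_gamma -- slope of the saturation vapour pressure curve over the
                    psychrometric constant, s/gamma (temperature dependent)
    """
    available = q_star - delta_qs
    gamma_over_s = 1.0 / s_over_gamma
    q_h = ((1.0 - alpha) + gamma_over_s) / (1.0 + gamma_over_s) * available - beta
    q_e = alpha / (1.0 + gamma_over_s) * available + beta
    return q_h, q_e
```
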
Show Figures

False color composite (band 1–3) of a part of the study area from one of the ASTER scenes from 24 December 2007.

Footprints for the three stations and the scenes from 24 December 2007. Due to less unstable conditions, the flux footprints extend over a large area. As the color table is linear, only about 50% of the footprint is given in color.

Net radiation (option 'best fit') from one of the ASTER scenes from 24 December 2007.

Soil heat flux ('Parlow/urban') from one of the ASTER scenes from 24 December 2007.

Soil heat flux ('Frey/NDVI') from one of the ASTER scenes from 24 December 2007.

MAD of Q_H for the different methods of soil heat flux, parameters of the LUMPS scheme and atmospheric correction. MADs are given for simple pixel comparison and for the usage of the footprint model. Annotations are given in Table 5.

MAD of Q_LE for the different methods of soil heat flux, parameters of the LUMPS scheme and atmospheric correction. MADs are given for simple pixel comparison and for the usage of the footprint model. Annotations are given in Table 5.

Q_H modeled using the 'Parlow/urban' Q_s and the option 'best fit' from one of the ASTER scenes from 24 December 2007.

Q_LE modeled using the 'Parlow/urban' Q_s and the option 'best fit' from one of the ASTER scenes from 24 December 2007.
1418 KiB  
Article
Remote Sensing of Fractional Green Vegetation Cover Using Spatially-Interpolated Endmembers
by Brian Johnson, Ryutaro Tateishi and Toshiyuki Kobayashi
Remote Sens. 2012, 4(9), 2619-2634; https://doi.org/10.3390/rs4092619 - 12 Sep 2012
Cited by 60 | Viewed by 9206
Abstract
Fractional green vegetation cover (FVC) is a useful parameter for many environmental and climate-related applications. A common approach for estimating FVC involves the linear unmixing of two spectral endmembers in a remote sensing image; bare soil and green vegetation. The spectral properties of these two endmembers are typically determined based on field measurements, estimated using additional data sources (e.g., soil databases or land cover maps), or extracted directly from the imagery. Most FVC estimation approaches do not consider that the spectral properties of endmembers may vary across space. However, due to local differences in climate, soil type, vegetation species, etc., the spectral characteristics of soil and green vegetation may exhibit positive spatial autocorrelation. When this is the case, it may be useful to take these local variations into account for estimating FVC. In this study, spatial interpolation (Inverse Distance Weighting and Ordinary Kriging) was used to predict variations in the spectral characteristics of bare soil and green vegetation across space. When the spatially-interpolated values were used in place of scene-invariant endmember values to estimate FVC in an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image, the accuracy of FVC estimates increased, providing evidence that it may be useful to consider the effects of spatial autocorrelation for spectral mixture analysis. Full article
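
The unmixing itself reduces to the two-endmember (dimidiate pixel) formula, with the endmember NDVI values either scene-invariant scalars or, as proposed above, spatially interpolated surfaces. The sketch below shows that formula together with a simple inverse-distance-weighted surface; the clipping to [0, 1] and the IDW helper are illustrative additions, not the authors' exact implementation.

```python
import numpy as np

def fvc_from_ndvi(ndvi, ndvi_soil, ndvi_veg):
    """Two-endmember linear unmixing: FVC = (NDVI - NDVI_s) / (NDVI_v - NDVI_s).

    ndvi_soil / ndvi_veg may be scalars (scene-invariant endmembers) or 2-D
    arrays of spatially interpolated endmember values (IDW or kriging).
    """
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)          # constrain to the physical range

def idw_surface(sample_xy, sample_vals, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted surface of endmember values at sample points."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx, dtype=float)
    den = np.zeros_like(gx, dtype=float)
    for (sx, sy), value in zip(sample_xy, sample_vals):
        w = 1.0 / (np.hypot(gx - sx, gy - sy) ** power + 1e-12)
        num += w * value
        den += w
    return num / den
```

Passing the two interpolated surfaces straight into `fvc_from_ndvi` reproduces the idea of endmembers that vary across the scene, which is the core point of the study.
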
Show Figures

Graphical abstract

False color Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image, with the study area outlined in black (centered at 36°57′N, 140°38′E). NIR, R, and Green ASTER bands are shown in red, green, and blue color, respectively.

Normalized Differential Vegetation Index (NDVI) (a) and Modified Soil Adjusted Vegetation Index (MSAVI) (b) images of the study area. The topographic effects evident in Figure 1 have been largely removed using the vegetation indices.

Predicted NDVI_s (a) and NDVI_v (b) values using Inverse Distance Weighting (IDW) interpolation. Yellow points show the locations of gv_0 (a) and gv_1 (b) samples. Optimal IDW exponent value was 1.12 for (a) and 2.45 for (b).

Predicted NDVI_s (a) and NDVI_v (b) values using Ordinary Kriging (OK) interpolation. Yellow points show the locations of gv_0 (a) and gv_1 (b) samples.

Scatterplots of Reference and Estimated FVC using the NDVI (a), OK-NDVI (b), MSAVI (c), and OK-MSAVI (d) approach. OK-NDVI and OK-MSAVI show a modestly better linear fit than NDVI and MSAVI (based on R² values).

Estimated FVC (%) using the most accurate estimation method, OK-NDVI.
4139 KiB  
Article
Overcoming Limitations with Landsat Imagery for Mapping of Peat Swamp Forests in Sundaland
by Lahiru S. Wijedasa, Sean Sloan, Dimitrios G. Michelakis and Gopalasamy R. Clements
Remote Sens. 2012, 4(9), 2595-2618; https://doi.org/10.3390/rs4092595 - 10 Sep 2012
Cited by 57 | Viewed by 15539
Abstract
Landsat can be used to map tropical forest cover at 15–60 m resolution, which is helpful for detecting small but important perturbations in increasingly fragmented forests. However, among the remaining Landsat satellites, Landsat-5 no longer has global coverage and, since 2003, a mechanical fault in the Scan-Line Corrector (SLC-Off) of the Landsat-7 satellite resulted in a 22–25% data loss in each image. Such issues challenge the use of Landsat for wall-to-wall mapping of tropical forests, and encourage the use of alternative, spatially coarser imagery such as MODIS. Here, we describe and test an alternative method of post-classification compositing of Landsat images for mapping over 20.5 million hectares of peat swamp forest in the biodiversity hotspot of Sundaland. In order to reduce missing data to levels comparable to those prior to the SLC-Off error, we found that, for a combination of Landsat-5 images and SLC-off Landsat-7 images used to create a 2005 composite, 86% of the 58 scenes required one or two images, while 14% required three or more images. For a 2010 composite made using only SLC-Off Landsat-7 images, 64% of the scenes required one or two images and 36% required four or more images. Missing-data levels due to cloud cover and shadows in the pre SLC-Off composites (7.8% and 10.3% for 1990 and 2000 enhanced GeoCover mosaics) are comparable to the post SLC-Off composites (8.2% and 8.3% in the 2005 and 2010 composites). The area-weighted producer’s accuracy for our 2000, 2005 and 2010 composites were 77%, 85% and 86% respectively. Overall, these results show that missing-data levels, classification accuracy, and geographic coverage of Landsat composites are comparable across a 20-year period despite the SLC-Off error since 2003. Correspondingly, Landsat still provides an appreciable utility for monitoring tropical forests, particularly in Sundaland’s rapidly disappearing peat swamp forests. Full article
(This article belongs to the Special Issue Remote Sensing of Biological Diversity)
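The core of the compositing approach described in the abstract above is a per-pixel fill: each classified image in a stack contributes a value only where all earlier images are missing data (cloud, shadow, or SLC-Off gaps). The sketch below illustrates that idea with numpy; the nodata code, array names and toy class values are assumptions, not the authors' code.

```python
import numpy as np

NODATA = 0  # assumed code for missing data (clouds, shadow, SLC-Off gaps)

def composite_classifications(classified_stack):
    """Post-classification compositing: start from the first classified image
    and fill remaining NODATA pixels from each subsequent image in turn."""
    composite = classified_stack[0].copy()
    for layer in classified_stack[1:]:
        gaps = composite == NODATA
        composite[gaps] = layer[gaps]
    return composite

# Toy example: three 3x3 "classified" images with striped gaps.
a = np.array([[1, 0, 2], [0, 0, 3], [4, 0, 1]])
b = np.array([[0, 2, 0], [3, 0, 0], [0, 1, 0]])
c = np.full((3, 3), 4)
print(composite_classifications([a, b, c]))
```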
Show Figures

Graphical abstract
Full article ">
<p>Example of how Scan-Line Corrector (SLC-Off) error affects Landsat scenes in Riau province, Sumatra. (<b>a</b>) Overlay of four Landsat-7 images with SLC-Off error for scene 127/059 (blue), and one Landsat image with SLC-Off error for scene 126/059 (red); (<b>b</b>–<b>d</b>) close-up views of SLC-Off missing-data areas across each Landsat scene. Increasing intensity of blue indicates increasing number of scenes with overlaps of missing data due to SLC-Off error. Note that among the four images for scene 127/059, the missing-data stripes only overlap completely in two images (c). At the edge of the scene (d), missing data is further reduced by a 30-km wide overlap with the adjacent scene, visible as red stripes.</p>
Full article ">
<p>(<b>a</b>) Map showing the study area and number of images used per scene in the 2005 composite (n = 105 images). (<b>b</b>) Map showing the study area and number of images used per scene in the 2010 composite (n = 133 images). Red areas indicate original peat swamp forest extent.</p>
Full article ">
<p>Land cover classes as represented by training sites: burn scars/bare earth/urban areas (1), agriculture mosaic (2), disturbed/re-growth peat swamp forest (3), primary peat swamp forest (4), cloud cover (5), cloud shadow (6) and water (7). The lines in the image represent missing data due to SLC-Off error.</p>
Full article ">
<p>Flow diagram of classification and compositing methodology for Landsat scene 127/059. From raw image, classified image, reclassified image and composited image.</p>
Full article ">
<p>Areas used for accuracy assessment of the 2000, 2005 and 2010 composites.</p>
Full article ">
<p>The number of images composited per scene for 2005 and 2010 composite mosaics. Note the high proportion of scenes requiring three or fewer images. Also note the decreasing proportion of scenes requiring only a single image between 2005 and 2010. This indicates that even though areas of peat swamp forest cover of interest occupy smaller proportions of scenes in 2010 and would potentially require fewer images to be mapped, this is offset by additional missing area due to the SLC-Off error.</p>
Full article ">
<p>Graphical illustration of missing-data reduction per image when compositing images and adjacent scenes (<b>a</b>) two, (<b>b</b>) four and (<b>c</b>) six Landsat-7 SLC-Off images of the 2010 composite mosaic, plus images of adjacent scenes (corresponding spatial illustrations of these scenes are given in <a href="#f8-remotesensing-04-02595" class="html-fig">Figure 8</a> and in the <a href="#app1" class="html-app">appendix</a>). Percentages reported are with respect to the study area, not the entire scene. The scene path/rows for panes (a), (b) and (c) are 126/060, 127/059, and 124/060, respectively.</p>
Full article ">
<p>Spatial illustration of missing-data reduction per image when compositing four Landsat-7 SLC-Off images for scene 127/059 of the 2010 composite mosaic. Percentages reported are with respect to the study area, not the entire scene. The mapped area corresponds to <a href="#f7-remotesensing-04-02595" class="html-fig">Figure 7(b)</a>. Equivalent figures corresponding to <a href="#f7-remotesensing-04-02595" class="html-fig">Figure 7(a,c)</a> are given in the <a href="#app1" class="html-app">Appendix</a>.</p>
Full article ">
<p>Comparison of the 2000 and 2010 Landsat mosaics with the corresponding MODIS mosaics of 2000 and 2010 created by Miettinen <span class="html-italic">et al.</span> [<a href="#b11-remotesensing-04-02595" class="html-bibr">11</a>], Northern Riau Province, Sumatra, Indonesia.</p>
Full article ">
701 KiB  
Article
Discrimination of Switchgrass Cultivars and Nitrogen Treatments Using Pigment Profiles and Hyperspectral Leaf Reflectance Data
by Anserd J. Foster, Vijaya Gopal Kakani, Jianjun Ge and Jagadeesh Mosali
Remote Sens. 2012, 4(9), 2576-2594; https://doi.org/10.3390/rs4092576 - 10 Sep 2012
Cited by 15 | Viewed by 7819
Abstract
The objective of this study was to compare the use of hyperspectral narrowbands, hyperspectral narrowband indices and pigment measurements collected from switchgrass leaf as potential tools for discriminating among twelve switchgrass cultivars and five N treatments in one cultivar (Alamo). Hyperspectral reflectance, UV-B [...] Read more.
The objective of this study was to compare the use of hyperspectral narrowbands, hyperspectral narrowband indices and pigment measurements collected from switchgrass leaves as potential tools for discriminating among twelve switchgrass cultivars and five N treatments in one cultivar (Alamo). Hyperspectral reflectance, UV-B absorbing compounds and photosynthetic pigments (chlorophyll a, chlorophyll b and carotenoids) of the uppermost fully expanded leaves were determined at monthly intervals from May to September. Leaf hyperspectral data were collected using an ASD FieldSpec FR spectroradiometer (350–2,500 nm). Discrimination of the cultivars and N treatments was determined based on Principal Component Analysis (PCA) and linear discriminant analysis (DA). Stepwise discriminant analysis was used to determine the best indices for differentiating switchgrass cultivars and nitrogen treatments. Results of PCA showed that 62% of the variability could be explained by PC1, dominated by middle infrared wavebands, over 20% by PC2, dominated by near infrared wavebands, and just over 10% by PC3, dominated by green wavebands, for separating both cultivars and N treatments. Discriminating among the cultivars resulted in an overall accuracy of 81% with the first five PCs in the month of September, but was less accurate (27%) in classifying N treatments using the spectral data. Discrimination based on pigment data using the first two PCs resulted in an overall accuracy of less than 10% for separating switchgrass cultivars, but was more accurate (47%) in grouping N treatments. The plant senescence reflectance index (PSRI) was found to be the best index for separating the cultivars late in the season, while the transformed chlorophyll absorption ratio index (TCARI) was best for separating the N treatments. Leaf spectral data were found to be more useful than pigment data for the discrimination of switchgrass cultivars, particularly late in the growing season. Full article
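The two indices singled out in the abstract have standard formulations in the literature: PSRI = (R678 − R500)/R750 and TCARI = 3[(R700 − R670) − 0.2(R700 − R550)(R700/R670)], where R denotes reflectance at the given wavelength in nm. The sketch below computes both from a resampled leaf spectrum; the nearest-band lookup and the synthetic example spectrum are assumptions, not the study's processing chain.

```python
import numpy as np

def band(wavelengths, reflectance, target_nm):
    """Reflectance of the band closest to the requested wavelength (nm)."""
    return reflectance[np.argmin(np.abs(wavelengths - target_nm))]

def psri(wl, refl):
    # Plant senescence reflectance index: (R678 - R500) / R750.
    return (band(wl, refl, 678) - band(wl, refl, 500)) / band(wl, refl, 750)

def tcari(wl, refl):
    # Transformed chlorophyll absorption ratio index.
    r700, r670, r550 = (band(wl, refl, w) for w in (700, 670, 550))
    return 3.0 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / r670))

# Synthetic example spectrum (1 nm steps, 350-2500 nm) purely for illustration.
wl = np.arange(350, 2501, 1.0)
refl = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 715) / 15.0))  # crude red-edge shape
print(psri(wl, refl), tcari(wl, refl))
```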
Show Figures


<p>Twelve switchgrass cultivars grown in Stillwater, Oklahoma (OK) for biomass yield potential: (<b>A</b>) Carthage; (<b>B</b>) Alamo; (<b>C</b>) Kanlow; (<b>D</b>) Southlow; (<b>E</b>) Cave-In-Rock; (<b>F</b>) Forestburg; (<b>G</b>) Blackwell; (<b>H</b>) Nebraska 28; (<b>I</b>) Shelter; (<b>J</b>) Shawnee; (<b>K</b>) Sunburst; (<b>L</b>) Cimarron.</p>
Full article ">
<p>Mean leaf spectral profiles of twelve switchgrass cultivars collected in May, June, July, August and September of 2011: (<b>Top left</b>) May; (<b>Top right</b>) June; (<b>Middle left</b>) July; (<b>Middle right</b>) August; (<b>Bottom left</b>) September. Nine spectral measurements were taken per cultivar at each sampling interval.</p>
Full article ">
<p>Mean leaf spectral profiles for five nitrogen treatments collected in June, July and August of 2011: (<b>Top</b>) June; (<b>Center</b>) July; (<b>Bottom</b>) August. N1-0: 0 kg·N·ha<sup>−1</sup>; N2-84: 84 kg·N·ha<sup>−1</sup>; N3-168: 168 kg·N·ha<sup>−1</sup>; N4-252: 252 kg·N·ha<sup>−1</sup>; N5-WL: winter legume (hairy vetch). Nine spectral measurements were taken per N treatment at each sampling interval.</p>
Full article ">
37123 KiB  
Article
Estimation of Supraglacial Dust and Debris Geochemical Composition via Satellite Reflectance and Emissivity
by Kimberly Casey and Andreas Kääb
Remote Sens. 2012, 4(9), 2554-2575; https://doi.org/10.3390/rs4092554 - 7 Sep 2012
Cited by 10 | Viewed by 9264
Abstract
We demonstrate spectral estimation of supraglacial dust, debris, ash and tephra geochemical composition from glaciers and ice fields in Iceland, Nepal, New Zealand and Switzerland. Surface glacier material was collected and analyzed via X-ray fluorescence spectroscopy (XRF) and X-ray diffraction (XRD) for geochemical [...] Read more.
We demonstrate spectral estimation of supraglacial dust, debris, ash and tephra geochemical composition from glaciers and ice fields in Iceland, Nepal, New Zealand and Switzerland. Surface glacier material was collected and analyzed via X-ray fluorescence spectroscopy (XRF) and X-ray diffraction (XRD) for geochemical composition and mineralogy. The in situ data were used as ground truth for comparison with the satellite-derived geochemical results. Supraglacial debris spectral response patterns and emissivity-derived silica weight percent are presented. Qualitative spectral response patterns agreed well with XRF elemental abundances. Quantitative emissivity estimates of supraglacial SiO2 in continental areas were 67% (Switzerland) and 68% (Nepal), while volcanic supraglacial SiO2 averages were 58% (Iceland) and 56% (New Zealand), in general agreement with the in situ measurements. Ablation season supraglacial temperature variation due to differing dust and debris type and coverage was also investigated, with surface debris temperatures ranging from 5.9 to 26.6 °C in the study regions. Applications of the supraglacial geochemical reflective and emissive characterization methods include glacier areal extent mapping, debris source identification, glacier kinematics and glacier energy balance considerations. Full article
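The quantitative SiO2 estimates described in the abstract rest on an empirical link between thermal-infrared emissivity and silica content. A minimal sketch of that idea is given below, assuming a simple linear fit between a single emissivity feature and lab-measured SiO2; the feature definition and all numbers are placeholders, not the authors' calibration or data.

```python
import numpy as np

# Hypothetical in situ samples: lab XRF SiO2 weight percent paired with a
# simple emissivity feature (e.g., a mean of thermal-infrared band emissivities).
# Values below are placeholders, not data from the study.
emis_feature = np.array([0.952, 0.960, 0.968, 0.975, 0.981])
sio2_wt_pct  = np.array([70.0, 66.0, 62.0, 58.0, 55.0])

# Fit a first-order empirical relation: SiO2 = a * feature + b.
a, b = np.polyfit(emis_feature, sio2_wt_pct, deg=1)

def estimate_sio2(emissivity_feature):
    """Apply the fitted empirical relation to new emissivity pixels."""
    return a * np.asarray(emissivity_feature) + b

print(estimate_sio2([0.955, 0.978]))
```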
Show Figures

Graphical abstract
Full article ">
<p>Map of study region locations, indicated by the cyan boxes.</p>
Full article ">
<p>ASTER (band 3,2,1) images of Zmutt glacier (top left, acquired 29 July 2004) and Khumbu glacier (top right, acquired 29 November 2005). VNIR-SWIR multispectral AST_07XT reflectance is plotted beneath the images. The Fe-rich and Fe-poor serpentine longitudinal debris bands of Zmutt glacier can be differentiated from the lighter granitic (leucogranite) and darker schistic longitudinal debris bands of Khumbu glacier.</p>
Full article ">
<p>Images of the Iceland study region (top left; MODIS bands 1,4,3, acquired 28 August 2010) and the Mt. Ruapehu, New Zealand study region (top right; ASTER bands 3,2,1, acquired 9 January 2008). VNIR-SWIR multispectral MOD09GA (Iceland) and AST_07XT (New Zealand) surface reflectance means are plotted beneath the images. Note that the red box on Hofsjökull indicates the coverage of the Hyperion scene discussed in Section 4.2, <a href="#f4-remotesensing-04-02554" class="html-fig">Figure 4</a>.</p>
Full article ">
<p>Hofsjökull Hyperion (24 September 2001) at-sensor top-of-atmosphere supraglacial reflectance is plotted at left, and the true color image is displayed on the right. Circles on the Hofsjökull image mark the areas with varying tephra coverage from which the plotted reflectance data were selected. Circle colors match those of the corresponding top-of-atmosphere reflectance response patterns (e.g., lightest orange indicating high firn, black indicating the most heavily tephra-covered ice). Atmospheric water vapor, oxygen and CO<sub>2</sub> absorption features are removed.</p>
Full article ">
<p>Shortwave and thermal infrared false color image composites of study regions differentiate silica rich (yellow coloring) <span class="html-italic">vs</span>. silica poor (blue coloring) supraglacial debris mineralogy.</p>
Full article ">
<p>Hyperion at-sensor top-of-atmosphere reflectance of supraglacial ice and debris on Lhotse Shar glacier (reflectance data taken at the red dot) and Imja glacier (reflectance data taken at the blue dot), from the 13 May 2002 Hyperion acquisition. Atmospheric water vapor, oxygen and CO<sub>2</sub> absorption features were removed. The true color image composite is displayed on the right and was acquired by ALI on 4 October 2010 (10 m pan-enhanced image shown). The black “X” on upper Lhotse Shar glacier shows the area used for the spectral reflectance data collection.</p>
Full article ">
2653 KiB  
Article
Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification
by Juliane Huth, Claudia Kuenzer, Thilo Wehrmann, Steffen Gebhardt, Vo Quoc Tuan and Stefan Dech
Remote Sens. 2012, 4(9), 2530-2553; https://doi.org/10.3390/rs4092530 - 7 Sep 2012
Cited by 58 | Viewed by 15135
Abstract
We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework named TWOPAC (TWinned Object and Pixel based Automated classification Chain) enables the standardized, independent, user-friendly, and comparable derivation of [...] Read more.
We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in a preferably short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC’s functionality to process geospatial raster or vector data via web resources (server, network) makes TWOPAC usable independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built using open source code components and are implemented as a plug-in for the Quantum GIS software for easy handling of the classification process from the user’s perspective. Full article
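TWOPAC's classifier is built with the C5.0 code, which selects splits by information gain (entropy reduction). The snippet below is not C5.0 itself; it is a minimal analog using scikit-learn's entropy criterion to train and apply a decision tree to pixel or object features, with toy data standing in for the real training samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training samples: rows are pixels/objects, columns are features
# (e.g., spectral bands or object statistics); labels are LC/LU classes.
X_train = np.array([[0.12, 0.30, 0.45],
                    [0.10, 0.28, 0.45],
                    [0.40, 0.55, 0.20],
                    [0.42, 0.52, 0.22],
                    [0.80, 0.75, 0.05]])
y_train = np.array([1, 1, 2, 2, 3])  # e.g., 1=water, 2=vegetation, 3=urban

# The entropy criterion mirrors the information-gain idea behind C5.0,
# although the implementation and pruning details differ.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_train, y_train)

X_new = np.array([[0.11, 0.29, 0.44], [0.79, 0.74, 0.06]])
print(tree.predict(X_new))
```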
Show Figures


<p>Flowchart of the Twinned Object- and Pixel based Automated classification Chain (TWOPAC).</p>
Full article ">
<p>Sampling procedure—selection and extraction of segments with features.</p>
Full article ">
<p>Classification result of TWOPAC run with Rapid Eye mosaic (2010-01-27) from the central part of the Mekong Delta—subset of object-based result (<b>a</b>); detail on rural area and natural tree area (<b>b</b>); detail on urban area (<b>c</b>).</p>
Full article ">
<p>Classification result of TWOPAC run with SPOT5 dataset (2008-01-08) from the western coast of the Mekong Delta—subset of pixel-based result ((<b>a</b>) detail on rural area and natural tree area in (<b>b</b>); detail on urban area (<b>c</b>)); subset of object-based result ((<b>d</b>) details in (<b>e</b>,<b>f</b>)).</p>
Full article ">
<p>Classification result of TWOPAC run with SPOT 4 dataset (2010-01-02) from the Dongting lake, Hunan province, China: pixel-based result generated with C5.0 ((<b>a</b>), details in (<b>b</b>,<b>c</b>)); for comparison maximum likelihood classification ((<b>d</b>), details in (<b>e</b>,<b>f</b>)).</p>
Full article ">
<p>Classification result of TWOPAC run with multi-temporal MODIS 8 tiles mosaic (2009) for Central Asia: Overview of the classification result for Uzbekistan (<b>a</b>); details on specific land cover types—cultivated area and natural vegetation (<b>b</b>,<b>c</b>).</p>
Full article ">
1757 KiB  
Article
Hyperspectral Time Series Analysis of Native and Invasive Species in Hawaiian Rainforests
by Ben Somers and Gregory P. Asner
Remote Sens. 2012, 4(9), 2510-2529; https://doi.org/10.3390/rs4092510 - 29 Aug 2012
Cited by 57 | Viewed by 10685
Abstract
The unique ecosystems of the Hawaiian Islands are progressively being threatened following the introduction of exotic species. Operational implementation of remote sensing for the detection, mapping and monitoring of these biological invasions is currently hampered by a lack of knowledge on the spectral [...] Read more.
The unique ecosystems of the Hawaiian Islands are progressively being threatened following the introduction of exotic species. Operational implementation of remote sensing for the detection, mapping and monitoring of these biological invasions is currently hampered by a lack of knowledge on the spectral separability between native and invasive species. We used spaceborne imaging spectroscopy to analyze the seasonal dynamics of the canopy hyperspectral reflectance properties of four tree species: (i) Metrosideros polymorpha, a keystone native Hawaiian species; (ii) Acacia koa, a native Hawaiian nitrogen fixer; (iii) the highly invasive Psidium cattleianum; and (iv) Morella faya, a highly invasive nitrogen fixer. The species-specific separability of the reflectance and derivative-reflectance signatures extracted from an Earth Observing-1 Hyperion time series, composed of 22 cloud-free images spanning a period of four years, was quantitatively evaluated using the Separability Index (SI). The analysis revealed that the Hawaiian native trees were universally distinct from the invasive trees in their near-infrared-1 (700–1,250 nm) reflectance (0.4 &lt; SI &lt; 1.4). Due to their higher leaf area index, invasive trees generally had a higher near-infrared reflectance. To a lesser extent, it could also be demonstrated that nitrogen-fixing trees were spectrally distinct from non-fixing trees. The higher leaf nitrogen content of nitrogen-fixing trees was expressed through slightly increased separabilities in visible and shortwave-infrared reflectance wavebands (SI = 0.4). We also found phenology to be key to spectral separability analysis. As such, it was shown that the spectral separability in the near-infrared-1 reflectance between the native and invasive species groups was more pronounced in summer (SI &gt; 0.7) than in winter (SI &lt; 0.7). The lowest separability was observed for March–July (SI &lt; 0.3). This could be explained by the invasives taking advantage of the warmer summer period to expand their canopies. There was, however, no specific time window or single spectral region that always defined the separability of all species groups, and thus intensive monitoring of plant phenology as well as the use of the full-range (400–2,500 nm) spectrum was highly advantageous in differentiating each species. These results set a basis for an operational invasive species monitoring program in Hawai’i using spaceborne imaging spectroscopy. Full article
(This article belongs to the Special Issue Remote Sensing of Biological Diversity)
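A Separability Index of the kind used in the abstract above is commonly written as the distance between two class means normalized by the sum of their standard deviations, SI = |μ1 − μ2| / (σ1 + σ2), with values above roughly 1 indicating well-separated distributions. The sketch below applies that formulation band by band; the exact definition used in the paper should be taken from the paper itself, so treat this form, and the synthetic pixel spectra, as assumptions.

```python
import numpy as np

def separability_index(spectra_a, spectra_b):
    """Band-wise SI = |mean_a - mean_b| / (std_a + std_b).
    Inputs are (n_pixels, n_bands) reflectance arrays for two species groups."""
    spectra_a = np.asarray(spectra_a, dtype=float)
    spectra_b = np.asarray(spectra_b, dtype=float)
    mu_a, mu_b = spectra_a.mean(axis=0), spectra_b.mean(axis=0)
    sd_a, sd_b = spectra_a.std(axis=0, ddof=1), spectra_b.std(axis=0, ddof=1)
    return np.abs(mu_a - mu_b) / (sd_a + sd_b)

# Toy example: 400 pixels x 3 bands per group, with one well-separated band.
rng = np.random.default_rng(0)
native   = rng.normal([0.05, 0.30, 0.20], 0.02, size=(400, 3))
invasive = rng.normal([0.05, 0.42, 0.21], 0.02, size=(400, 3))
print(separability_index(native, invasive))  # high SI only in the NIR-like band
```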
Show Figures


<p>Overview of the Hawai’i Volcanoes National Park on the Island of Hawai’i (19.4°N, 155.2°W; imagery from the Carnegie Airborne Observatory; [<a href="#b17-remotesensing-04-02510" class="html-bibr">17</a>]).</p>
Full article ">
<p>Reflectance and 1st-d reflectance spectro-temporal Separability Index charts for the pairwise species comparisons.</p>
Full article ">
<p>Reflectance and 1<sup>st</sup>-d reflectance spectro-temporal Separability Index chart for the pairwise comparison of (<b>i</b>) invasives and (<b>ii</b>) natives.</p>
Full article ">
<p>Spectro-temporal SI charts for the pairwise species comparisons of the temporal displacement spectra (<span class="html-italic">i.e.</span>, temporal derivatives or the change in reflectance between two consecutive time steps/months).</p>
Full article ">
<p>The spectro-temporal SI charts for the pairwise comparison of (<b>i</b>) invasives and (<b>ii</b>) natives of the temporal displacement spectra (<span class="html-italic">i.e.</span>, temporal derivatives).</p>
Full article ">
<p>Mean and 95% confidence interval for December and August reflectance of native and invasive species for the visible, near-infrared and shortwave-infrared. Approximately 400 spectra (or image pixels) were selected for each species (see Section 2.1).</p>
Full article ">
<p>Mean and 95% confidence interval for December and August 1d-reflectance of native and invasive species. Approximately 400 spectra (or image pixels) were selected for each species (see Section 2.1).</p>
Full article ">
<p>Reflectance and 1st-d reflectance spectro-temporal Separability Index charts for the comparisons of native <span class="html-italic">vs</span>. invasive tree species.</p>
Full article ">
897 KiB  
Article
Monitoring Biennial Bearing Effect on Coffee Yield Using MODIS Remote Sensing Imagery
by Tiago Bernardes, Maurício Alves Moreira, Marcos Adami, Angélica Giarolla and Bernardo Friedrich Theodor Rudorff
Remote Sens. 2012, 4(9), 2492-2509; https://doi.org/10.3390/rs4092492 - 27 Aug 2012
Cited by 72 | Viewed by 11918
Abstract
Coffee is the second most valuable traded commodity worldwide. Brazil is the world’s largest coffee producer, responsible for one third of the world production. A coffee plot exhibits high and low production in alternated years, a characteristic so called biennial yield. High yield [...] Read more.
Coffee is the second most valuable traded commodity worldwide. Brazil is the world’s largest coffee producer, responsible for one third of the world production. A coffee plot exhibits high and low production in alternating years, a characteristic known as biennial yield. High yield is generally a result of suitable foliar biomass conditions. Moreover, in high production years a plot tends to lose more leaves than it does in low production years. In both cases some correlation between coffee yield and leaf biomass can be deduced, which can be monitored through time series of vegetation indices derived from satellite imagery. In Brazil, a comprehensive, spatially distributed study assessing this relationship has not yet been done. The objective of this study was to assess possible correlations between coffee yield and MODIS-derived vegetation indices in the largest coffee-exporting state of Brazil. We assessed EVI and NDVI MODIS products over the period between 2002 and 2009 in the south of Minas Gerais State, whose production accounts for about one third of the Brazilian coffee production. Landsat images were used to obtain a reference map of coffee areas and to identify MODIS 250 m pure pixels overlapping homogeneous coffee crops. Only MODIS pixels with 100% coffee were included in the analysis. A wavelet-based filter was used to smooth the EVI and NDVI time profiles. Correlations were observed between variations in the yield of coffee plots and variations in the vegetation indices of pixels overlapping the same plots. The vegetation index metrics best correlated with yield were the amplitude and the minimum values over the growing season. The best correlations were obtained between the variation in yield and the variation in the vegetation indices of the previous year (R = 0.74 for the minEVI metric and R = 0.68 for the minNDVI metric). Although the correlations were not strong enough to estimate coffee yield exclusively from vegetation indices, the trends properly reflect the biennial bearing effect on coffee yield. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of Agriculture)
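The yield analysis described in the abstract reduces each smoothed EVI/NDVI season to two metrics, the minimum and the amplitude (maximum minus minimum), and then correlates year-to-year changes in those metrics with year-to-year changes in yield. A minimal sketch of that bookkeeping is shown below; the toy numbers and the use of numpy's Pearson correlation are assumptions, not the study's data or its wavelet filter.

```python
import numpy as np

def season_metrics(vi_profile):
    """Minimum and amplitude of an (already smoothed) vegetation-index season."""
    vi = np.asarray(vi_profile, dtype=float)
    return vi.min(), vi.max() - vi.min()

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Toy multi-year data for one coffee pixel: each row is one growing season.
seasons = np.array([[0.35, 0.52, 0.61, 0.48, 0.33],
                    [0.42, 0.58, 0.66, 0.55, 0.41],
                    [0.34, 0.50, 0.60, 0.47, 0.32],
                    [0.43, 0.59, 0.67, 0.56, 0.42]])
yields_t_ha = np.array([1.1, 2.3, 1.0, 2.4])

min_vi = np.array([season_metrics(s)[0] for s in seasons])
d_min_vi = np.diff(min_vi)        # year-to-year change in the minEVI-like metric
d_yield  = np.diff(yields_t_ha)   # year-to-year change in yield
print(pearson_r(d_min_vi, d_yield))
```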
Show Figures


<p>The study area in the south of Minas Gerais State.</p>
Full article ">
<p>Overlap of the limits of MODIS pixels with coffee fields in a TM/Landsat-5 image, false color composite 3B4R5G: (<b>A</b>) pixels fully occupied by coffee crop and (<b>B</b>) crop variability within a MODIS pixel in the Landsat image.</p>
Full article ">
<p>Flowchart of data processing adopted in the study.</p>
Full article ">
<p>Annual variation of vegetation indices for the selected pixels (<b>A</b>). Standard crop with the maximum vegetation index value in March (<b>B</b>) and the minimum vegetation index in August (<b>C</b>) on the TM/Landsat 3B4R5G image.</p>
Full article ">
<p>Filtered EVI and NDVI time series for a coffee crop, together with the original (unfiltered) data.</p>
Full article ">
<p>Correlation between the variation in coffee yield and the variation in the minimum values of the vegetation indices (minEVI and minNDVI) for the same year.</p>
Full article ">
<p>Correlation between the variation in the minimum values of the vegetation indices (minEVI and minNDVI) and the variation in coffee yield the following year.</p>
Full article ">
<p>Water balance for the Guaxupé location and Pearson correlation coefficients from 2002 to 2009.</p>
Full article ">