Remote Sens., Volume 8, Issue 9 (September 2016) – 96 articles

Cover Story (view full-size image): The use of unmanned aerial vehicles (UAVs) combined with Structure-from-Motion (SfM) now allows scientists to undertake low-level aerial surveys with relative ease. The combined technique is now used across a wide range of disciplines, from ecological surveys to the assessment of surface change in a geomorphological context. Our work contributes to this exciting UAV-SfM research area by helping users understand how survey error can arise and how it can be avoided. When the camera positions used are unknown, the SfM processing requires known surveyed positions (xyz co-ordinates) on the ground (‘ground control points’, GCPs). We provide guidance on the impact of GCP number and distribution on survey error. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
6338 KiB  
Technical Note
Ground-Control Networks for Image Based Surface Reconstruction: An Investigation of Optimum Survey Designs Using UAV Derived Imagery and Structure-from-Motion Photogrammetry
by Toby N. Tonkin and Nicholas G. Midgley
Remote Sens. 2016, 8(9), 786; https://doi.org/10.3390/rs8090786 - 21 Sep 2016
Cited by 218 | Viewed by 13724
Abstract
The use of small UAVs (Unmanned Aerial Vehicles) and Structure-from-Motion (SfM) with Multi-View Stereopsis (MVS) for acquiring survey datasets is now commonplace; however, aspects of the SfM-MVS workflow require further validation. This work aims to provide guidance for scientists seeking to adopt this aerial survey method by investigating aerial survey data quality in relation to the application of ground control points (GCPs) at a site of undulating topography (Ennerdale, Lake District, UK). Sixteen digital surface models (DSMs) were produced from a UAV survey using a varying number of GCPs (3–101). These DSMs were compared to 530 dGPS spot heights to calculate vertical error. All DSMs produced reasonable surface reconstructions (vertical root-mean-square error (RMSE) of <0.2 m); however, DSM quality improved where four or more GCPs (up to 101 GCPs) were applied, with errors falling to within the suggested point quality range of the survey equipment used for GCP acquisition (e.g., vertical RMSE of <0.09 m). The influence of a poor GCP distribution was also investigated by producing a DSM using an evenly distributed network of GCPs and comparing it to a DSM produced using a clustered network of GCPs. The results accord with existing findings: vertical error increased with distance from the GCP cluster. Specifically, vertical error and distance to the nearest GCP followed a strong polynomial trend (R² = 0.792). These findings contribute to our understanding of the sources of error when conducting a UAV-SfM survey and provide guidance on the collection of GCPs. Evidence-driven UAV-SfM survey designs are essential for practitioners seeking reproducible, high-quality topographic datasets for detecting surface change. Full article
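For readers who want to reproduce the two quality checks described in the abstract, a minimal sketch (not the authors' code; all array names are hypothetical) of computing vertical RMSE against dGPS check heights and fitting a polynomial of vertical error against distance to the nearest GCP might look like this:

```python
# Sketch of the two DSM quality checks described above (hypothetical inputs):
# 1) vertical RMSE of DSM heights against dGPS spot heights,
# 2) polynomial trend of vertical error vs. distance to the nearest GCP.
import numpy as np

def vertical_rmse(dsm_z, dgps_z):
    """Root-mean-square difference between DSM heights and dGPS check heights."""
    diff = np.asarray(dsm_z, dtype=float) - np.asarray(dgps_z, dtype=float)
    return np.sqrt(np.mean(diff ** 2))

def error_vs_gcp_distance(err_z, xy, gcp_xy, degree=2):
    """Fit a polynomial of vertical error against distance to the nearest GCP."""
    err_z = np.asarray(err_z, dtype=float)
    xy, gcp_xy = np.asarray(xy, dtype=float), np.asarray(gcp_xy, dtype=float)
    # distance from each check point to its nearest GCP
    d = np.min(np.linalg.norm(xy[:, None, :] - gcp_xy[None, :, :], axis=2), axis=1)
    coeffs = np.polyfit(d, err_z, degree)        # polynomial trend
    pred = np.polyval(coeffs, d)
    ss_res = np.sum((err_z - pred) ** 2)
    ss_tot = np.sum((err_z - np.mean(err_z)) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot         # coefficients and R^2
```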
Show Figures

Figure 1
(a) The location of Ennerdale in the UK. (b) An orthomosaic image of the study site.
Figure 2
A digital surface model (DSM) of the Ennerdale moraines with the positions of ground control points (GCPs) (101 GCPs) and spot heights indicated (530 spot heights). The trimmed survey area displayed here is 0.071 km².
Figure 3
Ground control point (GCP) locations used to produce the 16 digital surface models (DSMs) reported in Table 1. The top panel relates to the models that use fewer than 15 GCPs. For example, for a model using 3 GCPs, the GCPs numbered 1, 2 and 3 were applied to the model. For a model using 7 GCPs, the GCPs numbered 1, 2, 3, 4, 5, 6 and 7 were applied to the model.
Figure 4
(a) The residuals assessed by subtracting a digital surface model (DSM) produced using a uniform distribution of ground control points (GCPs) from a DSM produced using a sub-optimum clustered GCP distribution. Error increases into the decimetre range with distance from the GCP cluster. (b) Polynomial regression of sampled cells (a total of 10,000 values) from the DSM of difference, highlighting the influence of GCP distribution on error.
29991 KiB  
Article
An Inter-Comparison of Techniques for Determining Velocities of Maritime Arctic Glaciers, Svalbard, Using Radarsat-2 Wide Fine Mode Data
by Thomas Schellenberger, Wesley Van Wychen, Luke Copland, Andreas Kääb and Laurence Gray
Remote Sens. 2016, 8(9), 785; https://doi.org/10.3390/rs8090785 - 21 Sep 2016
Cited by 21 | Viewed by 8335
Abstract
Glacier dynamics play an important role in the mass balance of many glaciers, ice caps and ice sheets. In this study we exploit Radarsat-2 (RS-2) Wide Fine (WF) data to determine the surface speed of Svalbard glaciers in the winters of 2012/2013 and 2013/2014 using Synthetic Aperture Radar (SAR) offset and speckle tracking. The RS-2 WF mode combines the advantages of the large spatial coverage of the Wide mode (150 × 150 km) and the high pixel resolution (9 m) of the Fine mode, and thus has major potential for glacier velocity monitoring from space through offset and speckle tracking. The faster-flowing glaciers studied in detail (1.95 m·d−1–2.55 m·d−1) are Nathorstbreen, Kronebreen, Kongsbreen and Monacobreen. Using our Radarsat-2 WF dataset, we compare the performance of two SAR tracking algorithms, namely the GAMMA Remote Sensing Software and a custom-written MATLAB script (GRAY method) that has primarily been used in the Canadian Arctic. Both algorithms provide comparable results, especially for the faster-flowing glaciers and the termini of slower tidewater glaciers. A comparison of the WF data to RS-2 Ultrafine and Wide mode data reveals the superiority of RS-2 WF data over the Wide mode data. Full article
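The offset/speckle tracking idea underlying both algorithms can be illustrated with a toy sketch: a template from the first SAR scene is correlated against a search window in the second scene, and the best integer shift is converted to a speed. This is a simplified illustration, not the GAMMA or GRAY implementation; patch sizes, the 9 m pixel spacing and the 24-day repeat interval are examples only.

```python
# Toy intensity offset-tracking sketch (illustrative only, not GAMMA or GRAY).
import numpy as np

def track_offset(img1, img2, r, c, half=32, search=8):
    """Return the (row, col) pixel offset that maximises normalised correlation."""
    tpl = img1[r - half:r + half, c - half:c + half].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-12)
    best, best_off = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = img2[r + dr - half:r + dr + half,
                       c + dc - half:c + dc + half].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = np.mean(tpl * win)            # normalised cross-correlation
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off

# speed in m/d for an assumed 9 m pixel and a 24-day image pair:
#   dr, dc = track_offset(scene1, scene2, row, col)
#   speed = np.hypot(dr, dc) * 9.0 / 24.0
```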
(This article belongs to the Special Issue Remote Sensing of Glaciers)
Show Figures

Graphical abstract
Figure 1
Image extents of Radarsat-2 image pairs utilized in this study (green outlines denote Radarsat-2 Wide imagery acquired 25 December 2012 and 18 January 2013; red outlines denote Radarsat-2 Ultrafine imagery acquired 3 January 2013 and 20 February 2013; blue outlines denote Radarsat-2 Wide Fine imagery with acquisition dates). Names of the main islands and the location of Longyearbyen (LYB) are also marked. Background image: MODIS Terra RGB composite.
Figure 2
Comparison of displacements of Kronebreen from RS-2 Wide Fine data and GPS: (a) RS-2 Wide Fine (GAMMA) vs. GPS; and (b) RS-2 Wide Fine (GRAY) vs. GPS. Solid line: regression line. Dashed line: line of equality (y = x). For GPS data see Schellenberger et al. [19], courtesy of J. Kohler (Norwegian Polar Institute) and C. Reijmer (Utrecht University).
Figure 3
Surface speed of Svalbard glaciers processed with the GRAY method (fastest speed is quoted). #1: Kronebreen (2.12 m·d⁻¹); #2: Kongsbreen (1.95 m·d⁻¹); #3: Blomstrandbreen (1.96 m·d⁻¹); #4: Liliehøøkbreen (1.33 m·d⁻¹); #5: Monacobreen (2.13 m·d⁻¹); #6: Mittag-Lefflerbreen (0.70 m·d⁻¹); #7: Nordbreen (0.71 m·d⁻¹); #8: Nordre Franklinbreen (0.72 m·d⁻¹); #9: Idunbreen (1.76 m·d⁻¹); #10: Frazerbreen (0.94 m·d⁻¹); #11: Aldousbreen (1.06 m·d⁻¹); #12: Bodleybreen (2.27 m·d⁻¹); #13: Rijpbreen (1.83 m·d⁻¹); #14: Duvebreen (1.14 m·d⁻¹); #15: Schweigaardbreen (1.96 m·d⁻¹); #16: Basin-3 (18.5 m·d⁻¹, the peak of the surge could not be captured with GRAY); #17: Emil’janovbreen; #18: Oslokbreen (1.35 m·d⁻¹); #19: Austre Torellbreen (1.20 m·d⁻¹); #20: Nathorstbreen system (2.55 m·d⁻¹); #21: Sveabreen (2.19 m·d⁻¹); #22: Wahlenbergbreen (1.51 m·d⁻¹); #23: Aavatsmarkbreen (1.40 m·d⁻¹). Dates of speed estimates are given in Figure 1 and Table 2. Background image: MODIS Terra RGB composite.
Figure 4
(a) Surface speed of the Nathorstbreen system (#20 in Figure 3) in S-Spitsbergen consisting of Nathorstbreen (N), Dobrowolskibreen (D), Polakbreen (P) and Zawatzkibreen (Z) (19 December 2012–12 January 2013; WF; GRAY); and (b) speed profile along the centerline of Nathorstbreen (orange: GAMMA; green: GRAY).
Figure 5
(a) Surface speed of Kronebreen (#1 in Figure 3) in NW-Spitsbergen (1–25 January 2013; WF; GRAY); and (b) speed profile along the centerline of Kronebreen (orange: GAMMA; green: GRAY).
Figure 6
(a) Surface speed of the Kongsbreen (#2 in Figure 3) in NW-Spitsbergen (1–25 January 2013; WF; GRAY); and (b) speed profile along the centerline of Kongsbreen (orange: GAMMA; green: GRAY).
Figure 7
(a) Surface speed of the Monacobreen (#5 in Figure 3) in N-Spitsbergen (1–25 January 2013; WF; GRAY); and (b) speed profile along the centerline of Monacobreen (orange: GAMMA; green: GRAY).
Figure 8
Frequency distribution of speed differences of 55 glaciers between GAMMA and GRAY.
Figure 9
Surface speed of Kronebreen based on: (a) RS-2 UF data (3 January 2013–20 February 2013) processed with GAMMA; (b) RS-2 W data (25 December 2012–18 January 2013) processed with GAMMA; (c) RS-2 WF data (1 January 2013–25 January 2013) processed with GAMMA; and (d) RS-2 WF data (1 January 2013–25 January 2013) processed with GRAY.
Figure 10
Validation of glacier speed on Kronebreen: speed extracted from SAR maps on a 1 km point grid: (a) RS-2 UF GAMMA vs. RS-2 WF GAMMA; (b) RS-2 UF GAMMA vs. RS-2 WF GRAY; (c) RS-2 WF GRAY vs. RS-2 WF GAMMA; and (d) RS-2 UF GAMMA vs. RS-2 W GAMMA.
1350 KiB  
Correction
Correction: Liu, Y. et al. Time-Dependent Afterslip of the 2009 Mw 6.3 Dachaidan Earthquake (China) and Viscosity beneath the Qaidam Basin Inferred from Postseismic Deformation Observations. Remote Sens. 2016, 8, 649
by Yang Liu, Caijun Xu, Zhenhong Li, Yangmao Wen, Jiajun Chen and Zhicai Li
Remote Sens. 2016, 8(9), 784; https://doi.org/10.3390/rs8090784 - 21 Sep 2016
Cited by 1 | Viewed by 4532
Abstract
After publication of the research paper [1], an error was recognized. [...] Full article
Show Figures

Figure 1
Tectonics associated with the 2009 Mw 6.3 DCD earthquake. The bottom-left and bottom-right insets show the location of the main figure, respectively. The light blue rectangle is the spatial extent of the Envisat Advanced Synthetic Aperture Radar (ASAR) descending Track 319 images, with AZI and LOS referring to satellite azimuth and look directions. The focal mechanisms of the 2008 and 2009 events are from the United States Geological Survey (USGS) [4]. The two purple rectangles are the surface projections of the main fault rupturing zones during the 2008 and 2009 events [3,21]. The black and yellow hollow circles are the aftershocks of the 2008 and 2009 events, respectively [4]. The purple and black lines are the active faults from Peltzer and Saucier [24] and Deng et al. [5], respectively. (Note that the references here correspond to those in the original manuscript.)
1741 KiB  
Editorial
Observation and Monitoring of Mangrove Forests Using Remote Sensing: Opportunities and Challenges
by Chandra Giri
Remote Sens. 2016, 8(9), 783; https://doi.org/10.3390/rs8090783 - 21 Sep 2016
Cited by 113 | Viewed by 17205
Abstract
Mangrove forests, distributed in the tropical and subtropical regions of the world, are in constant flux. They provide important ecosystem goods and services to nature and society. In recent years, the carbon sequestration potential of mangrove forests and their protective role against natural disasters have been highlighted as an effective option for climate change adaptation and mitigation. The forests are under threat from both natural and anthropogenic forces. However, accurate, reliable, and timely information on the distribution and dynamics of the mangrove forests of the world is not readily available. Recent developments in the availability and accessibility of remotely sensed data, advancements in image pre-processing and classification algorithms, significant improvements in computing, the availability of expertise in handling remotely sensed data, and an increasing awareness of the applicability of remote sensing products have greatly improved our scientific understanding of changing mangrove forest cover attributes. As reported in this Special Issue, the use of both optical and radar satellite data at various spatial resolutions (i.e., 1 m to 30 m) to derive meaningful forest cover attributes (e.g., species discrimination, above-ground biomass) is on the rise. This multi-sensor trend is likely to continue into the future, providing a more complete inventory of global mangrove forest distributions and attribute inventories at enhanced temporal frequency. The papers presented in this Special Issue provide important remote sensing monitoring advancements needed to meet future scientific objectives for global mangrove forest monitoring from local to global scales. Full article
(This article belongs to the Special Issue Remote Sensing of Mangroves: Observation and Monitoring)
Show Figures

Figure 1
Distribution of the mangrove forests of the world for the year 2000 at 30 m spatial resolution [1].
Figure 2
Conceptual diagram of the integration of data, computing, and methods using science and engineering to improve our scientific understanding of mangrove forest cover change.
Figure 3
Conceptual framework of pre-processing and image classification showing centralized versus field/ground/local level processing.
6170 KiB  
Article
An Inter-Comparison Study of Multi- and DBS Lidar Measurements in Complex Terrain
by Lukas Pauscher, Nikola Vasiljevic, Doron Callies, Guillaume Lea, Jakob Mann, Tobias Klaas, Julian Hieronimus, Julia Gottschall, Annedore Schwesig, Martin Kühn and Michael Courtney
Remote Sens. 2016, 8(9), 782; https://doi.org/10.3390/rs8090782 - 21 Sep 2016
Cited by 51 | Viewed by 9363
Abstract
Wind measurements using classical profiling lidars suffer from systematic measurement errors in complex terrain. Moreover, their ability to measure turbulence quantities is unsatisfactory for wind-energy applications. This paper presents results from a measurement campaign during which multiple WindScanners were focused on one point next to a reference mast in complex terrain. This multi-lidar (ML) technique is also compared to a profiling lidar using the Doppler beam swinging (DBS) method. First- and second-order statistics of the radial wind velocities from the individual instruments and the horizontal wind components of several ML combinations are analysed in comparison to sonic anemometry and DBS measurements. The results for the wind speed show significantly reduced scatter and directional error for the ML method in comparison to the DBS lidar. The analysis of the second-order statistics also reveals a significantly better correlation for the ML technique than for the DBS lidar, when compared to the sonic. However, the probe volume averaging of the lidars leads to an attenuation of the turbulence at high wave numbers. Also, the configuration (i.e., the angles) of the WindScanners in the ML method seems to be more important for turbulence measurements. In summary, the results clearly show the advantages of the ML technique in complex terrain and indicate that it has the potential to achieve significantly higher accuracy in measuring turbulence quantities for wind-energy applications than classical profiling lidars. Full article
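The core of the ML retrieval is that each staring beam contributes one linear equation relating its radial velocity to the wind vector. A minimal sketch of that least-squares reconstruction (assumed beam geometry, not the campaign's processing chain) is:

```python
# Sketch of multi-lidar wind reconstruction: each beam gives v_r = n · (u, v, w),
# where n is the beam unit vector. Geometry conventions here are assumptions.
import numpy as np

def reconstruct_wind(azimuths_deg, elevations_deg, v_radial, assume_w_zero=False):
    """Least-squares wind vector (u, v, w) from radial velocities of >= 2 beams."""
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    # beam unit vectors (east, north, up); azimuth measured clockwise from north
    n = np.column_stack((np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el)))
    v_radial = np.asarray(v_radial, dtype=float)
    if assume_w_zero:                    # two-beam case: drop the vertical column
        uv, *_ = np.linalg.lstsq(n[:, :2], v_radial, rcond=None)
        return np.array([uv[0], uv[1], 0.0])
    uvw, *_ = np.linalg.lstsq(n, v_radial, rcond=None)
    return uvw

# horizontal wind speed from the reconstructed vector: V_h = np.hypot(u, v)
```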
(This article belongs to the Special Issue Remote Sensing of Wind Energy)
Show Figures

Graphical abstract
Figure 1
Sketch of the multi-lidar (ML) and Doppler beam swinging (DBS) strategies next to a reference mast with a sonic as used in this study. Note that the angles and positions of the lidars differ from the instrument setup for reasons of illustrative clarity.
Figure 2
Instrumental setup during the measurement campaign. Left: aerial photograph (data source: published with kind permission of ©Hessische Verwaltung für Bodenmanagement und Geoinformation (HVBG)) with circles indicating the locations of the lidars and the met mast; the locations which are used in this study are indicated in black. Right: terrain (data source: Shuttle Radar Topography Mission [31] version 2.1) and trajectories of the intersecting lidar beams for the ML measurement. The Windcube v2 (WC) is also located at the MA position.
Figure 3
Variance of the radial velocity measurements of the WindScanners ($\overline{v_r'^2}_{\mathrm{lidar}}$) against the respective variance in the direction of the beam derived from the sonic anemometer ($\overline{v_r'^2}_{\mathrm{sonic}}$) when the wind direction was close to parallel (±15°) to the azimuth angle of the WindScanner for (a) SE; (b) SW; (c) EE; grey diamonds indicate the unfiltered sonic measurements; blue crosses are the sonic data after the application of (10). The sonic data were filtered with $|\hat{\zeta}(k_1)|^2$ to separate out the effects of the temporal averaging.
Figure 4
Spectra of $v_r$ measured by different WindScanners and the sonic anemometer when the wind direction was close to parallel (±15°) to the azimuth angle of the WindScanner (a–c); (d) spectra of the MA location and the $w$ component of the sonic anemometer; sonic time series have been aggregated to 0.5 Hz before calculation of the spectra to account for the temporal averaging effect and the associated filter function $\hat{\zeta}(k_1)$; the black line indicates the theoretical −2/3 slope in the inertial subrange; (e) spectral transfer function $|\hat{\zeta}|^{-2} T_{v_r}$ derived from the average of the normalised spectra in (a–c); only periods with $\overline{u}_{\mathrm{sonic}}$ > 4 m·s⁻¹ and $\overline{u'^2}_{\mathrm{sonic}}$ > 0.2 m²·s⁻² (SW, SE, EE), and $\overline{w'^2}_{\mathrm{sonic}}$ > 0.1 m²·s⁻² (MA) were used in the spectral averaging; n indicates the number of periods used for averaging.
Figure 5
Scatter plots of $\overline{V_h}$ from different lidar configurations against the reference sonic at 188 m; (a) SE/SW/EE; (b) SE/SW; (c) SE/EE; (d) SW/EE; (e) Windcube v2 next to the mast; (a–d) are measurements from WindScanners in ML mode; the Windcube in (e) is operated in DBS mode. For ML combinations with only two beams, $w$ = 0 m·s⁻¹ is assumed in the wind-vector reconstruction; n indicates the number of value pairs displayed in the individual scatter plots.
Figure 6
(a) Mean of the directional deviation in $\overline{V_h}$ between the sonic and the different lidar configurations; (b) same as (a) but for the SE/EE combination with and without the correction for $\overline{w}$; see text for details; periods during which $\overline{V_h}_{\mathrm{sonic}}$ < 4 m·s⁻¹ were excluded from the comparison to increase the comparability to other complex terrain measurements reported in the literature. Also, bins with n < 5 are not displayed; error bars denote ± one standard deviation.
Figure 7
Scatter plots of $\overline{u'^2}$ from different lidar configurations against the reference sonic at 188 m; (a) SE, SW and EE; (b) SE and SW; (c) SE and EE; (d) SW and EE; (e) Windcube v2 next to the mast; (a–d) are measurements from WindScanners in ML mode; the Windcube is operated in DBS mode. For ML combinations with only two beams, $w$ = 0 m·s⁻¹ is assumed in the wind-vector reconstruction; n indicates the number of value pairs displayed in the individual scatter plots.
Figure 8
Normalised spectra of $u$ and $v$ for (a) SW/SE/EE; (b) SW/SE; and (c) the Windcube v2 next to the mast; solid lines are the sonic spectra; dashed lines are the lidar spectra; sonic time series have been aggregated to 0.5 Hz (a and b) and 0.89 Hz (c) before calculation of the spectra; only periods with $\overline{u}_{\mathrm{sonic}}$ > 4 m·s⁻¹ and $\overline{u'^2}_{\mathrm{sonic}}$ > 0.2 m²·s⁻² were used in the spectral averaging; $u_*$ is the friction velocity computed from the sonic anemometer measurements; the black line indicates the theoretical −2/3 slope in the inertial subrange; n indicates the number of periods used for averaging.
16483 KiB  
Article
Cultural Heritage Sites in Danger—Towards Automatic Damage Detection from Space
by Daniele Cerra, Simon Plank, Vasiliki Lysandrou and Jiaojiao Tian
Remote Sens. 2016, 8(9), 781; https://doi.org/10.3390/rs8090781 - 21 Sep 2016
Cited by 46 | Viewed by 8905
Abstract
The intentional damage to local Cultural Heritage sites carried out in recent months by the Islamic State has received wide coverage from the media worldwide. Earth Observation data provide important information for assessing this damage in such non-accessible areas, and automated image processing techniques will be needed to speed up the analysis if a fast response is desired. This paper shows the first results of applying fast and robust change detection techniques to sensitive areas, based on the extraction of textural information and robust differences of brightness values related to pre- and post-disaster satellite images. A map highlighting potentially damaged buildings is derived, which could help experts to assess damage to the Cultural Heritage sites of interest in a timely manner. Encouraging results are obtained for two archaeological sites in Syria and Iraq. Full article
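A rough sketch of the texture-plus-brightness change detection idea (a simplified Gabor bank and a crude robust difference; not the authors' implementation) could look as follows:

```python
# Sketch: combine Gabor-feature differences with robust brightness differences
# between a pre- and a post-event image, then keep the strongest responses.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor filter with the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def change_map(pre, post, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Flag pixels that change in both texture and brightness."""
    pre, post = pre.astype(float), post.astype(float)
    texture = np.zeros_like(pre)
    for f in freqs:
        for t in thetas:
            k = gabor_kernel(f, t)
            texture += np.abs(convolve(post, k) - convolve(pre, k))
    bright = np.abs(post - pre)
    bright = np.clip(bright - np.median(bright), 0, None)  # crude "robust" difference
    score = texture * bright       # a pixel must stand out in both terms
    return score > np.percentile(score, 99)                # top 1% flagged as change
```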
(This article belongs to the Special Issue Remote Sensing for Cultural Heritage)
Show Figures

Graphical abstract
Figure 1
Real parts of the Gabor filter bank used in the experiments in this paper according to the different orientations and scales. Highlighted are the filters giving the highest contribution to the detection described in the next section.
Figure 2
Map showing the locations of the two sites analyzed in this paper.
Figure 3
Palmyra’s archaeological site: Temple of Bel (a) ©Dario Bajurin and tower tombs in the background (b) ©szymanskim.
Figure 4
Subset of the WorldView-2 image acquired on 27 August 2015 (©European Space Imaging/DigitalGlobe).
Figure 5
Subset of the WorldView-2 image acquired on 2 September 2015 (©European Space Imaging/DigitalGlobe).
Figure 6
Preliminary change maps derived from differences of Gabor features (a) and Robust Differences (b). A limited number of false alarms is visible in both.
Figure 7
Change map obtained by combining the results in Figure 6a,b. The impact of false alarms is mitigated and the four targets stand out clearly.
Figure 8
Red: post-processed change map from Figure 7 overlaid on the 2 September 2015 WorldView-2 image reported in Figure 5. All the main damaged areas are correctly identified (©European Space Imaging/DigitalGlobe).
Figure 9
Report from [29] showing destroyed CH sites in Palmyra. This map was used as validation for the detected damaged areas (©ASOR CHI/DigitalGlobe).
Figure 10
Pre-disaster image acquired on 20 February 2014, screenshot from Google Earth (©DigitalGlobe).
Figure 11
Red: post-processed changes overlaid on the 2 September 2015 WorldView-2 image reported in Figure 5. All the main damaged areas are correctly identified, with two small false alarms in the southern part of the image (©European Space Imaging/DigitalGlobe).
Figure 12
Multitemporal representation of damage overlaid on the 2 September 2015 WorldView-2 image reported in Figure 5 (©European Space Imaging/DigitalGlobe). Damage that occurred between $t_0$ and $t_1$ and between $t_1$ and $t_2$ is highlighted in blue and red, respectively.
Figure 13
Nimrud archaeological site topographic map.
Figure 14
(left) Nimrud pre-event image (GeoEye-1, 11 July 2011) and (right) post-event image (WorldView-2, 20 April 2015) (©European Space Imaging/DigitalGlobe).
Figure 15
Nimrud change map, overlaid on the post-destruction image (WorldView-2, 20 April 2015).
6838 KiB  
Article
Biophysical Characterization of Protected Areas Globally through Optimized Image Segmentation and Classification
by Javier Martínez-López, Bastian Bertzky, Francisco Javier Bonet-García, Lucy Bastin and Grégoire Dubois
Remote Sens. 2016, 8(9), 780; https://doi.org/10.3390/rs8090780 - 21 Sep 2016
Cited by 10 | Viewed by 8122
Abstract
Protected areas (PAs) need to be assessed systematically according to biodiversity values and threats in order to support decision-making processes. For this, PAs can be characterized according to their species, ecosystems and threats, but such information is often difficult to access and usually not comparable across regions. There are currently over 200,000 PAs in the world, and assessing these systematically according to their ecological values remains a huge challenge. However, linking remote sensing with ecological modelling can help to overcome some limitations of conservation studies, such as the sampling bias of biodiversity inventories. The aim of this paper is to introduce eHabitat+, a habitat modelling service supporting the European Commission’s Digital Observatory for Protected Areas, and specifically to discuss a component that systematically stratifies PAs into different habitat functional types based on remote sensing data. eHabitat+ uses an optimized procedure of automatic image segmentation based on several environmental variables to identify the main biophysical gradients in each PA. This allows a systematic production of key indicators on PAs that can be compared globally. Results from a few case studies are illustrated to show the benefits and limitations of this open-source tool. Full article
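The segmentation-scale optimization illustrated later in Figure 9 can be sketched as follows: both the mean intra-segment variance and the mean Moran's I are undesirable when large, so one simple choice is the similarity threshold that minimizes their normalized sum. This is an illustration of the idea, not the eHabitat+ code; the input curves are assumed to have been computed already.

```python
# Sketch: pick a segmentation similarity threshold from precomputed curves of
# mean intra-segment variance and mean Moran's I (both "bad" when large).
import numpy as np

def optimal_threshold(thresholds, intra_variance, morans_i):
    """Return the threshold minimising the normalised sum of both criteria."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    score = norm(intra_variance) + norm(morans_i)
    return np.asarray(thresholds)[int(np.argmin(score))]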
Show Figures

Graphical abstract
Figure 1
Map of the protected areas selected for this study (European Petroleum Survey Group—EPSG:4326).
Figure 2
Workflow of the whole methodology for each protected area and the main piece of software used at each step.
Figure 3
Map of habitat functional types (HFT) identified in the Sierra Nevada National Park and normalized mean values of the biophysical variables used (European Petroleum Survey Group—EPSG:4326). The description of the study variables can be found in Section 2.2.
Figure 4
Map of predominant ecosystem types in Sierra Nevada National Park (abbreviated as SN ECO in the legend) spatially grouped to best match the resulting HFTs (source: [94]; European Petroleum Survey Group—EPSG:4326). Group legend: (1) natural pine tree forest; (2) oak tree forest and pine plantations; (3) high mountain shrubland, bare rock and scree; (4) agricultural areas, mid-mountain shrubland and grassland.
Figure 5
Map of HFTs identified in the Virunga National Park and normalized mean values of the biophysical variables used (European Petroleum Survey Group—EPSG:4326). The description of the study variables can be found in Section 2.2.
Figure 6
Map of HFTs identified in the Kakadu National Park and normalized mean values of the biophysical variables used (European Petroleum Survey Group—EPSG:4326). The description of the study variables can be found in Section 2.2.
Figure 7
Map of HFTs identified in the Okavango Delta World Heritage Site and normalized mean values of the biophysical variables used (European Petroleum Survey Group—EPSG:4326). The description of the study variables can be found in Section 2.2.
Figure 8
Map of HFTs identified in the Canaima National Park and normalized mean values of the biophysical variables used (European Petroleum Survey Group—EPSG:4326). The description of the study variables can be found in Section 2.2.
Figure 9
Example of autocorrelation and variance curves used for the optimization of the similarity threshold parameter for Canaima National Park. Variance refers to the mean internal variance of all of the segments based on all input variables, and Moran’s I refers to the mean autocorrelation based on all input variables. See Section 2.3 for more details.
14744 KiB  
Article
An Image-Based Approach for the Co-Registration of Multi-Temporal UAV Image Datasets
by Irene Aicardi, Francesco Nex, Markus Gerke and Andrea Maria Lingua
Remote Sens. 2016, 8(9), 779; https://doi.org/10.3390/rs8090779 - 21 Sep 2016
Cited by 55 | Viewed by 9086
Abstract
During the past years, UAVs (Unmanned Aerial Vehicles) have become very popular as low-cost image acquisition platforms since they allow for high-resolution and repetitive flights in a flexible way. One application is to monitor dynamic scenes. However, the fully automatic co-registration of the acquired multi-temporal data still remains an open issue. Most UAVs are not able to provide accurate direct image georeferencing, and the co-registration process is mostly performed with the manual introduction of ground control points (GCPs), which is time-consuming, costly and sometimes not possible at all. A new technique to automate the co-registration of multi-temporal high-resolution image blocks without the use of GCPs is investigated in this paper. The image orientation is initially performed on a reference epoch, and the registration of the following datasets is achieved by including some anchor images from the reference data. The interior and exterior orientation parameters of the anchor images are then fixed in order to constrain the Bundle Block Adjustment of the slave epoch to be aligned with the reference one. The study involved the use of two different datasets acquired over a construction site and a post-earthquake damaged area. Different tests have been performed to assess the registration procedure using both a manual and an automatic approach for the selection of anchor images. The tests have shown that the procedure provides results comparable to the traditional GCP-based strategy, and both the manual and automatic selection of the anchor images can provide reliable results. Full article
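The anchor-image screening step (see Figure 2 below) can be sketched with off-the-shelf tools: match features between a reference and an input image, measure how much of the frame the matched points cover, and accept the reference image when the coverage ratio exceeds a threshold t_A. ORB features and a convex hull stand in here for the paper's own detector and Alpha Shape; all names and the default threshold are illustrative.

```python
# Sketch of anchor-image screening (illustrative stand-in, not the paper's code):
# accept a reference image if matched features cover enough of the frame.
import cv2
import numpy as np

def anchor_candidate(ref_img, in_img, t_a=0.3):
    """ref_img, in_img: grayscale uint8 images; returns True if usable as anchor."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(ref_img, None)
    k2, d2 = orb.detectAndCompute(in_img, None)
    if d1 is None or d2 is None:
        return False
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 3:
        return False
    pts = np.float32([k1[m.queryIdx].pt for m in matches])
    hull = cv2.convexHull(pts)          # the original workflow uses an Alpha Shape
    coverage = cv2.contourArea(hull) / float(ref_img.shape[0] * ref_img.shape[1])
    return coverage >= t_a              # keep as an anchor image if well covered
```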
(This article belongs to the Special Issue Recent Trends in UAV Remote Sensing)
Show Figures

Graphical abstract
Figure 1
Reference Image Block Co-registration procedure.
Figure 2
Workflow (on the left) of the automatic selection of anchor images. On the right, a visual explanation of the procedure: (a) the overlap of the two datasets (reference and input) according to the GNSS Exif information; (b) feature matching between a reference and an input image in the same area; (c) Alpha Shape generation according to the selected common points between images; (d) reference image selection according to a defined threshold t_A.
Figure 3
The case study: EPFL’s SwissTech Convention Center in Lausanne, Switzerland. The right image shows an example of the flight plan and image acquisition.
Figure 4
CPs’ RMSE results for the Reference Ground Control Point-based registration. The blue line represents the mean horizontal RMSE and the green one the vertical component. On the right, a statistical summary of the obtained RMSE results is presented.
Figure 5
Example of a reference image (Epoch 2 with the red border) and the same area in the other datasets.
Figure 6
CPs’ RMSE results for the Reference Image-Based Co-registration with 18 manually selected images from the reference Epoch 2. The blue line represents the mean horizontal RMSE and the green one the vertical component. On the right, a statistical summary of the obtained RMSE results.
Figure 7
RGCP and RIBC comparison: CPs’ RMSE for the horizontal (left) and vertical (right) component.
Figure 8
Effect of low image quality: uneven keypoint distribution. (a) The original image with the presence of fog; (b) keypoint distribution on the original image; (c) keypoint distribution on the image after pre-processing with the Wallis filter.
Figure 9
CPs’ RMSE results for the Reference Image-Based Co-registration with 18 manually selected images from the reference Epoch 10. The blue line represents the mean horizontal RMSE and the green one the vertical component. On the right, a statistical summary of the obtained RMSE results is provided.
Figure 10
RIBC comparison with Epoch 2 and Epoch 10 as reference: CPs’ RMSE for the horizontal (left) and vertical (right) component.
Figure 11
CPs’ RMSE results for the Reference Image-Based Co-registration with 18 manually selected images using a sequential approach for the choice of the reference epoch. The right-hand table provides a statistical summary of the obtained RMSE results.
Figure 12
RIBC comparison with 18, 37 and 67 (full) anchor images: CPs’ RMSE for the horizontal (left) and vertical (right) component.
Figure 13
CPs’ RMSE results for the Reference Image-Based Co-registration with automatically selected images from the reference Epoch 2. The blue line represents the mean horizontal RMSE and the green one the vertical component. The right-hand table provides a statistical summary of the obtained RMSE results.
Figure 14
RIBC comparison: 18 (in blue) and 37 (in red) manually selected images and automatic selection (in green): CPs’ RMSE for the horizontal (left) and vertical (right) component.
Figure 15
Image footprint and distribution of both the manual (18 and 37 images) and the automatic approach in Epoch 3.
Figure 16
The Taiwan datasets: a built-up area after an earthquake.
Figure 17
Alpha Shape area generated from the Taiwan dataset with the automatic procedure: (a) image selected in the “stable” area; (b) image rejected in the collapsed area.
Figure 18
Results comparison between the RGCP, the RIBC manual (18 images) and the RIBC automatic approaches: CPs’ RMSE for the horizontal (left) and vertical (right) component.
3270 KiB  
Article
An Integrated Approach for Monitoring Contemporary and Recruitable Large Woody Debris
by Jeffrey J. Richardson and L. Monika Moskal
Remote Sens. 2016, 8(9), 778; https://doi.org/10.3390/rs8090778 - 20 Sep 2016
Cited by 14 | Viewed by 4947
Abstract
Large woody debris (LWD) plays a critical structural role in riparian ecosystems, but it can be difficult and time-consuming to quantify and survey in the field. We demonstrate an automated method for quantifying LWD using aerial LiDAR and object-based image analysis techniques, as well as a manual method for quantifying LWD using image interpretation derived from LiDAR rasters and aerial four-band imagery. In addition, we employ an established method for estimating the number of individual trees within the riparian forest. These methods are compared to field data showing high accuracies for the LWD method and moderate accuracy for the individual tree method. These methods can be integrated to quantify the contemporary and recruitable LWD in a river system. Full article
Show Figures

Graphical abstract
Figure 1
(A) shows Washington State with King County in grey and the study area location as a black dot (47.482878, −122.217066); (B) shows the four river systems and the extent of remotely sensed data coverage; (C) shows an example of a large wood plot (location shown by * in B); and (D) shows an example of an individual tree plot (location shown by + in B).
Figure 2
Histogram of individual tree diameters at breast height (DBH) in all individual tree (IT) plots.
Figure 3
Histogram of individual tree heights in all individual tree (IT) plots.
Figure 4
Map of LWD for Cedar River using the automated method.
Figure 5
Comparison of automated LWD detection to field-surveyed LWD at Cedar River.
Figure 6
Model-predicted LWD as compared to field-measured LWD at Green, Raging, and Snoqualmie Rivers.
Figure 7
Comparison of model-predicted LWD detection to field-measured LWD at all rivers.
Figure 8
Map of LWD for Cedar River using the manual method with imagery and LiDAR combined.
Figure 9
Amount of variability explained by the model predicting field-measured LWD length from manually-identified LWD length when the minimum length for inclusion in the model is reduced from 0–25 m.
Figure 10
Assessment of accuracy of manual LWD identification when compared to field-surveyed LWD with a minimum length of 18 m at Cedar River.
Figure 11
Map of individual and clumped trees for Cedar River.
Figure 12
Accuracy of individual tree frequency estimation using LiDAR-based methods for trees greater than 20 m in height.
Figure 13
LWD from a Cedar River plot. Red lines are manually-identified LWD.
Figure 14
Example of LWD recruitment potential in high-risk areas.
13362 KiB  
Article
Sediment-Mass Accumulation Rate and Variability in the East China Sea Detected by GRACE
by Ya-Chi Liu, Cheinway Hwang, Jiancheng Han, Ricky Kao, Chau-Ron Wu, Hsuan-Chang Shih and Natthachet Tangdamrongsub
Remote Sens. 2016, 8(9), 777; https://doi.org/10.3390/rs8090777 - 20 Sep 2016
Cited by 16 | Viewed by 7872
Abstract
The East China Sea (ECS) is a region with shallow continental shelves and a mixed oceanic circulation system allowing sediments to deposit on its inner shelf, particularly near the estuary of the Yangtze River. The seasonal northward-flowing Taiwan Warm Current and southward-flowing China Coastal Current trap sediments from the Yangtze River, which are accumulated over time at rates of up to a few mm/year in equivalent water height. Here, we use the Gravity Recovery and Climate Experiment (GRACE) gravity products from three data centres to determine sediment mass accumulation rates (MARs) and variability on the ECS inner shelf. We restore the atmospheric and oceanic effects to avoid model contaminations on gravity signals associated with sediment masses. We apply destriping and spatial filters to improve the gravity signals from GRACE and use the Global Land Data Assimilation System to reduce land leakage. The GRACE-derived MARs over April 2002–March 2015 on the ECS inner shelf are about 6 mm/year and have magnitudes and spatial patterns consistent with those from sediment-core measurements. The GRACE-derived monthly sediment depositions show variations at time scales ranging from six months to more than two years. Typically, a positive mass balance of sediment deposition occurs in late fall to early winter when the southward coastal currents prevail. A negative mass balance happens in summer when the coastal currents are northward. We identify quasi-biennial sediment variations, which are likely to be caused by quasi-biennial variations in rain and erosion in the Yangtze River basin. We briefly explain the mechanisms of such frequency-dependent variations in the GRACE-derived ECS sediment deposition. There is no clear perturbation on sediment deposition over the ECS inner shelf induced by the Three Gorges Dam. The limitations of GRACE in resolving sediment deposition are its low spatial resolution (about 250 km) and possible contaminations by land hydrological and oceanic signals. Potential GRACE-derived sediment depositions in six major estuaries are presented. Full article
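The rate estimation behind the monthly EWH series can be approximated by a standard least-squares fit of a linear trend plus annual and semi-annual harmonics; the sketch below assumes that model form and is not the authors' processing code.

```python
# Sketch: mass accumulation rate from a monthly GRACE EWH series, fitted as
# offset + trend + annual + semi-annual harmonics (assumed model form).
import numpy as np

def ewh_rate(t_years, ewh_mm):
    """Return the linear rate (mm/year) after accounting for seasonal harmonics."""
    t = np.asarray(t_years, dtype=float)
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),    # annual
                         np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])   # semi-annual
    coef, *_ = np.linalg.lstsq(A, np.asarray(ewh_mm, dtype=float), rcond=None)
    return coef[1]    # slope term = rate in equivalent water height per year
```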
Show Figures

Graphical abstract
Figure 1
Regional ocean circulation pattern and ETOPO1 (1 arc-minute global relief model of Earth’s surface) bathymetry around the East China Sea (ECS). The red dot shows the Datong hydrographic station; the area scattered with brown dots is the targeted area of sediment deposition near the Yangtze estuary. Abbreviations: TWC, Taiwan Warm Current; KC, Kuroshio Current; JCC, Jiangsu Coastal Current; ZFCC, Zhejiang Fujian Coastal Current; ECSCC, East China Sea Coastal Current (based on Deng et al. [5] and Liu et al. [8]).
Figure 2
A flowchart of GRACE data processing for determining sediment deposition. SHC, spherical harmonic coefficient; GLDAS, Global Land Data Assimilation System; EWH, equivalent water height.
Figure 3
GRACE-derived mass changes over the ECS in connection with sediment deposition, sea level change and depletion of seawater. A sample sediment core is drilled to analyse the cumulative mass of the core.
Figure 4
Monthly GRACE-derived EWHs (blue squares) over April 2002–March 2015 and rates (red line) from (a) CSR; (b) JPL; (c) GFZ and (d) the mean of the three. An EWH is the mean of the EWHs at the grids over 122–126°E, 27–32°N (the thick black dots in the dashed boxes in Figure 5a–c).
Figure 5
GRACE-derived sediment mass accumulation rates (MARs) (EWH rates) from (a) CSR; (b) JPL and (c) GFZ over April 2002–March 2015; (d–f) are the standard deviations of the rates. The dashed boxes contain the area (27–32°N, 122–126°E) where the area-averaged EWHs in Figure 4 are determined.
Figure 6
Sediment MARs (in cm/year) over April 2002–March 2015 from (a) GRACE and (b) sediment core measurements [4]. The EWH rate is equivalent to MAR (see Section 3.1).
Figure 7
EWHs from GRACE and comparison with the sediment discharge of the Yangtze. (a) EWHs on the ECS inner shelf from mean GRACE products and GLDAS; (b) comparison between measured sediment discharges of the Yangtze River at the Datong hydrographic station and GRACE-derived EWHs around this station. The shaded periods mark the time periods of impoundments of the Three Gorges Dam (TGD). In June 2003, the TGD water level was raised to 135 m above sea level, followed by the rise to 156 m in late October 2006 and, finally, to 175 m in late October 2010.
Figure 8
(a) Time series of detrended GRACE-derived EWHs and (b) their wavelet spectrum; (c) time series of the Multivariate ENSO Index (MEI) obtained from NOAA [36] and (d) its wavelet spectrum.
Figure 9
(a–l) Different spatial patterns of EWH rates on the ECS inner shelf using GRACE records spanning different time periods.
Figure 10
GRACE-derived EWH rates near the estuaries of the (a) Amazon; (b) Congo; (c) Indus; (d) Mississippi; (e) Pearl and (f) Rhine rivers over April 2002–March 2015.
5218 KiB  
Article
Dynamics of Fractional Vegetation Coverage and Its Relationship with Climate and Human Activities in Inner Mongolia, China
by Siqin Tong, Jiquan Zhang, Si Ha, Quan Lai and Qiyun Ma
Remote Sens. 2016, 8(9), 776; https://doi.org/10.3390/rs8090776 - 20 Sep 2016
Cited by 70 | Viewed by 7341
Abstract
Long-term remote sensing normalized difference vegetation index (NDVI) datasets have been widely used in monitoring vegetation changes. In this study, the NASA Global Inventory Modeling and Mapping Studies (GIMMS) NDVI3g dataset was used as the data source, and the dimidiate pixel model, intensity analysis, and residual analysis were used to analyze the changes in vegetation coverage in Inner Mongolia from 1982 to 2010 and their relationships with climate and human activities. This study also explored vegetation changes in Inner Mongolia with respect to natural factors and human activities. The results showed that the estimated vegetation coverage exhibited a high correlation (0.836) with the actual measured values. The increased vegetation coverage area (49.2% of the total area) was larger than the decreased area (43.3%) from the 1980s to the 1990s, whereas the decreased area (57.1%) was larger than the increased area (35.6%) from the 1990s to the early 21st century. This finding indicates that vegetation growth in the 1990s was better than that in the other two decades. Intensity analysis revealed that changes in the average annual rate from the 1990s to the early 21st century were relatively faster than those from the 1980s to the 1990s. During the 1980s–1990s, the gain of high vegetation coverage areas was active and the loss was dormant; in contrast, the gain and loss of low vegetation coverage areas were both dormant. From the 1990s to the early 21st century, the gains of high and low vegetation coverage areas were both dormant, whereas the losses were active. During the study period, areas of low vegetation coverage were converted into ones with higher coverage, and areas of high vegetation coverage were converted into ones with lower coverage. The vegetation coverage exhibited a good correlation (R² = 0.60) with precipitation, and the positively correlated area was larger than the negatively correlated area. Human activities not only promote vegetation coverage but also have a destructive effect on vegetation; the promotion effect was larger during 1982–2000 than during 2001–2010, while the destructive effect was larger from 2000 to 2010. Full article
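The dimidiate pixel model mentioned above has a simple closed form, FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil), and the residual analysis removes the part of NDVI explained by climate before looking for human-driven trends. A minimal sketch (endmember values and the climate regressors are placeholders):

```python
# Sketch of the dimidiate pixel model and of a simple NDVI residual analysis.
import numpy as np

def fractional_vegetation_cover(ndvi, ndvi_soil, ndvi_veg):
    """Dimidiate pixel model, clipped to the physically meaningful [0, 1] range."""
    fvc = (np.asarray(ndvi, dtype=float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

def ndvi_residual(ndvi_obs, precip, temp):
    """Residual analysis: remove the part of NDVI explained by climate."""
    precip = np.asarray(precip, dtype=float)
    temp = np.asarray(temp, dtype=float)
    X = np.column_stack([np.ones_like(precip), precip, temp])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ndvi_obs, dtype=float), rcond=None)
    return np.asarray(ndvi_obs, dtype=float) - X @ coef   # non-climatic residual
```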
(This article belongs to the Special Issue Remote Sensing of Vegetation Structure and Dynamics)
Show Figures
Figure 1. Geographic locations of the study area and the field observation site.
Figure 2. Correlation analysis of the estimated and measured values of FVC.
Figure 3. Spatial distribution of the vegetation coverage degree in each decade in Inner Mongolia ((a–c): the 1980s, the 1990s, and the early 21st century).
Figure 4. Spatial distribution variation of vegetation coverage from the 1980s to the 1990s (a) and from the 1990s to the early 21st century (b); the sub-figures give the area percentage of each change category.
Figure 5. Time intensity analysis of two time intervals: the 1980s–1990s and the 1990s–early 21st century.
Figure 6. Intensity category analysis for the 1980s–1990s (a1,a2) and the 1990s–early 21st century (b1,b2); a/b1 for gains and a/b2 for losses.
Figure 7. Spatial patterns of FVC trends in Inner Mongolia in 1982–2000 (a) and 2001–2010 (b).
Figure 8. Temporal trends of growing-season mean NDVI (a), precipitation (b), and temperature (c) in Inner Mongolia from 1982 to 2010.
Figure 9. Correlation between FVC and precipitation in 1982–2000 (a) and 2001–2010 (b).
Figure 10. Residual trend images of NDVI in Inner Mongolia during 1982–2000 (a) and 2001–2010 (b).
4217 KiB  
Article
Long-Term Monitoring of the Flooding Regime and Hydroperiod of Doñana Marshes with Landsat Time Series (1974–2014)
by Ricardo Díaz-Delgado, David Aragonés, Isabel Afán and Javier Bustamante
Remote Sens. 2016, 8(9), 775; https://doi.org/10.3390/rs8090775 - 20 Sep 2016
Cited by 49 | Viewed by 10634
Abstract
This paper presents a semi-automatic procedure to discriminate seasonally flooded areas in the shallow temporary marshes of Doñana National Park (SW Spain) by using a radiometrically normalized long time series of Landsat MSS, TM, and ETM+ images (1974–2014). Extensive field campaigns for ground [...] Read more.
This paper presents a semi-automatic procedure to discriminate seasonally flooded areas in the shallow temporary marshes of Doñana National Park (SW Spain) by using a radiometrically normalized long time series of Landsat MSS, TM, and ETM+ images (1974–2014). Extensive field campaigns for ground truth data retrieval were carried out simultaneously with Landsat overpasses. Ground truth was used as training and testing areas to check the performance of the method. Simple thresholds on TM and ETM band 5 (1.55–1.75 μm) worked significantly better than other empirical modeling techniques and supervised classification methods to delineate flooded areas at Doñana marshes. A classification tree was applied to band 5 reflectance values to classify flooded versus non-flooded pixels for every scene. Inter-scene cross-validation identified the most accurate threshold on band 5 reflectance (ρ < 0.186) to classify flooded areas (Kappa = 0.65). A joint TM-MSS acquisition was used to find the MSS band 4 (0.8 to 1.1 μm) threshold. The TM flooded area was identical to the results from the MSS band 4 threshold ρ < 0.10 despite spectral and spatial resolution differences. Band slicing was retrospectively applied to the complete time series of MSS and TM images. In total, 391 flood masks were used to reconstruct historical spatial and temporal patterns of Doñana marshes flooding, including hydroperiod. Hydroperiod historical trends were used as a baseline to understand Doñana’s flooding regime, test hydrodynamic models, and assess relevant management and restoration decisions. The historical trends in the hydroperiod of Doñana marshes show two opposite spatial patterns: while the northwestern part of the marsh is increasing its hydroperiod, the southwestern part shows a steady decline. Anomalies in each flooding cycle allowed us to assess recent management decisions and monitor their hydrological effects. Full article
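A minimal sketch of the band-slicing and hydroperiod logic described above, assuming band 5 reflectance scenes stacked in a NumPy array. The 0.186 threshold is the value reported in the abstract; the day-weighting of each flood mask is a simplification, and all array names are illustrative.

```python
import numpy as np

def flood_mask(band5_reflectance, threshold=0.186):
    """Flooded where TM/ETM+ band 5 reflectance falls below the threshold."""
    return band5_reflectance < threshold

def hydroperiod_days(scene_stack, scene_dates, threshold=0.186):
    """Rough hydroperiod (days flooded per cycle) from a stack of band 5 scenes.

    scene_stack: (n_scenes, rows, cols) reflectance array for one flooding cycle.
    scene_dates: (n_scenes,) array of numpy datetime64 acquisition dates.
    Each flood mask is weighted by the number of days it represents (a simplification).
    """
    masks = flood_mask(scene_stack, threshold).astype(float)
    days = np.diff(scene_dates).astype('timedelta64[D]').astype(float)
    days = np.append(days, days.mean())          # weight for the final scene
    return np.tensordot(days, masks, axes=1)     # (rows, cols) days flooded
```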
(This article belongs to the Special Issue What can Remote Sensing Do for the Conservation of Wetlands?)
Show Figures
Figure 1. Location of (a) Andalusia in Europe and (b) Doñana in Andalusia at the mouth of the Guadalquivir River; (c) the area covered by Doñana natural marshes within the protected area of Doñana National and Natural parks. Numbers indicate the main tributary rivers and ravines (1. Arroyo de la Rocina; 2. Arroyo del Partido; 3. Encauzamiento del Río Guadiamar). Background: false color composite of a 2007 Landsat TM image.
Figure 2. Different flooding features in Doñana marshes showing the spatio-temporal variability found in these wetlands; the flooded picture shows shallow inundation with variable plant cover and turbidity.
Figure 3. Flowchart of the methodology followed in the study.
Figure 4. (a) Overlay of flooding masks for the wet cycle in 2003–2004 (total rainfall = 775 mm); (b) calculated hydroperiod in number of days for the same cycle. The white tack indicates the location of the N28 limnimetric scale.
Figure 5. Number of available flooding masks per flooding cycle, with Gini coefficients (solid line, high values indicating uneven distribution) and cycle range values (dotted line, time span between the earliest and latest masks); asterisks indicate cycles discarded from the time-series analysis.
Figure 6. Mean marsh hydroperiod versus total accumulated rainfall per flooding cycle; asterisks indicate discarded cycles. Rainfall data available since 1978.
Figure 7. (a) Mean hydroperiod for Doñana marshes (1974–2014); (b) hydroperiod anomaly for the 2007–2008 flooding cycle, both expressed in days per year.
Figure 8. Theil–Sen slope in days/year for the hydroperiod time series (1974–2014): red indicates a consistent, significant decreasing trend, blue the opposite, yellowish pixels slight but significant trends, and transparent pixels non-significant trends. Label 1 locates Vera Sur.
845 KiB  
Article
Local Knowledge and Professional Background Have a Minimal Impact on Volunteer Citizen Science Performance in a Land-Cover Classification Task
by Carl Salk, Tobias Sturn, Linda See and Steffen Fritz
Remote Sens. 2016, 8(9), 774; https://doi.org/10.3390/rs8090774 - 20 Sep 2016
Cited by 13 | Viewed by 7082
Abstract
The idea that closer things are more related than distant things, known as ‘Tobler’s first law of geography’, is fundamental to understanding many spatial processes. If this concept applies to volunteered geographic information (VGI), it could help to efficiently allocate tasks in citizen [...] Read more.
The idea that closer things are more related than distant things, known as ‘Tobler’s first law of geography’, is fundamental to understanding many spatial processes. If this concept applies to volunteered geographic information (VGI), it could help to efficiently allocate tasks in citizen science campaigns and help to improve the overall quality of collected data. In this paper, we use classifications of satellite imagery by volunteers from around the world to test whether local familiarity with landscapes helps their performance. Our results show that volunteers identify cropland slightly better within their home country, and do slightly worse as a function of linear distance between their home and the location represented in an image. Volunteers with a professional background in remote sensing or land cover did no better than the general population at this task, but they did not show the decline with distance that was seen among other participants. Even in a landscape where pasture is easily confused for cropland, regional residents demonstrated no advantage. Where we did find evidence for local knowledge aiding classification performance, the realized impact of this effect was tiny. Rather, the inherent difficulty of a task is a much more important predictor of volunteer performance. These findings suggest that, at least for simple tasks, the geographical origin of VGI volunteers has little impact on their ability to complete image classifications. Full article
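A sketch of the distance-versus-performance check described above, assuming per-rating records with volunteer home coordinates, image coordinates, and a flag for agreement with the majority vote. The distance-bin edges and variable names are illustrative, not the authors' analysis settings.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * np.arcsin(np.sqrt(a))

def agreement_by_distance(agreement, dist_km, edges=(0, 500, 2000, 5000, 20000)):
    """Mean rate of agreement with the majority rating within distance-from-home bins."""
    agreement = np.asarray(agreement, dtype=float)   # 1 = agrees with majority, 0 = not
    bins = np.digitize(dist_km, edges)
    return {edges[int(b) - 1]: agreement[bins == b].mean()
            for b in np.unique(bins) if 0 < b < len(edges)}
```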
(This article belongs to the Special Issue Citizen Science and Earth Observation)
Show Figures
Figure 1. Distribution of players of the Cropland Capture game among the top 10 countries with the most participants (players who rated at least 1000 images); an additional 105 players from 53 other countries were included in the analyses.
Figure 2. An example of pasture land in an image from the Cropland Capture game: (A) the image as seen by players, who overwhelmingly classified it as cropland; (B) a ground-level view of the same parcel in Google Street View showing that the field is pasture, not cropland.
Figure 3. Predicted probability of agreement with the majority rating of an image: (A) as a function of great-circle distance from home and home region ('western': North America, Europe, and Oceania; 'non-western': Asia, Africa, and Latin America); (B) as a function of self-reported profession (RS/LC = remote sensing/land cover; differences among groups are non-significant); (C) as a function of distance from home and professional background (specialist vs. non-specialist), with ±2 standard error confidence intervals. The narrow range of the logit-scaled y-axes indicates the low explanatory power of the models despite their statistical significance.
3978 KiB  
Article
Improved Detection of Human Respiration Using Data Fusion Based on a Multistatic UWB Radar
by Hao Lv, Fugui Qi, Yang Zhang, Teng Jiao, Fulai Liang, Zhao Li and Jianqi Wang
Remote Sens. 2016, 8(9), 773; https://doi.org/10.3390/rs8090773 - 20 Sep 2016
Cited by 32 | Viewed by 6046
Abstract
This paper investigated the feasibility for improved detection of human respiration using data fusion based on a multistatic ultra-wideband (UWB) radar. UWB-radar-based respiration detection is an emerging technology that has great promise in practice. It can be applied to remotely sense the presence [...] Read more.
This paper investigated the feasibility of improved detection of human respiration using data fusion based on a multistatic ultra-wideband (UWB) radar. UWB-radar-based respiration detection is an emerging technology that has great promise in practice. It can be applied to remotely sense the presence of a human target for through-wall surveillance, post-earthquake search and rescue, etc. In these applications, a human target’s position and posture are not known a priori. Uncertainty in these two factors results in a body orientation issue for UWB radar, namely that the human target’s thorax is not always facing the radar. Thus, the radial component of the thorax motion due to respiration decreases and the respiratory motion response contained in UWB radar echoes is too weak to be detected. To cope with this issue, this paper used the multisensory information provided by the multistatic UWB radar, which took the form of impulse radios and comprised one transmitting and four separated receiving antennas. An adaptive Kalman filtering algorithm was then designed to fuse the UWB echo data from all the receiving channels to detect the respiratory-motion response contained in those data. In the experiment, a volunteer’s respiration was correctly detected when he curled upon a camp bed behind a brick wall. Under the same scenario, the volunteer’s respiration could not be reliably detected from the radar’s single transmitting-receiving channels without data fusion, using conventional algorithms such as an adaptive line enhancer and single-channel Kalman filtering. Moreover, the performance of the data fusion algorithm was experimentally investigated with different channel combinations and antenna deployments. The experimental results show that the body orientation issue in human respiration detection via UWB radar can be dealt with well by the multistatic UWB radar and the Kalman-filter-based data fusion, which can be applied to improve the performance of UWB radar in real applications. Full article
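A minimal sketch of Kalman-filter-based fusion of several receiving channels into one respiration waveform, assuming each channel provides a noisy observation of the same slowly varying displacement. This is a scalar random-walk filter with illustrative noise settings, not the authors' adaptive formulation.

```python
import numpy as np

def kalman_fuse(channels, q=1e-4, r=1e-2):
    """Fuse waveforms from several receiving channels (n_channels x n_samples).

    A scalar random-walk state models the slowly varying respiration displacement;
    every channel sample is treated as an independent noisy measurement of it.
    q and r are illustrative process/measurement noise variances.
    """
    channels = np.asarray(channels, dtype=float)
    n_samples = channels.shape[1]
    x, p = 0.0, 1.0
    fused = np.empty(n_samples)
    for k in range(n_samples):
        p += q                                   # predict step (random-walk model)
        for z in channels[:, k]:                 # sequential measurement updates
            gain = p / (p + r)
            x += gain * (z - x)
            p *= (1.0 - gain)
        fused[k] = x
    return fused
```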
Show Figures
Figure 1. Systematic diagram of the multistatic UWB radar.
Figure 2. Processing flow of the data fusion algorithm based on Kalman filtering.
Figure 3. Schematic diagram of the Kalman filtering.
Figure 4. Experimental setup: the multistatic UWB radar's antennas were deployed as a line array, with the receiving antennas placed close to the transmitting one.
Figure 5. Measurement of the volunteer curled upon a camp bed with his feet pointing to the transmitting antenna, positioned approximately at (6 m, −20°) from the transmitting antenna.
Figure 6. Measurement of the artificial breathing object: (a) photo of the object; (b) the receiving antennas separated from the transmitting one; (c) the transmitting antenna moved away from the receiving ones.
Figure 7. UWB data after preprocessing; rows from top to bottom correspond to channels 1–4, and the blue broken line indicates waveforms picked manually using a priori range information of the target.
Figure 8. Four waveforms detected by target association and the data fusion result (a) and their power spectra (b) for the volunteer at (4 m, 0°); rows show channels 1–4 and the detected respiration, fused either by the proposed Kalman filtering or by simple averaging of the four waveforms.
Figure 9. As Figure 8, for the volunteer at (6 m, 0°).
Figure 10. As Figure 8, for the volunteer at (6 m, −20°).
Figure 11. Results processed by the reference algorithm in [6] for the volunteer at (4 m, 0°) (a), (6 m, 0°) (b), and (6 m, −20°) (c); rows correspond to channels 1–4.
Figure 12. Results processed by single-channel Kalman filtering for the same three positions (a–c); rows correspond to channels 1–4.
Figure 13. Power spectrum of the detected respiration obtained by fusing channels 1, 3 and 4 (blue) and all channels (red); the rectangular shadow denotes the band expansion in the BRIR. Data measured with the volunteer at (4 m, 0°).
Figure 14. Detection results of the artificial breathing object (left column) and corresponding power spectra (right column) for different antenna deployments: (a,b) transmitting and receiving antennas close together; (c,d) antennas placed separately; (e,f) transmitting antenna moved away from the receiving ones.
2880 KiB  
Article
Quarter-Century Offshore Winds from SSM/I and WRF in the North Sea and South China Sea
by Charlotte Bay Hasager, Poul Astrup, Rong Zhu, Rui Chang, Merete Badger and Andrea N. Hahmann
Remote Sens. 2016, 8(9), 769; https://doi.org/10.3390/rs8090769 - 20 Sep 2016
Cited by 13 | Viewed by 7521
Abstract
We study the wind climate and its long-term variability in the North Sea and South China Sea, areas relevant for offshore wind energy development, using satellite-based wind data, because very few reliable long-term in-situ sea surface wind observations are available. The Special Sensor [...] Read more.
We study the wind climate and its long-term variability in the North Sea and South China Sea, areas relevant for offshore wind energy development, using satellite-based wind data, because very few reliable long-term in-situ sea surface wind observations are available. The Special Sensor Microwave Imager (SSM/I) ocean winds extrapolated from 10 m to 100 m using the Charnock relationship and the logarithmic profile method are compared to Weather Research and Forecasting (WRF) model results in both seas and to in-situ observations in the North Sea. The mean wind speeds from SSM/I and WRF differ by only 0.1 m/s at Fino1 in the North Sea, while west of Hainan in the South China Sea the difference is 1.0 m/s. Linear regression between SSM/I and WRF winds at 100 m shows squared correlation coefficients of 0.75 and 0.67, standard deviations of 1.67 m/s and 1.41 m/s, and mean differences of −0.12 m/s and 0.83 m/s for Fino1 and Hainan, respectively. The WRF-derived winds overestimate the values in the South China Sea. The inter-annual wind speed variability is estimated as 4.6% and 4.4% based on SSM/I at Fino1 and Hainan, respectively. We find significant changes in the seasonal wind pattern at Fino1, with springtime winds arriving one month earlier from 1988 to 2013 and higher winds in June; no yearly trend in wind speed is observed in either sea. Full article
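A sketch of extrapolating 10 m winds to 100 m with the logarithmic profile and the Charnock relation, iterating on friction velocity. The constants are standard textbook values and neutral stability is assumed; this mirrors the approach named in the abstract rather than the authors' exact implementation.

```python
import numpy as np

KAPPA, G, CHARNOCK = 0.4, 9.81, 0.0144   # von Karman constant, gravity, Charnock parameter

def extrapolate_wind(u10, z_from=10.0, z_to=100.0, n_iter=20):
    """Extrapolate neutral 10 m winds to 100 m with the log profile and Charnock roughness.

    Iterates the fixed point u_* = kappa * u10 / ln(z_from / z0), z0 = CHARNOCK * u_*^2 / g.
    """
    u10 = np.asarray(u10, dtype=float)
    z0 = np.full_like(u10, 1.0e-4)               # first-guess roughness length (m)
    for _ in range(n_iter):
        u_star = KAPPA * u10 / np.log(z_from / z0)
        z0 = CHARNOCK * u_star ** 2 / G
    return u_star / KAPPA * np.log(z_to / z0)

print(extrapolate_wind([5.0, 10.0, 15.0]))       # 100 m winds, roughly 20% above the 10 m values
```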
(This article belongs to the Special Issue Remote Sensing of Wind Energy)
Show Figures
Figure 1. Maps of the study sites and the SSM/I averaging areas (red boxes): Fino1 in the North Sea (left) and Hainan in the South China Sea (right).
Figure 2. Number of validated SSM/I observations per year at Fino1 (left) and Hainan (right).
Figure 3. Total number of valid SSM/I measurements at each registered time of day (6-min bins) from 1988 to 2013 for Fino1 (left) and Hainan (right).
Figure 4. WRF model nested domains for the North Sea (left) and South China Sea (right).
Figure 5. (Top) SSM/I (left) and WRF (right) weekly-averaged wind speed versus in-situ wind speed at 100 m for Fino1, including linear regression results; (middle) SSM/I versus WRF weekly-averaged wind speed at 10 m for Fino1 (left) and Hainan (right); (bottom) the same comparison at 100 m.
Figure 6. Long-term averaged wind profiles with and without stability correction at Fino1 (left) and Hainan (right).
Figure 7. Average monthly SSM/I and WRF wind speeds at 100 m for 1988–2013 at Fino1 (left) and Hainan (right); WRF at Hainan covers 1989–2013.
Figure 8. SSM/I- and WRF-derived wind speed distributions and related Weibull curves at 100 m for Fino1 (1988–2013) and Hainan (1988–2013, WRF 1989–2013); for Fino1, meteorological observations for 2004–2013 are also included.
Figure 9. Diurnal wind speed from SSM/I averaged per registered time point (6-min bins) over 1988–2013 for Fino1 (left), and hourly WRF values for Hainan (right), without 1988 for WRF.
Figure 10. Monthly averages of SSM/I wind speeds at 100 m by year, 1988–2013, for Fino1 (left) and Hainan (right); dashed lines indicate the calculated trends.
Figure 11. Average and 95th percentile wind speed trends from SSM/I at 100 m, 1988–2013, for Fino1 (top left) and Hainan (top right), with significance tests below; dashed lines indicate the calculated significance levels.
Figure 12. Yearly averages and 95th percentile wind speed trends of SSM/I at 100 m at Fino1 (1988–2013) together with in-situ observations (left), and at Hainan (right); dashed lines indicate the calculated trends.
Figure 13. LINCOM model results of sea-surface roughness as a function of 10 m wind speed and fetch (log10 scale), overlaid with contour lines of the corresponding Charnock coefficient.
7255 KiB  
Article
Analysis of MABEL Bathymetry in Keweenaw Bay and Implications for ICESat-2 ATLAS
by Nicholas A. Forfinski-Sarkozi and Christopher E. Parrish
Remote Sens. 2016, 8(9), 772; https://doi.org/10.3390/rs8090772 - 19 Sep 2016
Cited by 64 | Viewed by 9165
Abstract
In 2018, the National Aeronautics and Space Administration (NASA) is scheduled to launch the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), with a new six-beam, green-wavelength, photon-counting lidar system, Advanced Topographic Laser Altimeter System (ATLAS). The primary objectives of the ICESat-2 mission are [...] Read more.
In 2018, the National Aeronautics and Space Administration (NASA) is scheduled to launch the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), with a new six-beam, green-wavelength, photon-counting lidar system, the Advanced Topographic Laser Altimeter System (ATLAS). The primary objectives of the ICESat-2 mission are to measure ice-sheet elevations, sea-ice thickness, and global biomass. However, if bathymetry can be reliably retrieved from ATLAS data, this could assist in addressing a key data need in many coastal and inland water body areas, including areas that are poorly mapped and/or difficult to access. Additionally, ATLAS-derived bathymetry could be used to constrain bathymetry derived from complementary data, such as passive multispectral imagery and synthetic aperture radar (SAR). As an important first step in evaluating the ability to map bathymetry from ATLAS, this study involves a detailed assessment of bathymetry from the Multiple Altimeter Beam Experimental Lidar (MABEL), NASA’s airborne ICESat-2 simulator, flown on the Earth Resources 2 (ER-2) high-altitude aircraft. An interactive web interface, MABEL Viewer, was developed and used to identify bottom returns in Keweenaw Bay, Lake Superior. After applying corrections for refraction and channel-specific elevation biases, MABEL bathymetry was compared against National Oceanic and Atmospheric Administration (NOAA) data acquired two years earlier. The results indicate that MABEL reliably detected bathymetry in depths of up to 8 m, with a root mean square (RMS) difference of 0.7 m with respect to the reference data. Additionally, a version of the lidar equation was developed for predicting bottom-return signal levels in MABEL and tested using the Keweenaw Bay data. Future work will entail extending these results to ATLAS as the technical specifications of the sensor become available. Full article
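A sketch of the refraction correction mentioned in the abstract, assuming near-nadir geometry so the correction reduces to dividing the apparent depth by the refractive index of water (about 1.34 at 532 nm). The index value, bias handling, and names below are illustrative, not the paper's exact workflow.

```python
N_WATER = 1.34   # approximate refractive index of water at 532 nm (assumption)

def refraction_corrected_bottom(surface_elev, bottom_photon_elev, channel_bias=0.0):
    """Correct a bottom photon elevation for refraction under near-nadir geometry.

    The apparent depth below the water surface is too deep by roughly the
    refractive index; the channel-specific surface bias is removed first.
    """
    apparent_depth = surface_elev - (bottom_photon_elev - channel_bias)
    true_depth = apparent_depth / N_WATER
    return surface_elev - true_depth   # refraction-corrected bottom elevation
```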
Show Figures
Figure 1. MABEL Viewer prototype interface for interactive identification of bottom returns from the Multiple Altimeter Beam Experimental Lidar (MABEL).
Figure 2. The study site at the southern end of Keweenaw Bay, Lake Superior, at the eastern base of Michigan's Keweenaw Peninsula.
Figure 3. Thirteen green channels configured for the MABEL "Transit to KPMD" mission; the graph shows across- and along-track distances for each channel at a nominal operational height of 20,000 m above ground level.
Figure 4. Characteristic water-surface and bottom profiles clearly discernable in the MABEL data; delta time is reversed so the photon elevations align with the 1-m NAIP imagery above, and ground and vegetation returns are also visible. The overlain grid shows WGS84 UTM zone 16N coordinates.
Figure 5. Surface returns (blue dots) and bottom returns (red dots) for each green channel.
Figure 6. Workflow for generating bathymetry from MABEL data.
Figure 7. Channel-specific biases in mean surface elevations, defined as the difference between the average surface photon elevation and the actual water level modeled from a Great Lakes Coastal Forecasting System (GLCFS) point query; vertical bars show the range of values per channel.
Figure 8. GLCFS water-level data and a WGS84–IGLD85 separation model generated in VDatum were used to reduce refraction-corrected depths to the WGS84 (G1674) ellipsoid; the datum separation used was 35.38 m, the average along the MABEL track in the study area.
Figure 9. Vertical-control methodology used to reduce raw bottom photon elevations to WGS84 (G1674), combining traditional hydrographic water-level corrections with modern datum-separation models.
Figure 10. Predicted numbers of photoelectrons per pulse versus depth for each energy level at the Keweenaw Bay project site.
Figure 11. Left: a portion of the channel 6 depth profile comparing observed photon returns with the reference Laser Airborne Depth Sounder (LADS) MkII dataset; right: the corresponding average expected and observed numbers of photoelectrons for each depth range.
Figure 12. Average observed and expected numbers of photoelectrons for the (a) low-energy and (b) high-energy channels, showing good qualitative agreement in the 1–8 m depth range but anomalous near-shore effects in the 0–1 m range, which is typically challenging for bathymetric lidar mapping.
Figure 13. The distribution of differences between the ellipsoid heights of detected bottom photons and the nearest reference depth has an RMS error of 0.74 m.
Figure 14. Spatial distribution of ellipsoid-height differences at the western edge of the project area over 1-m NAIP imagery; blue (negative) values are MABEL depths deeper than the reference, red (positive) values shallower. Clusters such as the one labeled 'A' are consistent with sand bar migration or the horizontal positioning error of a stationary sand bar.
2182 KiB  
Article
Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar
by Hao Tang, Anu Swatantran, Terence Barrett, Phil DeCola and Ralph Dubayah
Remote Sens. 2016, 8(9), 771; https://doi.org/10.3390/rs8090771 - 19 Sep 2016
Cited by 46 | Viewed by 7627
Abstract
Airborne single-photon lidar (SPL) is a new technology that holds considerable potential for forest structure and carbon monitoring at large spatial scales because it acquires 3D measurements of vegetation faster and more efficiently than conventional lidar instruments. However, SPL instruments use green wavelength [...] Read more.
Airborne single-photon lidar (SPL) is a new technology that holds considerable potential for forest structure and carbon monitoring at large spatial scales because it acquires 3D measurements of vegetation faster and more efficiently than conventional lidar instruments. However, SPL instruments use green wavelength (532 nm) lasers, which are sensitive to background solar noise, and therefore SPL point clouds require more elaborate noise filtering than other lidar instruments to determine canopy heights, particularly in daytime acquisitions. Histogram-based aggregation is a commonly used approach for removing noise from photon counting lidar data, but it reduces the resolution of the dataset. Here we present an alternate voxel-based spatial filtering method that filters noise points efficiently while largely preserving the spatial integrity of SPL data. We develop and test our algorithms on an experimental SPL dataset acquired over Garrett County in Maryland, USA. We then compare canopy attributes retrieved using our new algorithm with those obtained from the conventional histogram binning approach. Our results show that canopy heights derived using the new algorithm have a strong agreement with field-measured heights (r2 = 0.69, bias = 0.42 m, RMSE = 4.85 m) and discrete return lidar heights (r2 = 0.94, bias = 1.07 m, RMSE = 2.42 m). Results are consistently better than height accuracies from the histogram method (field data: r2 = 0.59, bias = 0.00 m, RMSE = 6.25 m; DRL: r2 = 0.78, bias = −0.06 m and RMSE = 4.88 m). Furthermore, we find that the spatial-filtering method retains fine-scale canopy structure detail and has lower errors over steep slopes. We therefore believe that automated spatial filtering algorithms such as the one presented here can support large-scale, canopy structure mapping from airborne SPL data. Full article
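A minimal sketch of voxel-based noise filtering for a photon point cloud: photons are binned into 3D voxels and only photons in voxels whose occupancy reaches a noise threshold are kept. The voxel dimensions and threshold below are illustrative, not the paper's calibrated values.

```python
import numpy as np

def voxel_noise_filter(xyz, voxel_xy=5.0, voxel_z=1.0, min_count=3):
    """Keep photons whose voxel contains at least `min_count` photons.

    xyz: (n, 3) array of photon coordinates (easting, northing, elevation) in metres.
    Voxel dimensions and the occupancy threshold are illustrative values.
    """
    scale = np.array([voxel_xy, voxel_xy, voxel_z])
    keys = np.floor(xyz / scale).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    return xyz[counts[inverse] >= min_count]
```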
(This article belongs to the Special Issue Airborne Laser Scanning)
Show Figures
Figure 1. Overview of single-photon lidar (SPL) data acquired from the High-Resolution Quantum Lidar System (HRQLS): (a) the conical scanning mechanism with forward and backward scans; (b) point clouds from an individual scan; (c) a cross-sectional profile composited from multiple scans; (d) 3D point clouds including photons from canopy, terrain and solar noise.
Figure 2. Flowchart of deriving canopy height from SPL data using two independent methods: the histogram-based method and the spatial-filtering method.
Figure 3. (a) Comparison of field heights and canopy heights (p99) derived from SPL data using both methods; (b) height differences between field data and SPL as a function of average plot slope (histogram method: ΔH = 0.43 × Slope − 4.05, r² = 0.28; spatial-filtering method: ΔH = 0.26 × Slope − 2.07, r² = 0.18; both p < 0.01), with symbols distinguishing deciduous broadleaf, coniferous and mixed forests.
Figure 4. (a) Comparison of canopy heights from discrete return lidar (DRL) and SPL using both methods; (b) height differences between DRL and SPL as a function of average plot slope (histogram method: ΔH = 0.21 × Slope − 1.88, r² = 0.11, p < 0.01; no significant relationship for the spatial-filtering method, p = 0.77).
Figure 5. Plot-level example comparing the two methods on a slope of about 30° (DRL canopy height 33.87 m, field height 30.3 m): raw level 1 HRQLS data with noise above and below the canopy-terrain layer (left); the pseudo-waveform from the histogram method with identified canopy top (616.15 m), ground peak (574.35 m) and canopy height (42.3 m) (center); and the spatial-filtering result with ground points in blue, canopy points in red and a p99 canopy height of 35.46 m (right).
Figure 6. Effect of voxel size on noise removal at the individual-tree level for different horizontal (xy) and vertical (z) resolutions (a–c): all three voxel sizes identify most noise photons above the canopy and below ground, but an extra-fine voxel may miss the tops of individual trees (a) and an extra-coarse horizontal voxel may miss a small tree in open space (c).
1928 KiB  
Article
Exploratory Analysis of Dengue Fever Niche Variables within the Río Magdalena Watershed
by Austin Stanforth, Max J. Moreno-Madriñán and Jeffrey Ashby
Remote Sens. 2016, 8(9), 770; https://doi.org/10.3390/rs8090770 - 19 Sep 2016
Cited by 12 | Viewed by 7542
Abstract
Previous research on Dengue Fever has involved laboratory tests or study areas with less diverse temperature and elevation ranges than are found in Colombia; therefore, preliminary research was needed to identify location-specific attributes of Dengue Fever transmission. Environmental variables derived from the [...] Read more.
Previous research on Dengue Fever has involved laboratory tests or study areas with less diverse temperature and elevation ranges than are found in Colombia; therefore, preliminary research was needed to identify location-specific attributes of Dengue Fever transmission. Environmental variables derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Tropical Rainfall Measuring Mission (TRMM) satellites were combined with population variables and statistically compared against reported cases of Dengue Fever in the Río Magdalena watershed, Colombia. Three factor analysis models were investigated to analyze variable patterns: a population model, a population density model, and an empirical Bayesian estimation model. Results identified varying levels of Dengue Fever transmission risk and environmental characteristics that support, and advance, the research literature. Multiple temperature metrics, elevation, and vegetation composition were among the variables that contributed most to identifying potential future outbreak locations. Full article
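A sketch of the component-loading step implied by Figure 3: environmental variables are standardized and a principal component analysis returns loadings that can be inspected against reported case rates. The variable names are hypothetical and scikit-learn is assumed available; this is not the authors' exact model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def component_loadings(X, n_components=3):
    """Standardize predictors and return PCA loadings and explained variance ratios.

    X: (n_municipalities, n_variables) matrix of niche variables, e.g. columns for
    land surface temperature, NDVI, TRMM precipitation, elevation and population
    density (hypothetical column set).
    """
    Xs = StandardScaler().fit_transform(np.asarray(X, dtype=float))
    pca = PCA(n_components=n_components).fit(Xs)
    return pca.components_, pca.explained_variance_ratio_
```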
(This article belongs to the Special Issue Multi-Sensor and Multi-Data Integration in Remote Sensing)
Show Figures
Figure 1. Study site identification of the Magdalena Watershed in Colombia, showing its geographical location, elevation diversity, and larger populated urban environments.
Figure 2. Graphic depicting stages of analysis.
Figure 3. Model output of principal component loadings, with the strongest loading bolded for each variable.
Figure 4. Graphical comparison of the population density and EBE models against the reported cases per 10,000 population.
9699 KiB  
Article
High-Resolution NDVI from Planet’s Constellation of Earth Observing Nano-Satellites: A New Data Source for Precision Agriculture
by Rasmus Houborg and Matthew F. McCabe
Remote Sens. 2016, 8(9), 768; https://doi.org/10.3390/rs8090768 - 19 Sep 2016
Cited by 150 | Viewed by 21900
Abstract
Planet Labs (“Planet”) operate the largest fleet of active nano-satellites in orbit, offering an unprecedented monitoring capacity of daily and global RGB image capture at 3–5 m resolution. However, limitations in spectral resolution and lack of accurate radiometric sensor calibration impact the utility [...] Read more.
Planet Labs (“Planet”) operate the largest fleet of active nano-satellites in orbit, offering an unprecedented monitoring capacity of daily and global RGB image capture at 3–5 m resolution. However, limitations in spectral resolution and lack of accurate radiometric sensor calibration impact the utility of this rich information source. In this study, Planet’s RGB imagery was translated into a Normalized Difference Vegetation Index (NDVI): a common metric for vegetation growth and condition. Our framework employs a data mining approach to build a set of rule-based regression models that relate RGB data to atmospherically corrected Landsat-8 NDVI. The approach was evaluated over a desert agricultural landscape in Saudi Arabia where the use of near-coincident (within five days) Planet and Landsat-8 acquisitions in the training of the regression models resulted in NDVI predictabilities with an r2 of approximately 0.97 and a Mean Absolute Deviation (MAD) on the order of 0.014 (~9%). The MAD increased to 0.021 (~14%) when the Landsat NDVI training image was further away (i.e., 11–16 days) from the corrected Planet image. In these cases, the use of MODIS observations to inform on the change in NDVI occurring between overpasses was shown to significantly improve prediction accuracies. MAD levels ranged from 0.002 to 0.011 (3.9% to 9.1%) for the best performing 80% of the data. The technique is generic and extendable to any region of interest, increasing the utility of Planet’s dense time-series of RGB imagery. Full article
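The Cubist models used in the paper are rule-based piecewise regressions; the sketch below uses a decision-tree regressor from scikit-learn as a stand-in to show the shape of the workflow (training against coincident Landsat-8 NDVI, then predicting from Planet RGB). It is an analogue under those assumptions, not the authors' Cubist implementation, and the parameter choices are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def train_rgb_to_ndvi(planet_rgb, landsat_ndvi):
    """Fit a piecewise regression from Planet RGB reflectance to Landsat-8 NDVI.

    planet_rgb: (n_pixels, 3) RGB values aggregated to the Landsat grid.
    landsat_ndvi: (n_pixels,) atmospherically corrected NDVI training targets.
    """
    model = DecisionTreeRegressor(min_samples_leaf=50)
    return model.fit(planet_rgb, landsat_ndvi)

def predict_ndvi(model, planet_rgb_fullres):
    """Apply the trained model to full-resolution (3 m) Planet RGB pixels."""
    return np.clip(model.predict(planet_rgb_fullres), -1.0, 1.0)
```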
Show Figures
Figure 1. Overview of the multi-sensor processing framework used to convert Planet (P) RGB data into Landsat-8 (L) consistent estimates of atmospherically corrected NDVI; MODIS 250 m data correct for changes in cover conditions when the Planet and Landsat acquisitions are more than about 4 days apart (details in Section 2.3).
Figure 2. (a) True color RGB imagery from Landsat-8 and Planet acquisitions two days apart, with an insert demonstrating the impact of pivot irrigation on the RGB signal; (b) intercomparison of Landsat-8 NDVI and Planet NDVI computed from RGB imagery using Cubist regressions trained on Landsat-8 data from DOY 320. White arrows highlight fields with significant change between acquisitions, and zoomed images show the enhanced spatial detail of Planet data.
Figure 3. (a) Density scatter plot comparing Landsat-8 and Planet NDVI, with Planet NDVI at DOY 318 estimated from regression models trained on Landsat NDVI from DOY 320; (b) normalized frequency distributions of both products. Planet data were resampled to 30 m for a scale-consistent comparison.
Figure 4. (a) Landsat-8 NDVI on DOY 304 (top) and 320 (bottom); (b) interpolated MODIS NDVI on DOY 304 (top) and 318 (bottom); (c) Planet NDVI on DOY 318 predicted from Landsat-8 NDVI from DOY 304, demonstrating the utility of higher-frequency MODIS data when near-coincident Landsat and Planet acquisitions do not exist.
Figure 5. (a) Density scatter plots comparing Landsat-8 and Planet NDVI, with Planet NDVI on DOY 318 estimated from models trained on Landsat NDVI from DOY 304, without (left) and with (right) MODIS information on the change in cover conditions between overpasses; (b) normalized frequency distributions. Planet data were resampled to 30 m.
Figure 6. Time series of (a) 250 m MODIS, (b) 30 m Landsat-8, and (c,d) 3 m Planet NDVI over a subset of the farm; in (c) NDVI was predicted from a near-coincident (0–5 days) Landsat acquisition, while in (d) the Landsat training image is 11–16 days from the Planet acquisition and MODIS NDVI informs on the intervening change.
6526 KiB  
Article
Evaluation of Single Photon and Geiger Mode Lidar for the 3D Elevation Program
by Jason M. Stoker, Qassim A. Abdullah, Amar Nayegandhi and Jayna Winehouse
Remote Sens. 2016, 8(9), 767; https://doi.org/10.3390/rs8090767 - 19 Sep 2016
Cited by 71 | Viewed by 11852
Abstract
Data acquired by Harris Corporation’s (Melbourne, FL, USA) Geiger-mode IntelliEarth™ sensor and Sigma Space Corporation’s (Lanham-Seabrook, MD, USA) Single Photon HRQLS sensor were evaluated and compared to accepted 3D Elevation Program (3DEP) data and survey ground control to assess the suitability of these [...] Read more.
Data acquired by Harris Corporation’s (Melbourne, FL, USA) Geiger-mode IntelliEarth™ sensor and Sigma Space Corporation’s (Lanham-Seabrook, MD, USA) Single Photon HRQLS sensor were evaluated and compared to accepted 3D Elevation Program (3DEP) data and survey ground control to assess the suitability of these new technologies for the 3DEP. Although these sensors are not currently able to collect data that meet the USGS lidar base specification, this is partly because the specification was written specifically for linear-mode systems. With little effort on the part of the manufacturers of the new lidar systems and the USGS Lidar specifications team, data from these systems could soon serve the 3DEP program and its users. Many of the shortcomings noted in this study have reportedly been corrected or improved upon in the next-generation sensors. Full article
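A sketch of the checkpoint-based vertical accuracy computation behind such an assessment: the DEM is sampled at surveyed checkpoints and non-vegetated vertical accuracy (NVA) is commonly reported as RMSE x 1.96 at the 95% confidence level. The 1.96 factor and names follow common accuracy-assessment practice and are assumptions here, not values taken from the paper.

```python
import numpy as np

def vertical_accuracy(dem_at_checkpoints, surveyed_elevations):
    """Mean error, RMSE and NVA (RMSE x 1.96, ~95% confidence) at survey checkpoints."""
    err = (np.asarray(dem_at_checkpoints, dtype=float)
           - np.asarray(surveyed_elevations, dtype=float))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return {"mean_error": float(err.mean()), "rmse": rmse, "nva_95": 1.96 * rmse}
```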
(This article belongs to the Special Issue Airborne Laser Scanning)
Show Figures
Figure 1. Areas of interest used for processing and assessing bare earth.
Figure 2. Location of the leaf-on linear mode collection (LMWptLO15), IntelliEarth™ daytime collection (GMHarLO15_7.5kDT), and IntelliEarth™ sensor 2 leaf-off collection (GMHarLF15_26k).
Figure 3. Location of checkpoints used for vertical accuracy assessments; plus signs are NVA checkpoints, triangles are VVA checkpoints.
Figure 4. Example of range walk.
Figure 5. Cross section used for comparisons, overlaid on imagery (top), with sample profiles of HRQLS leaf-on, linear-mode leaf-on, linear-mode leaf-off, and IntelliEarth™ leaf-on lidar.
Figure 6. Example of an intensity image from a linear-mode system (top) and the IntelliEarth™ system (bottom).
Figure 7. Correlations between Dewberry's and Woolpert's IntelliEarth™ DEM differences (top) and HRQLS DEM differences (bottom) from accepted 3DEP lidar; r = 0.74 and 0.55, respectively.
4757 KiB  
Article
Estimating Ladder Fuels: A New Approach Combining Field Photography with LiDAR
by Heather A. Kramer, Brandon M. Collins, Frank K. Lake, Marek K. Jakubowski, Scott L. Stephens and Maggi Kelly
Remote Sens. 2016, 8(9), 766; https://doi.org/10.3390/rs8090766 - 17 Sep 2016
Cited by 27 | Viewed by 8155
Abstract
Forests historically associated with frequent fire have changed dramatically due to fire suppression and past harvesting over the last century. The buildup of ladder fuels, which carry fire from the surface of the forest floor to tree crowns, is one of the critical [...] Read more.
Forests historically associated with frequent fire have changed dramatically due to fire suppression and past harvesting over the last century. The buildup of ladder fuels, which carry fire from the surface of the forest floor to tree crowns, is one of the critical changes, and it has contributed to uncharacteristically large and severe fires. The abundance of ladder fuels makes it difficult to return these forests to their natural fire regime or to meet management objectives. Despite the importance of ladder fuels, methods for quantifying them are limited and imprecise. LiDAR (Light Detection and Ranging), a form of active remote sensing, is able to estimate many aspects of forest structure across a landscape. This study investigates a new method for quantifying ladder fuel in the field (using photographs with a calibration banner) and remotely (using LiDAR data). We apply these new techniques in the Klamath Mountains of Northern California to predict ladder fuel levels across the study area. Our results demonstrate a new utility of LiDAR data to identify fire hazard and areas in need of fuels reduction. Full article
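A sketch of the kind of ladder-fuel cover metric described above: the fraction of height-normalized lidar returns falling in the 1–4 m band for one plot. The 1–4 m band comes from the abstract; the choice of denominator (all returns up to a maximum height) is an assumption of this sketch, not the paper's exact definition.

```python
import numpy as np

def ladder_fuel_cover(heights_above_ground, low=1.0, high=4.0, max_height=50.0):
    """Fraction of lidar returns in the 1-4 m ladder-fuel band for one plot.

    heights_above_ground: height-normalized return heights (m) clipped to the plot.
    The denominator (all returns up to max_height) is an assumption of this sketch.
    """
    h = np.asarray(heights_above_ground, dtype=float)
    h = h[(h >= 0.0) & (h <= max_height)]
    if h.size == 0:
        return np.nan
    return float(np.count_nonzero((h >= low) & (h <= high))) / h.size
```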
(This article belongs to the Special Issue Remote Sensing of Vegetation Structure and Dynamics)
Show Figures
Figure 1. Klamath River study area between Orleans and Happy Camp, CA; within the 14,323 ha LiDAR footprint, four focal areas totaling 2622 ha (6480 ac) were chosen to sample the forest.
Figure 2. Flowchart showing the stratification of plots.
Figure 3. The original and shifted plot locations with the locations of mapped tall trees relative to plot center; tree heights are shown with the shifted tree locations.
Figure 4. Examples of photos used to analyze ladder fuel cover (A,B) and their respective analysis boundaries (C,D); areas excluded from analysis are faded, and areas included are outlined in black. Both plots have low ladder fuel cover; in (B) the banner is partially obscured by tanoak leaves close to the lens, so only part of it was analyzed (D).
Figure 5. Binned cover estimates used to approximate ladder fuel cover: a canopy cover image series adapted from Forest Inventory and Analysis (FIA) protocol [70], binned into six cover classes, with the cover estimates for each image set at the bottom and the average cover per bin at the top; this diagram also assisted technicians in assigning cover estimates.
Figure 6. Pictorial representation of the 20 relative percent cover metrics calculated from the LiDAR data; each vertical bar is a height range for which relative percent cover (e.g., hits above breast height, 1.37–50 m, divided by total hits) was calculated, for both first and all returns (40 of the 54 metrics calculated).
Figure 7. Detailed workflow illustrating how plots and LiDAR were used to answer the key questions.
Figure 8. Box-and-whisker plot of the distribution of ladder fuel cover within each 1 m class across all plots, including the average ladder fuel cover between 1 and 4 m in each plot.
Figure 9. Ladder fuel cover derived from the LiDAR point cloud compared to ground-based measurements; the model is significant at p < 0.05 with an R-squared of 0.73, an RMSE of roughly 10%, and a 10-fold cross-validation error of roughly 11%. Ground-based measurements range from 7% to 91% ladder fuel cover.
Figure 10. Ladder fuel cover, as predicted from the LiDAR point cloud using Equation (1), displayed across the study area.
809 KiB  
Article
The Sensitivity of AOD Retrieval to Aerosol Type and Vertical Distribution over Land with MODIS Data
by Yerong Wu, Martin De Graaf and Massimo Menenti
Remote Sens. 2016, 8(9), 765; https://doi.org/10.3390/rs8090765 - 17 Sep 2016
Cited by 17 | Viewed by 6471
Abstract
This study evaluates the sensitivity of Aerosol Optical Depth (AOD, τ) to aerosol vertical profile and type, using the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 algorithm over land. Four experiments were performed, using different aerosol properties including 3 possible [...] Read more.
This study evaluates the sensitivity of Aerosol Optical Depth (AOD, τ) to aerosol vertical profile and type, using the Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 algorithm over land. Four experiments were performed, using different aerosol properties including 3 possible non-dust aerosol models and 14 vertical distributions. The algorithm's intrinsic uncertainty was investigated, as well as the interplay effect of aerosol vertical profile and type on the retrieval. The results show that the AOD retrieval is highly sensitive to aerosol vertical profile and type. With 4 aerosol vertical distributions, the algorithm with a fixed vertical distribution gives about 5% error in the AOD retrieval at aerosol loadings of τ ≥ 0.5. With pure aerosols (smoke and dust), the retrieval of AOD shows errors ranging from 2% to 30% for a series of vertical distributions. Errors in the aerosol type assumption in the algorithm can lead to errors of up to 8% in the AOD retrieval. The interplay effect can introduce errors of over 6% in the AOD retrieval. In addition, intrinsic algorithm errors were found, with a value of >3% when τ > 3.0. This is due to incorrect estimation of the surface reflectance. The results suggest that the MODIS algorithm can be improved by considering a realistic aerosol model and its vertical profile, and further improved by reducing the algorithm's intrinsic errors. Full article
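A small sketch of the error metric implied by the abstract and Figure 3a: the relative difference between the mean retrieved AOD and the expected (prescribed) AOD, averaged over the geometry combinations. Names and the example values are hypothetical.

```python
import numpy as np

def relative_aod_error(retrieved_aod, expected_aod):
    """Relative AOD retrieval error (%) averaged over all geometry combinations."""
    retrieved_aod = np.asarray(retrieved_aod, dtype=float)
    return 100.0 * (retrieved_aod.mean() - expected_aod) / expected_aod

# e.g. retrievals over the 1520 sun-view geometry combinations with an expected tau of 0.5
```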
Figures: (1) size distribution and phase function for the four aerosol models, with the Rayleigh phase function for comparison; (2) exponential vertical distributions for scale heights from 1 to 6 in steps of one (ExpH1–ExpH6), with total AOD 0.5 and the tail beyond 15 km cut off; (3) uncertainty of the four retrieval parameters (τ, fine ratio η, fitting error ε, and the 2.11 µm surface reflectance) as a function of η in the C6 algorithm, averaged over 1520 geometrical combinations for the moderately absorbing (Generic) model with a 2.11 µm surface reflectance of 0.15; (4) surface reflectance and top-of-atmosphere simulation in the retrieval (Issue 1) for a synthetic generic + dust mixture (η = 0.5, τ = 0.5) at nadir view with a solar zenith angle of 24°; (5) the same for heavy aerosol loading, τ ≥ 2.0 (Issue 2); (6) AOD errors caused by different aerosol vertical distributions, averaged over 1520 geometrical combinations; (7) AOD errors for the ExpH and Exp2L distribution series (relative to ExpH2 and Exp2L0 references) with pure smoke aerosol at τ = 0.5; (8) simulations with the smoke model for the Exp2L series (τ = 0.5) at nadir view with solar zenith angles of 0° and 48°; (9) AOD errors from wrongly assumed aerosol models (Generic, Smoke, UrbanIndustrial), averaged over 1520 geometrical combinations; (10) AOD errors from wrongly assumed aerosol models at η = 1.0 and τ = 0.5 for five solar zenith angles; (11) AOD errors from wrongly assuming both the aerosol vertical distribution and type (smoke with Exp2L3 in the simulation, UrbanIndustrial with ExpH2 in the retrieval).
3002 KiB  
Article
Effect of High-Frequency Sea Waves on Wave Period Retrieval from Radar Altimeter and Buoy Data
by Xifeng Wang and Kaoru Ichikawa
Remote Sens. 2016, 8(9), 764; https://doi.org/10.3390/rs8090764 - 17 Sep 2016
Cited by 7 | Viewed by 4730
Abstract
Wave periods estimated from satellite altimetry data behave differently from those calculated from buoy data, especially in low-wind conditions. In this paper, the geometric mean wave period Ta is calculated from buoy data, rather than the commonly used zero-crossing wave period [...] Read more.
Wave periods estimated from satellite altimetry data behave differently from those calculated from buoy data, especially in low-wind conditions. In this paper, the geometric mean wave period Ta is calculated from buoy data, rather than the commonly used zero-crossing wave period Tz. The geometric mean wave period uses the fourth moment of the wave frequency spectrum and is related to the mean-square slope of the sea surface measured using altimeters. The values of Ta obtained from buoys and altimeters agree well (root mean square difference: 0.2 s) only when the contribution of high-frequency sea waves is estimated by a wavenumber spectral model to complement the buoy data, because a buoy cannot obtain data from waves having wavelengths that are shorter than the characteristic dimension of the buoy. Full article
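As a rough illustration of the quantities being compared, the sketch below computes spectral moments of a synthetic wave frequency spectrum and derives the zero-crossing period Tz = sqrt(m0/m2) together with a fourth-moment period taken here, as an assumption, to be Ta = (m0/m4)**0.25 (the geometric mean of sqrt(m0/m2) and sqrt(m2/m4)); the paper's exact definition should be checked against its equations, and the spectrum is schematic rather than NDBC data.

import numpy as np

def spectral_moment(f, S, n):
    """n-th spectral moment m_n = integral of f**n * S(f) df (trapezoidal rule)."""
    return np.trapz(f ** n * S, f)

# Synthetic, schematic wave frequency spectrum (not NDBC data).
f = np.linspace(0.03, 0.5, 500)            # frequency in Hz, typical buoy band
fp = 0.1                                   # peak frequency in Hz
S = f ** -5 * np.exp(-1.25 * (fp / f) ** 4)
S = S / spectral_moment(f, S, 0)           # normalise so that m0 = 1

m0 = spectral_moment(f, S, 0)
m2 = spectral_moment(f, S, 2)
m4 = spectral_moment(f, S, 4)
Tz = np.sqrt(m0 / m2)                      # zero-crossing period
Ta = (m0 / m4) ** 0.25                     # assumed fourth-moment (geometric mean) period
print("Tz = %.2f s, Ta = %.2f s" % (Tz, Ta))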
Figures: (1) locations of the 30 collocated National Data Buoy Center (NDBC) buoys; (2) buoy Tz versus altimeter Ta for wind speeds below and above 5 m·s−1, with orthogonal distance regression (ODR) lines for the 0–5, 5–10, and >10 m·s−1 ranges; (3) buoy Ta versus altimeter Ta, presented in the same way; (4) buoy Hs versus altimeter Hs; (5) buoy MSS versus altimeter MSS; (6) compensated high-frequency portion of the buoy MSS, showing the equilibrium, saturation, and total ranges; (7) compensated buoy MSS versus altimeter MSS; (8) compensated buoy Ta versus altimeter Ta, with ODR regression lines; (9) amount of data in the 16 data subsets; (10) scatter plots of buoy Tz against buoy Ta and compensated buoy Ta, with regression lines, for the 4–5 m·s−1 and 10–11 m·s−1 wind-speed subsets; (11) slope of, and RMSD around, the regression lines of buoy Ta and compensated buoy Ta for the 16 data subsets.
2736 KiB  
Article
A Simple Harmonic Model for FAPAR Temporal Dynamics in the Wetlands of the Volga-Akhtuba Floodplain
by Alexander Kozlov, Maria Kozlova and Nikolai Skorik
Remote Sens. 2016, 8(9), 762; https://doi.org/10.3390/rs8090762 - 17 Sep 2016
Cited by 9 | Viewed by 5015
Abstract
The paper reports a technique used to construct a reference time series for the fraction of absorbed photosynthetically-active radiation (FAPAR) based on remotely-sensed data in the largest Russian arid wetland territory. For the arid Volga-Akhtuba wetlands, FAPAR appears to be an informative spectral [...] Read more.
The paper reports a technique used to construct a reference time series for the fraction of absorbed photosynthetically-active radiation (FAPAR) based on remotely-sensed data in the largest Russian arid wetland territory. For the arid Volga-Akhtuba wetlands, FAPAR appears to be an informative spectral index for estimating plant cover health and its seasonal and annual dynamics. Since FAPAR algorithms have been developed for multiple satellite sensors, the FAPAR-based models are expected to be universal and useful for future studies and long-term monitoring of plant cover, particularly in wetlands. The model developed in the present work for FAPAR temporal dynamics clearly reflects the field-observed seasonal and annual changes of plant cover in the Volga-Akhtuba floodplain wetlands. Various types of wetland plant communities were categorized by the specific parameters of the modelled seasonal vegetation curve. In addition, the values derived from the model function allow quantitative estimation of wetland plant cover health. This information is particularly important for the Volga-Akhtuba floodplain, because its hydrological regime is regulated by the Volzhskaya hydropower plant. The ecosystem is extremely fragile and sensitive to human impact, and wetland plant cover health is a key indicator of regulatory efficiency. The present study is another step towards developing a methodology focused on arid wetland vegetation monitoring and conservation of its biodiversity and natural conditions. Full article
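A hedged sketch of the kind of seasonal-curve fitting described above, fitting a mean plus an annual sine/cosine pair to a FAPAR time series by linear least squares, is given below; the authors' actual parameterisation may differ, and the series is synthetic.

import numpy as np

def fit_annual_harmonic(day_of_year, fapar):
    """Least-squares fit of FAPAR(t) ~ a0 + a1*cos(2*pi*t/365) + b1*sin(2*pi*t/365)."""
    t = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / 365.0
    A = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(fapar, dtype=float), rcond=None)
    return coeffs, A @ coeffs               # harmonic parameters and fitted curve

# Synthetic FAPAR series sampled every 8 days (stand-in for satellite composites).
days = np.arange(1, 366, 8)
true = 0.35 + 0.25 * np.sin(2.0 * np.pi * (days - 120) / 365.0)
obs = true + np.random.default_rng(1).normal(0.0, 0.03, days.size)
coeffs, fitted = fit_annual_harmonic(days, obs)
print("a0, a1, b1 =", np.round(coeffs, 3))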
Figures: (1) test sites under study: wetlands, deserts, and dry steppes; (2) model curves of FAPAR temporal dynamics with theoretically estimated 2-σ corridors and model-derived indices, compared to actual data for land sites with different plant cover (marshes near Proran runnel, woodlands near Pahotniy runnel, cereal meadows around Kaloshi lake, and desert near Seroglazovo), with the anomalous year 2006 marked separately.
8763 KiB  
Article
Seasonal Separation of African Savanna Components Using Worldview-2 Imagery: A Comparison of Pixel- and Object-Based Approaches and Selected Classification Algorithms
by Żaneta Kaszta, Ruben Van De Kerchove, Abel Ramoelo, Moses Azong Cho, Sabelo Madonsela, Renaud Mathieu and Eléonore Wolff
Remote Sens. 2016, 8(9), 763; https://doi.org/10.3390/rs8090763 - 16 Sep 2016
Cited by 49 | Viewed by 6817
Abstract
Separation of savanna land cover components is challenging due to the high heterogeneity of this landscape and spectral similarity of compositionally different vegetation types. In this study, we tested the usability of very high spatial and spectral resolution WorldView-2 (WV-2) imagery to classify [...] Read more.
Separation of savanna land cover components is challenging due to the high heterogeneity of this landscape and spectral similarity of compositionally different vegetation types. In this study, we tested the usability of very high spatial and spectral resolution WorldView-2 (WV-2) imagery to classify land cover components of African savanna in wet and dry season. We compared the performance of Object-Based Image Analysis (OBIA) and the pixel-based approach with several algorithms: k-nearest neighbor (k-NN), maximum likelihood (ML), random forests (RF), classification and regression trees (CART) and support vector machines (SVM). Results showed that classifications of WV-2 imagery produce high accuracy results (>77%) regardless of the applied classification approach. However, OBIA had a significantly higher accuracy for almost every classifier, with the highest overall accuracy score of 93%. Amongst the tested classifiers, SVM and RF provided the highest accuracies. Overall, classifications of the wet-season image provided better results, reaching 93% with RF. However, considering woody leaf-off conditions, the dry-season classification also performed well, with an overall accuracy of 83% (SVM) and a high producer's accuracy for tree cover (91%). Our findings demonstrate the potential of imagery such as WorldView-2, combined with OBIA and advanced supervised machine-learning algorithms, for seasonal fine-scale land cover classification of African savanna. Full article
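The classifier comparison can be illustrated with a short scikit-learn sketch; the features below are synthetic stand-ins for WorldView-2 band values (or object means), not the study data, and only two of the five tested algorithms are shown.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for 8-band spectra (or object means) in 5 land cover classes.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(name, "overall accuracy:", round(acc, 3))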
Figures: (1) overview of the study area, with dry- and wet-season WorldView-2 true-colour composites and the borders of Kruger National Park and Sabi Sands Wildtuin; (2) subset of the WorldView-2 images (false-colour composite) with the corresponding OBIA and pixel-based classifications of the wet and dry seasons using the SVM classifier; (3) general workflow to derive savanna components; (4) land cover maps for the July 2012 and March 2013 images, produced using OBIA and the RF classifier; (5) feature importance of the random forests classifier for the pixel- and object-based approaches applied to the June 2012 and March 2013 images.
5481 KiB  
Article
Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods
by Lei Ma, Manchun Li, Thomas Blaschke, Xiaoxue Ma, Dirk Tiede, Liang Cheng, Zhenjie Chen and Dong Chen
Remote Sens. 2016, 8(9), 761; https://doi.org/10.3390/rs8090761 - 16 Sep 2016
Cited by 81 | Viewed by 9181
Abstract
Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived [...] Read more.
Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature space, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods that can take advantage of additional textural or other parameters need to be explored. Full article
(This article belongs to the Special Issue Monitoring of Land Changes)
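A simplified sketch of how a confidence level is turned into a change/no-change threshold on a standardised change magnitude (a chi-square quantile) is shown below; this is a plain change-vector illustration with synthetic object values, not the full MAD implementation evaluated in the paper.

import numpy as np
from scipy.stats import chi2

def detect_change(obj_t1, obj_t2, confidence=0.99):
    """Flag objects whose standardised squared band differences exceed a chi2 quantile."""
    diff = obj_t2 - obj_t1                                  # per-object band differences
    z = (diff - diff.mean(axis=0)) / diff.std(axis=0)       # standardise each band
    chi = np.sum(z ** 2, axis=1)                            # squared change magnitude
    threshold = chi2.ppf(confidence, df=diff.shape[1])
    return chi > threshold

# Synthetic object-mean band values at the two dates (500 objects, 4 bands).
rng = np.random.default_rng(2)
t1 = rng.normal(size=(500, 4))
t2 = t1 + rng.normal(scale=0.2, size=t1.shape)
t2[:25] += 2.0                                              # introduce genuine change
changed = detect_change(t1, t2, confidence=0.99)
print("objects flagged as changed:", int(changed.sum()))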
Figures: (1) the two study areas within the city of Changzhou, China; (2) change detection flowchart for the comprehensive assessment; (3) WorldView-2 true-colour images of the two study sites with reference polygons from manual interpretation (changed areas in red, unchanged in blue); (4) overall accuracy of change detection at nineteen segmentation scales and five confidence levels (0.995, 0.99, 0.975, 0.95, and 0.9) using four change detection methods and three segmentation strategies, for Study site 1; (5) overall accuracy of change detection for Study site 2; (6) sensitivity and specificity at each segmentation scale for both study sites; (7) changes in the three accuracy measures for Study site 1 after adding object-level textural and NDVI information (scale 140); (8) the same for Study site 2; (9) change maps for Study site 1 using the MAD method and segmentation Strategy 2 at scales 20 and 100, illustrating the effect of "sliver objects".
13314 KiB  
Article
Scale Effects of the Relationships between Urban Heat Islands and Impact Factors Based on a Geographically-Weighted Regression Model
by Xiaobo Luo and Yidong Peng
Remote Sens. 2016, 8(9), 760; https://doi.org/10.3390/rs8090760 - 15 Sep 2016
Cited by 39 | Viewed by 7704
Abstract
The urban heat island (UHI) effect, a side effect of rapid urbanization, has become an obstacle to the further healthy development of cities. Understanding its relationships with impact factors is important for providing useful information for climate-adaptive urban planning strategies. For this [...] Read more.
The urban heat island (UHI) effect, a side effect of rapid urbanization, has become an obstacle to the further healthy development of cities. Understanding its relationships with impact factors is important for providing useful information for climate-adaptive urban planning strategies. For this purpose, the geographically-weighted regression (GWR) approach is used to explore scale effects in a mountainous city, namely how the relationships between land surface temperature and impact factors change across spatial resolutions (30–960 m). The impact factors include the Soil-adjusted Vegetation Index (SAVI), the Index-based Built-up Index (IBI), and the Soil Brightness Index (NDSI), which indicate the coverage of vegetation, built-up areas, and bare land, respectively. For reference, the ordinary least squares (OLS) model, a global regression technique, is also employed using the same dependent variable and explanatory variables as in the GWR model. Results from the experiment, exemplified by Chongqing, showed that the GWR approach had better prediction accuracy and a better ability to describe spatial non-stationarity than the OLS approach, judged by the local coefficient of determination (R2), the corrected Akaike Information Criterion (AICc), and the F-test, at small spatial resolutions (<240 m); however, when the spatial scale was increased to 480 m, this advantage became relatively weak. This indicates that, as the spatial resolution coarsens, the GWR model becomes increasingly global, revealing relationships with more generalized geographical patterns, so spatial non-stationarity in the relationships tends to be neglected. Full article
(This article belongs to the Special Issue Earth Observations for a Better Future Earth)
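The contrast between OLS and GWR can be sketched as follows: OLS fits one global coefficient vector, while GWR re-fits a weighted regression at each location using a Gaussian distance kernel. This is only an illustration with synthetic data and an arbitrary bandwidth, not the calibration used in the paper.

import numpy as np

def ols(X, y):
    """Global ordinary least squares: one coefficient vector for the whole area."""
    Xd = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

def gwr_at(point, coords, X, y, bandwidth):
    """Locally weighted least squares at one location with a Gaussian distance kernel."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

# Synthetic pixels with a spatially varying LST-SAVI relationship.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(300, 2))
savi = rng.uniform(0, 1, 300)
lst = 30.0 - 5.0 * savi * (coords[:, 0] / 10.0) + rng.normal(0, 0.5, 300)

print("global OLS coefficients:", np.round(ols(savi[:, None], lst), 2))
for pt in ([1.0, 5.0], [9.0, 5.0]):
    beta = gwr_at(np.array(pt), coords, savi[:, None], lst, bandwidth=2.0)
    print("local GWR coefficients at", pt, ":", np.round(beta, 2))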
Figures: (1) location of the study area (China, Chongqing, and the study area); (2) SAVI, NDSI, IBI, LST, and the classification result for the study area at 30 m spatial resolution; (3) spatial distribution of the local coefficients estimated by the GWR models at 30 m resolution for SAVI, IBI, and NDSI in the single-factor and multi-factor models (water shown in yellow); (4) spatial distribution of the local R2 in the GWR models at 30 m resolution for the LST–SAVI, LST–IBI, LST–NDSI, and multi-factor models; (5) spatial distribution of the local R2 in the multi-factor GWR models at spatial resolutions of 30, 60, 120, 240, 480, and 960 m; (6) spatial distribution of the local residuals in the multi-factor GWR models at the same resolutions (water shown in blue).
21023 KiB  
Article
Enhanced Compositional Mapping through Integrated Full-Range Spectral Analysis
by Meryl L. McDowell and Fred A. Kruse
Remote Sens. 2016, 8(9), 757; https://doi.org/10.3390/rs8090757 - 15 Sep 2016
Cited by 10 | Viewed by 7063
Abstract
We developed a method to enhance compositional mapping from spectral remote sensing through the integration of visible to near infrared (VNIR, ~0.4–1 µm), shortwave infrared (SWIR, ~1–2.5 µm), and longwave infrared (LWIR, ~8–13 µm) data. Spectral information from the individual ranges was first [...] Read more.
We developed a method to enhance compositional mapping from spectral remote sensing through the integration of visible to near infrared (VNIR, ~0.4–1 µm), shortwave infrared (SWIR, ~1–2.5 µm), and longwave infrared (LWIR, ~8–13 µm) data. Spectral information from the individual ranges was first analyzed independently and then the resulting compositional information in the form of image endmembers and apparent abundances was integrated using ISODATA cluster analysis. Independent VNIR, SWIR, and LWIR analyses of a study area near Mountain Pass, California identified image endmembers representing vegetation, manmade materials (e.g., metal, plastic), specific minerals (e.g., calcite, dolomite, hematite, muscovite, gypsum), and general lithology (e.g., sulfate-bearing, carbonate-bearing, and silica-rich units). Integration of these endmembers and their abundances produced a final full-range classification map incorporating much of the variation from all three spectral ranges. The integrated map and its 54 classes provide additional compositional information that is not evident in the VNIR, SWIR, or LWIR data alone, which allows for more complete and accurate compositional mapping. A supplemental examination of hyperspectral LWIR data and comparison with the multispectral LWIR data used in the integration illustrates its potential to further improve this approach. Full article
(This article belongs to the Special Issue Multi-Sensor and Multi-Data Integration in Remote Sensing)
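The integration step, stacking the endmember abundance images from the three wavelength ranges and clustering each pixel's combined vector, can be sketched as below; plain k-means is used as a stand-in for ISODATA (which additionally splits and merges clusters), and the abundance arrays are synthetic.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for per-pixel endmember abundance images from each range.
rng = np.random.default_rng(4)
rows, cols = 100, 100
vnir_ab = rng.random((rows, cols, 5))      # 5 VNIR endmember abundance planes
swir_ab = rng.random((rows, cols, 8))      # 8 SWIR endmember abundance planes
lwir_ab = rng.random((rows, cols, 6))      # 6 LWIR endmember abundance planes

# Stack the abundances band-wise and cluster each pixel's combined feature vector.
stack = np.dstack([vnir_ab, swir_ab, lwir_ab]).reshape(rows * cols, -1)
labels = KMeans(n_clusters=54, n_init=10, random_state=0).fit_predict(stack)
class_map = labels.reshape(rows, cols)
print("integrated classes in the map:", np.unique(class_map).size)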
Figures: (1) the region near Mountain Pass, California, with the extent of the spectral imagery and significant terrain features marked (Esri ArcGIS Online World Imagery basemap); (2) simplified surface geologic map of general lithology and notable features, based on Schmidt and McMackin [29], Miller et al. [31], and Hewett [32]; (3) methods flow chart for the independent spectral analysis of each wavelength range and the integration and classification of the resulting endmember abundances; (4) example matched filter (MF) score versus infeasibility score scatterplot illustrating selection of the infeasibility threshold (SWIR Class 2); (5) selected VNIR image endmember spectra with reference spectra from the USGS and ASTER spectral libraries; (6) false-colour composites of selected VNIR endmember classes, with abundances below 40% masked; (7) selected SWIR image endmember and reference spectra of mineral phases, vegetation, and manmade materials; (8) false-colour composites of selected SWIR endmember classes; (9) selected LWIR image endmember spectra for natural terrain and manmade surfaces, with reference spectra convolved to the MASTER bandpasses; (10) false-colour composites of selected LWIR endmember classes; (11) classification map produced from the integration of the VNIR, SWIR, and LWIR image endmembers; (12) mean endmember apparent abundance values for selected integrated classes; (13–15) examination of the individual wavelength-range results incorporated in the full-range characterization of a siliciclastic rock unit, a quartz-rich sandstone unit, and a carbonate unit (map subsets, mean endmember abundances, abundance images, and mean VNIR, SWIR, and LWIR spectra of regions of interest); (16) false-colour composite mosaic of the Mako emissivity data (8.59/11.32/12.10 µm as RGB) with regions of interest and data subsets outlined; (17) mean emissivity spectra of regions of interest M1–M7 from the Mako data at native resolution and convolved to the MASTER LWIR bands; (18, 19) detailed examination of two subsets of the Mako emissivity data, including mean emissivity spectra at native and MASTER resolution and spectra after resampling to the MASTER pixel size.
25904 KiB  
Article
Assessment of Carbon Flux and Soil Moisture in Wetlands Applying Sentinel-1 Data
by Katarzyna Dabrowska-Zielinska, Maria Budzynska, Monika Tomaszewska, Alicja Malinska, Martyna Gatkowska, Maciej Bartold and Iwona Malek
Remote Sens. 2016, 8(9), 756; https://doi.org/10.3390/rs8090756 - 15 Sep 2016
Cited by 38 | Viewed by 10471
Abstract
The objectives of the study were to determine the spatial rate of CO2 flux (Net Ecosystem Exchange) and soil moisture in a wetland ecosystem applying Sentinel-1 IW (Interferometric Wide) data of VH (Vertical Transmit/Horizontal Receive—cross polarization) and VV (Vertical Transmit/Vertical Receive—like polarization) [...] Read more.
The objectives of the study were to determine the spatial rate of CO2 flux (Net Ecosystem Exchange) and soil moisture in a wetland ecosystem applying Sentinel-1 IW (Interferometric Wide) data of VH (Vertical Transmit/Horizontal Receive—cross polarization) and VV (Vertical Transmit/Vertical Receive—like polarization) polarization. In-situ measurements of carbon flux, soil moisture, and LAI (Leaf Area Index) were carried out over the Biebrza Wetland in north-eastern Poland. The analysis of the impact of soil moisture and LAI on the backscattering coefficient (σ°) calculated from Sentinel-1 data showed that LAI dominates the influence on σ° when soil moisture is low. Models for soil moisture were derived for wetland vegetation habitat types applying VH polarization (R2 = 0.70 to 0.76). The vegetation habitats (reeds, sedge-moss, sedges, grass-herbs, and grass) were classified using one Landsat 8 OLI (Operational Land Imager) image combined with three TerraSAR-X (TSX) ScanSAR VV images. The model for the assessment of Net Ecosystem Exchange (NEE) was developed on the assumption that soil moisture and biomass, represented by LAI, influence it. The σ° VH and σ° VV describe soil moisture and LAI and were the input to the NEE model. The model, created for the classified habitats, is as follows: NEE = f(σ° Sentinel-1 VH, σ° Sentinel-1 VV). Reasonably good predictions of NEE were achieved for the classified habitats (R2 = 0.51 to 0.58). The developed model was used for mapping the spatial and temporal distribution of NEE over the Biebrza wetland habitat types. Emission of CO2 to the atmosphere (positive NEE) was noted when soil moisture (SM) and biomass were low. This study demonstrates the capability of Sentinel-1 microwave data to calculate soil moisture and estimate NEE under all-weather acquisition conditions, offering an important advantage for frequent wetland monitoring. Full article
(This article belongs to the Special Issue What can Remote Sensing Do for the Conservation of Wetlands?)
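The empirical modelling step NEE = f(σ° VH, σ° VV) can be illustrated with an ordinary multiple regression, as sketched below; the paper derives separate habitat-specific models whose functional form may differ, and the values here are synthetic placeholders for the field measurements.

import numpy as np

# Synthetic placeholders for the field measurements and backscattering coefficients.
rng = np.random.default_rng(5)
sigma_vh = rng.uniform(-22, -12, 80)       # sigma0 VH in dB (cross-polarised)
sigma_vv = rng.uniform(-16, -6, 80)        # sigma0 VV in dB (like-polarised)
nee = 2.0 + 0.15 * sigma_vh - 0.10 * sigma_vv + rng.normal(0, 0.3, 80)

# Least-squares fit of NEE = a + b*sigma0_VH + c*sigma0_VV for one habitat class.
A = np.column_stack([np.ones_like(sigma_vh), sigma_vh, sigma_vv])
coeffs, *_ = np.linalg.lstsq(A, nee, rcond=None)
pred = A @ coeffs
r2 = 1.0 - np.sum((nee - pred) ** 2) / np.sum((nee - nee.mean()) ** 2)
print("a, b, c =", np.round(coeffs, 3), " R2 =", round(r2, 2))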
Figures: (1) location of the test site in Poland and the Biebrza Wetland area on a Landsat 8 OLI RGB (4,3,2) image of 3 August 2014; (2) map of Biebrza Wetland vegetation habitat types; (3, 4) maps of soil moisture based on Sentinel-1 VH data acquired on 11 June 2015 and 10 July 2015; (5, 6) observed soil moisture versus values calculated from Sentinel-1 VV and VH data through the statistically derived algorithms; (7) relationship between LAI and wet biomass for all considered habitats; (8, 9) observed versus predicted σ° VH using Equation (2), with the corresponding 3D-area plot; (10, 11) observed versus predicted σ° VH using Equation (3), with the corresponding 3D-area plot; (12, 13) maps of NEE distribution over the Biebrza wetland on 11 June 2015 and 10 July 2015; (14) soil moisture difference map between 10 July 2015 and 11 June 2015.
12684 KiB  
Article
Crowdsourcing Rapid Assessment of Collapsed Buildings Early after the Earthquake Based on Aerial Remote Sensing Image: A Case Study of Yushu Earthquake
by Shuai Xie, Jianbo Duan, Shibin Liu, Qin Dai, Wei Liu, Yong Ma, Rui Guo and Caihong Ma
Remote Sens. 2016, 8(9), 759; https://doi.org/10.3390/rs8090759 - 14 Sep 2016
Cited by 32 | Viewed by 6660
Abstract
Remote sensing (RS) images play a significant role in disaster emergency response. Web 2.0 changes the way data are created, making it possible for the public to participate in scientific issues. In this paper, an experiment is designed to evaluate the reliability of crowdsourcing [...] Read more.
Remote sensing (RS) images play a significant role in disaster emergency response. Web 2.0 changes the way data are created, making it possible for the public to participate in scientific issues. In this paper, an experiment is designed to evaluate the reliability of crowdsourcing building collapse assessment early after an earthquake, based on aerial remote sensing imagery. The procedure of RS data pre-processing and crowdsourcing data collection is presented. A probabilistic model combining maximum likelihood estimation (MLE), Bayes' theorem, and the expectation-maximization (EM) algorithm is applied to quantitatively estimate individual error-rates and the "ground truth" from multiple participants' assessment results. An experimental area from the Yushu earthquake is used to present the results contributed by participants. Following the results, some discussion is provided regarding accuracy and variation among participants. The features of buildings labeled as the same damage type were found to be highly consistent, which suggests that building damage assessments contributed by crowdsourcing can be treated as reliable samples. This study shows the potential of rapid building collapse assessment through crowdsourcing, with the "ground truth" inferred quantitatively from crowdsourced data, early after an earthquake based on aerial remote sensing imagery. Full article
(This article belongs to the Special Issue Earth Observations for Geohazards)
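A sketch of a Dawid-Skene-style EM estimator, one standard realisation of the MLE/Bayes/EM scheme described above, is given below: it jointly infers each participant's confusion (error-rate) matrix and the posterior "ground truth" class of each building from the raw labels. It assumes, for simplicity, that every participant labels every building, the label matrix is synthetic, and whether it matches the paper's exact formulation should be checked against the paper.

import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """labels: (n_items, n_workers) integer array; returns class posteriors and confusion matrices."""
    n_items, n_workers = labels.shape
    # Initialise each item's class posterior from the vote proportions.
    T = np.zeros((n_items, n_classes))
    for k in range(n_classes):
        T[:, k] = (labels == k).mean(axis=1)
    for _ in range(n_iter):
        # M-step: class priors and each worker's confusion matrix P(label = l | true = k).
        priors = T.mean(axis=0)
        conf = np.zeros((n_workers, n_classes, n_classes))
        for j in range(n_workers):
            for l in range(n_classes):
                conf[j, :, l] = T[labels[:, j] == l].sum(axis=0)
            conf[j] /= conf[j].sum(axis=1, keepdims=True)
        # E-step: posterior over the true class of each item given all workers' labels.
        logT = np.tile(np.log(priors + 1e-12), (n_items, 1))
        for j in range(n_workers):
            logT += np.log(conf[j][:, labels[:, j]].T + 1e-12)
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T, conf

# Synthetic crowdsourced labels: 128 buildings, 3 damage classes, 27 participants.
rng = np.random.default_rng(6)
truth = rng.integers(0, 3, size=128)
labels = np.array([[t if rng.random() < 0.8 else rng.integers(0, 3) for _ in range(27)]
                   for t in truth])
posterior, confusion = dawid_skene(labels, n_classes=3)
print("agreement with the simulated truth:", round(float((posterior.argmax(1) == truth).mean()), 3))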
Figures: (1) administrative and terrain maps of Qinghai Province, Yushu County, and Jiegu Town, with a topographic map of Jiegu Town made from a Landsat 8 OLI true-colour image; (2) aerial image of Jiegu Town captured on 14 April 2010, covering the area severely damaged by the earthquake; (3) the processing architecture; (4) the experiment area and the 3456 data points contributed online by 27 participants (green: basically intact; yellow: partially collapsed; red: completely collapsed); (5) convergence of the marginal probabilities of the three damage types with the number of EM iterations (convergence after 12 iterations); (6) comparison of the EM-algorithm and "majority" results, with buildings coloured by damage type and buildings with differing results numbered; (7) percentage of each damage type on each building, ordered by the percentage of "basically intact"; (8) standard deviation of participants' assessment results for each building, in the same order.