
Next Issue: Volume 8, April
Previous Issue: Volume 8, February
Remote Sens., Volume 8, Issue 3 (March 2016) – 104 articles

Cover Story: This paper presents the preliminary results of two classification exercises assessing the capabilities of pre-operational (August 2015) Sentinel-2 (S2) data for mapping crop types and tree species. The maps were produced at 10 m spatial resolution using the full spectral resolution of S2 (ten spectral bands). A supervised classifier was deployed and trained with appropriate ground truth. In both case studies, S2 data confirmed the expected capability to produce reliable land cover maps. Cross-validated overall accuracies ranged between 65% (tree species) and 76% (crop types). As only single-date acquisitions were available for this first study, the full potential of S2 data could not be assessed. The two S2 satellites offer global coverage every five days, allowing unprecedented spectral and temporal information to be exploited concurrently at high spatial resolution. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Blending Satellite Observed, Model Simulated, and in Situ Measured Soil Moisture over Tibetan Plateau
by Yijian Zeng, Zhongbo Su, Rogier Van der Velde, Lichun Wang, Kai Xu, Xing Wang and Jun Wen
Remote Sens. 2016, 8(3), 268; https://doi.org/10.3390/rs8030268 - 22 Mar 2016
Cited by 78 | Viewed by 8394
Abstract
The inter-comparison of different soil moisture (SM) products over the Tibetan Plateau (TP) reveals inconsistencies among the products when compared to in situ measurements, highlighting the need to constrain model-simulated SM with the climatology of the in situ data. In this study, the in situ soil moisture networks, combined with the classification of climate zones over the TP, were used to produce the in situ measured SM climatology at the plateau scale. The generated TP-scale in situ SM climatology was then used to scale the model-simulated SM data, which was subsequently used to scale the SM satellite observations. The climatology-scaled satellite and model-simulated SM were then blended objectively by applying the triple collocation and least squares method. The final blended SM can replicate the SM dynamics across different climatic zones over the TP, from sub-humid regions to semi-arid and arid regions. This demonstrates the need to constrain model-simulated SM estimates with in situ measurements before their further application in scaling the climatology of satellite SM products.
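The blending step the abstract describes rests on triple collocation (estimating each product's error variance from the covariances of three collocated series with mutually independent errors) followed by least-squares merging with inverse-error-variance weights. A minimal sketch of that idea; the function names and toy series are illustrative, not the paper's actual SM grids:

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Error variances of three collocated series via the classic
    triple collocation identities (assumes zero error cross-correlation)."""
    c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z

def blend(x, y, z):
    """Least-squares merge: weights inversely proportional to error variance."""
    ex, ey, ez = triple_collocation_errors(x, y, z)
    w = np.array([1 / ex, 1 / ey, 1 / ez])
    w /= w.sum()  # normalize so the weights sum to one
    return w[0] * x + w[1] * y + w[2] * z, w
```

Products with smaller estimated error variance receive proportionally larger weights; in the paper this kind of weighting is applied per grid cell to the scaled ERA-Interim, AMSRE, and ASCAT series.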
Article
Anatomy of Subsidence in Tianjin from Time Series InSAR
by Peng Liu, Qingquan Li, Zhenhong Li, Trevor Hoey, Guoxiang Liu, Chisheng Wang, Zhongwen Hu, Zhiwei Zhou and Andrew Singleton
Remote Sens. 2016, 8(3), 266; https://doi.org/10.3390/rs8030266 - 22 Mar 2016
Cited by 41 | Viewed by 8664
Abstract
Groundwater is a major source of fresh water in Tianjin Municipality, China. The average rate of groundwater extraction in this area over the last 20 years has fluctuated between 0.6 and 0.8 billion cubic meters per year. As a result, significant subsidence has been observed in Tianjin. In this study, C-band Envisat (Environmental Satellite) ASAR (Advanced Synthetic Aperture Radar) images and L-band ALOS (Advanced Land Observing Satellite) PALSAR (Phased Array type L-band Synthetic Aperture Radar) data were employed to recover the evolution of the Earth's surface between 2007 and 2009 using InSAR time series techniques. Similar subsidence patterns can be observed in the overlapping area of the ASAR and PALSAR mean velocity maps, with a maximum radar line-of-sight rate of ~170 mm·year−1. The western subsidence is modeled for groundwater volume change using a Mogi source array, and the geological control exerted by major faults on the eastern subsidence is analyzed. The storage coefficient of the eastern subsidence area is estimated from InSAR displacements and the temporal pattern of water level changes. InSAR has proven to be a useful tool for subsidence monitoring and for interpreting displacements associated with groundwater usage.
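The Mogi source array mentioned above treats each subsurface volume change as a point pressure source whose surface displacement has a closed-form expression, and the displacement from an array of sources is linear in their volume changes, which is what makes a regularized least-squares inversion possible. A hedged sketch of the vertical term using the standard Mogi formula; the coordinates, depth, and volume values are illustrative only:

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source.

    (x, y): observation point; (xs, ys, depth): source position;
    dV: source volume change (negative = contraction, i.e., subsidence);
    nu: Poisson's ratio of the elastic half-space.
    """
    r2 = (x - xs) ** 2 + (y - ys) ** 2          # horizontal distance squared
    R3 = (r2 + depth ** 2) ** 1.5               # cubed slant distance
    return (1 - nu) / np.pi * dV * depth / R3
```

A grid of such sources yields a design matrix with one column per source, so the array of volume changes can be recovered from the InSAR displacements by (Laplacian-regularized) least squares, in the spirit of the inversion the paper describes.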
(This article belongs to the Special Issue Earth Observations for Geohazards)
Article
Examining Urban Impervious Surface Distribution and Its Dynamic Change in Hangzhou Metropolis
by Longwei Li, Dengsheng Lu and Wenhui Kuang
Remote Sens. 2016, 8(3), 265; https://doi.org/10.3390/rs8030265 - 22 Mar 2016
Cited by 54 | Viewed by 8482
Abstract
Analysis of urban distribution and its expansion using remote sensing data has received increasing attention in the past three decades, but little research has examined spatial patterns of urban distribution and expansion with buffer zones in different directions. This research selected Hangzhou metropolis as a case study to analyze spatial patterns and dynamic changes based on time-series urban impervious surface area (ISA) datasets. ISA was derived from Landsat imagery between 1991 and 2014 using a hybrid approach consisting of linear spectral mixture analysis, decision tree classifiers, and post-processing. The spatial patterns of ISA distribution and its dynamic changes in eight directions (east, southeast, south, southwest, west, northwest, north, and northeast) were analyzed over time with a buffer zone-based approach. This research indicated that ISA can be extracted from Landsat imagery with both producer and user accuracies of over 90%. ISA in Hangzhou metropolis increased from 146 km² in 1991 to 868 km² in 2014. Annual ISA growth rates were between 15.6 km² and 48.8 km², with the lowest growth rate in 1994–2000 and the highest in 2005–2010. The ISA increase before 2000 was mainly due to infilling within the urban landscape and, after 2005, to urban expansion at the urban-rural interfaces. Urban expansion in this study area has different characteristics in the various directions, influenced by topographic factors and urban development policies.
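The buffer zone-based directional analysis can be pictured as binning a binary ISA raster by 45° direction sector and concentric distance ring around the urban core, then averaging within each bin. A simplified sketch of that bookkeeping; the function name, ring width, and grid are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def directional_isa_density(isa, center, ring_width=20, n_rings=5):
    """Mean ISA density per 45-degree sector and distance ring.

    isa: 2-D binary raster (1 = impervious); center: (row, col) of urban core.
    Returns an (8, n_rings) array; sector 0 is east, counterclockwise.
    """
    rows, cols = np.indices(isa.shape)
    dy, dx = rows - center[0], cols - center[1]
    dist = np.hypot(dx, dy)
    # map each cell to one of eight 45-degree sectors (north = up the raster)
    sector = (((np.degrees(np.arctan2(-dy, dx)) + 22.5) % 360) // 45).astype(int)
    ring = (dist // ring_width).astype(int)
    density = np.full((8, n_rings), np.nan)
    for s in range(8):
        for r in range(n_rings):
            mask = (sector == s) & (ring == r)
            if mask.any():
                density[s, r] = isa[mask].mean()
    return density
```

Plotting each sector's row of this array against ring distance reproduces the kind of direction-by-distance density profiles the study analyzes.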
(This article belongs to the Special Issue Monitoring of Land Changes)
Article
Novel Approach to Unsupervised Change Detection Based on a Robust Semi-Supervised FCM Clustering Algorithm
by Pan Shao, Wenzhong Shi, Pengfei He, Ming Hao and Xiaokang Zhang
Remote Sens. 2016, 8(3), 264; https://doi.org/10.3390/rs8030264 - 22 Mar 2016
Cited by 61 | Viewed by 8542
Abstract
This study presents a novel approach for unsupervised change detection in multitemporal remotely sensed images. The method addresses the analysis of the difference image by proposing a novel and robust semi-supervised fuzzy C-means (RSFCM) clustering algorithm. Compared with existing change detection methods, which mainly use difference intensity levels and spatial context, the advantage of the RSFCM is that it further introduces pseudolabels derived from the difference image. First, the patterns with a high probability of belonging to the changed or unchanged class are identified by selectively thresholding the difference image histogram. Second, the pseudolabels of these nearly certain pixel patterns are jointly exploited with the intensity levels and spatial information in the properly defined RSFCM classifier in order to discriminate the changed pixels from the unchanged pixels. Specifically, labeling knowledge is used to guide the RSFCM clustering process to enhance the change information and obtain a more accurate membership, while information on spatial context helps to lower the effect of noise and outliers by modifying the membership. RSFCM can detect more changes and provide noise immunity through the synergistic exploitation of pseudolabels and spatial context. The two main contributions of this study are as follows: (1) it proposes the idea of combining three types of information from the difference image, namely (a) intensity levels, (b) labels, and (c) spatial context; and (2) it develops the novel RSFCM algorithm for image segmentation and forms the proposed change detection framework. The proposed method is effective and efficient for change detection, as confirmed by the six experimental results of this study.
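The interplay of pseudolabels and fuzzy memberships can be illustrated with a stripped-down two-class FCM on difference-image intensities, where pixels carrying a pseudolabel have their membership clamped to it. This is only a sketch of the semi-supervision idea, not the paper's full RSFCM (the spatial-context term is omitted):

```python
import numpy as np

def semisupervised_fcm(x, labels, n_iter=50, m=2.0):
    """Two-class fuzzy C-means on 1-D difference-image intensities.

    x: intensity vector; labels: -1 = unlabeled, 0 (unchanged) or
    1 (changed) = pseudolabel whose membership is clamped; m: fuzzifier.
    Returns (memberships, cluster centers).
    """
    x = np.asarray(x, dtype=float)
    centers = np.array([x.min(), x.max()])       # init: unchanged low, changed high
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)        # standard FCM membership update
        for k in (0, 1):                         # semi-supervision: clamp labels
            u[labels == k] = np.eye(2)[k]
        w = u ** m                               # fuzzified weights
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return u, centers
```

The clamped memberships pull the cluster centers toward the pseudolabeled intensities at every iteration, which is the mechanism by which labeling knowledge "guides" the clustering in the abstract's description.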
Article
A 30+ Year AVHRR LAI and FAPAR Climate Data Record: Algorithm Description and Validation
by Martin Claverie, Jessica L. Matthews, Eric F. Vermote and Christopher O. Justice
Remote Sens. 2016, 8(3), 263; https://doi.org/10.3390/rs8030263 - 22 Mar 2016
Cited by 117 | Viewed by 14053
Abstract
In land surface models, which are used to evaluate the role of vegetation in the context of global climate change and variability, LAI and FAPAR play a key role, specifically with respect to the carbon and water cycles. The AVHRR-based LAI/FAPAR dataset offers [...] Read more.
In land surface models, which are used to evaluate the role of vegetation in the context of global climate change and variability, LAI and FAPAR play a key role, specifically with respect to the carbon and water cycles. The AVHRR-based LAI/FAPAR dataset offers daily temporal resolution, an improvement over previous products. This climate data record is based on a carefully calibrated and corrected land surface reflectance dataset to provide a high-quality, consistent time-series suitable for climate studies. It spans from mid-1981 to the present. Further, this operational dataset is available in near real-time, allowing its use for monitoring purposes. The algorithm relies on artificial neural networks calibrated using the MODIS LAI/FAPAR dataset. Evaluation based on cross-comparison with MODIS products and in situ data shows the dataset is consistent and reliable, with overall uncertainties of 1.03 and 0.15 for LAI and FAPAR, respectively. However, a clear saturation effect is observed in the broadleaf forest biomes with high LAI (>4.5) and FAPAR (>0.8) values. Full article
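The retrieval described in this abstract, an artificial neural network mapping surface reflectance to LAI, can be sketched minimally as a feedforward pass with sigmoid hidden neurons and a linear output (the "S" and "L" neuron types of the paper's Figure 3). All weights below are invented for illustration; the actual network is calibrated against the MODIS LAI/FAPAR product.

```python
import math

def mlp_lai(red_sr, nir_sr, weights_h, bias_h, weights_o, bias_o):
    """Tiny feedforward network: sigmoid hidden layer, linear output.
    Inputs are red/NIR surface reflectances normalized to [0, 1].
    Weights here are illustrative, not the paper's calibrated values."""
    x = [red_sr, nir_sr]
    hidden = [
        1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(ws, x)) + b)))
        for ws, b in zip(weights_h, bias_h)
    ]
    # Linear output neuron produces the LAI estimate.
    return sum(w * h for w, h in zip(weights_o, hidden)) + bias_o

# Hypothetical weights: two hidden neurons, one linear output.
lai = mlp_lai(0.05, 0.45, [[-4.0, 6.0], [2.0, -1.0]], [0.0, 0.5], [3.0, 1.0], 0.2)
```

In the operational product, one such network is trained per biome class, with the red/NIR domain of each class constrained as in the paper's Figure 4.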
(This article belongs to the Special Issue Satellite Climate Data Records and Applications)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>International Geosphere-Biosphere Program (IGBP) land cover classification from Hansen <span class="html-italic">et al</span>. [<a href="#B21-remotesensing-08-00263" class="html-bibr">21</a>]. Labels were simplified according to <a href="#remotesensing-08-00263-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 2
<p>BELMANIP-2 and DIRECT network sites location.</p>
Full article ">Figure 3
<p>Conceptual representation of the ANN, including normalization (“Norm”) steps. Note that the number of neurons shown corresponds to the actual number used in the network. S and L stand for “sigmoid” and “linear” neurons, respectively.</p>
Full article ">Figure 4
<p>(<b>a</b>–<b>e</b>) Domain definition for the five classes (red polygons) in the red/NIR surface reflectance space. Greyscale images represent the density function for each 0.01 surface reflectance (SR) bin (white = no value; black = high density). Refer to <a href="#remotesensing-08-00263-t001" class="html-table">Table 1</a> for biome class definitions. The domain definition is calculated using AVH09C1 data acquired from 2001 to 2007.</p>
Full article ">Figure 5
<p>(<b>a</b>–<b>x</b>) Theoretical performances of the LAI and FAPAR retrieval. Each row refers to a land cover class and the bottom row to all classes merged. On the first column, the cumulative distribution functions (CDF) of MCD15 training data and AVH15 LAI retrieval are shown. On the second column, scatter plots between LAI MCD15 (x-axis) and LAI AVH15 (y-axis) are displayed. The graphs are reproduced for FAPAR on the third and fourth column. Only data from DIRECT sites (not used for training) were plotted. Statistical metrics on the second and fourth subplot columns are defined in Equations (4)–(6); values in parenthesis correspond to metric values divided by the reference mean value. Refer to <a href="#remotesensing-08-00263-t001" class="html-table">Table 1</a> for class biome definitions.</p>
Full article ">Figure 6
<p>(<b>a</b>,<b>b</b>) Comparison of retrieval from AVHRR NOAA-16 (N16) and AVHRR NOAA-18 (N18) for BELMANIP-2 and DIRECT sites from 2 July 2005 to 31 December 2006. Statistical metrics are defined in Equations (4)–(6); values in parenthesis correspond to metric values divided by the reference mean value.</p>
Full article ">Figure 7
<p>(<b>a</b>–<b>c</b>) <span class="html-italic">In situ</span> validation over DIRECT sites. Ground measurement covers initially a footprint of 3 km × 3 km and were extrapolated to 0.05° using MCD15 products for direct comparison. Statistical metrics are defined in Equations (4)–(6); values in parenthesis correspond to metric values divided by the reference mean value.</p>
Full article ">
13199 KiB  
Article
A Geographically and Temporally Weighted Regression Model for Ground-Level PM2.5 Estimation from Satellite-Derived 500 m Resolution AOD
by Yang Bai, Lixin Wu, Kai Qin, Yufeng Zhang, Yangyang Shen and Yuan Zhou
Remote Sens. 2016, 8(3), 262; https://doi.org/10.3390/rs8030262 - 22 Mar 2016
Cited by 136 | Viewed by 12264
Abstract
Regional haze episodes have occurred frequently in eastern China over the past decades. As a critical indicator to evaluate air quality, the mass concentration of ambient fine particulate matter smaller than 2.5 μm in aerodynamic diameter (PM2.5) is involved in many [...] Read more.
Regional haze episodes have occurred frequently in eastern China over the past decades. As a critical indicator to evaluate air quality, the mass concentration of ambient fine particulate matter smaller than 2.5 μm in aerodynamic diameter (PM2.5) is involved in many studies. To overcome the limitations of ground measurements of PM2.5 concentration, which are characterized by sparse spatial representation and coarse coverage, many statistical models have been developed to depict the relationship between ground-level PM2.5 and satellite-derived aerosol optical depth (AOD). However, the current satellite-derived AOD products and statistical models on PM2.5–AOD are insufficient to investigate PM2.5 characteristics at the urban scale, in that spatial resolution is crucial to identify the relationship between PM2.5 and anthropogenic activities. This paper presents a geographically and temporally weighted regression (GTWR) model to generate ground-level PM2.5 concentrations from satellite-derived 500 m AOD. The GTWR model incorporates the SARA (simplified high resolution MODIS aerosol retrieval algorithm) AOD product with meteorological variables, including planetary boundary layer height (PBLH), relative humidity (RH), wind speed (WS), and temperature (TEMP) extracted from WRF (weather research and forecasting) assimilation to depict the spatio-temporal dynamics in the PM2.5–AOD relationship. The estimated ground-level PM2.5 concentration has 500 m resolution at the MODIS overpass times, twice a day, and can be used for air quality monitoring and haze tracking at the urban and regional scale. To test the performance of the GTWR model, a case study was carried out in a region covering the adjacent parts of Jiangsu, Shandong, Henan, and Anhui provinces in central China. A cross validation was done to evaluate the performance of the GTWR model. 
Compared with the OLS, GWR, and TWR models, the GTWR model achieved the highest coefficient of determination (R2) and the lowest mean absolute difference (MAD), root mean square error (RMSE), and mean absolute percentage error (MAPE). Full article
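The core idea of GTWR, fitting a local regression at each target point with observations downweighted by spatial and temporal distance, can be sketched as a single-predictor weighted least squares fit under a Gaussian spatio-temporal kernel. Everything below (coordinates, bandwidths, coefficients) is invented for illustration; the paper's model regresses PM2.5 on AOD plus several meteorological covariates.

```python
import math

def gtwr_fit(obs, x0, y0, t0, hs, ht):
    """Local weighted regression PM2.5 = a + b*AOD at target (x0, y0, t0).
    obs: list of (x, y, t, aod, pm25) samples.
    hs, ht: spatial and temporal kernel bandwidths (illustrative)."""
    # Gaussian spatio-temporal weights: nearby-in-space-and-time samples dominate.
    w = [math.exp(-(((x - x0) ** 2 + (y - y0) ** 2) / hs ** 2
                    + ((t - t0) ** 2) / ht ** 2))
         for x, y, t, _, _ in obs]
    sw = sum(w)
    xbar = sum(wi * o[3] for wi, o in zip(w, obs)) / sw
    ybar = sum(wi * o[4] for wi, o in zip(w, obs)) / sw
    # Closed-form weighted least squares for a single predictor.
    b = (sum(wi * (o[3] - xbar) * (o[4] - ybar) for wi, o in zip(w, obs))
         / sum(wi * (o[3] - xbar) ** 2 for wi, o in zip(w, obs)))
    a = ybar - b * xbar
    return a, b

# Hypothetical station records: (x, y, t, AOD, PM2.5).
obs = [(0, 0, 0, 0.4, 60), (1, 0, 0, 0.8, 110),
       (0, 1, 1, 0.6, 85), (5, 5, 2, 0.2, 30)]
a, b = gtwr_fit(obs, 0.2, 0.2, 0.0, hs=2.0, ht=1.0)
pm25_hat = a + b * 0.5  # predict PM2.5 for an AOD of 0.5 at the target point
```

Repeating this fit at every 500 m AOD pixel and overpass time yields the spatially and temporally varying coefficient surfaces that distinguish GTWR from a global OLS fit.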
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Location of AERONET station and environmental monitoring stations in the JSHA study region.</p>
Full article ">Figure 2
<p>Histograms and descriptive statistics of six variables ((<b>a</b>) Ground-level PM<sub>2.5</sub>; (<b>b</b>) SARA AOD; (<b>c</b>) WRF PBLH; (<b>d</b>) WRF RH; (<b>e</b>) WRF WS; and (<b>f</b>) WRF TEMP).</p>
Full article ">Figure 3
<p>Scatter plots of the meteorological fields ((<b>a</b>) SARA AOD; (<b>b</b>) WRF PBLH; (<b>c</b>) WRF RH; (<b>d</b>) WRF WS; and (<b>e</b>) WRF TEMP) against the hourly ground level PM<sub>2.5</sub> concentration.</p>
Full article ">Figure 4
<p>Flowchart of the proposed GTWR method for estimating hourly PM<sub>2.5</sub> at 500 m resolution.</p>
Full article ">Figure 5
<p>Contrast of satellite-derived AOD results in the study region on 11 January 2015. (<b>a</b>) True color (RGB:143); (<b>b</b>) Terra DT AOD at 10 km; (<b>c</b>) Terra DB AOD at 10 km; (<b>d</b>) Terra SARA AOD at 500 m.</p>
Full article ">Figure 6
<p>Contrast of satellite-derived AOD results in the study region on 13 February 2015. (<b>a</b>) True color (RGB:143); (<b>b</b>) Aqua DT AOD at 10 km; (<b>c</b>) Aqua DB AOD at 10 km; (<b>d</b>) Aqua SARA AOD at 500 m.</p>
Full article ">Figure 7
<p>Comparison between GTWR estimations and ground measurements of PM<sub>2.5</sub> concentration.</p>
Full article ">Figure 8
<p>Scatter plots between the measured and fitted PM<sub>2.5</sub> concentrations for model fitting. (<b>a</b>) OLS; (<b>b</b>) GWR; (<b>c</b>) TWR; and (<b>d</b>) GTWR. The short dashed (<b>orange</b>) line, long dashed (<b>blue</b>) line, and solid (<b>red</b>) line are the 95% confidence band, 1:1 line, and regression line, respectively.</p>
Full article ">Figure 8 Cont.
<p>Scatter plots between the measured and fitted PM<sub>2.5</sub> concentrations for model fitting. (<b>a</b>) OLS; (<b>b</b>) GWR; (<b>c</b>) TWR; and (<b>d</b>) GTWR. The short dashed (<b>orange</b>) line, long dashed (<b>blue</b>) line, and solid (<b>red</b>) line are the 95% confidence band, 1:1 line, and regression line, respectively.</p>
Full article ">Figure 9
<p>Scatter plots between the measured and fitted PM<sub>2.5</sub> concentrations for cross validation. (<b>a</b>) OLS; (<b>b</b>) GWR; (<b>c</b>) TWR and (<b>d</b>) GTWR. The short dashed (<b>orange</b>) lines, long dashed (<b>blue</b>) line, and solid (<b>red</b>) line are the 95% confidence band, 1:1 line, and regression line, respectively.</p>
Full article ">Figure 10
<p>Spatial distribution of GTWR model-estimated PM<sub>2.5</sub> concentration, from Terra (local time 10:30 a.m.) and Aqua (local time 1:30 p.m.), respectively, over the study region during a haze formation process from the afternoon of 8 February 2015 to the morning of 11 February 2015 ((<b>a</b>) Aqua 8 February 2015; (<b>b</b>) Terra 9 February 2015; (<b>c</b>) Aqua 9 February 2015; (<b>d</b>) Terra 10 February 2015; (<b>e</b>) Aqua 10 February 2015; and (<b>f</b>) Terra 11 February 2015). <span class="html-italic">In situ</span> measurements at training stations (circles) and validation (triangles) stations are also shown in the figures.</p>
Full article ">Figure 10 Cont.
<p>Spatial distribution of GTWR model-estimated PM<sub>2.5</sub> concentration, from Terra (local time 10:30 a.m.) and Aqua (local time 1:30 p.m.), respectively, over the study region during a haze formation process from the afternoon of 8 February 2015 to the morning of 11 February 2015 ((<b>a</b>) Aqua 8 February 2015; (<b>b</b>) Terra 9 February 2015; (<b>c</b>) Aqua 9 February 2015; (<b>d</b>) Terra 10 February 2015; (<b>e</b>) Aqua 10 February 2015; and (<b>f</b>) Terra 11 February 2015). <span class="html-italic">In situ</span> measurements at training stations (circles) and validation (triangles) stations are also shown in the figures.</p>
Full article ">Figure 11
<p>Spatial distribution of ground PM<sub>2.5</sub> measurements over the study region during a haze formation process from the afternoon of 8 February 2015 to the morning of 11 February 2015, referring to the local passing moment of satellites Terra and Aqua ((<b>a</b>) Aqua 8 February 2015; (<b>b</b>) Terra 9 February 2015; (<b>c</b>) Aqua 9 February 2015; (<b>d</b>) Terra 10 February 2015; (<b>e</b>) Aqua 10 February 2015; and (<b>f</b>) Terra 11 February 2015). <span class="html-italic">In situ</span> measurements at training stations (circles) and validation (triangles) stations are also shown in the figures.</p>
Full article ">
2950 KiB  
Article
Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map
by Myroslava Lesiv, Elena Moltchanova, Dmitry Schepaschenko, Linda See, Anatoly Shvidenko, Alexis Comber and Steffen Fritz
Remote Sens. 2016, 8(3), 261; https://doi.org/10.3390/rs8030261 - 22 Mar 2016
Cited by 39 | Viewed by 8374
Abstract
Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the data fusion of different land cover products derived [...] Read more.
Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the data fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied, without regard to their relative merit. In this study, we compared some of the most commonly used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include nearest neighbour, naive Bayes, logistic regression and geographically-weighted logistic regression (GWR), as well as classification and regression trees (CART). We ran the comparison experiments using two data types: presence/absence of forest in a grid cell and percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs. Full article
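One of the fusion methods compared in this abstract, naive Bayes, has a particularly compact form for the presence/absence case: each input product's binary vote updates the posterior probability of forest according to that product's assumed sensitivity and specificity. The accuracy figures and votes below are invented for illustration, not taken from the paper.

```python
def naive_bayes_fusion(votes, sens, spec, prior=0.5):
    """Fuse binary forest (1) / non-forest (0) votes from several land cover
    products with naive Bayes. sens[i]/spec[i] are the assumed sensitivity
    and specificity of product i (hypothetical values). Returns the posterior
    probability of forest in the grid cell."""
    pf, pn = prior, 1.0 - prior
    for v, se, sp in zip(votes, sens, spec):
        # Likelihood of this vote under "forest" and "non-forest".
        pf *= se if v == 1 else (1.0 - se)
        pn *= (1.0 - sp) if v == 1 else sp
    return pf / (pf + pn)

# Three products: two vote forest, one votes non-forest.
p = naive_bayes_fusion([1, 1, 0], [0.9, 0.8, 0.7], [0.85, 0.9, 0.75])
```

In practice the per-product accuracies would be estimated from the crowdsourced Geo-Wiki training points rather than assumed.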
(This article belongs to the Special Issue Validation and Inter-Comparison of Land Cover and Land Use Data)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Distribution of the training data points. The legend corresponds to a forest score, which is the number of input layers defining forest cover at the point locations (see <a href="#sec3dot1-remotesensing-08-00261" class="html-sec">Section 3.1</a> for more explanation).</p>
Full article ">Figure 2
<p>Binary response: forest presence/absence: (<b>a</b>) apparent error rate by forest score; (<b>b</b>) sensitivity and specificity estimated for the high disagreement area (“forest score” four).</p>
Full article ">Figure 3
<p>Percentage of forest in a grid: (<b>a</b>) apparent error rate by forest scores; (<b>b</b>) sensitivity and specificity estimated for the high disagreement areas (combined forest scores “four” and “five”).</p>
Full article ">Figure 4
<p>The agreement between the forest cover maps produced by the NN, NB, CART, LR and GWR methods. The “forest score of the methods” indicates the number of methods that predict forest presence, e.g., 0, no methods report forest; 5, all methods report forest. The coloured bubbles (the training dataset) correspond to the forest score according to the input maps (0, if no product reports forest; 9, if all of the products report forest).</p>
Full article ">Figure 5
<p>An example of aggregating tree cover data from MODIS VCF at 250-m resolution to 1 km and calculating the percentage of forest cover: (<b>a</b>) the original tree cover values at 250 m of MODIS VCF; (<b>b</b>) conversion to forest/non-forest based on tree cover values greater than 10% covering more than 0.5 ha; (<b>c</b>) the forest cover percentage aggregated to 1 km is then 50%, since half of the sub-pixels are covered by forest. Source: [<a href="#B7-remotesensing-08-00261" class="html-bibr">7</a>].</p>
Full article ">
3321 KiB  
Article
Monitoring Grassland Seasonal Carbon Dynamics, by Integrating MODIS NDVI, Proximal Optical Sampling, and Eddy Covariance Measurements
by Enrica Nestola, Carlo Calfapietra, Craig A. Emmerton, Christopher Y.S. Wong, Donnette R. Thayer and John A. Gamon
Remote Sens. 2016, 8(3), 260; https://doi.org/10.3390/rs8030260 - 19 Mar 2016
Cited by 31 | Viewed by 9552
Abstract
This study evaluated the seasonal productivity of a prairie grassland (Mattheis Ranch, in Alberta, Canada) using a combination of remote sensing, eddy covariance, and field sampling collected in 2012–2013. A primary objective was to evaluate different ways of parameterizing the light-use efficiency (LUE) [...] Read more.
This study evaluated the seasonal productivity of a prairie grassland (Mattheis Ranch, in Alberta, Canada) using a combination of remote sensing, eddy covariance, and field sampling collected in 2012–2013. A primary objective was to evaluate different ways of parameterizing the light-use efficiency (LUE) model for assessing net ecosystem fluxes at two sites with contrasting productivity. Three variations on the NDVI (Normalized Difference Vegetation Index), differing by formula and footprint, were derived: (1) a narrow-band NDVI (NDVI680,800, derived from mobile field spectrometer readings); (2) a broad-band proxy NDVI (derived from an automated optical phenology station consisting of broad-band radiometers); and (3) a satellite NDVI (derived from MODIS AQUA and TERRA sensors). Harvested biomass, net CO2 flux, and NDVI values were compared to provide a basis for assessing seasonal ecosystem productivity and gap filling of tower flux data. All three NDVIs provided good estimates of dry green biomass and were able to clearly show seasonal changes in vegetation growth and senescence, confirming their utility as metrics of productivity. When relating fluxes and optical measurements, temporal aggregation periods were considered to determine the impact of aggregation on model accuracy. NDVI values from the different methods were also calibrated against fAPARgreen (the fraction of photosynthetically active radiation absorbed by green vegetation) values to parameterize the APARgreen (absorbed PAR) term of the LUE (light use efficiency) model for comparison with measured fluxes. While efficiency was assumed to be constant in the model, this analysis revealed hysteresis in the seasonal relationships between fluxes and optical measurements, suggesting a slight change in efficiency between the first and second half of the growing season. 
Consequently, the best results were obtained by splitting the data into two stages, a greening phase and a senescence phase, and applying separate fits to these two periods. By incorporating the dynamic irradiance regime, the model based on APARgreen rather than NDVI best captured the high variability of the fluxes and provided a more realistic depiction of missing fluxes. The strong correlations between these optical measurements and independently measured fluxes demonstrate the utility of integrating optical with flux measurements for gap filling, and provide a foundation for using remote sensing to extrapolate from the flux tower to larger regions (upscaling) for regional analysis of net carbon uptake by grassland ecosystems. Full article
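The LUE model parameterization described above reduces to a short chain of products: NDVI is calibrated to fAPARgreen, multiplied by incident PAR to give APARgreen, and scaled by an efficiency term to estimate the flux. The linear calibration coefficients and efficiency below are invented for illustration; the paper fits them per site and, per its findings, separately for the green-up and senescence phases.

```python
def lue_flux(ndvi, par, a, b, eps):
    """Light-use-efficiency model sketch.
    fAPAR_green is estimated from NDVI via a linear calibration
    (fAPAR = a + b * NDVI; coefficients hypothetical), then
    flux = eps * APAR_green = eps * fAPAR_green * PAR."""
    fapar_green = min(max(a + b * ndvi, 0.0), 1.0)  # clamp to physical range
    apar_green = fapar_green * par                   # absorbed PAR (green canopy)
    return eps * apar_green

# Hypothetical mid-season values: NDVI 0.7, PAR 1500 umol m-2 s-1.
flux = lue_flux(ndvi=0.7, par=1500.0, a=-0.2, b=1.2, eps=0.02)
```

Because PAR varies continuously while NDVI changes slowly, driving the model with APARgreen rather than NDVI alone is what lets it capture the high-frequency flux variability noted in the abstract.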
(This article belongs to the Special Issue Remote Sensing of Vegetation Structure and Dynamics)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Map of study location (<b>top left</b>), with detail of study area (<b>bottom left</b>), showing the E3 and E5 calibration sites. Further details of each site are provided on the right. Image credits: ESRI, DigitalGlobe, GeoEye, i-cubed, Earthstar Geographics, CNES/Airbus DS, USDA, USGS, AEX, Getmapping, Aerogrid, IGN, IGP, swisstopo, and the GIS User Community.</p>
Full article ">Figure 2
<p>Optical and biomass sampling design around each eddy covariance (EC) and phenology station (PS) location. Canopy-level NDVI<sub>680,800</sub> was sampled at regular intervals within a one hectare region surrounding each flux tower using 10 m grid spacing (approx. 100 samples; sampling locations represented by black dots). Biomass sampling and <span class="html-italic">f</span>APAR<sub>green</sub> calibration occurred at the locations indicated by numbered circles (1–12), with the sampling years indicated by each circle.</p>
Full article ">Figure 3
<p>Experimental design, summarizing steps used in derivation of NDVI and APAR<sub>green</sub> (arrows), for comparisons with net CO<sub>2</sub> fluxes (double lines). As indicated, APAR<sub>green</sub> was estimated from the different NDVI metrics by regression of measured <span class="html-italic">f</span>APAR<sub>green</sub> against proxy NDVI, and multiplying <span class="html-italic">f</span>APAR<sub>green</sub> with PAR (photosynthetic photon flux density, measured with PAR sensors). Because the proxy NDVI provided continuous measurements, it was used to derive APAR<sub>green</sub> for comparison against Net CO<sub>2</sub> flux. Further explanation can be found in the text.</p>
Full article ">Figure 4
<p>Sample diurnal course of proxy NDVI (from phenology station, circles) and net CO<sub>2</sub> flux (from eddy covariance, triangles), showing average values (open symbols) calculated for the 5-h midday period (arrow between two vertical lines). See Methods and <a href="#app1-remotesensing-08-00260" class="html-app">Supplemental Materials</a> for further details on averaging (aggregation). Data from site E3, 7 June 2012.</p>
Full article ">Figure 5
<p>Time series (2012–2013) of NDVI<sub>680,800</sub> collected from spectrometer with grid method, proxy NDVI from 2-channel sensors on phenology station (5-h averaged), MODIS NDVI (Aqua and Terra sensors), green biomass, at (<b>a</b>) E3 site; and (<b>b</b>) E5 site. Low (&lt;0) NDVI values due to winter snow cover.</p>
Full article ">Figure 6
<p>Correlations between green biomass and (<b>a</b>) NDVI<sub>680,800</sub>; (<b>b</b>) proxy NDVI (5 h averaged) and (<b>c</b>) MODIS NDVI for both E3 (black dots) and E5 sites (white dots) in 2012 and 2013. Error bars for green biomass are expressed as standard error of the mean. Resulting equations and correlations (<span class="html-italic">R</span><sup>2</sup> values) are indicated for each fit above.</p>
Full article ">Figure 7
<p>Relationship between proxy NDVI (5 h average, panel <b>a</b>); MODIS NDVI (panel <b>b</b>); APAR<sub>green</sub> (panel <b>c</b>) and filtered net CO<sub>2</sub> fluxes (5 h average) for the E3 site in 2012. Fits are shown for the whole season (dashed line), for the green-up phase (black dots, solid line), and for the senescence phase (white dots, solid line). <span class="html-italic">R</span><sup>2</sup> values are reported for each panel considering the whole season (dashed line), the green-up phase (black dots) and the senescence phase (white dots). <span class="html-italic">p</span> values are &lt;0.001 for all the relationships.</p>
Full article ">Figure 8
<p>Time series of observed and modeled net CO<sub>2</sub> fluxes based on proxy NDVI (panel <b>a</b>); MODIS NDVI (panel <b>b</b>); and APAR<sub>green</sub> (panel <b>c</b>). The original fluxes are shown as solid lines. Fluxes modeled using a single fit are shown as dashed lines, and fluxes modeled using two separate fits are shown as dotted lines. All results are for the E3 site in 2012.</p>
Full article ">
5330 KiB  
Article
Evaluation of an Airborne Remote Sensing Platform Consisting of Two Consumer-Grade Cameras for Crop Identification
by Jian Zhang, Chenghai Yang, Huaibo Song, Wesley Clint Hoffmann, Dongyan Zhang and Guozhong Zhang
Remote Sens. 2016, 8(3), 257; https://doi.org/10.3390/rs8030257 - 18 Mar 2016
Cited by 50 | Viewed by 10246
Abstract
Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related [...] Read more.
Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related studies. The objective of this research was to apply three commonly-used classification methods (unsupervised, supervised, and object-based) to three-band imagery with RGB (red, green, and blue bands) and four-band imagery with RGB and near-infrared (NIR) bands to evaluate the performance of a dual-camera imaging system for crop identification. Airborne images were acquired from a cropping area in Texas and mosaicked and georeferenced. The mosaicked imagery was classified using the three classification methods to assess the usefulness of NIR imagery for crop identification and to evaluate performance differences between the object-based and pixel-based methods. Image classification and accuracy assessment showed that the additional NIR band imagery improved crop classification accuracy over the RGB imagery and that the object-based method achieved better results with additional non-spectral image features. The results from this study indicate that the airborne imaging system based on two consumer-grade cameras used in this study can be useful for crop identification and other agricultural applications. Full article
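The value of the added NIR band reported in this abstract comes largely from vegetation contrast: with red and NIR available, a per-pixel NDVI can be computed and used as an extra classification feature. A minimal sketch, using illustrative reflectance values:

```python
def ndvi(nir, red):
    """Per-pixel NDVI from the NIR and red bands of the four-band imagery;
    returns 0 where the denominator vanishes."""
    s = nir + red
    return (nir - red) / s if s else 0.0

# A vegetated pixel reflects strongly in NIR and weakly in red,
# so its NDVI is high; soil and impervious surfaces score near zero.
v = ndvi(0.50, 0.08)
```

This is only the simplest NIR-derived feature; the object-based method in the paper additionally uses geometric and textural features per segment.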
(This article belongs to the Special Issue Remote Sensing in Precision Agriculture)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>The study area: (<b>a</b>) Geographic map of the study area, which is located in the bottom of Brazos basin and near College Station, Texas; and (<b>b</b>) image map of the study area from Google Earth.</p>
Full article ">Figure 2
<p>Imaging system and platform: (<b>a</b>) Components of the imaging system (two Nikon D90 cameras with Nikkor 24 mm lenses, two Nikon GP-1A GPS receivers, a 7-inch portable LCD video monitor, and a wireless remote shutter release); (<b>b</b>) cameras mounted on the right step of an Air Tractor AT-402B; (<b>c</b>) close-up picture showing the custom-made camera box; and (<b>d</b>) close-up picture of the cameras in the box.</p>
Full article ">Figure 3
<p>Normalized spectral sensitivity of two Nikon D90 cameras and relative reflectance of 10 land use and land cover (LULC) classes. The dotted lines represent different channels of the RGB camera (Nikon-color-r, Nikon-color-g, and Nikon-color-b) and the modified NIR camera (Nikon-nir-r, Nikon-nir-g, Nikon-nir-b, and Nikon-nir-mono). The solid lines represent the relative reflectance of 10 LULC classes.</p>
Full article ">Figure 4
<p>Mosaicked images for this study: (<b>a</b>) three-band RGB image, band 1 = blue, band 2 = green and band 3 = red; (<b>b</b>) NIR band image; and (<b>c</b>) CIR composite (NIR, red and green) extracted from the four-band image, band 1 = blue, band 2 = green, band 3 = red and band 4 = NIR.</p>
Full article ">Figure 5
<p>Image segmentation results: (<b>a</b>) three-band image segmentation with 970 image objects; and (<b>b</b>) four-band image segmentation with 950 image objects.</p>
Full article ">Figure 6
<p>Image classification results (ten-class): (<b>a</b>) unsupervised classification for three-band image (3US); (<b>b</b>) unsupervised classification for four-band image (4US), (<b>c</b>) supervised classification for three-band image (3S); (<b>d</b>) supervised classification for four-band image (4S); (<b>e</b>) object-based classification for three-band image (3OB); and (<b>f</b>) object-based classification for four-band image (4OB).</p>
Full article ">Figure 7
<p>Spectral separability between any two classes for the three-band and four-band images.</p>
Full article ">Figure 8
<p>Decision tree models for object-based classification. Abbreviations for the 10 classes: IM=impervious, BF=bare soil and fallow, GA=grass, FE=forest, WA=water, SB=soybean, WM=watermelon, CO=corn, SG=sorghum, and CT=cotton. <sup>1</sup> (n) is the number ID of each feature, ranging from (1) to (42), which is described in <a href="#remotesensing-08-00257-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 9
<p>Average kappa coefficients (APk6) for crop and non-crop classes by the three classification methods, and the differences between them. Unsupervised classification method (US), supervised classification method (S), object-based classification method (OB).</p>
Full article ">Figure 10
<p>Overall accuracy (<b>a</b>) and overall kappa (<b>b</b>) for six class groupings based on six classification types. Classification methods were represented as unsupervised classification for three-band image (3US), unsupervised classification for four-band image (4US), supervised classification for three-band image (3S), supervised classification for four-band image (4S), object-based classification for three-band image (3OB), and object-based classification for four-band image (4OB).</p>
Full article ">
9315 KiB  
Article
A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification
by Huai Yu, Wen Yang, Gui-Song Xia and Gang Liu
Remote Sens. 2016, 8(3), 259; https://doi.org/10.3390/rs8030259 - 17 Mar 2016
Cited by 64 | Viewed by 11717
Abstract
Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide the discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been [...] Read more.
Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor that encodes color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT); we call the result the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms. Full article
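The early/late fusion distinction this abstract builds on can be sketched in a few lines: early fusion concatenates per-region feature vectors before classification, while late fusion combines the per-class scores of separately trained classifiers. The histograms, scores, and weighting below are invented for illustration; the paper's CTS descriptor additionally encodes region co-occurrence over the CBPT hierarchy.

```python
def early_fusion(color_hist, texture_hist):
    """Early fusion: concatenate color and texture histograms into one
    descriptor before training a single classifier on the result."""
    return list(color_hist) + list(texture_hist)

def late_fusion(scores_color, scores_texture, w=0.5):
    """Late fusion: combine per-class scores from classifiers trained on
    each feature separately, here by a weighted average (weight hypothetical)."""
    return [w * c + (1 - w) * t for c, t in zip(scores_color, scores_texture)]

desc = early_fusion([0.2, 0.8], [0.5, 0.3, 0.2])      # one 5-D joint descriptor
scores = late_fusion([0.6, 0.4], [0.2, 0.8], w=0.75)  # blended 2-class scores
```

Combining both strategies under a region tree, as the paper proposes, lets the fused representation also carry the structural relations between regions that pure feature concatenation discards.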
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Flowchart of high-resolution satellite (HRS) image classification based on the CTS descriptor.</p>
Full article ">Figure 2
<p>Schematic map of the Binary Partition Tree (BPT) construction. (<b>a</b>) Original image with 4 regions; (<b>b</b>) The construction of the BPT.</p>
Full article ">Figure 3
<p>The segmentation of the HRS image via Color Binary Partition Tree (CBPT) (<b>a</b>) An HRS airport image; (<b>b</b>) Segmentation results at multiple scales.</p>
Full article ">Figure 4
<p>Two typical examples of each class of the 21-class UC Merced dataset.</p>
Full article ">Figure 5
<p>Confusion matrix for the descriptors based on the CTS descriptor on the UC Merced dataset.</p>
Full article ">Figure 6
<p>(<b>a</b>) The original image of Scene-TZ; (<b>b</b>) The geographic location of Scene-TZ.</p>
Full article ">Figure 7
<p>Classification result on the Scene-TZ Dataset. (<b>a</b>) A typical sample of each class in the 8-class Scene-TZ (<b>b</b>) Ground reference data (<b>c</b>) OF result (<b>d</b>) EP result (<b>e</b>) SSEP result (<b>f</b>) CTS result.</p>
Full article ">Figure 8
<p>Confusion matrix of the classification result based on the CTS on Scene-TZ.</p>
Full article ">Figure 9
<p>(<b>a</b>) The original image of Scene-TA; (<b>b</b>) The geographic location of Scene-TA.</p>
Figure 10
<p>Classification result on the Scene-TA Dataset (<b>a</b>) The original image (<b>b</b>) Ground reference data (<b>c</b>) OF result (<b>d</b>) EP result (<b>e</b>) SSEP result (<b>f</b>) CTS result.</p>
Figure 11
<p>Confusion matrix for the classification results based on the CTS descriptor on Scene-TA.</p>
11953 KiB  
Article
An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage
by Syed Ali Naqi Gilani, Mohammad Awrangjeb and Guojun Lu
Remote Sens. 2016, 8(3), 258; https://doi.org/10.3390/rs8030258 - 17 Mar 2016
Cited by 81 | Viewed by 10973
Abstract
The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, scene complexity, and data misalignment. To address these challenges, constraints on an object’s size, height, area, and orientation are generally imposed, which can adversely affect detection performance. Buildings that are small, shadowed, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery data. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of partially occluded building parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets, which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate per-area completeness of 83% to 93% with correctness above 95%, demonstrating the robustness of the approach. The absence of over-segmentation and many-to-many segmentation errors in the ISPRS data set indicates that the technique has high per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets and performs as well as or better than its counterparts on the ISPRS benchmark. Full article
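The per-area completeness and correctness figures quoted in the abstract follow the standard detection-evaluation definitions; a small sketch with invented pixel counts (not results from the paper):

```python
def completeness(tp, fn):
    # Per-area completeness: fraction of the reference building area detected.
    return tp / (tp + fn)

def correctness(tp, fp):
    # Per-area correctness: fraction of the detected area that is truly building.
    return tp / (tp + fp)

# Invented pixel counts for one scene: true positives, false positives,
# and false negatives (building pixels missed by the detector).
tp, fp, fn = 9300, 450, 900
comp = completeness(tp, fn)  # 9300 / 10200 ≈ 0.912
corr = correctness(tp, fp)   # 9300 / 9750  ≈ 0.954
```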
Show Figures

Graphical abstract
Figure 1
<p>Complex scenarios in the Australian data sets: Partly occluded and shadowed buildings.</p>
Figure 2
<p>Workflow of the proposed building detection and regularisation approach.</p>
Figure 3
<p>Sample data: (<b>a</b>) RGB Orthoimage; (<b>b</b>) The building mask.</p>
Figure 4
<p>(<b>a</b>) Grids overlaid on a sample image; (<b>b</b>) LiDAR points within a grid cell.</p>
Figure 5
<p>Image line extraction: (<b>a</b>) All classified lines on RGB Orthoimage; (<b>b</b>) All classified lines on the building mask.</p>
Figure 6
<p>(<b>a</b>) Connected components extracted from the building mask of test data; (<b>b</b>) Overlaying building mask and orthoimage (misalignment and squeezed boundaries); (<b>c</b>) Boundaries of candidate regions sketched on input orthoimage; and (<b>d</b>) Complex scenes representing false boundary delineation from (i) to (vi).</p>
Figure 7
<p>Line clustering process: (<b>a</b>) Pool of Candidate lines to a sample building; (<b>b</b>) Clusters found with their associated lines in different clustering colours.</p>
Figure 8
<p>Cell clustering process: (<b>a</b>) Cells collection for a candidate region (4 blue holes correspond to a cell); (<b>b</b>) Clustered cells (red); (<b>c</b>) Clusters marked as buildings (cyan boundary) and; (<b>d</b>–<b>h</b>) Boundary delineation samples.</p>
Figure 9
<p>Pixel-based enlargement process: (<b>a</b>) Candidate boundary pixel (<math display="inline"> <msub> <mi>P</mi> <mi>n</mi> </msub> </math>) selection; (<b>b</b>) Snapshot of the accumulated pixels; (<b>c</b>) Extended region; and (<b>d</b>) Detected outline and final building boundary.</p>
Figure 10
<p>(<b>a</b>) Building boundaries before enlargement (Under detected edges with black boundary and red region); (<b>b</b>) Final detected buildings after enlargement (cyan boundary).</p>
Figure 11
<p>Building regularisation process: (<b>a</b>) Candidate boundary region (yellow) from building mask and image extracted lines; (<b>b</b>) Clustered lines to candidate region and final detected building; (<b>c</b>) Boundary lines selection; (<b>d</b>) Edge selection and boundary marking; (<b>e</b>) Edge line estimation for unmarked boundary outline; and (<b>f</b>) 2D building footprint/regularised boundary.</p>
Figure 12
<p>Data sets: Vaihingen; (<b>a</b>) Area 1; (<b>b</b>) Area 2; and (<b>c</b>) Area 3; (<b>d</b>) Aitkenvale; (<b>e</b>) Hobart; (<b>f</b>) Eltham; and (<b>g</b>) Hervey Bay.</p>
Figure 13
<p>Building detection on the ISPRS German data set: (<b>a</b>–<b>c</b>) Area 1, (<b>d</b>–<b>f</b>) Area 2, and (<b>g</b>–<b>i</b>) Area 3. Column1: pixel-based evaluation, Column2: boundary before regularisation, and Column 3: regularised boundary.</p>
Figure 14
<p>(<b>a</b>,<b>b</b>) Building detection and regularisation on AV data set; (<b>c</b>) Building regularisation example in AV; (<b>d</b>,<b>e</b>) Building detection and regularisation on EL data set; and (<b>f</b>–<b>h</b>) Building regularisation examples in EL. Areas marked in (b) and (e) are magnified in (c) and (f–h&gt;), respectively.</p>
Figure 15
<p>Building detection and regularisation on Hobart (<b>a</b>,<b>b</b>) and Hervey Bay (<b>c</b>,<b>d</b>). Building detection examples after regularisation in: (<b>e</b>–<b>h</b>) HT data set (1.6 points/m<math display="inline"> <msup> <mrow/> <mn>2</mn> </msup> </math>) ; and (<b>i</b>) HB data set (12 points/m<math display="inline"> <msup> <mrow/> <mn>2</mn> </msup> </math>). Areas marked in (b) and (d) are magnified in (e) to (i).</p>
Figure 16
<p>Building examples from ISPRS and Australian data sets. Detected and regularised buildings on (<b>a</b>,<b>b</b>) VH Area 3; (<b>c</b>,<b>d</b>) AV; (<b>e</b>,<b>f</b>) VH Area 3; (<b>g</b>,<b>h</b>) VH Area 2; (<b>i</b>,<b>j</b>) EL; (<b>k</b>,<b>l</b>) HT; and (<b>m</b>,<b>n</b>) HB.</p>
5664 KiB  
Article
Ash Decline Assessment in Emerald Ash Borer Infested Natural Forests Using High Spatial Resolution Images
by Justin Murfitt, Yuhong He, Jian Yang, Amy Mui and Kevin De Mille
Remote Sens. 2016, 8(3), 256; https://doi.org/10.3390/rs8030256 - 17 Mar 2016
Cited by 30 | Viewed by 11446
Abstract
The invasive emerald ash borer (EAB, Agrilus planipennis Fairmaire) infests and eventually kills endemic ash trees and is currently spreading across the Great Lakes region of North America. Early detection of EAB infestation is critical to managing the spread of this pest. Using WorldView-2 (WV2) imagery, the goal of this study was to establish a remote sensing-based method for mapping ash trees undergoing various infestation stages. Based on field data collected in Southeastern Ontario, Canada, an ash health score on an interval scale ranging from 0 to 10 was established and related to multiple spectral indices. The WV2 image was segmented using multi-band watershed and multiresolution algorithms to identify individual tree crowns, with watershed achieving the higher segmentation accuracy. Ash trees were classified using the random forest classifier, resulting in a user’s accuracy of 67.6% and a producer’s accuracy of 71.4% when watershed segmentation was used. The best ash health score-spectral index model was then applied to the ash tree crowns to map ash health for the entire area. The ash health prediction map, with an overall accuracy of 70%, suggests that remote sensing has the potential to provide semi-automated, large-scale monitoring of EAB infestation. Full article
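The score-index modelling step can be sketched as an ordinary least-squares fit of the 0–10 ash health score against a spectral index such as NDVI (Figure 9 shows an NDVI-based relationship). The sample values, scale direction, and fitted coefficients below are invented for illustration, not the paper's model:

```python
import numpy as np

# Invented training pairs: crown-level NDVI versus field-assessed ash
# health score (0 = dead ... 10 = healthy; direction assumed here).
ndvi = np.array([0.45, 0.55, 0.62, 0.70, 0.78, 0.85])
score = np.array([1.0, 3.0, 4.5, 6.0, 8.0, 9.5])

# Fit score = a * NDVI + b by least squares.
a, b = np.polyfit(ndvi, score, 1)

def predict_health(ndvi_value):
    # Apply the fitted linear model to a tree crown's mean NDVI.
    return a * ndvi_value + b
```

The fitted model can then be applied to every classified ash crown to produce a wall-to-wall health map, as the paper does.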
(This article belongs to the Special Issue Remote Sensing of Forest Health)
Show Figures

Graphical abstract
Figure 1
<p>The three study sites within the Credit River Watershed ((<b>a</b>) is the north site; (<b>b</b>) is the middle site and (<b>c</b>) is the south site). The WorldView 2 image for the study area was displayed in true color composite (RGB: 321).</p>
Figure 2
<p>A collection of field pictures: (<b>a</b>) is ash tree bark as identified by the diamond-shaped pattern; (<b>b</b>) is an ash tree leaf as denoted by the tear drop shape and thin point; and (<b>c</b>) is a general photo of the mixed canopy.</p>
Figure 3
<p>Out of the observed 86 ash trees, 57 trees (red dots) were used for establishing the tree health prediction model and 29 trees (yellow dots) were used for model validation. Only the north site (<b>a</b>) and middle site (<b>b</b>) were visited for tree health data, and no data were collected for the south site because of its inaccessibility.</p>
Figure 4
<p>The process for hemispherical photography classification using CAN_EYE software. Eight images were imported into the software and the outer edges of the images were masked out. From this, the images were classified into percent vegetation and percent sky (gaps). The tree used in this figure has 87.3% vegetation cover and 12.7% open canopy.</p>
Figure 5
<p>A complete workflow for mapping ash trees and predicting EAB infestation stages.</p>
Figure 6
<p>The segmentation accuracy as a function of the segmentation threshold or scale. The lower the segmentation evaluation index, the better the particular scale or threshold value is at segmenting the image. (<b>a</b>) shows the results of the multiresolution segmentation and (<b>b</b>) shows the results of the watershed segmentation. The red dots in the lines indicate the optimal scales/threshold with the lowest SEI for that image.</p>
Figure 7
<p>(<b>a</b>–<b>f</b>) The size of segments created by watershed and multiresolution algorithms change as a function of the scale/threshold. Reference polygons are in red and segments are in yellow. (<b>a</b>) and (<b>d</b>) are segments created by a larger threshold/scale value (0.5 for watershed and 12 for multiresolution); (<b>b</b>) and (<b>e</b>) are the best segment threshold/scale values (0.25 for watershed and 8 for multiresolution); (<b>c</b>) and (<b>f</b>) are the smallest segments created by the two methods (0.1 for watershed and 4 for multiresolution).</p>
Figure 8
<p>Classification maps from the watershed (<b>a</b>) and multiresolution (<b>b</b>) segmentations.</p>
Figure 9
<p>The relationship between the vegetation index NDVI and the ash health score.</p>
Figure 10
<p>The ash health prediction map for the entire area.</p>
6053 KiB  
Article
Remote Sensing of Deformation of a High Concrete-Faced Rockfill Dam Using InSAR: A Study of the Shuibuya Dam, China
by Wei Zhou, Shaolin Li, Zhiwei Zhou and Xiaolin Chang
Remote Sens. 2016, 8(3), 255; https://doi.org/10.3390/rs8030255 - 17 Mar 2016
Cited by 43 | Viewed by 8397
Abstract
Settlement is one of the most important deformation characteristics of high concrete faced rockfill dams (CFRDs, >100 m). If this deformation is not measured correctly, a high CFRD can pose a great threat to the lives and property of people downstream; traditional monitoring approaches, however, have limitations in terms of durability, coverage, and efficiency. It has therefore become urgent to develop new monitoring techniques to complement or replace traditional approaches for monitoring the safety and operational status of high CFRDs. This study examines the Shuibuya Dam (up to 233.5 m in height) in China, which is currently the highest CFRD in the world. We used space-borne Interferometric Synthetic Aperture Radar (InSAR) time series to monitor the surface deformation of the Shuibuya Dam. Twenty-one ALOS PALSAR images spanning the period from 28 February 2007 to 11 March 2011 were used to map the spatial and temporal deformation of the dam. A high correlation of 0.93 between the InSAR and the in-situ monitoring results confirmed the reliability of the InSAR method; the deformation history derived from InSAR is also consistent with the in-situ settlement monitoring system. In addition, the InSAR results allow continuous investigation of dam deformation over a wide area that includes the entire dam surface as well as the surrounding area, offering a clear and continuous picture of the dam deformation. Full article
Show Figures

Graphical abstract
Figure 1
<p>The location of the study area and the Shuibuya reservoir: (<b>a</b>) The location of the Shuibuya reservoir (red triangle); (<b>b</b>) View of the Shuibuya concrete faced rockfill dam.</p>
Figure 2
<p>Geological map of the Shuibuya Dam. 1. Quaternary; 2–4. Permian; 5. Carbonic; 6–7. Devonian; 8. Silurian; 9. Geological boundary; 10. Fault; 11. The site of the dam.</p>
Figure 3
<p>Typical zoning and grading curve of the Shuibuya concrete faced rockfill dam. (<b>a</b>) Typical zoning; (<b>b</b>) grading curve.</p>
Figure 4
<p>Layout of the exterior settlement monitoring system (ESMS) of the Shuibuya.</p>
Figure 5
<p>External settlement recorded by the monitoring stations on the downstream surface of the dam. (<b>a</b>) Settlement records from stations WS01 to WS05; (<b>b</b>) Settlement records from stations WS06 to WS11.</p>
Figure 6
<p>Interferogram distribution in terms of the spatial and temporal baseline. Each black triangle represents one synthetic aperture radar (SAR) image and the black solid line between two triangles represents one interferogram.</p>
Figure 7
<p>The cropped study area and the average coherence map in radar geometry (not geocoded). (<b>a</b>) The amplitude image of the study area; (<b>b</b>) the average coherence map, where white indicates pixels with strong coherence and black indicates pixels with low or no coherence.</p>
Figure 8
<p>Vertical deformation velocity and root mean square (RMS) of the study area in radar geometry (not geocoded). (<b>a</b>) the vertical deformation velocity; (<b>b</b>) the RMS of the vertical deformation velocity.</p>
Figure 9
<p>Comparison of the velocity from the reference levelling and the estimated InSAR results. The locations in (<b>a</b>) corresponding to the levelling measurements are shown in <a href="#remotesensing-08-00255-f004" class="html-fig">Figure 4</a>, and the velocity of both measurements is shown in <a href="#remotesensing-08-00255-t002" class="html-table">Table 2</a>. Black line in (<b>b</b>) is a 1:1 line.</p>
Figure 10
<p>Settlement time series derived from InSAR (blue) and <span class="html-italic">in-situ</span> measurements (red) for stations 1–5 (Figures <b>a</b>–<b>e</b>). Each blue square indicates the average mean velocity of the selected buffer zone, and the error bar represents the standard deviation. Red triangles indicate the levelling measurements and the solid black line indicates the change in reservoir water level.</p>
Figure 11
<p>Settlement time series derived from InSAR (blue) and <span class="html-italic">in-situ</span> measurements (red) for stations 6–11 (Figures <b>a</b>–<b>f</b>). Each blue square indicates the average mean velocity of the selected buffer zone, and the error bar represents the standard deviation. Red triangles indicate the levelling measurements and the solid black line indicates the change in reservoir water level.</p>
2127 KiB  
Article
Spectral Dependent Degradation of the Solar Diffuser on Suomi-NPP VIIRS Due to Surface Roughness-Induced Rayleigh Scattering
by Xi Shao, Changyong Cao and Tung-Chang Liu
Remote Sens. 2016, 8(3), 254; https://doi.org/10.3390/rs8030254 - 17 Mar 2016
Cited by 34 | Viewed by 8209
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) uses a solar diffuser (SD) as its radiometric calibrator for reflective solar band calibration. The SD is made of Spectralon™ (a type of fluoropolymer) and was chosen for its controlled reflectance in the Visible/Near-Infrared/Shortwave-Infrared region and its near-Lambertian reflectance. On-orbit changes in VIIRS SD reflectance, as monitored by the Solar Diffuser Stability Monitor, showed faster degradation of SD reflectance for the 0.4 to 0.6 µm channels than for the longer-wavelength channels. Analysis of VIIRS SD reflectance data shows that this spectrally dependent degradation at short wavelengths can be explained with an SD Surface Roughness (length scale << wavelength) based Rayleigh Scattering (SRRS) model, the roughening being due to exposure to solar UV radiation and energetic particles. The characteristic length parameter of the SD surface roughness is derived from the long-term reflectance data of the VIIRS SD and changes at the level of tens of nanometers over the operational period of VIIRS. This estimated roughness length scale is consistent with the experimental result from radiation exposure of a fluoropolymer sample and validates the applicability of the Rayleigh scattering-based model. The model is also applicable to explaining the spectrally dependent degradation of the SDs on other satellites. This novel approach allows us to better understand the physical processes of SD degradation and is complementary to previous mathematics-based models. Full article
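The spectral behaviour described above follows from the Rayleigh regime: for roughness length scales much smaller than the wavelength, scattering losses scale as λ⁻⁴, so blue channels degrade far faster than near-infrared ones. A toy calculation illustrating that scaling (the prefactor k is invented, not the paper's fitted σs·l parameter):

```python
def srrs_loss(wavelength_nm, k=1.0e9):
    # Fractional reflectance loss ~ k / lambda^4 in the Rayleigh regime
    # (roughness length scale << wavelength); k is an invented prefactor.
    return k / wavelength_nm ** 4

loss_m1 = srrs_loss(412.0)   # blue channel (VIIRS M1 is near 412 nm)
loss_m7 = srrs_loss(865.0)   # NIR channel (VIIRS M7 is near 865 nm)
ratio = loss_m1 / loss_m7    # (865/412)^4 ≈ 19.4, i.e., M1 degrades ~19x faster
```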
(This article belongs to the Collection Visible Infrared Imaging Radiometers and Applications)
Show Figures

Graphical abstract
Figure 1
<p>Solar diffuser (SD) and solar diffuser stability monitor (SDSM) used for onboard calibration for VIIRS RSBs (from JPSS VIIRS SDR ATBD).</p>
Figure 2
<p>(<b>a1</b>) Spacecraft solar zenith angle and (<b>a2</b>) solar light illumination of VIIRS SD as seen by I1 channel detectors (overlaid together) over multiple orbits; and (<b>b</b>) zoom-in view of I1 view of SD.</p>
Figure 3
<p>(<b>a</b>) VIIRS SD degradation over time as revealed by H factor for M1–M7 band; and (<b>b</b>) spectral dependence of VIIRS SD reflectance change over time.</p>
Figure 4
<p>Illustration of statistical parameters used in characterizing surface roughness.</p>
Figure 5
<p>Applying Bennett and Porteus (Equation (4)) and Surface Roughness based Rayleigh Scattering (SRRS) model to fit for spectral dependent degradation of VIIRS SD.</p>
Figure 6
<p>Fitting of VIIRS SD spectral reflectance with the SRRS reflectance correction model (Equation (8)) at several instants during 2.5 years operation of VIIRS.</p>
Figure 7
<p>(<b>a</b>) Trending of VIIRS SD surface roughness characteristic parameter <math display="inline"> <semantics> <mrow> <msub> <mi>σ</mi> <mi mathvariant="normal">s</mi> </msub> <mi>l</mi> </mrow> </semantics> </math>; and (<b>b</b>) evolution of correlation coefficient (<b>b1</b>) and Root-Mean-Square (RMS) error (<b>b2</b>) of the spectral fitting using Equation (8). The RMS errors are calculated for short wavelengths (&lt;600 nm), long wavelengths (&gt;600 nm) and whole spectral range combined, respectively.</p>
5959 KiB  
Article
Estimating Evapotranspiration of an Apple Orchard Using a Remote Sensing-Based Soil Water Balance
by Magali Odi-Lara, Isidro Campos, Christopher M. U. Neale, Samuel Ortega-Farías, Carlos Poblete-Echeverría, Claudio Balbontín and Alfonso Calera
Remote Sens. 2016, 8(3), 253; https://doi.org/10.3390/rs8030253 - 17 Mar 2016
Cited by 66 | Viewed by 10582
Abstract
The main goal of this research was to estimate the actual evapotranspiration (ETc) of a drip-irrigated apple orchard located in the semi-arid region of Talca Valley (Chile) using a remote sensing-based soil water balance model. The methodology to estimate ETc is a modified version of the Food and Agriculture Organization of the United Nations (FAO) dual crop coefficient approach, in which the basal crop coefficient (Kcb) was derived from the soil adjusted vegetation index (SAVI) calculated from satellite images and incorporated into a daily soil water balance in the root zone. A linear relationship between the Kcb and SAVI was developed for the apple orchard: Kcb = 1.82·SAVI − 0.07 (R2 = 0.95). The methodology was applied during two growing seasons (2010–2011 and 2012–2013), and ETc was evaluated using latent heat fluxes (LE) from an eddy covariance system. The results indicate that the remote sensing-based soil water balance estimated ETc reasonably well over the two growing seasons. The root mean square errors (RMSE) between the measured and simulated ETc values during 2010–2011 and 2012–2013 were, respectively, 0.78 and 0.74 mm·day−1, corresponding to a relative error of 25%. The index of agreement (d) values were, respectively, 0.73 and 0.90. Agreement was better still for weekly ETc. The proposed methodology could be considered a useful tool for scheduling irrigation and driving the estimation of water requirements over large areas for apple orchards. Full article
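The Kcb-SAVI relation reported in the abstract plugs directly into the FAO-56 dual crop coefficient form ETc = (Ks·Kcb + Ke)·ETo. In the sketch below, the Kcb equation is the one given above, while the Ks (water stress), Ke (soil evaporation) and ETo values are invented inputs for illustration:

```python
def kcb_from_savi(savi):
    # Basal crop coefficient from SAVI, as fitted for this apple orchard.
    return 1.82 * savi - 0.07

def daily_etc(savi, eto, ks=1.0, ke=0.10):
    # FAO-56 dual crop coefficient: transpiration term (Ks * Kcb) plus a
    # soil-evaporation term (Ke), scaled by reference ET. In the paper both
    # coefficients come from the daily soil water balance; here they are fixed.
    return (ks * kcb_from_savi(savi) + ke) * eto

kcb = kcb_from_savi(0.50)       # 1.82 * 0.50 - 0.07 = 0.84
etc = daily_etc(0.50, eto=5.0)  # (0.84 + 0.10) * 5.0 = 4.7 mm/day
```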
(This article belongs to the Special Issue Remote Sensing in Precision Agriculture)
Show Figures

Graphical abstract
Figure 1
<p>Location of the experimental apple orchard, the eddy covariance flux station and the surrounding fields.</p>
Figure 2
<p>Daily reference evapotranspiration (FAO-56 Penman–Monteith ET<sub>o</sub>), rainfall and irrigation during the 2010–2011 and 2012–2013 growing seasons.</p>
Figure 3
<p>Overall comparison between sensible heat flux plus latent heat flux (LE + H) <span class="html-italic">versus</span> net radiation minus soil heat flux (Rn − G) in a drip-irrigated apple orchard during the 2010–11 and 2012–13 growing seasons.</p>
Figure 4
<p>Temporal evolution of measured and modeled evapotranspiration for the drip-irrigated apple orchard (ET<sub>c</sub>) during the 2010–11 and 2012–13 seasons.</p>
Figure 5
<p>Evolution of the soil adjusted vegetation index (SAVI) and the basal crop coefficient from the eddy covariance system (EC) for the drip‑irrigated apple orchard during the 2010–11 and 2012–13 growing seasons. Shaded areas represent the midseason growth stages. Unfilled squares represent the period with apple-weeds cover mixture.</p>
Figure 6
<p>K<sub>cb</sub>-SAVI linear relationship for the studied apple orchard. K<sub>cb</sub> values at effective full cover correspond to the apple canopy without weeds.</p>
Figure 7
<p>Daily and weekly values of measured and modeled actual evapotranspiration (ET<sub>c</sub>) for both seasons. Modeled ET<sub>c</sub> values were simulated using the remote sensing-based water balance models.</p>
4088 KiB  
Article
Assessing Earthquake-Induced Tree Mortality in Temperate Forest Ecosystems: A Case Study from Wenchuan, China
by Hongcheng Zeng, Tao Lu, Hillary Jenkins, Robinson I. Negrón-Juárez and Jiceng Xu
Remote Sens. 2016, 8(3), 252; https://doi.org/10.3390/rs8030252 - 17 Mar 2016
Cited by 6 | Viewed by 6850
Abstract
Earthquakes can produce significant tree mortality and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed in this study by a synthetic approach that integrates field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and by using a detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg∙C was lost in the Wenchuan earthquake, offsetting 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter and declined rapidly with distance from the fault zone. This suggests that earthquakes represent a significant driver of forest carbon dynamics, and that earthquake-induced biomass carbon loss should be included in estimates of forest carbon budgets. Full article
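A minimal Monte Carlo sketch in the spirit of the paper's uncertainty propagation (Figure 5): repeatedly sample an uncertain belowground:aboveground biomass ratio and average the resulting total loss. The lognormal parameters echo the deciduous case in Figure 4a (log-mean 3.3, log-STD 0.3), but interpreting the ratio as a percentage and the one-term propagation are illustrative assumptions, not the paper's full routine:

```python
import math
import random

def simulate_total_loss(above_loss_tgc, n=20000, seed=1):
    # Propagate uncertainty in the root:shoot ratio through the total
    # committed-loss estimate (aboveground loss plus implied belowground loss).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Ratio drawn on a log scale; /100 assumes the sampled value is a percent.
        ratio = math.exp(rng.gauss(3.3, 0.3)) / 100.0
        total += above_loss_tgc * (1.0 + ratio)
    return total / n  # Monte Carlo mean of the total loss

mean_loss = simulate_total_loss(1.0)  # per unit of aboveground loss
```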
Show Figures

Graphical abstract
Figure 1
<p>The location of Wenchuan earthquake and field sample plots.</p>
Figure 2
<p>The typical spectral reflectance of the four endmembers used in Spectral Mixture Analysis.</p>
Figure 3
<p>The scatter plots of fraction of green vegetation (GV) in summer and winter (<b>a</b>) and the forest utility function (<b>b</b>). Points C and D represent ideal evergreen and deciduous forests, respectively; line CD represents the ideal forest line. AC and AD represent relative probabilities for a pixel to contain evergreen or deciduous forest. A pixel is assumed to be evergreen forest with a probability of AD/(AC + AD), and deciduous with a probability of AC/(AC + AD).</p>
Figure 4
<p>The distribution of biomass ratio between belowground and aboveground: (<b>a</b>) deciduous with log-mean of 3.3 and log-STD 0.3; and (<b>b</b>) evergreen with log-mean of 2.8 and log-STD 0.3.</p>
Figure 5
<p>The Monte Carlo simulation routine for estimating biomass loss.</p>
Figure 6
<p>The relationship between field measured forest biomass loss rate and: Landsat TM ΔGV (<b>a</b>); and Landsat TM and MODIS ΔGV (<b>b</b>).</p>
Figure 7
<p>MODIS-derived ΔGV (<b>a</b>); and the relationship between MODIS-derived ΔGV and distance away from Seismic Intensity Isoline 10 (<b>b</b>). Positive ΔGV represents forest loss, whereas negative indicates intact or improved forest conditions. A negative distance in (<b>b</b>) indicates pixels that were located within the seismic intensity isoline, <span class="html-italic">i.e.</span>, the seismic intensity of these pixels was greater than 10.</p>
Figure 8
<p>Mean value (<b>a</b>) and variance value (<b>b</b>) of forest biomass loss from the Wenchuan earthquake at the pixel scale.</p>
Figure 9
<p>The relationship between ΔGV and terrain slope (<b>a</b>); tree’s diameter at breast height (<b>b</b>); and tree’s height (<b>c</b>). The brown and green points represent deciduous and evergreen forests, respectively.</p>
2941 KiB  
Article
Correction of Incidence Angle and Distance Effects on TLS Intensity Data Based on Reference Targets
by Kai Tan and Xiaojun Cheng
Remote Sens. 2016, 8(3), 251; https://doi.org/10.3390/rs8030251 - 16 Mar 2016
Cited by 79 | Viewed by 9541
Abstract
The original intensity value recorded by terrestrial laser scanners is influenced by multiple variables, among which incidence angle and distance play a crucial and dominant role. Further studies on incidence angle and distance effects are required to improve the accuracy of currently available methods and to implement these methods in practical applications. In this study, the effects of incidence angle and distance on intensity data of the Faro Focus3D 120 terrestrial laser scanner are investigated. A new method is proposed to eliminate the incidence angle and distance effects. The proposed method is based on the linear interpolation of the intensity values of reference targets previously scanned at various incidence angles and distances. Compared with existing methods, a significant advantage of the proposed method is that estimating the specific function forms of incidence angle versus intensity and distance versus intensity is no longer necessary; these are canceled out when the scanned and reference targets are measured at the same incidence angle and distance. Results imply that the proposed method has high accuracy and simplicity in eliminating incidence angle and distance effects and can significantly reduce the intensity variations caused by these effects on homogeneous surfaces. Full article
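The correction idea can be sketched as follows: interpolate the reference target's intensity at the scanned point's incidence angle and distance, then rescale the measured intensity to a standard geometry so the angle and distance effects cancel. The grid of reference intensities below is invented for illustration; in the paper these values come from the reference-target scans:

```python
import numpy as np

angles = np.array([0.0, 20.0, 40.0, 60.0])  # incidence angles (degrees)
dists = np.array([5.0, 10.0, 15.0, 20.0])   # distances (metres)
# Invented reference intensities for one target, indexed [angle, distance].
ref = np.array([[2000.0, 1900.0, 1800.0, 1700.0],
                [1900.0, 1810.0, 1710.0, 1620.0],
                [1700.0, 1620.0, 1530.0, 1450.0],
                [1400.0, 1330.0, 1260.0, 1190.0]])

def ref_intensity(theta, r):
    # Bilinear interpolation of the reference intensity at (theta, r).
    i = int(np.clip(np.searchsorted(angles, theta) - 1, 0, len(angles) - 2))
    j = int(np.clip(np.searchsorted(dists, r) - 1, 0, len(dists) - 2))
    ta = (theta - angles[i]) / (angles[i + 1] - angles[i])
    td = (r - dists[j]) / (dists[j + 1] - dists[j])
    top = (1 - td) * ref[i, j] + td * ref[i, j + 1]
    bot = (1 - td) * ref[i + 1, j] + td * ref[i + 1, j + 1]
    return (1 - ta) * top + ta * bot

def correct_intensity(i_meas, theta, r, theta0=0.0, r0=5.0):
    # Rescale a measured intensity to the standard geometry (theta0, r0):
    # the unknown angle/distance response cancels in the reference ratio,
    # so no explicit functional form for either effect is needed.
    return i_meas * ref_intensity(theta0, r0) / ref_intensity(theta, r)
```

Because only ratios of interpolated reference intensities enter the correction, this reproduces the paper's key advantage: no specific angle-intensity or distance-intensity function needs to be estimated.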
Show Figures

Graphical abstract
Figure 1
<p>Reflected laser shots by extended Lambertian targets are uniformly scattered into a hemisphere. <math display="inline"> <semantics> <mrow> <mi mathvariant="normal">n</mi> </mrow> </semantics> </math> is the normal vector, and <math display="inline"> <semantics> <mi mathvariant="sans-serif">θ</mi> </semantics> </math> is the incidence angle. In most cases of TLS (terrestrial laser scanning), the emitter and receiver coincide.</p>
Figure 2
<p>Geometric relationship of scan time. <math display="inline"> <semantics> <mrow> <mi mathvariant="normal">n</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi mathvariant="normal">n</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="normal">n</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="normal">n</mi> <mn>3</mn> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math> is the normal vector estimated by computing the best-fitting plane to a neighborhood of points surrounding the point of interest <math display="inline"> <semantics> <mrow> <mi mathvariant="normal">S</mi> <mrow> <mo>(</mo> <mrow> <mi mathvariant="normal">x</mi> <mo>,</mo> <mi mathvariant="normal">y</mi> <mo>,</mo> <mi mathvariant="normal">z</mi> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math>. <math display="inline"> <semantics> <mrow> <mi mathvariant="normal">O</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi mathvariant="normal">x</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="normal">y</mi> <mn>0</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="normal">z</mi> <mn>0</mn> </msub> </mrow> <mo>)</mo> </mrow> <mo> </mo> </mrow> </semantics> </math> is the scanner center. <math display="inline"> <semantics> <mi mathvariant="sans-serif">θ</mi> </semantics> </math> is the incidence angle and <math display="inline"> <semantics> <mi mathvariant="normal">R</mi> </semantics> </math> is the distance.</p>
Figure 3
<p>Instruments and equipment utilized in the experiments. Four Lambertian targets with a size of 10 cm × 10 cm and reflectance of 20%, 40%, 60%, and 80% were mounted on a board that can rotate horizontally through a goniometer. The instrument used was Faro Focus<sup>3D</sup> 120.</p>
Figure 4
<p>(<b>a</b>) Original intensity with respect to incidence angle at a distance of 5 m for the four reference targets; (<b>b</b>) Original intensity with respect to distance at an incidence angle of 0° for the four reference targets.</p>
Figure 5
<p>Measured and interpolated intensity values for the four reference targets at scanning geometries from A to L.</p>
Figure 6
<p>Relative correction results for the four targets at scanning geometries from A to L. (<b>a</b>) The 80% target is used as a reference; (<b>b</b>) The 60% target is used as a reference; (<b>c</b>) The 40% target is used as a reference; (<b>d</b>) The 20% target is used as a reference.</p>
Figure 7
<p>(<b>a</b>) Original intensity image of the white lime wall; (<b>b</b>) Original intensity image of the building facade with gray bricks; (<b>c</b>) Original intensity image of the cement road; (<b>d</b>) Original intensity values of the sampled regions of the three surfaces; (<b>e</b>) Distances of the sampled regions of the three surfaces; (<b>f</b>) Cosine of incidence angles of the sampled regions of the three surfaces.</p>
Figure 8
<p>Relatively corrected intensity values of the sampled regions of the three surfaces. (<b>a</b>) The 80% target is used as a reference; (<b>b</b>) The 60% target is used as a reference; (<b>c</b>) The 40% target is used as a reference; (<b>d</b>) The 20% target is used as a reference.</p>
Figure 9
<p>Absolutely corrected intensity values of the sampled regions of the three surfaces. (<b>a</b>) The 80% target is used as a reference; (<b>b</b>) The 60% target is used as a reference; (<b>c</b>) The 40% target is used as a reference; (<b>d</b>) The 20% target is used as a reference.</p>
7202 KiB  
Article
Nonlocal Total Variation Subpixel Mapping for Hyperspectral Remote Sensing Imagery
by Ruyi Feng, Yanfei Zhong, Yunyun Wu, Da He, Xiong Xu and Liangpei Zhang
Remote Sens. 2016, 8(3), 250; https://doi.org/10.3390/rs8030250 - 16 Mar 2016
Cited by 22 | Viewed by 6996
Abstract
Subpixel mapping is a method of enhancing the spatial resolution of images, which involves dividing a mixed pixel into subpixels and assigning each subpixel to a definite land-cover class. Traditionally, subpixel mapping is based on the assumption of spatial dependence, and the spatial correlation information among pixels and subpixels is considered in the prediction of the spatial locations of land-cover classes within the mixed pixels. In this paper, a novel subpixel mapping method for hyperspectral remote sensing imagery based on a nonlocal method, namely nonlocal total variation subpixel mapping (NLTVSM), is proposed, which uses the nonlocal self-similarity prior to improve the performance of the subpixel mapping task. Differing from existing spatial regularization subpixel mapping techniques, NLTVSM uses the nonlocal total variation as a spatial regularizer to exploit the similar patterns and structures in the image. In this way, the proposed method can obtain an optimal subpixel mapping result and accuracy by considering the nonlocal spatial information. Compared with classical and state-of-the-art subpixel mapping approaches, experimental results using a simulated hyperspectral image, two synthetic hyperspectral remote sensing images, and a real hyperspectral image confirm that the proposed algorithm obtains better results in both visual and quantitative evaluations. Full article
(This article belongs to the Special Issue Spatial Enhancement of Hyperspectral Data and Applications)
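The nonlocal regularizer at the heart of NLTVSM can be illustrated with a minimal sketch. The patch size, the similarity bandwidth `h_sim`, and all function names below are assumptions for illustration, not the authors' implementation, and the brute-force loops are only meant for tiny arrays:

```python
import math

def patch(img, r, c, rad=1):
    """Flattened (2*rad+1)^2 patch around (r, c), clamped at the borders."""
    h, w = len(img), len(img[0])
    return [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
            for dr in range(-rad, rad + 1) for dc in range(-rad, rad + 1)]

def nonlocal_weights(img, r, c, h_sim=0.5):
    """Nonlocal-means style weights: pixels whose surrounding patches look
    alike get large weights, regardless of how far apart they are."""
    pp = patch(img, r, c)
    w = {}
    for rr in range(len(img)):
        for cc in range(len(img[0])):
            if (rr, cc) == (r, c):
                continue
            pq = patch(img, rr, cc)
            d2 = sum((a - b) ** 2 for a, b in zip(pp, pq)) / len(pp)
            w[(rr, cc)] = math.exp(-d2 / h_sim ** 2)
    return w

def nonlocal_tv(img):
    """Nonlocal TV energy: sum over pixels p of the weighted gradient norm
    sqrt(sum_q w(p,q) * (u_p - u_q)^2). Low for self-similar images."""
    total = 0.0
    for r in range(len(img)):
        for c in range(len(img[0])):
            w = nonlocal_weights(img, r, c)
            total += math.sqrt(sum(wq * (img[r][c] - img[q[0]][q[1]]) ** 2
                                   for q, wq in w.items()))
    return total
```

A self-similar class map (constant, or with repeating patterns) yields a low nonlocal TV energy, which is why using this term as a regularizer favors subpixel arrangements that repeat the image's own structures.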
Show Figures

Graphical abstract
Figure 1
<p>Simple example of subpixel mapping (3 × 3 low-resolution pixels, scale factor 5, three land-cover classes). (<b>a</b>) Fraction image; (<b>b</b>) optimal distribution; and (<b>c</b>) inferior distribution.</p>
Figure 2
<p>Flowchart of the proposed NLTVSM method.</p>
Figure 3
<p>The basic principle of the nonlocal means model.</p>
Figure 4
<p>Simulated image. (<b>a</b>) Original simulated hyperspectral image (400 × 400); (<b>b</b>) reference classification image; and (<b>c</b>) fractional abundances (100 × 100).</p>
Figure 5
<p>Washington DC Mall HYDICE image; (<b>a</b>) original Washington DC Mall hyperspectral image (200 × 300); (<b>b</b>) the ROIs; (<b>c</b>) the reference classification image; and (<b>d</b>) fractional abundances (50 × 75).</p>
Figure 6
<p>HYDICE Urban image; (<b>a</b>) original HYDICE Urban image (300 × 300); (<b>b</b>) the ROIs; (<b>c</b>) the reference classification image; and (<b>d</b>) fractional abundances (75 × 75).</p>
Figure 7
<p>Nuance dataset. (<b>a</b>) Original Nuance hyperspectral image (50 × 50); (<b>b</b>) fractional abundances (50 × 50); (<b>c</b>) original HR color image (150 × 150); (<b>d</b>) the ROIs; and (<b>e</b>) the reference classification map.</p>
Figure 8
<p>The subpixel mapping results for the simulated image. (<b>a</b>) Reference image; (<b>b</b>) SASM; (<b>c</b>) PSSM; (<b>d</b>) GASM; (<b>e</b>) GSM; (<b>f</b>) TVSM; and (<b>g</b>) NLTVSM.</p>
Figure 9
<p>The subpixel mapping results for the Washington DC Mall HYDICE image. (<b>a</b>) Reference classification image; (<b>b</b>) SASM; (<b>c</b>) PSSM; (<b>d</b>) GASM; (<b>e</b>) GSM; (<b>f</b>) TVSM; and (<b>g</b>) NLTVSM.</p>
Figure 10
<p>The subpixel mapping results for the HYDICE Urban image. (<b>a</b>) Reference classification image; (<b>b</b>) SASM; (<b>c</b>) PSSM; (<b>d</b>) GASM; (<b>e</b>) GSM; (<b>f</b>) TVSM; and (<b>g</b>) NLTVSM.</p>
Figure 11
<p>The subpixel mapping results for the Nuance hyperspectral image. (<b>a</b>) Reference classification image; (<b>b</b>) SASM; (<b>c</b>) PSSM; (<b>d</b>) GASM; (<b>e</b>) GSM; (<b>f</b>) TVSM; and (<b>g</b>) NLTVSM.</p>
Figure 12
<p>Sensitivity analysis for the spatial regularization parameter <span class="html-italic">λ</span>. (<b>a</b>) Washington DC Mall image; and (<b>b</b>) real Nuance image.</p>
Figure 13
<p>The NLTVSM results based on different fractional abundance images. (<b>a</b>) Reference classification image; (<b>b</b>) SUnSAL (OA = 73.63%, Kappa = 0.62); (<b>c</b>) FCLS (OA = 73.77%, Kappa = 0.63); and (<b>d</b>) FCLS and P-SVM (OA = 78.11%, Kappa = 0.69).</p>
Figure 14
<p>The subpixel mapping results for the Washington DC Mall HYDICE image with a shade endmember in the fraction image. (<b>a</b>) Reference classification image; (<b>b</b>) SASM; (<b>c</b>) PSSM; (<b>d</b>) GASM; (<b>e</b>) GSM; (<b>f</b>) TVSM; and (<b>g</b>) NLTVSM.</p>
4115 KiB  
Article
Investigation and Mitigation of the Crosstalk Effect in Terra MODIS Band 30
by Junqiang Sun, Sriharsha Madhavan and Menghua Wang
Remote Sens. 2016, 8(3), 249; https://doi.org/10.3390/rs8030249 - 16 Mar 2016
Cited by 17 | Viewed by 4859
Abstract
It has been previously reported that thermal emissive bands (TEB) 27–29 of the Terra (T-) MODerate resolution Imaging Spectroradiometer (MODIS) are significantly affected by electronic crosstalk. A linear theory of the electronic crosstalk effect was formulated and successfully characterized the effect via the use of lunar observations as viable inputs. In this paper, we report the characterization and mitigation of the electronic crosstalk for T-MODIS band 30 using the same methodology. Although the phenomena of electronic crosstalk have been well documented in previous works, what is novel for band 30 is the need to also apply the crosstalk correction to the non-linear term in the calibration coefficients. That this was unnecessary in earlier work demonstrates the distinct behavior of band 30 and, at the same time, the overall correctness of the characterization formulation. For proper results, the crosstalk correction is applied to the band 30 calibration coefficients, including the non-linear term, and to the earth view radiance. We demonstrate that the crosstalk correction achieves a long-term radiometric correction of approximately 1.5 K for desert targets and 1.0 K for ocean scenes. Significant striping removal in the Baja Peninsula earth view imagery is also demonstrated, owing to the amelioration of detector differences caused by the crosstalk effect. Similarly significant improvement in detector differences is shown for the selected ocean and desert targets over the entire mission history. In particular, band 30 detector 8, which had been flagged as "out of family", is restored by the removal of the crosstalk contamination. With the correction achieved, the science applications based on band 30 can be significantly improved. The linear formulation, the characterization methodology, and the crosstalk correction coefficients derived using lunar observations are once again demonstrated to work remarkably well. Full article
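The linear crosstalk model described above amounts to a per-pixel subtraction of the sending bands' contributions. The sketch below uses made-up coefficients purely for illustration; in the paper the actual coefficients are derived from lunar observations, and the same subtraction is applied to the calibration coefficients (including the non-linear a2 term) and to the earth view radiance:

```python
# Hypothetical crosstalk coefficients c_b for sending bands 27-29; the real
# values are derived from lunar observations, not assumed.
SENDING_BANDS = (27, 28, 29)
COEFF = {27: 0.012, 28: -0.007, 29: 0.019}

def contaminate(true_b30, sending):
    """Forward model of linear crosstalk: dn'_30 = dn_30 + sum_b c_b * dn_b."""
    return [t + sum(COEFF[b] * sending[b][i] for b in SENDING_BANDS)
            for i, t in enumerate(true_b30)]

def correct(measured_b30, sending):
    """Subtract the sending-band contributions to recover the band 30 signal."""
    return [m - sum(COEFF[b] * sending[b][i] for b in SENDING_BANDS)
            for i, m in enumerate(measured_b30)]
```

Because the model is linear, applying `correct` to a contaminated signal with the same coefficients recovers the original signal exactly; the practical difficulty lies in estimating the coefficients, not in applying them.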
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) instrument setup with on-board calibrators; (<b>b</b>) v-grooved BlackBody controlled using various thermistors.</p>
Figure 2
<p>MODIS LWIR focal plane.</p>
Figure 3
<p>Lunar response of Terra band 30 detector 1: (<b>a</b>) 15 December 2000; (<b>b</b>) 11 December 2014.</p>
Figure 4
<p>Terra MODIS band 30 crosstalk coefficients: (<b>a</b>) sending band 27; (<b>b</b>) sending band 28; (<b>c</b>) sending band 29.</p>
Figure 5
<p>Mirror side difference of the calibration offset term obtained from the fitting without the constraint that <span class="html-italic">a</span><sub>0</sub> is kept 0 for band 30: (<b>a</b>) before the crosstalk correction; (<b>b</b>) after the crosstalk correction.</p>
Figure 6
<p>The <span class="html-italic">a</span><sub>2</sub> calibration terms for band 30: (<b>a</b>) before the crosstalk correction; (<b>b</b>) after the crosstalk correction.</p>
Figure 7
<p>Terra band 30 <span class="html-italic">b1</span> for individual detectors: (<b>a</b>) no correction for both WUCD and routine BB calibration; (<b>b</b>) the correction only applied to routine BB calibration; and (<b>c</b>) the correction applied to both WUCD and routine BB calibration.</p>
Figure 8
<p>Crosstalk induced striping and the striping removal in BT of Terra MODIS band 30 at Baja peninsula in 2012: (<b>a</b>) C6 BT before crosstalk correction; (<b>b</b>) C6 BT with crosstalk correction but no crosstalk correction applied in <span class="html-italic">a</span><sub>0</sub> and <span class="html-italic">a</span><sub>2</sub>; (<b>c</b>) C6 BT with crosstalk correction applied to <span class="html-italic">a</span><sub>0</sub>, <span class="html-italic">a</span><sub>2</sub>, <span class="html-italic">b</span><sub>1</sub>, and EV; and (<b>d</b>) profiles along track direction for all three cases.</p>
Figure 9
<p>Terra MODIS band 30 C6 brightness temperature at Pacific Ocean.</p>
Figure 10
<p>Crosstalk correction for Terra MODIS band 30 at Pacific Ocean.</p>
Figure 11
<p>Terra MODIS band 30 band-averaged brightness temperature at Pacific Ocean before and after crosstalk correction.</p>
Figure 12
<p>Terra MODIS band 30 detector difference at Pacific Ocean: (<b>a</b>) before crosstalk correction; (<b>b</b>) after crosstalk correction.</p>
Figure 13
<p>Terra MODIS band 30 C6 brightness temperature at Libya 1.</p>
Figure 14
<p>Crosstalk correction for Terra MODIS band 30 at Libya 1.</p>
Figure 15
<p>Terra MODIS band 30 band-averaged brightness temperature at Libya 1 before and after crosstalk correction.</p>
Figure 16
<p>Terra MODIS band 30 detector difference at Libya 1: (<b>a</b>) before crosstalk correction; (<b>b</b>) after crosstalk correction.</p>
3247 KiB  
Article
A Two-Source Model for Estimating Evaporative Fraction (TMEF) Coupling Priestley-Taylor Formula and Two-Stage Trapezoid
by Hao Sun
Remote Sens. 2016, 8(3), 248; https://doi.org/10.3390/rs8030248 - 16 Mar 2016
Cited by 25 | Viewed by 6316
Abstract
Remotely sensed land surface temperature and fractional vegetation coverage (LST/FVC) space has been widely used in modeling and partitioning land surface evaporative fraction (EF), which is important in managing water resources. However, most such models are based on the conventional trapezoid and simply determine the wet edge as the air temperature (Ta) or the lowest LST value in an image. We develop a new Two-source Model for estimating EF (TMEF) based on a two-stage trapezoid coupled with an extension of the Priestley-Taylor formula. Latent heat flux on the wet edge is calculated with the Priestley-Taylor formula, whereas that on the dry edge is set to 0. The wet and dry edges are then determined by solving radiation budget and energy balance equations. The model was evaluated by comparing it with two other models based on the conventional trapezoid (the Two-source Trapezoid Model for Evapotranspiration (TTME) and a One-source Trapezoid model for EF (OTEF)) in terms of how well they simulate and partition EF, using MODIS products and field observations from HiWATER-MUSOEXE in 2012. Results show that the TMEF outperforms the other two models, with EF mean absolute relative deviations of 9.57% (TMEF), 15.03% (TTME), and 30.49% (OTEF). Full article
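The core idea, a wet edge set by the Priestley-Taylor formula and a dry edge at EF = 0, with pixels interpolated between them along the LST axis, can be sketched as follows. This is a simplified illustration under assumed constants (alpha = 1.26, psychrometric constant 0.066 kPa/°C) and invented function names, not the TMEF edge-solving procedure itself:

```python
import math

def sat_slope(t_c):
    """Slope of the saturation vapour pressure curve (kPa/degC, FAO-56 form)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def ef_wet(t_air_c, alpha=1.26, gamma=0.066):
    """Evaporative fraction on the wet edge from the Priestley-Taylor formula:
    EF_wet = alpha * Delta / (Delta + gamma)."""
    d = sat_slope(t_air_c)
    return alpha * d / (d + gamma)

def ef_pixel(lst, lst_wet, lst_dry, t_air_c):
    """EF of a pixel by linear interpolation along the LST axis between the
    dry edge (EF = 0) and the wet edge (Priestley-Taylor EF), within the FVC
    column the pixel falls in."""
    frac = (lst_dry - lst) / (lst_dry - lst_wet)
    frac = min(max(frac, 0.0), 1.0)  # clamp pixels outside the trapezoid
    return frac * ef_wet(t_air_c)
```

At an air temperature of 25 °C this wet-edge EF comes out near 0.93; a pixel on the dry edge returns 0, matching the boundary conditions stated in the abstract.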
Show Figures

Figure 1
<p>(<b>a</b>) the conventional trapezoidal LST/FVC space derived from [<a href="#B17-remotesensing-08-00248" class="html-bibr">17</a>]; (<b>b</b>) the two-stage trapezoidal LST/FVC space.</p>
Figure 2
<p>Spatial distribution of the testing sites.</p>
Figure 3
<p>Comparison of the estimated EF with observed EF at all stations in (<b>a</b>) scatter plot and (<b>b</b>) frequency histogram of the relative deviation.</p>
Figure 4
<p>Comparison of estimated EF between the TMEF, TTME, and OTEF models where (<b>a</b>–<b>f</b>) corresponds to the flux sites of NO. 7, NO.10~No. 14.</p>
Figure 5
<p>Comparison of estimated EFs and EFv between the TMEF and the TTME models at (<b>a</b>) No.11; (<b>b</b>) No.12; (<b>c</b>) No.13; and (<b>d</b>) No.14 stations.</p>
Figure 6
<p>The two-stage trapezoidal LST/FVC spaces corresponded to different scenes where (<b>a</b>) corresponds to the Scene 1 and (<b>b</b>) corresponds to Scene 2.</p>
Figure 7
<p>Sensitivity analysis of TMEF to <math display="inline"> <semantics> <mrow> <msub> <mi>α</mi> <mi mathvariant="normal">s</mi> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>α</mi> <mi mathvariant="normal">v</mi> </msub> </mrow> </semantics> </math> in Scene 1 (<b>a</b>) and Scene 2 (<b>a’</b>), <math display="inline"> <semantics> <mrow> <msub> <mi>μ</mi> <mo>∗</mo> </msub> </mrow> </semantics> </math> , <math display="inline"> <semantics> <mrow> <msub> <mi>ε</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> </semantics> </math> , and <math display="inline"> <semantics> <mrow> <msub> <mi>h</mi> <mi mathvariant="normal">c</mi> </msub> </mrow> </semantics> </math> in Scene 1 (<b>b</b>) and Scene 2 (<b>b’</b>), and LST and <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> </semantics> </math> in Scene 1 (<b>c</b>) and Scene 2 (<b>c’</b>).</p>
8606 KiB  
Article
LiDAR-Based Solar Mapping for Distributed Solar Plant Design and Grid Integration in San Antonio, Texas
by Tuan B. Le, Danial Kholdi, Hongjie Xie, Bing Dong and Rolando E. Vega
Remote Sens. 2016, 8(3), 247; https://doi.org/10.3390/rs8030247 - 16 Mar 2016
Cited by 11 | Viewed by 9172
Abstract
This study advances the state of the art of the solar energy industry by leveraging LiDAR-based building characterization for city-wide distributed solar photovoltaics and for solar maps highlighting the distribution of solar energy across the city of San Antonio. A methodology is implemented to systematically derive the tilt and azimuth angles of each rooftop and to quantify direct, diffuse, and global horizontal solar irradiance for hundreds of buildings at the scale of a LiDAR tile, using established methodologies that are typically applied only to a single location or building rooftop. The methodology enables the formulation of typical meteorological data and of measured or forecasted time series of irradiances over distributed assets. A new concept for distributed solar plant (DSP) design is also introduced that uses the building rooftop tilt and azimuth angles to strategically optimize the use and adoption of solar incentives according to the grid age and its vulnerabilities to solar variability in the neighborhoods. The method presented here shows that, on an hourly basis, a DSP design could provide 5% and 9% of net load capacity support per hour in the afternoon and morning, respectively. Our results show that standard building rooftop tilt angles in the south Texas region have a significant impact on the total energy over the course of a day, though their impact on the shape of the daily energy profile is relatively insignificant compared to that of the azimuth angle. The azimuth angle of building surfaces is the most important factor in determining the shape of the daily energy profile and the location of its peak within a day. The methodology developed in this study can be employed to study potential solar energy in other regions and to match the design of distributed solar plants to the capacity needs of specific distribution grids. Full article
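Deriving a rooftop facet's tilt and azimuth from LiDAR boils down to computing the facet's surface normal and converting it to angles. The sketch below assumes a facet given by three points in an east/north/up frame and invented function names; it is not the paper's ENVI-based workflow, but it follows the ASHRAE azimuth convention used in Figure 4 of the paper (0° south, 180° north, negative east, positive west):

```python
import math

def plane_normal(p1, p2, p3):
    """Upward unit normal of a roof facet from three (x=east, y=north, z=up)
    points, via the cross product of two in-plane edge vectors."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    if nz < 0:  # force the normal to point upward
        nx, ny, nz = -nx, -ny, -nz
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return nx / norm, ny / norm, nz / norm

def tilt_azimuth(n):
    """Tilt from horizontal, and azimuth in the ASHRAE convention:
    0 deg = south, 180 deg = north, negative = east, positive = west."""
    nx, ny, nz = n
    tilt = math.degrees(math.acos(nz))
    az = math.degrees(math.atan2(-nx, -ny)) if tilt > 1e-9 else 0.0
    return tilt, az
```

For a facet sloping down toward the south at 30°, this returns tilt 30° and azimuth 0°; a facet sloping down toward the west returns a positive azimuth, consistent with the convention quoted in the figure caption.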
Show Figures

Graphical abstract
Figure 1
<p>458 LiDAR tiles over the San Antonio (Bexar County), Texas area, overlaid with major highways.</p>
Figure 2
<p>Building footprints of University of Texas San Antonio (UTSA) campus generated using ENVI LiDAR extension.</p>
Figure 3
<p>Flowchart of tilt and azimuth angles calculation [<a href="#B34-remotesensing-08-00247" class="html-bibr">34</a>].</p>
Figure 4
<p>Map showing the tilt and azimuth angles generated for a sample area (azimuth angle adopts the ASHRAE tradition where 0° to south, 180° to north, negative to east, and positive to west).</p>
Figure 5
<p>DEM (<b>A</b>) converts to TIN (<b>B</b>), TIN with edges (<b>C</b>), and to triangle features (<b>D</b>).</p>
Figure 6
<p>DEM-based solar map of San Antonio (Bexar County) on 21 July at 10:00 a.m., overlaid with major highways (black lines) of San Antonio and the neighborhood area (black rectangular box) for rooftop irradiance analysis (in <a href="#remotesensing-08-00247-f007" class="html-fig">Figure 7</a>).</p>
Figure 7
<p>LiDAR-based solar map of a neighborhood in San Antonio on 21 July at 10:00 a.m. (<b>A</b>) and 3:00 p.m. (<b>B</b>).</p>
Figure 8
<p>Total power estimation (in MW) that could potentially be produced from all rooftops within one LiDAR tile that covers the UTSA main campus (calculated for the 21st of each month).</p>
Figure 9
<p>Distributed solar energy potential in Bexar County (calculated from direct irradiance on 21 March, 21 June, 23 September and 22 December at morning, noon and afternoon hours of each day).</p>
Figure 10
<p>The distribution of surface azimuth of all rooftops from a LiDAR tile.</p>
Figure 11
<p>The distribution of surface tilt of all rooftops from a LiDAR tile.</p>
Figure 12
<p>Selected neighborhood for the demonstration of distributed solar plant design.</p>
Figure 13
<p>Residential Total Load (solid line) and Distributed Solar Plant Generation (dashed line) for 6 scenarios on 21 July.</p>
15430 KiB  
Article
Automated Extraction and Mapping for Desert Wadis from Landsat Imagery in Arid West Asia
by Yongxue Liu, Xiaoyu Chen, Yuhao Yang, Chao Sun and Siyu Zhang
Remote Sens. 2016, 8(3), 246; https://doi.org/10.3390/rs8030246 - 16 Mar 2016
Cited by 8 | Viewed by 7246
Abstract
Wadis, ephemeral dry rivers in arid desert regions that carry water only in the rainy season, often appear as braided linear channels and are of vital importance for local hydrological environments and regional hydrological management. Conventional methods for delineating wadis from heterogeneous backgrounds are limited for the following reasons: (1) numerous morphological irregularities disqualify methods based on physical shape; (2) inconspicuous spectral contrast with the background results in frequent false alarms; and (3) the extreme complexity of wadi systems, with numerous tiny tributaries characterized by spectral anisotropy, creates a conflict between global and local accuracy. To overcome these difficulties, an automated method for extracting wadis (AMEW) from Landsat-8 Operational Land Imagery (OLI) was developed to exploit the complementarity between Water Indices (WIs), a technique of mathematically combining different bands to enhance water bodies and suppress backgrounds, and morphological image processing involving multi-scale Gaussian matched filtering and local adaptive threshold segmentation. Evaluation of the AMEW was carried out in representative areas deliberately selected from Jordan, SW Arabian Peninsula, in order to ensure a rigorous assessment. Experimental results indicate that the AMEW achieved considerably higher accuracy than other effective extraction methods in terms of both visual inspection and statistical comparison, with an overall accuracy of up to 95.05% for the entire area. In addition, the AMEW (based on the New Water Index (NWI)) achieved higher accuracy than the other methods tested for bulk wadi extraction (the maximum likelihood classifier and the support vector machine classifier). Full article
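The two morphological ingredients named in the abstract, Gaussian matched filtering and local adaptive thresholding, can be sketched in one dimension across a channel profile. Kernel widths, window size, and the `k2` parameter below are illustrative assumptions (the paper tunes its own scales and `k2`), not the AMEW implementation:

```python
import math

def gaussian_matched_kernel(sigma, half_width):
    """Zero-mean Gaussian cross-section kernel: responds strongly where the
    local profile matches a Gaussian-shaped channel, weakly on flat ground."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-half_width, half_width + 1)]
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def convolve(signal, kernel):
    """Direct correlation with edge clamping (fine for a short demo signal)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(signal) - 1)
            acc += kv * signal[idx]
        out.append(acc)
    return out

def local_adaptive_threshold(resp, win, k2):
    """Flag samples whose response exceeds local mean + k2 * local std,
    so the decision adapts to the local background statistics."""
    out = []
    for i in range(len(resp)):
        lo, hi = max(0, i - win), min(len(resp), i + win + 1)
        seg = resp[lo:hi]
        m = sum(seg) / len(seg)
        sd = math.sqrt(sum((v - m) ** 2 for v in seg) / len(seg))
        out.append(resp[i] > m + k2 * sd)
    return out
```

Run on a synthetic WI profile with a single Gaussian-shaped channel, the matched filter peaks at the channel center and the adaptive threshold flags it while leaving the flat background unflagged; the multi-scale aspect of AMEW corresponds to repeating this with several `sigma` values.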
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) Locations of the three study areas in Jordan; (<b>b</b>) true-color image (432 band for OLI) of Study Area 1; (<b>c</b>) true-color image of Study Area 2; and (<b>d</b>) true-color image of Study Area 3.</p>
Figure 2
<p>Procedural flow chart for wadi extraction.</p>
Figure 3
<p>(<b>a</b>–<b>e</b>) WI value distributions for wadi, bare land, vegetation, and built-up land. In the box plots, the top and bottom of the vertical lines represent the maximum and minimum; the top and bottom of the boxes represent the first quartile and the third quartile; the lines in the boxes represent the median. The red line shows the separability of the third quartile of wadi reflectance from the other landscapes, indicating possible reflectance confusion. The JMD is the average value for wadis and the other three types.</p>
Figure 4
<p>Illustration of the conceptual basis of the proposed methodology. (<b>a</b>) NDWI; (<b>b</b>) AWEI<sub>nsh</sub>; (<b>c</b>) AWEI<sub>sh</sub>; (<b>d</b>) MNDWI; (<b>e</b>) Original color composited image; (<b>f</b>) I component; (<b>g</b>) new WI.</p>
Figure 5
<p>(<b>a</b>) WI value profiles of the cross sections of several wadis in a WI image; (<b>b</b>) pixel value profiles of the cross-sections of several wadis in neighborhood analysis results.</p>
Figure 6
<p>Results of neighborhood analysis: (<b>a</b>) NWI image; (<b>b</b>) window size = 16 pixels; (<b>c</b>) window size = 48 pixels; (<b>d</b>) window size = 80 pixels. The window sizes shown here are only used to provide a conceptual illustration of the application of the technique.</p>
Figure 7
<p>(<b>a</b>) NWI image; (<b>b</b>) extraction results for small wadis; (<b>c</b>) extraction results for wide wadis; (<b>d</b>) the final wadi extraction results.</p>
Figure 8
<p>(<b>a</b>–<b>d</b>) WI images of Study Area 1; (<b>e</b>–<b>h</b>) wadi extraction results for Study Area 1; (<b>i</b>–<b>l</b>) WI images of Block 1; (<b>m</b>–<b>p</b>) wadi extraction results for Block 1; (<b>q</b>–<b>t</b>) WI images for Block 2; (<b>u</b>–<b>x</b>) wadi extraction results for Block 2.</p>
Figure 9
<p>(<b>a</b>) NWI image of Study Area 1; (<b>b</b>–<b>d</b>) extraction results for Study Area 1; (<b>e</b>) NWI image of Block 1; (<b>f</b>–<b>h</b>) extraction results for Block 1; (<b>i</b>) NWI image for Block 2; (<b>j</b>–<b>l</b>) extraction results for Block 2. For further illustrations of results, see <a href="#app1-remotesensing-08-00246" class="html-app">Figures S3 and S4 in the supplementary online materials</a>.</p>
Figure 10
<p>Reference maps and wadi extraction results based on different window sizes. As the window size increases, wide wadis become more and more complete, but commission errors around small wadis become increasingly unacceptable; (<b>a</b>–<b>d</b>) different window sizes for block 1; (<b>e</b>–<b>h</b>) different window sizes for block 2.</p>
Figure 11
<p>Reference images and wadi extraction results based on different values of k2. As k2 decreases, wide wadis become more and more complete, but over-extractions around small wadis become more and more significant; (<b>a</b>) reference images for block 1; (<b>b</b>–<b>d</b>) the results for different k2 in block 1; (<b>e</b>) reference image for block 2; (<b>f</b>–<b>h</b>) the results of different k2 for block 2.</p>
Figure 12
<p>(<b>a</b>,<b>b)</b> original image band and the commission and omission errors in the vicinity of linear false alarms in a local block; (<b>c</b>,<b>d)</b> the other original image band and the commission and omission errors in the vicinity of linear false alarms in another local block.</p>
Figure 13
<p>NWI images and over-extractions around roads (two parallel false wadis on both sides). (<b>a,b</b>) are the reference map and the extraction result for a local block; (<b>c,d</b>) are the reference map and the extraction map for another local block.</p>
8017 KiB  
Article
Potential of High Spatial and Temporal Ocean Color Satellite Data to Study the Dynamics of Suspended Particles in a Micro-Tidal River Plume
by Anouck Ody, David Doxaran, Quinten Vanhellemont, Bouchra Nechad, Stefani Novoa, Gaël Many, François Bourrin, Romaric Verney, Ivane Pairaud and Bernard Gentili
Remote Sens. 2016, 8(3), 245; https://doi.org/10.3390/rs8030245 - 16 Mar 2016
Cited by 58 | Viewed by 9217
Abstract
Ocean color satellite sensors are powerful tools to study and monitor the dynamics of suspended particulate matter (SPM) discharged by rivers in coastal waters. In this study, we test the capabilities of Landsat-8/Operational Land Imager (OLI), AQUA&TERRA/Moderate Resolution Imaging Spectroradiometer (MODIS) and MSG-3/Spinning Enhanced Visible and Infrared Imager (SEVIRI) sensors in terms of spectral, spatial and temporal resolutions to (i) estimate the seawater reflectance signal and then SPM concentrations and (ii) monitor the dynamics of SPM in the Rhône River plume characterized by moderately turbid surface waters in a micro-tidal sea. Consistent remote-sensing reflectance (Rrs) values are retrieved in the red spectral bands of these four satellite sensors (median relative difference less than ~16% in turbid waters). By applying a regional algorithm developed from in situ data, these Rrs are used to estimate SPM concentrations in the Rhône river plume. The spatial resolution of OLI provides a detailed mapping of the SPM concentration from the downstream part of the river itself to the plume offshore limits with well defined small-scale turbidity features. Despite the low temporal resolution of OLI, this should allow to better understand the transport of terrestrial particles from rivers to the coastal ocean. These details are partly lost using MODIS coarser resolutions data but SPM concentration estimations are consistent, with an accuracy of about 1 to 3 g·m−3 in the river mouth and plume for spatial resolutions from 250 m to 1 km. The MODIS temporal resolution (2 images per day) allows to capture the daily to monthly dynamics of the river plume. However, despite its micro-tidal environment, the Rhône River plume shows significant short-term (hourly) variations, mainly controlled by wind and regional circulation, that MODIS temporal resolution failed to capture. 
In contrast, the high temporal resolution of SEVIRI makes it a powerful tool for studying these hourly river plume dynamics. However, its coarse spatial resolution prevents the monitoring of SPM concentration variations in the river mouth, where SPM concentration variability can reach 20 g·m−3 within a single SEVIRI pixel. Its spatial resolution is nevertheless sufficient to reproduce the plume shape and retrieve SPM concentrations in a valid range, taking into account an underestimation of about 15%–20% based on comparisons with other sensors and in situ data. Finally, the capabilities, advantages and limits of these satellite sensors are discussed in light of the spatial and temporal resolution improvements provided by the new and future generation of ocean color sensors onboard the Sentinel-2, Sentinel-3 and Meteosat Third Generation (MTG) satellite platforms. Full article
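The abstract's core retrieval step — converting red-band Rrs into an SPM concentration through a regional empirical relationship — can be sketched as below. The power-law form and its coefficients are illustrative placeholders, not the fitted values of this study.

```python
import numpy as np

def spm_from_rrs(rrs_red, a=2.5e4, b=1.4):
    """Empirical SPM (g m^-3) from red-band Rrs (sr^-1): SPM = a * Rrs**b.

    The coefficients a and b are illustrative placeholders; a regional fit
    to in situ measurements (as done in the study) would supply real values.
    """
    return a * np.asarray(rrs_red, dtype=float) ** b

# Red-band Rrs rises with turbidity, so retrieved SPM is monotonic in Rrs
rrs = np.array([0.005, 0.010, 0.020])
spm = spm_from_rrs(rrs)
```

Only the monotonic form reflects the study; the actual regional relationship is established against the in situ dataset described in the paper.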
(This article belongs to the Special Issue Remote Sensing in Coastal Environments)
Full article ">Figure 1
<p>RGB georeferenced OLI image of the study area on 23 February 2014. Black stars locate the TUCPA field campaign stations (see <a href="#sec2dot3-remotesensing-08-00245" class="html-sec">Section 2.3</a> for details); the autonomous Mesurho station is indicated by a white circle.</p>
Full article ">Figure 2
<p>(<b>A</b>) Rhône River freshwater discharge (blue) measured at Beaucaire-Tarascon (65 km upstream of the Rhône River mouth), as well as the particulate backscattering coefficient b<sub>bp</sub>(700) (as a proxy of SPM concentration [<a href="#B8-remotesensing-08-00245" class="html-bibr">8</a>]) (red) and Chl a fluorescence (green) measured by the Wetlabs ECO-BB2FL sensor at the Mesurho buoy, from 22 January to 11 March (daily averaged values). The b<sub>bp</sub>(700) coefficient was calculated from the light backscattering data measured at 117° and 700 nm (β(117°,700)) following the formula b<sub>bp</sub> = 2π × 1.1 × β(117°,700) [<a href="#B29-remotesensing-08-00245" class="html-bibr">29</a>] (water light backscattering and absorption losses were neglected). The dashed black rectangle indicates data that could be altered by probe saturation. The green zones indicate the study period from 17 to 23 February; (<b>B</b>) Same for the period from 17 to 23 February 2014 (green zones on (<b>A</b>)) (data averaged over 4 h). Black solid and dashed lines represent 4-h averaged wind speed and direction, respectively.</p>
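The caption's conversion from the measured volume scattering β(117°, 700) to the particulate backscattering coefficient is a one-line formula; a minimal sketch, neglecting water backscattering and absorption losses as the caption does:

```python
import math

CHI = 1.1  # angular conversion factor for the 117-degree measurement (per the caption)

def bbp_700(beta_117_700):
    """Particulate backscattering b_bp(700) (m^-1) from the volume scattering
    function beta measured at 117 degrees and 700 nm:
    b_bp = 2*pi * chi * beta(117 deg, 700 nm)."""
    return 2.0 * math.pi * CHI * beta_117_700
```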
Full article ">Figure 3
<p>(<b>A</b>) MODIS-A Rrs(645) product of the Rhône River plume for 23 February 2014 obtained using the MUMM [<a href="#B37-remotesensing-08-00245" class="html-bibr">37</a>] (top) and NIR-SWIR [<a href="#B36-remotesensing-08-00245" class="html-bibr">36</a>,<a href="#B42-remotesensing-08-00245" class="html-bibr">42</a>] (middle) atmospheric corrections. Grey areas on the western part of the plume on the MODIS-A Rrs product obtained with the NIR-SWIR correction correspond to areas flagged because of atmospheric correction failures. The scatter-plot on the bottom panel is obtained by comparing the six available MODIS (-A and -T) Rrs(645) products corrected with both (MUMM and NIR-SWIR) atmospheric correction algorithms; (<b>B</b>) OLI Rrs(655) product of the Rhône River plume for 23 February 2014 obtained using the NIR [<a href="#B13-remotesensing-08-00245" class="html-bibr">13</a>] (top) and SWIR [<a href="#B18-remotesensing-08-00245" class="html-bibr">18</a>] (middle) atmospheric corrections for OLI. The scatter-plot on the bottom panel is obtained by comparing the OLI Rrs(655) products corrected with both (NIR and SWIR) atmospheric correction algorithms. Colors on scatter plots denote pixel density.</p>
Full article ">Figure 4
<p><span class="html-italic">In situ</span> Rrs weighted by the spectral sensitivity of the red, green and NIR bands <span class="html-italic">vs. in situ</span> SPM concentration measured during the TUCPA campaign (27 stations) for the three sensors: (<b>A</b>) OLI; (<b>B</b>) MODIS-A and -T; (<b>C</b>) SEVIRI.</p>
Full article ">Figure 5
<p>Empirical relationships between <span class="html-italic">in situ</span> Rrs (weighted by sensor spectral sensitivity) in the red spectral bands of the OLI (red), MODIS (green) and SEVIRI (blue) sensors and <span class="html-italic">in situ</span> SPM concentration obtained from TUCPA campaign measurements (27 stations). The three relations are very similar and show a correlation coefficient R<sup>2</sup> ~ 0.61.</p>
Full article ">Figure 6
<p>Intercomparison of Rrs obtained in the red spectral band of each satellite sensor. Scatter-plots (top) and relative differences (bottom) are shown for comparison between (<b>A</b>) SEVIRI and MODIS-A and -T; (<b>B</b>) OLI and MODIS-T; (<b>C</b>) OLI and MODIS-A; (<b>D</b>) OLI and SEVIRI. Because of the large time difference between MODIS-A and OLI data acquisitions, the corresponding Rrs products were compared considering only a polygon inside the river plume, where Rrs variations are less important than in the plume boundary regions. For a better comparison, the same polygon is used for the comparison between Rrs derived from MODIS-T and OLI. Linear regressions are applied on each scatter plot, showing good correlation between all sensors. The median relative difference was calculated for Rrs &gt; 0.010 (sr<sup>−1</sup>), corresponding to Rrs observed in the moderately turbid waters of the Rhône River plume, and is indicated by horizontal bars. The number of observations as well as the spatial resolution used for comparison are indicated in each plot.</p>
Full article ">Figure 7
<p>(<b>A</b>) Full OLI image of SPM concentration over the Rhône River plume (left) and zoom on the Rhône River mouth (right) at the OLI native resolution (30 m) (top), then averaged into coarser grids of 250 m, 500 m, 1 km and 3 × 5 km, from top to bottom row, corresponding to the MODIS and SEVIRI resolutions. SPM concentrations are obtained using the Rrs <span class="html-italic">vs.</span> SPM concentration relationship obtained in <a href="#sec3dot1dot1-remotesensing-08-00245" class="html-sec">Section 3.1.1</a>; (<b>B</b>) Maps of the standard deviation of SPM concentration at the OLI native resolution (30 m) within aggregated pixels. Color scales are adapted to each map and illustrated with color bars. In order to better illustrate the whole variability of OLI SPM concentration within aggregated pixels, scatter-plots of OLI data at native resolution (30 m) as a function of the resampled data are also presented. The color on the scatter plots denotes pixel density; (<b>C</b>) SPM concentration profile derived from the OLI native resolution image, as a function of distance from the mouth. The path used for this profile is illustrated by black stars on two OLI native resolution images in (<b>A</b>).</p>
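The multi-resolution comparison in Figure 7 — block-averaging the 30 m OLI image to coarser grids and mapping the within-cell standard deviation — can be sketched with a NumPy reshape; trimming the image to a multiple of the block size is an assumption about edge handling, not taken from the paper.

```python
import numpy as np

def aggregate(img, factor):
    """Block-average a 2-D image; return (mean, std) maps at the coarser grid."""
    h = (img.shape[0] // factor) * factor  # trim to a multiple of the block size
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))

# toy 6 x 6 "30 m" SPM image aggregated by a factor of 3
img = np.arange(36, dtype=float).reshape(6, 6)
mean_map, std_map = aggregate(img, 3)
```

For OLI at 30 m, factors of roughly 8, 17 and 33 would approximate the 250 m, 500 m and 1 km MODIS grids used in the figure.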
Full article ">Figure 8
<p>Satellite images of the SPM concentration over the Rhône River plume on 23 February 2014. (<b>A</b>) OLI SPM product at 30 m resolution acquired at 10:24; (<b>B</b>) MODIS-T and (<b>C</b>) MODIS-A SPM products at 250 m resolution acquired at 09:50 and 13:10, respectively; and (<b>D</b>) SEVIRI SPM product at 3 × 5 km resolution acquired at 09:45. SPM concentrations are obtained using the Rrs <span class="html-italic">vs.</span> SPM concentration relationship obtained in <a href="#sec3dot1dot1-remotesensing-08-00245" class="html-sec">Section 3.1.1</a>. Grey areas correspond to pixels masked by flags. The MODIS-T image (<b>B</b>) shows an overall low quality (<span class="html-italic">i.e.</span>, grey streak on the left side) because of the location of the Rhône River plume at the border of the MODIS-T swath.</p>
Full article ">Figure 9
<p>MODIS-A and -T SPM products (<b>left</b>) and SEVIRI SPM products (<b>right</b>) for the four match-ups identified with the TUCPA <span class="html-italic">in situ</span> data. Dates and times for the four match-ups are (1) 2014-02-17 10:24; (2) 2014-02-17 12:11; (3) 2014-02-20 11:07 and (4) 2014-02-20 12:51. More match-ups are available with SEVIRI but are not shown here. In addition to SPM concentrations, the corresponding Rrs values in the red band of the two sensors (645 nm for MODIS and 635 nm for SEVIRI) are also reported on the color scale.</p>
Full article ">Figure 10
<p>Comparison between (<b>A</b>) MODIS-A and -T and (<b>B</b>) SEVIRI derived Rrs (left) and SPM concentrations (right) with <span class="html-italic">in situ</span> data from the TUCPA campaign. Linear regressions are applied to each comparison. The R<sup>2</sup>, median relative difference (MRD = median(|DATA<sub>sensors</sub> − DATA<sub>TUCPA</sub>|/DATA<sub>TUCPA</sub> × 100)) and RMSE values are indicated. Some SPM concentration match-ups were considered not relevant because of <span class="html-italic">in situ</span> measurement errors (see <a href="#sec3dot2dot3-remotesensing-08-00245" class="html-sec">Section 3.2.3</a>) and were excluded.</p>
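The match-up statistics quoted in the caption (MRD and RMSE) follow directly from their definitions; a minimal sketch with toy values:

```python
import numpy as np

def match_up_stats(sensor, in_situ):
    """Median relative difference (%) and RMSE, per the caption's definition:
    MRD = median(|sensor - in_situ| / in_situ * 100)."""
    sensor = np.asarray(sensor, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    mrd = float(np.median(np.abs(sensor - in_situ) / in_situ * 100.0))
    rmse = float(np.sqrt(np.mean((sensor - in_situ) ** 2)))
    return mrd, rmse

# toy match-up values (g m^-3), illustrative only
mrd, rmse = match_up_stats([11.0, 22.0, 28.0], [10.0, 20.0, 30.0])
```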
Full article ">Figure 11
<p>(<b>A</b>) SPM concentration variations in the river mouth over the studied period (17 to 23 February). SPM concentrations were computed from the particulate backscattering coefficient b<sub>bp</sub>(700) measured by the Wetlabs ECO-BB2FL autonomous sensor mounted at the Mesurho station according to: SPM = b<sub>bp</sub>(700) × F, where F = 416 is a local calibration factor established using field measurements during the TUCPA campaign. SPM concentration measurements are averaged over 1 h in order to keep only variations significant enough to be compared with those observed from space at metre-to-kilometre scales. The grey shaded regions correspond to the SEVIRI acquisition time range from 08:00 to 16:00, and the red and blue vertical lines correspond to the mean acquisition times of MODIS-A (10:00) and MODIS-T (13:00), respectively; (<b>B</b>) Daily averaged SPM concentration measured by the probe (black) over a larger period ranging from 22 January to 11 March 2014, compared to the SPM concentration measured by the probe at the MODIS-A (red), MODIS-T (blue) and OLI (yellow) acquisition times. The dashed black rectangle indicates data that could be altered by probe saturation.</p>
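The probe-to-SPM conversion in the caption (SPM = b_bp(700) × F, with F = 416 from the local TUCPA calibration) and the 1-h averaging can be sketched as below; the one-sample-per-minute cadence is an assumption for illustration.

```python
import numpy as np

F = 416.0  # local calibration factor established from the TUCPA field measurements

def spm_from_bbp(bbp_700):
    """SPM (g m^-3) from particulate backscattering at 700 nm (m^-1)."""
    return F * np.asarray(bbp_700, dtype=float)

def hourly_mean(series, samples_per_hour=60):
    """1-h block averages of a regularly sampled series; the trailing
    incomplete hour is dropped. One sample per minute is assumed."""
    x = np.asarray(series, dtype=float)
    n = (x.size // samples_per_hour) * samples_per_hour
    return x[:n].reshape(-1, samples_per_hour).mean(axis=1)

spm = spm_from_bbp(0.05)            # 0.05 m^-1 converts to 20.8 g m^-3
hm = hourly_mean(np.arange(120.0))  # two hourly means from 2 h of data
```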
Full article ">Figure 12
<p>SPM concentration contour lines for concentrations &gt; 10 g m<sup>−3</sup> derived from the SEVIRI and MODIS-A and -T SPM products for 17, 20 and 23 February 2014. SEVIRI contour lines for all images acquired every 15 min are mapped with colors from blue (08:00) to yellow (16:00). MODIS-A and -T contour lines are mapped as red solid and red dashed lines, respectively. For easier comparison with SEVIRI, MODIS images are resampled to the SEVIRI resolution. For comparison, the wind speed and direction for each hour (from 05:00 to 19:00 UTC) are also reported under each contour-line map with the same color code as the SEVIRI contours (grey arrows correspond to wind speed and direction for hours before and after the SEVIRI and MODIS acquisition times).</p>
Full article ">
7832 KiB  
Article
Regional Scale Rain-Forest Height Mapping Using Regression-Kriging of Spaceborne and Airborne LiDAR Data: Application on French Guiana
by Ibrahim Fayad, Nicolas Baghdadi, Jean-Stéphane Bailly, Nicolas Barbier, Valéry Gond, Bruno Hérault, Mahmoud El Hajj, Frédéric Fabre and José Perrin
Remote Sens. 2016, 8(3), 240; https://doi.org/10.3390/rs8030240 - 16 Mar 2016
Cited by 45 | Viewed by 10684
Abstract
LiDAR data has been successfully used to estimate forest parameters such as canopy heights and biomass. A major limitation of LiDAR systems (airborne and spaceborne) is their limited spatial coverage. In this study, we present a technique for canopy height mapping using airborne and spaceborne LiDAR data (from the Geoscience Laser Altimeter System (GLAS)). First, canopy heights extracted from both airborne and spaceborne LiDAR were extrapolated using available environmental data. The estimated canopy height maps using Random Forest (RF) regression from airborne or GLAS calibration datasets showed similar precisions (~6 m). To improve the precision of the canopy height estimates, regression-kriging was used. Results indicated an improvement in terms of root mean square error (RMSE) from 6.5 to 4.2 m using the GLAS dataset, and from 5.8 to 1.8 m using the airborne LiDAR dataset. Finally, in order to investigate the impact of the spatial sampling of future LiDAR missions on the precision of canopy height estimates, six subsets were derived from the initial airborne LiDAR dataset. Results indicated that, using the regression-kriging approach, a precision of 1.8 m on the canopy height map was achievable with a flight line spacing of 5 km. This precision decreased to 4.8 m for a flight line spacing of 50 km. Full article
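The two-step mapping described in the abstract — a regression trend learned from environmental covariates, plus spatial interpolation of the trend residuals — can be sketched with simplified stand-ins: an ordinary least-squares trend in place of the Random Forest, and inverse-distance weighting in place of kriging. The synthetic canopy heights and covariates below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic calibration plots: coordinates, environmental covariates, heights
coords = rng.uniform(0.0, 100.0, (200, 2))
env = np.c_[np.ones(200), coords[:, 0] / 100.0, np.sin(coords[:, 1] / 20.0)]
height = env @ np.array([25.0, 8.0, 5.0]) + rng.normal(0.0, 1.0, 200)

# step 1: regression trend (OLS stand-in for the paper's Random Forest)
beta, *_ = np.linalg.lstsq(env, height, rcond=None)
trend = env @ beta
resid = height - trend

# step 2: spatial interpolation of the residuals (IDW stand-in for kriging)
def idw(known_xy, values, new_xy, power=2.0, eps=1e-6):
    d = np.linalg.norm(new_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

pred = trend + idw(coords, resid, coords)  # regression-kriging style estimate
rmse = float(np.sqrt(np.mean((pred - height) ** 2)))
```

At the calibration points the residual interpolator is exact, so the combined estimate beats the trend alone; on a real map the gain decays with distance from the LiDAR lines, which is consistent with the precision loss the abstract reports at wider flight-line spacings.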
(This article belongs to the Special Issue Remote Sensing of Vegetation Structure and Dynamics)
Full article ">Figure 1
<p>Location of French Guiana and map of canopy heights estimated from the GLAS dataset (in m).</p>
Full article ">Figure 2
<p>Map of canopy heights calculated from the airborne LiDAR LD dataset for French Guiana. The locations of the airborne LiDAR HD datasets are delineated with circles.</p>
Full article ">Figure 3
<p>Wall-to-wall map of French Guiana with Random Forest regressions using as reference data the canopy height estimates from: (<b>a</b>) GLAS dataset; and (<b>b</b>) LD_cal dataset.</p>
Full article ">Figure 4
<p>Comparison between the reference canopy heights of the verification datasets and the canopy height trend estimates using Random Forest: (<b>a</b>) GLAS dataset; and (<b>b</b>) LD_cal dataset.</p>
Full article ">Figure 5
<p>Wall-to-wall map of French Guiana with regression-kriging using as reference data canopy height estimates from: (<b>a</b>) GLAS dataset; and (<b>b</b>) LD dataset.</p>
Full article ">Figure 6
<p>Comparison between the reference canopy heights of the verification datasets and the canopy height estimates using Random Forest regressions and residual-kriging: (<b>a</b>) GLAS dataset; and (<b>b</b>) LD_cal dataset.</p>
Full article ">Figure 7
<p>Wall-to-wall standard deviation map (STD_DEV) of the canopy height estimates uncertainty for: (<b>a</b>) GLAS dataset; and (<b>b</b>) LD_cal dataset.</p>
Full article ">Figure 8
<p>Examples of wall-to-wall maps of French Guiana with regression-kriging using as reference data the canopy height estimates from: (<b>a</b>) LD_5; (<b>b</b>) LD_20; and (<b>c</b>) LD_50.</p>
Full article ">Figure 9
<p>Comparison between the canopy heights of our verification datasets (LD_val and HD) and the canopy height estimates from the study of [<a href="#B10-remotesensing-08-00240" class="html-bibr">10</a>].</p>
Full article ">
10173 KiB  
Article
Geodesic Flow Kernel Support Vector Machine for Hyperspectral Image Classification by Unsupervised Subspace Feature Transfer
by Alim Samat, Paolo Gamba, Jilili Abuduwaili, Sicong Liu and Zelang Miao
Remote Sens. 2016, 8(3), 234; https://doi.org/10.3390/rs8030234 - 16 Mar 2016
Cited by 33 | Viewed by 7565
Abstract
In order to deal with scenarios where the training data, used to deduce a model, and the validation data have different statistical distributions, we study the problem of transformed subspace feature transfer for domain adaptation (DA) in the context of hyperspectral image classification via a geodesic Gaussian flow kernel-based support vector machine (GFKSVM). To show the superior performance of the proposed approach, conventional support vector machines (SVMs) and state-of-the-art DA algorithms, including information-theoretical learning of discriminative clusters for domain adaptation (ITLDC), joint distribution adaptation (JDA), and joint transfer matching (JTM), are also considered. Additionally, unsupervised linear and nonlinear subspace feature transfer techniques, including principal component analysis (PCA), randomized nonlinear principal component analysis (rPCA), factor analysis (FA) and non-negative matrix factorization (NNMF), are investigated and compared. Experiments on two real hyperspectral images show the cross-image classification performance of GFKSVM, confirming its effectiveness and suitability when applied to hyperspectral images. Full article
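The subspace machinery these DA methods share — per-domain PCA bases and a source-to-target mapping between them — can be sketched as below. For brevity, the sketch uses the simple subspace-alignment matrix M = Psᵀ·Pt as a stand-in for the geodesic flow kernel, which instead integrates projections over all intermediate subspaces on the Grassmann manifold; the synthetic source/target data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic source/target domains with a distribution shift
Xs = rng.normal(0.0, 1.0, (100, 10))
Xt = Xs @ np.diag(np.linspace(1.0, 2.0, 10)) + 0.1 * rng.normal(size=(100, 10))

def pca_basis(X, d):
    """Columns = top-d principal directions of X (a D x d orthonormal basis)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

d = 3
Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
M = Ps.T @ Pt                           # source-to-target alignment matrix
Zs = (Xs - Xs.mean(axis=0)) @ Ps @ M    # aligned source features
Zt = (Xt - Xt.mean(axis=0)) @ Pt        # target features
```

A classifier trained on Zs and applied to Zt then operates in a common, target-aligned subspace, which is the basic idea the geodesic-flow construction refines.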
Full article ">Figure 1
<p>Illustration of the main idea of geodesic flow for domain adaptation.</p>
Full article ">Figure 2
<p>Color composite images of (<b>a</b>) the ROSIS Pavia University scene with the corresponding ground truths: (<b>b</b>) train map and (<b>c</b>) test map; and (<b>d</b>) the Pavia City Centre scene with the corresponding ground truths: (<b>e</b>) train map and (<b>f</b>) test map.</p>
Full article ">Figure 3
<p>Color composite of the Source (<b>a</b>) and Target (<b>b</b>) images of Houston with the corresponding ground truths: (<b>c</b>) and (<b>d</b>) train maps for Source and Target; (<b>e</b>) and (<b>f</b>) test maps for Source and Target.</p>
Full article ">Figure 4
<p>Principal component scatter plots of the source and target images for the Pavia (University: (<b>a</b>) PC1 <span class="html-italic">vs.</span> PC2; (<b>b</b>) PC3 <span class="html-italic">vs.</span> PC4; Center: (<b>c</b>) PC1 <span class="html-italic">vs.</span> PC2; (<b>d</b>) PC3 <span class="html-italic">vs.</span> PC4) and Houston (Left: (<b>e</b>) PC1 <span class="html-italic">vs.</span> PC2; (<b>f</b>) PC3 <span class="html-italic">vs.</span> PC4; Right: (<b>g</b>) PC1 <span class="html-italic">vs.</span> PC2; (<b>h</b>) PC3 <span class="html-italic">vs.</span> PC4) data sets.</p>
Full article ">Figure 5
<p>Overall Accuracy (OA) <span class="html-italic">versus</span> GFKSVM with RBF and linear kernel (green transparent flat surface) for ROSIS data (PCA-PCA: (<b>a</b>) University→University; (<b>b</b>) University→Center; (<b>c</b>) Center→Center; (<b>d</b>) Center→University; rPCA-rPCA: (<b>e</b>) University→University; (<b>f</b>) University→Center; (<b>g</b>) Center→Center; (<b>h</b>) Center→University; FA-FA: (<b>i</b>) University→University; (<b>j</b>) University→Center; (<b>k</b>) Center→Center; (<b>l</b>) Center→University; NNMF-NNMF: (<b>m</b>) University→University; (<b>n</b>) University→Center; (<b>o</b>) Center→Center; (<b>p</b>) Center→University), with number of subspaces <span class="html-italic">d</span> = 10.</p>
Full article ">Figure 6
<p>OA <span class="html-italic">versus</span> the parameters <span class="html-italic">m</span> and <span class="html-italic">d</span> of rPCA in GFKSVM for ROSIS (<b>a</b>,<b>b</b>) and Houston (<b>c</b>,<b>d</b>) data sets: (Source→Target).</p>
Full article ">Figure 7
<p>OA <span class="html-italic">versus</span> the number of subspaces <span class="html-italic">d</span> and the parameter λ of ITLDC for the ROSIS (<b>a</b>,<b>b</b>) and Houston (<b>c</b>,<b>d</b>) data sets: (Source→Target).</p>
Full article ">Figure 8
<p>OA <span class="html-italic">versus</span> the number of subspaces <span class="html-italic">d</span> and the parameter λ of JDA (trained with primal) for the ROSIS (<b>a</b>,<b>b</b>) and Houston (<b>c</b>,<b>d</b>) data sets: (Source→Target).</p>
Full article ">Figure 9
<p>OA <span class="html-italic">versus</span> the number of features d, the SVM parameter γ, and the regularization factor λ of JTM for ROSIS data sets (University→Center). (<b>a</b>) OA <span class="html-italic">versus</span> γ and λ, d = 2; (<b>b</b>) OA <span class="html-italic">versus</span> γ and λ, d = 6; (<b>c</b>) OA <span class="html-italic">versus</span> γ and λ, d = 10; (<b>d</b>) OA <span class="html-italic">versus</span> γ and λ, d = 102; (<b>e</b>) OA <span class="html-italic">versus</span> λ and d, γ = 0.01; (<b>f</b>) OA <span class="html-italic">versus</span> λ and d, γ = 0.2154; (<b>g</b>) OA <span class="html-italic">versus</span> λ and d, γ = 1; (<b>h</b>) OA <span class="html-italic">versus</span> λ and d, γ = 10; (<b>i</b>) OA <span class="html-italic">versus</span> γ and d, λ = 0.01; (<b>j</b>) OA <span class="html-italic">versus</span> γ and d, λ = 0.2154; (<b>k</b>) OA <span class="html-italic">versus</span> γ and d, λ = 1; (<b>l</b>) OA <span class="html-italic">versus</span> γ and d, λ = 10.</p>
Full article ">Figure 10
<p>OA <span class="html-italic">versus</span> the number of features <span class="html-italic">d</span>, the SVM parameter <span class="html-italic">γ</span>, and the regularization factor <span class="html-italic">λ</span> of JTM for ROSIS data sets (Left portion→Right portion). (<b>a</b>) OA <span class="html-italic">versus</span> γ and λ, d = 2; (<b>b</b>) OA <span class="html-italic">versus</span> γ and λ, d = 6; (<b>c</b>) OA <span class="html-italic">versus</span> γ and λ, d = 10; (<b>d</b>) OA <span class="html-italic">versus</span> γ and λ, d = 102; (<b>e</b>) OA <span class="html-italic">versus</span> λ and d, γ = 0.01; (<b>f</b>) OA <span class="html-italic">versus</span> λ and d, γ = 0.2154; (<b>g</b>) OA <span class="html-italic">versus</span> λ and d, γ = 1; (<b>h</b>) OA <span class="html-italic">versus</span> λ and d, γ = 10; (<b>i</b>) OA <span class="html-italic">versus</span> γ and d, λ = 0.01; (<b>j</b>) OA <span class="html-italic">versus</span> γ and d, λ = 0.2154; (<b>k</b>) OA <span class="html-italic">versus</span> γ and d, λ = 1; (<b>l</b>) OA <span class="html-italic">versus</span> γ and d, λ = 10.</p>
Full article ">Figure 11
<p>Classification accuracy curves for GFKSVM using different subspace feature transfer approaches on the ROSIS data set (University→Center). (<b>a</b>) PCA-PCA; (<b>b</b>) PCA-rPCA; (<b>c</b>) PCA-FA; (<b>d</b>) PCA-NNMF; (<b>e</b>) rPCA-PCA; (<b>f</b>) rPCA-rPCA; (<b>g</b>) rPCA-FA; (<b>h</b>) rPCA-NNFM; (<b>i</b>) FA-PCA; (<b>j</b>) FA-rPCA; (<b>k</b>) FA-FA; (<b>l</b>) FA-NNMF; (<b>m</b>) NNMF-PCA; (<b>n</b>) NNMF-rPCA; (<b>o</b>) NNMF-FA; (<b>p</b>) NNMF-NNMF.</p>
Full article ">Figure 12
<p>Classification accuracy curves for GFKSVM with various subspace feature transfer approaches on ROSIS data (Center→University). (<b>a</b>) PCA-PCA; (<b>b</b>) PCA-rPCA; (<b>c</b>) PCA-FA; (<b>d</b>) PCA-NNMF; (<b>e</b>) rPCA-PCA; (<b>f</b>) rPCA-rPCA; (<b>g</b>) rPCA-FA; (<b>h</b>) rPCA-NNFM; (<b>i</b>) FA-PCA; (<b>j</b>) FA-rPCA; (<b>k</b>) FA-FA; (<b>l</b>) FA-NNMF; (<b>m</b>) NNMF-PCA; (<b>n</b>) NNMF-rPCA; (<b>o</b>) NNMF-FA; (<b>p</b>) NNMF-NNMF.</p>
Full article ">Figure 13
<p>Classification accuracy curves for GFKSVM with various subspace feature transfer approaches on Houston data (Left site→Right site). (<b>a</b>) PCA-PCA; (<b>b</b>) PCA-rPCA; (<b>c</b>) PCA-FA; (<b>d</b>) PCA-NNMF; (<b>e</b>) rPCA-PCA; (<b>f</b>) rPCA-rPCA; (<b>g</b>) rPCA-FA; (<b>h</b>) rPCA-NNFM; (<b>i</b>) FA-PCA; (<b>j</b>) FA-rPCA; (<b>k</b>) FA-FA; (<b>l</b>) FA-NNMF; (<b>m</b>) NNMF-PCA; (<b>n</b>) NNMF-rPCA; (<b>o</b>) NNMF-FA; (<b>p</b>) NNMF-NNMF.</p>
Full article ">Figure 14
<p>OA curves for GFKSVM and other considered state-of-the-art DA methods on ROSIS (<b>a</b>,<b>b</b>) and Houston (<b>c</b>,<b>d</b>) data sets.</p>
Full article ">Figure 15
<p>Classification maps (limited to validation/training sample locations) for GFKSVM on ROSIS data: Center→University: (<b>a</b>) PCA-PCA; (<b>b</b>) rPCA-rPCA; (<b>c</b>)-FA-FA; (<b>d</b>) NNMF-NNMF; University→Center: (<b>e</b>) PCA-PCA; (<b>f</b>) rPCA-rPCA; (<b>g</b>)-FA-FA; (<b>h</b>) NNMF-NNMF.</p>
Full article ">Figure 16
<p>Classification accuracy curves for GFKSVM and other DA algorithms applied to ROSIS (<b>a</b>,<b>b</b>) and Houston (<b>c</b>,<b>d</b>) data. Each curve shows the average overall accuracy value with respect to the increasing size of training sets over 10 independent runs. The shaded areas show the standard deviation of the overall accuracy within the independent runs.</p>
Full article ">
7824 KiB  
Article
Application of the Geostationary Ocean Color Imager to Mapping the Diurnal and Seasonal Variability of Surface Suspended Matter in a Macro-Tidal Estuary
by Zhixin Cheng, Xiao Hua Wang, David Paull and Jianhua Gao
Remote Sens. 2016, 8(3), 244; https://doi.org/10.3390/rs8030244 - 15 Mar 2016
Cited by 38 | Viewed by 8027
Abstract
Total suspended particulate matter (TSM) in estuarine and coastal regions usually exhibits significant natural variations, and understanding such variations is of great significance in coastal waters. The aim of this study is to investigate and assess the diurnal and seasonal variations of surface TSM distribution, and their mechanisms, in coastal waters based on Geostationary Ocean Color Imager (GOCI) data. As a case study, dynamic variations of TSM in the macro-tidal Yalu River estuary (YRE) of China were analysed. With regard to diurnal variability, there were usually two TSM peaks in a tidal cycle, corresponding to the maximum flood and ebb currents. Tidal action appears to play a vital role in the diurnal variations of TSM. Both tidal re-suspension and advection processes could be identified; however, the diurnal variation of TSM was mainly affected by re-suspension. In addition, spring-neap tides can affect the magnitude of TSM diurnal variations in the YRE. The GOCI-retrieved TSM results clearly showed the seasonal variability of surface TSM in this area, with the highest level occurring in winter and the lowest in summer. Moreover, although river discharge to the YRE was much greater in the wet season than in the dry season, TSM concentrations were significantly higher in the dry season. Wind waves were considered to be the main factor affecting TSM seasonal variation in the YRE. Full article
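The diurnal analysis rests on stacking the hourly GOCI acquisitions (eight per day) and reducing them per pixel; with cloud-flagged pixels stored as NaN, daily mean and SD maps and a turbidity-maximum mask (> 15 g·m−3, the threshold used in this study's figures) can be sketched as:

```python
import numpy as np

# toy stack of 8 hourly GOCI TSM images (g m^-3); NaN marks flagged pixels
hourly = np.full((8, 4, 4), 12.0)
hourly[:, 0, 0] = np.linspace(10.0, 24.0, 8)  # pixel with a tidal TSM signal
hourly[3, 1, 1] = np.nan                      # one cloud-flagged observation

daily_mean = np.nanmean(hourly, axis=0)  # per-pixel daily average
daily_sd = np.nanstd(hourly, axis=0)     # per-pixel diurnal variability
tmz_mask = daily_mean > 15.0             # turbidity-maximum zone threshold
```

The NaN-aware reductions keep a pixel's statistics usable when only some of the eight slots are flagged, which mirrors how daily averaged TSM and SD maps are shown in the figures.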
(This article belongs to the Special Issue Remote Sensing in Coastal Environments)
Full article ">Figure 1
<p>Location and bathymetry of the Yalu River estuary and its surrounding shelf region. Black points T1, P1, P4, P5 indicate selected monitoring stations. The line between P4 and P5 is the selected <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a>. D and H indicate the location of the Donggang Meteorological Station and Huanggou Hydrologic Station, respectively. Y03 is the location of the field observation station.</p>
Full article ">Figure 2
<p>(<b>a</b>) Comparison between <span class="html-italic">in situ</span> total suspended particulate matter (TSM) in the upper layer at Y03 in August 2009 and the Geostationary Ocean Color Imager (GOCI)-retrieved TSM concentration at T1 in August 2014 and (<b>b</b>) the same comparison as a function of tidal phase. “HW” and “LW” represent high slack water and low slack water, respectively; “−” and “+” represent hours “before” and “after”, respectively.</p>
Full article ">Figure 3
<p>Hourly maps of GOCI-retrieved TSM from 08:28–15:28 (local time) in the Yalu River estuary on 3 April 2014. The graph shows the hourly tide elevation on the same day. Red lines demark the extent of the turbidity maxima zone (&gt;15 g·m<sup>−3</sup> TSM concentration). The black line represents the location of <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a>.</p>
Full article ">Figure 4
<p>Hourly maps of GOCI-retrieved TSM from 08:28–15:28 (local time) in the Yalu River estuary on 2 August 2014. The graph shows the hourly tide elevation on the same day. Red lines demark the extent of the turbidity maxima zone (&gt;15 g·m<sup>−3</sup> TSM concentration). The black line represents the location of <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a>.</p>
Full article ">Figure 5
<p>TSM variations at <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a> from (<b>a</b>) 08:28–15:28 (local time) on 9 March 2014; (<b>b</b>) TSM variations at <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a> from 09:28–15:28 (local time) on 26 May 2014, corresponding tidal elevation with section-averaged TSM concentration and conditions of wind and wave height (<b>c</b>) on 9 March 2014 and (<b>d</b>) on 26 May 2014. W and S represent wind speed and significant wave height, respectively.</p>
Full article ">Figure 6
<p>TSM variations at <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a> from (<b>a</b>) 09:28–15:28 (local time) on 6 March 2014 and (<b>b</b>) 08:28–15:28 (local time) on 2 August 2014; corresponding tidal elevation, section-averaged TSM concentration and wind and wave height conditions (<b>c</b>) on 6 March 2014 and (<b>d</b>) on 2 August 2014. W and S represent wind speed and significant wave height, respectively.</p>
Full article ">Figure 7
<p>TSM variations at <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a> from (<b>a</b>) 08:28–14:28 (local time) on 3 May 2014 and (<b>b</b>) 09:28–15:28 (local time) on 21 March 2014; corresponding tidal elevation, section-averaged TSM concentration and wind and wave height conditions (<b>c</b>) on 3 May 2014 and (<b>d</b>) on 21 March 2014. W and S represent wind speed and significant wave height, respectively.</p>
Full article ">Figure 8
<p>Spatial distributions of daily averaged TSM on (<b>a</b>) 30 May 2014 (spring tide) and (<b>b</b>) 8 June 2014 (neap tide). SD maps of TSM on (<b>c</b>) 30 May 2014 and (<b>d</b>) 8 June 2014.</p>
Full article ">Figure 9
<p>Monthly mean TSM maps retrieved from GOCI in the Yalu River estuary from January–December 2014. Red lines demark the extent of the turbidity maxima zone (&gt;15 g·m<sup>−3</sup> TSM concentration).</p>
Full article ">Figure 10
<p>Daily averaged TSM retrieved from GOCI at (<b>a</b>) P1; (<b>b</b>) P4 and (<b>c</b>) P5 from April 2011–December 2014.</p>
Full article ">Figure 11
<p>Spatial distributions of monthly mean TSM in (<b>a</b>) April 2014 (dry season) and (<b>b</b>) August 2014 (wet season). SD maps of TSM on cloud-free days in (<b>c</b>) April 2014 and (<b>d</b>) August 2014.</p>
Full article ">Figure 12
<p>Monthly mean TSM across <a href="#sec1-remotesensing-08-00244" class="html-sec">Section 1</a> in April and August of (<b>a</b>) 2013 and (<b>b</b>) 2014. (<b>c</b>) Monthly water discharge and sediment load in the wet season of the Yalu River from 2011–2014.</p>
Full article ">Figure 13
<p>Vertical distribution of salinity at Station Y03 on (<b>a</b>) 15 August 2009 during neap tide and (<b>b</b>) 9 August 2009 during spring tide.</p>
Full article ">Figure 14
<p>Relationship between area-averaged GOCI-retrieved daily averaged TSM concentration and corresponding wind speed/significant wave height for all cloud-free days at P4 in 2014.</p>
Full article ">
4945 KiB  
Article
A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data
by Bo Guo, Xianfeng Huang, Qingquan Li, Fan Zhang, Jiasong Zhu and Chisheng Wang
Remote Sens. 2016, 8(3), 243; https://doi.org/10.3390/rs8030243 - 15 Mar 2016
Cited by 26 | Viewed by 7336
Abstract
Object detection and reconstruction from remotely sensed data are an active research topic in the photogrammetry and remote sensing communities. Power engineering device monitoring by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of [...] Read more.
Object detection and reconstruction from remotely sensed data are an active research topic in the photogrammetry and remote sensing communities. Power engineering device monitoring by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons widely used in high voltage power-line systems from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented using polyhedrons based on stochastic geometry. Firstly, laser points of pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments on a dataset of complex structure produced convincing results that validate the proposed method. Full article
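The optimization described in the abstract — propose a perturbed configuration, accept or reject it with the Metropolis rule, and slowly cool the temperature — can be sketched with a toy simulated annealing loop. This is a minimal illustration of the general technique, not the authors' pylon implementation: the single "height" parameter, the squared-distance energy and all numeric values below are hypothetical stand-ins for the paper's data and prior terms.

```python
import math
import random

def simulated_annealing(energy, x0, step=0.5, t0=1.0, t_min=1e-3,
                        alpha=0.95, iters_per_t=50, seed=0):
    """Generic Metropolis-style simulated annealing over one scalar parameter."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = x + rng.uniform(-step, step)   # perturb the configuration
            e_cand = energy(cand)
            # Always accept downhill moves; accept uphill ones with
            # probability exp(-dE/t), which shrinks as t cools.
            if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
                x, e = cand, e_cand
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha                                # geometric cooling schedule
    return best_x, best_e

# Hypothetical data term: squared distance of a model "height" to noisy
# observations (a stand-in for the point-to-model distances in the paper).
observations = [9.8, 10.1, 10.3, 9.9]
energy = lambda h: sum((h - z) ** 2 for z in observations)

h_opt, e_opt = simulated_annealing(energy, x0=0.0)
```

With a geometric cooling schedule the sampler explores widely at high temperature and settles into the energy minimum (here, near the mean of the observations) as the acceptance probability for uphill moves vanishes.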
Show Figures

Graphical abstract
Full article ">Figure 1
<p>3D object library <math display="inline"> <semantics> <mi mathvariant="script">M</mi> </semantics> </math>. There are six models in the library. The pylon models are represented using polyhedrons. The pylon parameters are specified as the height, width of arms, the positions of turning points, and other specific parameters.</p>
Full article ">Figure 2
<p>The initial type and parameter calculation method for a pylon. Firstly, all the points of a pylon are projected onto its main plane (<b>a</b>); A vertical profile is then sliced into several bins of a certain distance. The histogram for the length of the sliced profile is shown in (<b>b</b>); (<b>c</b>) A derivative that represents the changing length of the arms. In order to highlight the pylon’s turning points, a second derivative is calculated (<b>d</b>); and the positions where the derivatives are large are set to be 0. In all subfigures, <span class="html-italic">y</span> represents height in m. <span class="html-italic">x</span> represents length (<b>b</b>); derivative (<b>c</b>); or second derivative (<b>d</b>) in m.</p>
Full article ">Figure 3
<p>Sketch map of the data term calculation. The red points represent laser points of a pylon. The blue polygon represents the main profile of the pylon model (object), while the blue points represent the key points of the model. The red arrows represent the closest distance from a laser point to object. The blue arrows represent the distance from a key point of object to its nearest laser point.</p>
Full article ">Figure 4
<p>Foreground/background segmentation for a pylon neighborhood: (<b>a</b>) initial classification using the JointBoost classifier and local features (color legend: buildings/red, trees/green, power lines/violet, pylons/blue); (<b>b</b>) foreground and background seed points, with the foreground seed points shown in orange, the background seed points shown in blue, and the points of unreliable objects shown in black; and (<b>c</b>) the foreground and background segmentation results.</p>
Full article ">Figure 5
<p>Error elimination: (<b>a</b>) the pylon extraction result using graph cut; (<b>b</b>) the vertical bins; and (<b>c</b>) the power-line based error elimination.</p>
Full article ">Figure 6
<p>Optimization process: Evolution of the object configuration from the initial temperature (dark red) to the final one (dark blue) with the energy decreasing shown in <a href="#remotesensing-08-00243-f007" class="html-fig">Figure 7</a>.</p>
Full article ">Figure 7
<p>Energy decrease, <span class="html-italic">x</span> = <span class="html-italic">t</span> and <span class="html-italic">y</span> = <span class="html-italic">U</span>, where the circles with different colors represent the temperature corresponding to the optimization process shown in <a href="#remotesensing-08-00243-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 8
<p>Reconstruction results of Region 1.</p>
Full article ">Figure 9
<p>Reconstruction results of Region 2.</p>
Full article ">Figure 10
<p>Reconstruction gap caused by pylon inclination.</p>
Full article ">
2048 KiB  
Article
A Symmetric Sparse Representation Based Band Selection Method for Hyperspectral Imagery Classification
by Weiwei Sun, Man Jiang, Weiyue Li and Yinnian Liu
Remote Sens. 2016, 8(3), 238; https://doi.org/10.3390/rs8030238 - 15 Mar 2016
Cited by 37 | Viewed by 5796
Abstract
A novel Symmetric Sparse Representation (SSR) method is presented to solve the band selection problem in hyperspectral imagery (HSI) classification. The method assumes that the selected bands and the original HSI bands are sparsely represented by each other, i.e., symmetrically represented. [...] Read more.
A novel Symmetric Sparse Representation (SSR) method is presented to solve the band selection problem in hyperspectral imagery (HSI) classification. The method assumes that the selected bands and the original HSI bands are sparsely represented by each other, i.e., symmetrically represented. The method formulates band selection as the well-known problem of archetypal analysis and selects the representative bands by finding the archetypes in the minimal convex hull containing the HSI band points (i.e., one band corresponds to a band point in the high-dimensional feature space). Without any parameter tuning other than the size of the band subset, the SSR solves the band selection problem using the block-coordinate descent scheme. Four state-of-the-art methods are utilized for comparison with the SSR on the Indian Pines and PaviaU HSI datasets. Experimental results illustrate that SSR outperforms all four methods in classification accuracies (i.e., Average Classification Accuracy (ACA) and Overall Classification Accuracy (OCA)) and three quantitative evaluation results (i.e., Average Information Entropy (AIE), Average Correlation Coefficient (ACC) and Average Relative Entropy (ARE)), while taking the second shortest computational time. Therefore, the proposed SSR is a good alternative method for band selection in HSI classification in realistic applications. Full article
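The geometric intuition behind the archetypal-analysis formulation — representative bands sit at the extreme points (archetypes) of the convex hull enclosing all band points, while interior bands are convex combinations of them — can be illustrated with a toy 2D sketch. This is not the SSR algorithm itself (which works in high dimensions via block-coordinate descent); the 2D "band coordinates" below are made up purely for illustration.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive = left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Hypothetical 2D "band points" (imagine two summary features per band).
bands = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 2)]
archetype_candidates = convex_hull(bands)
# Interior bands such as (2, 1) are excluded: they are convex
# combinations of the hull bands and carry no extreme spectral information.
```

Only the four corner points survive as archetype candidates; in the real method the analogous extreme band points in the high-dimensional feature space become the selected band subset.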
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The image of Indian Pines dataset.</p>
Full article ">Figure 2
<p>The image of Pavia dataset.</p>
Full article ">Figure 3
<p>The OCA plots from the SVM, KNN and RF classifiers on the Indian Pines and PaviaU datasets. (<b>a</b>) SVM-Indian Pines; (<b>b</b>) KNN-Indian Pines; (<b>c</b>) RF-Indian Pines; (<b>d</b>) SVM-PaviaU; (<b>e</b>) KNN-PaviaU; and (<b>f</b>) RF-PaviaU.</p>
Full article ">Figure 4
<p>Classification maps of all five methods on Indian Pines dataset using the SVM classifier. (<b>a</b>) Ground truth; (<b>b</b>) SID; (<b>c</b>) MVPCA; (<b>d</b>) SNMF; (<b>e</b>) SpaBS; and (<b>f</b>) SSR.</p>
Full article ">Figure 5
<p>Classification maps of all five methods on PaviaU dataset using the SVM classifier. (<b>a</b>) Ground truth; (<b>b</b>) SID; (<b>c</b>) MVPCA; (<b>d</b>) SNMF; (<b>e</b>) SpaBS; and (<b>f</b>) SSR.</p>
Full article ">
4958 KiB  
Article
Monitoring Riverbank Erosion in Mountain Catchments Using Terrestrial Laser Scanning
by Laura Longoni, Monica Papini, Davide Brambilla, Luigi Barazzetti, Fabio Roncoroni, Marco Scaioni and Vladislav Ivov Ivanov
Remote Sens. 2016, 8(3), 241; https://doi.org/10.3390/rs8030241 - 14 Mar 2016
Cited by 56 | Viewed by 10944
Abstract
Sediment yield is a key factor in river basin management due to the various adverse consequences that erosion and sediment transport in rivers may have on the environment. Although various contributions can be found in the literature on sediment yield modeling and [...] Read more.
Sediment yield is a key factor in river basin management due to the various adverse consequences that erosion and sediment transport in rivers may have on the environment. Although various contributions can be found in the literature on sediment yield modeling and bank erosion monitoring, the link between weather conditions, river flow rate and bank erosion remains poorly understood. Thus, a basin-scale assessment of sediment yield due to riverbank erosion is an objective that is hard to reach. In order to enhance the current knowledge in this field, a monitoring method based on high-resolution 3D model reconstruction of riverbanks, surveyed by multi-temporal terrestrial laser scanning, was applied to four banks in Val Tartano, Northern Italy. Six data acquisitions were taken over one year, with the aim of better understanding the erosion processes and their triggering factors by means of more frequent observations than the usual annual campaigns. The objective of the research is to address three key questions concerning bank erosion: “how” erosion happens, “when” during the year and “how much” sediment is eroded. The method proved to be effective and able to measure both eroded and deposited volume in the surveyed area. Finally, an attempt to extrapolate basin-scale volume for bank erosion is presented. Full article
(This article belongs to the Special Issue Remote Sensing in Geology)
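Once multi-temporal scans are gridded and co-registered, the “how much” question above reduces to differencing two elevation models cell by cell and splitting the signed changes into eroded and deposited volumes. Below is a minimal sketch of that bookkeeping, with hypothetical grids and a simple level-of-detection threshold; the paper's actual processing of TLS point clouds and uncertainty evaluation is more involved.

```python
def erosion_deposition_volumes(dem_before, dem_after, cell_area, threshold=0.0):
    """Volumes from two co-registered elevation grids (lists of rows, metres).

    Negative change = erosion, positive change = deposition; changes with
    |dz| <= threshold are treated as noise (level of detection) and ignored.
    Returns (eroded_volume, deposited_volume) in cubic metres.
    """
    eroded = deposited = 0.0
    for row_b, row_a in zip(dem_before, dem_after):
        for z_b, z_a in zip(row_b, row_a):
            dz = z_a - z_b
            if dz < -threshold:
                eroded += -dz * cell_area      # material removed from the bank
            elif dz > threshold:
                deposited += dz * cell_area    # material accumulated
    return eroded, deposited

# Hypothetical 2 x 2 grids with 0.5 m cells (cell_area = 0.25 m^2):
before = [[10.0, 10.0], [10.0, 10.0]]
after  = [[ 9.8, 10.0], [10.1, 10.0]]
eroded, deposited = erosion_deposition_volumes(before, after,
                                               cell_area=0.25, threshold=0.05)
```

Repeating this comparison between consecutive epochs is what allows the eroded and deposited volumes per epoch (as in Figure 5) to be tabulated and related to rainfall and freeze-thaw records.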
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Tartano Valley geographical setting and position of the main points of interest.</p>
Full article ">Figure 2
<p>Photographs of surveyed banks and stream flow direction.</p>
Full article ">Figure 3
<p>Sketches of the position of the banks relative to the water stream and their geometrical features (not to scale).</p>
Full article ">Figure 4
<p>Riegl LMS-Z420i at work and a cylindrical target in place.</p>
Full article ">Figure 5
<p>Bar-plot showing eroded and deposited volumes at each bank for each epoch.</p>
Full article ">Figure 6
<p>Activity graphs and change maps for Bank 1. Activity graphs summarize erosion and deposition depths in centimeters and display daily rainfalls in millimeters along with the daily freeze-thaw cycles. Change maps show the comparison between two consecutive scans with color scale corresponding to erosion (red) or deposition (blue). Only two comparisons are shown: (<b>A</b>) an epoch considered significant and (<b>B</b>) erosion peak epoch. For each epoch, bar graphs representing frequency classes for activity depth are shown for uncertainty evaluation.</p>
Full article ">Figure 7
<p>Activity graphs and change maps for Bank 2. The same style described in the caption of <a href="#remotesensing-08-00241-f006" class="html-fig">Figure 6</a> has been adopted here. Two comparisons are shown: (<b>A</b>) an epoch considered significant and (<b>B</b>) erosion peak epoch.</p>
Full article ">Figure 8
<p>Activity graphs and change maps for Bank 3. The same style described in the caption of <a href="#remotesensing-08-00241-f006" class="html-fig">Figure 6</a> has been adopted here. Two comparisons are shown: (<b>A</b>) an epoch considered significant and (<b>B</b>) erosion peak epoch.</p>
Full article ">Figure 9
<p>Activity graphs and change maps for Bank 4. The same style described in the caption of <a href="#remotesensing-08-00241-f006" class="html-fig">Figure 6</a> has been adopted here. Two comparisons are shown: (<b>A</b>) an epoch considered significant and (<b>B</b>) erosion peak epoch.</p>
Full article ">
13474 KiB  
Article
Frescoed Vaults: Accuracy Controlled Simplified Methodology for Planar Development of Three-Dimensional Textured Models
by Marco Giorgio Bevilacqua, Gabriella Caroti, Isabel Martínez-Espejo Zaragoza and Andrea Piemonte
Remote Sens. 2016, 8(3), 239; https://doi.org/10.3390/rs8030239 - 14 Mar 2016
Cited by 17 | Viewed by 7332
Abstract
In the field of documentation and preservation of cultural heritage, there is keen interest in 3D metric viewing and rendering of architecture for both formal appearance and color. On the other hand, operative steps of restoration interventions still require full-scale, 2D metric surface [...] Read more.
In the field of documentation and preservation of cultural heritage, there is keen interest in 3D metric viewing and rendering of architecture for both formal appearance and color. On the other hand, operative steps of restoration interventions still require full-scale, 2D metric surface representations. The transition from 3D to 2D representation, with the related geometric transformations, has not yet been fully formalized for the planar development of frescoed vaults. Methodologies proposed so far on this subject transition from point cloud models to ideal mathematical surfaces and project textures using dedicated software tools. The methodology used for geometry and texture development in the present work does not require any dedicated software. The different processing steps can be individually checked for any error introduced, which can then be quantified. A direct accuracy check of the planar development of the frescoed surface, carried out by qualified restorers, yielded an accuracy of 3 mm. The proposed methodology, although requiring further studies to improve the automation of the different processing steps, allowed the extraction of 2D drafts fully usable by the operators restoring the vault frescoes. Full article
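The arc–chord idea behind the planar development (cf. the "Section arc—chord comparison" of Figure 7) amounts to replacing each chord with the true arc length on the approximating cylinder, i.e., mapping every surface point (x, y, z) to plane coordinates (Rθ, y). The sketch below assumes an ideal cylinder with its axis along y; this is a simplification for illustration — the paper's workflow quantifies the error of the cylindrical approximation rather than assuming it exact.

```python
import math

def develop_on_cylinder(points, radius):
    """Unroll points lying on a cylinder (axis along y) onto a plane.

    Each (x, y, z) with x = R*sin(theta), z = R*cos(theta) maps to (s, y),
    where s = R*theta is the arc length along the cross-section, so
    distances measured on the surface are preserved in the development.
    """
    developed = []
    for x, y, z in points:
        theta = math.atan2(x, z)          # angle around the cylinder axis
        developed.append((radius * theta, y))
    return developed

# Quarter arc of a hypothetical radius-2 m vault: the chord between the
# two points is 2*sqrt(2) ≈ 2.83 m, but the developed (true surface)
# length is R*pi/2 ≈ 3.14 m — the difference the arc-chord check measures.
pts = [(0.0, 0.0, 2.0), (2.0, 0.0, 0.0)]
flat = develop_on_cylinder(pts, radius=2.0)
```

Applying such a mapping to the contour model, and then superimposing the orthogonal textured view, is conceptually how the flattened draft preserves metric distances for the restorers.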
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Model SfM/LASER; (<b>b</b>) Model SfM/MVS; (<b>c</b>) Model LASER.</p>
Full article ">Figure 2
<p>Dense contour model (isometric view).</p>
Full article ">Figure 3
<p>Discontinuity directions and vault areas (bottom view).</p>
Full article ">Figure 4
<p>Approximation by cylindrical surface.</p>
Full article ">Figure 5
<p>Actual 3D model—Cylindrical surface error.</p>
Full article ">Figure 6
<p>20-cm step contour lines model.</p>
Full article ">Figure 7
<p>Section arc—chord comparison.</p>
Full article ">Figure 8
<p>Geometric development of contours model.</p>
Full article ">Figure 9
<p>Projection deformation.</p>
Full article ">Figure 10
<p>(<b>a</b>) Orthogonal view with section lines; (<b>b</b>) Orthogonal view with high quality texture.</p>
Full article ">Figure 11
<p>Superimposition of orthogonal view on sections model development.</p>
Full article ">Figure 12
<p>High quality textured model development.</p>
Full article ">Figure 13
<p>Regions checked for deviations between cloud LASER and model SfM/MVS.</p>
Full article ">Figure 14
<p>Region A: total plaster collapse borders.</p>
Full article ">Figure 15
<p>Region B: total plaster collapse borders.</p>
Full article ">Figure 16
<p>Region C: gap in the fresco.</p>
Full article ">Figure 17
<p>Region D: crack in the topmost region of the vault.</p>
Full article ">Figure 18
<p>CPs on the vault.</p>
Full article ">Figure 19
<p>Development accuracy assessment at 1:1 scale.</p>
Full article ">