
Remote Sens., Volume 9, Issue 8 (August 2017) – 108 articles

Cover Story (view full-size image): Surface inundation is known to have an important impact on biogeochemical, ecological and hydrological processes in wetlands. However, the spatial distribution and temporal dynamics of wetland inundation are still poorly understood. This article describes a fully automated method for estimating water fraction at sub-pixel scales using Landsat imagery. Assessment of the estimated sub-pixel water fraction, using fine-resolution ground or airborne data over three wetland sites across North America, showed that our algorithm performs well over a gradient of wetland types. Additionally, comparison of our inundation estimates with those of existing surface water data products reveals a nearly five-fold increase in sensitivity to small but numerous wetlands when estimating sub-pixel water fraction. These findings therefore represent an important step in improving our understanding of wetland inundation dynamics.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, and PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
An Integrated Approach to Generating Accurate DTM from Airborne Full-Waveform LiDAR Data
by Baoxin Hu, Damir Gumerov, Jianguo Wang and Wen Zhang
Remote Sens. 2017, 9(8), 871; https://doi.org/10.3390/rs9080871 - 22 Aug 2017
Cited by 12 | Viewed by 5529
Abstract
In this study, full-waveform LiDAR data were exploited to detect weak returns backscattered by the bare terrain underneath vegetation canopies and thus improve the generation of a digital terrain model (DTM). Building on the methods for progressive generation of triangulated irregular network (TIN) models reported in the literature, we proposed an integrated approach in which echo detection, terrain identification, and TIN generation are carried out iteratively. The proposed method was tested on a dataset collected by a Riegl LMS Q-560 scanner over a study area near Sault Ste. Marie, Ontario, Canada (46°33′56′′N, 83°25′18′′W). The results demonstrated that more terrain points under shrubs could be identified, and the generated DTMs exhibited more terrain detail than those obtained using the progressive TIN method. In addition, 1275 points across the study area were surveyed on the ground and used to validate the proposed approach. The estimated elevations were shown to have a strong linear relationship with the measured ones, with R^2 values above 0.98, and the RMSEs (Root Mean Squared Errors) between them were less than 0.15 m even for areas with hilly terrain underneath vegetation canopies.
(This article belongs to the Section Forest Remote Sensing)
Graphical abstract
Figure 1. An example of a returned full waveform of a laser pulse passing through a tree (blue points), and the waveforms modelled by fitting the summation of eight and of seven Gaussian functions to the whole waveform (orange and grey lines, respectively). The echo generated by the terrain is indicated by the red arrow.
Figure 2. The location of the study area (left panel) and the plots used for validation (right panel).
Figure 3. (a) Digital elevation model of the study area at a spatial resolution of 30 m by 30 m. (b) The slope calculated from the digital elevation model in (a).
Figure 4. Ortho-views of the discrete LiDAR data of the four forest sites.
Figure 5. The topographic points collected in the field at the Maple site and on a road, used as references to assess the horizontal accuracy of the LiDAR points and to validate the DTM generated using the proposed method, overlain on the digital surface model generated from the LiDAR data.
Figure 6. Diagram of the developed method.
Figure 7. Progressive triangular irregular network (TIN) algorithm parameters: θ, the iteration angle; D, the iteration distance; P, the point to be tested; A, B, and C, the TIN facet vertices.
Figure 8. Left panel: a recorded waveform passing through a facet of the terrain TIN model. Right panel: the black circles are the peaks of the returns detected by the typical Gaussian decomposition method; the red line shows the location of the seed (the point of intersection of the waveform and the TIN facet) used for the seeded Gaussian decomposition; the green curve is the result of fitting one Gaussian function to the region near the seed.
Figure 9. The TIN representation of the digital terrain models (DTMs) generated using the developed algorithm.
Figure 10. The TIN representation of the DTMs generated using the progressive TIN method.
Figure 11. The grid of point densities and coverage at each site, with the densities from the developed algorithm on the left and from the progressive TIN method on the right. Red represents cells in which no points were detected for the given terrain model; shades from black to white represent a range from 1 point per cell to 4+ points per cell. The panels from top to bottom are the results for the Jack Pine, Mixed Woods, and Maple sites, respectively.
Figure 12. The linear relationship between the LiDAR-derived elevations from the developed algorithm and the ground measurements at the four study sites.
Figure 13. The linear relationship between the LiDAR-derived elevations from the progressive TIN method and the ground measurements at the four study sites.
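The seeded Gaussian decomposition step illustrated in Figure 8 reduces to fitting a single Gaussian to the waveform samples around a seed, the point where the recorded waveform intersects the current terrain TIN facet. The sketch below is a minimal illustration of that idea rather than the authors' implementation; the window size, initial guesses, and toy waveform are assumptions.

```python
# Sketch of seeded Gaussian decomposition for a weak terrain echo.
# Assumptions (not from the paper): 1 ns sampling, a fixed fitting
# window, and scipy's curve_fit as the optimizer.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, center, width):
    """Single Gaussian echo model."""
    return amplitude * np.exp(-0.5 * ((t - center) / width) ** 2)

def seeded_fit(times, waveform, seed_time, window=10.0):
    """Fit one Gaussian to the waveform samples near a seed position.

    The seed is where the waveform intersects the current terrain TIN
    facet; restricting the fit to its neighbourhood lets a weak ground
    return be recovered even when ordinary decomposition misses it.
    """
    mask = np.abs(times - seed_time) <= window
    t, w = times[mask], waveform[mask]
    p0 = (w.max(), seed_time, window / 4.0)   # crude initial guess
    params, _ = curve_fit(gaussian, t, w, p0=p0)
    return params                              # amplitude, center, width

# Toy waveform: strong canopy return + weak terrain return + noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 100.0, 1.0)
wf = gaussian(t, 80, 35, 3) + gaussian(t, 8, 70, 3) + rng.normal(0, 0.5, t.size)
amp, center, width = seeded_fit(t, wf, seed_time=68.0)
print(f"terrain echo at ~{center:.1f} ns (amplitude {amp:.1f})")
```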
Article
A Study of Spatial Soil Moisture Estimation Using a Multiple Linear Regression Model and MODIS Land Surface Temperature Data Corrected by Conditional Merging
by Chunggil Jung, Yonggwan Lee, Younghyun Cho and Seongjoon Kim
Remote Sens. 2017, 9(8), 870; https://doi.org/10.3390/rs9080870 - 22 Aug 2017
Cited by 37 | Viewed by 8779
Abstract
This study attempts to estimate spatial soil moisture in South Korea (99,000 km^2) from January 2013 to December 2015 using a multiple linear regression (MLR) model and Terra moderate-resolution imaging spectroradiometer (MODIS) land surface temperature (LST) and normalized difference vegetation index (NDVI) data. The MODIS NDVI was used to reflect vegetation variations. Observed precipitation was measured using the automatic weather stations (AWSs) of the Korea Meteorological Administration (KMA), and soil moisture data were recorded at 58 stations operated by various institutions. Prior to the MLR analysis, the satellite LST data were corrected by applying the conditional merging (CM) technique with observed LST data from 71 KMA stations. The coefficient of determination (R^2) between the original LST and the observed LST was 0.71, and the R^2 between the corrected LST and the observed LST was 0.95 for 3 selected LST stations. The R^2 values of all corrected LSTs were greater than 0.83 for the full set of 71 LST stations. The regression coefficients of the MLR model were estimated seasonally considering the five-day antecedent precipitation. The p-values of all the regression coefficients were less than 0.05, and the R^2 values were between 0.28 and 0.67. One reason for R^2 values less than 0.5 is that the soil classification at each observation site was not completely accurate. Additionally, observations at most of the soil moisture monitoring stations used in this study started in December 2014, and the soil moisture measurements had not yet stabilized. Notably, R^2 and the root mean square error (RMSE) in winter were poor, as reflected by the many missing values, and uncertainty existed in the observations due to soil freezing and mechanical errors. Thus, the prediction accuracy is low in winter due to the difficulty of establishing an appropriate regression model. Specifically, the estimated map of the soil moisture index (SMI) can be used to better understand the severity of droughts given the variability of soil moisture.
(This article belongs to the Special Issue Satellite Remote Sensing for Water Resources in a Changing Climate)
Graphical abstract
Figure 1. Flow chart of the study. For the satellite data, MODIS is the Terra moderate-resolution imaging spectroradiometer. For the soil moisture data, AAOS is the automated agriculture observing system operated by the KMA, TDR is time domain reflectometry, and RDA is the Rural Development Administration.
Figure 2. Distributions of observation stations: (a) the 687 automatic weather system (AWS) stations for continuous monitoring throughout South Korea and (b) the 58 soil moisture stations used for calibration of the multiple linear regression (MLR) model.
Figure 3. Conditional merging process for MODIS LST: (a) observed LST values are collected at the 71 KMA stations; (b) the LST values measured at the 71 stations are interpolated using the ordinary kriging technique; (c) satellite LST data; (d) the satellite LST values at the 71 gauging stations are extracted and then spatially interpolated using the ordinary kriging technique; (e) the residual between (c) and (d); (f) the residual values of (e) are added to the data from (b).
Figure 4. Distribution map of soil information with a 1 km spatial resolution: (a) soil type (silt, clay, loam and sand); (b) soil field capacity (FC); and (c) soil wilting point (WP).
Figure 5. Map of observed land surface temperature (LST) stations: ellipses (in green) denote the stations excluded from the conditional merging (CM) process and used for verification. The number above each pentagon (in red) is the LST station number.
Figure 6. Comparison of observed and simulated LST at the verification stations: (a) site 129; (b) site 192; and (c) site 238. The left graphs show the original MODIS LST values; the right graphs show the corrected MODIS LST values after applying conditional merging (CM).
Figure 7. Final monthly spatial distribution maps of LST during drought years.
Figure 8. Comparison of observed and predicted soil moisture for each soil type. The black line is the observed soil moisture, and the red points are the soil moisture values predicted using the multiple linear regression model. These graphs are representative results for each soil type.
Figure 9. Spatial comparison of monthly (2013–2015) (a) soil moisture and (b) rainfall throughout South Korea.
Figure 10. Comparison of the soil moisture index (SMI) and standardized precipitation index (SPI): green and red dashed circles indicate areas where the SMI and SPI exhibited good agreement.
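The conditional merging steps listed in the Figure 3 caption translate almost directly into code: interpolate the station observations, interpolate the satellite values sampled at those same stations, and add the residual field back. The sketch below substitutes scipy's linear griddata interpolation for the ordinary kriging used in the paper, purely to stay self-contained; the grid and station values are synthetic.

```python
# Sketch of conditional merging (CM) for a satellite LST grid.
# Assumption: linear interpolation stands in for ordinary kriging;
# the station data below are synthetic.
import numpy as np
from scipy.interpolate import griddata

def conditional_merge(sat_grid, grid_xy, stn_xy, stn_obs):
    """Correct a satellite field so it honours station observations.

    Mirrors Figure 3: (b) interpolate observed values, (d) interpolate
    satellite values sampled at the stations, (e) residual = satellite
    minus sampled interpolation, (f) merged = (b) + (e).
    """
    obs_field = griddata(stn_xy, stn_obs, grid_xy, method="linear")
    # Sample the satellite grid at the station locations (nearest pixel).
    sat_at_stn = griddata(grid_xy, sat_grid.ravel(), stn_xy, method="nearest")
    sat_field = griddata(stn_xy, sat_at_stn, grid_xy, method="linear")
    residual = sat_grid.ravel() - sat_field
    # Cells outside the station convex hull stay NaN in this toy example.
    return (obs_field + residual).reshape(sat_grid.shape)

# Tiny synthetic example: a 20x20 grid and 5 stations.
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid_xy = np.column_stack([x.ravel(), y.ravel()])
sat = 20 + 5 * x + np.random.default_rng(1).normal(0, 0.5, x.shape)
stn_xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.5], [0.2, 0.9], [0.9, 0.8]])
stn_obs = np.array([21.0, 24.5, 23.0, 20.5, 25.0])
merged = conditional_merge(sat, grid_xy, stn_xy, stn_obs)
```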
Article
Detection of Asian Dust Storm Using MODIS Measurements
by Yong Xie, Wenhao Zhang and John J. Qu
Remote Sens. 2017, 9(8), 869; https://doi.org/10.3390/rs9080869 - 22 Aug 2017
Cited by 27 | Viewed by 6729
Abstract
Every year, large amounts of dust aerosol are released from dust storms into the atmosphere, with potential impacts on the climate, environment, and air quality. Detecting dust aerosols and monitoring their movement and evolution in a timely manner is therefore an important task. Satellite remote sensing has been demonstrated to be an effective means of observing dust aerosols. In this paper, an algorithm for detecting dust aerosols based on the multi-spectral technique was developed by combining measurements from the moderate resolution imaging spectroradiometer (MODIS) reflective solar bands and thermal emissive bands. Data from dust events that occurred during the past several years were collected as training data for spectral and statistical analyses. According to the spectral curves of the various scene types, a series of spectral bands was selected individually or jointly, and corresponding thresholds were defined for step-by-step scene classification. The multi-spectral algorithm was applied mainly to detect dust storms in Asia. The detection results were validated not only visually against MODIS true color images, but also quantitatively against products of the Ozone Monitoring Instrument (OMI) and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). The validations showed that this multi-spectral detection algorithm is suitable for monitoring dust aerosols in the selected study areas.
(This article belongs to the Section Atmospheric Remote Sensing)
Graphical abstract
Figure 1. Response curves of dust and cloud (left y-axis corresponds to B8, B3, B4, B1, B2, and B7; right y-axis corresponds to B20, B29, B31, and B32). Red line: the reflectance/brightness temperature of the cloud.
Figure 2. Response curves of dust and clear dark pixels.
Figure 3. Response curves of dust and clear bright pixels.
Figure 4. Statistical analyses of training data for deciding thresholds. (a) Brightness Temperature Difference (BTD) (12, 11) values for dust and cloud; (b) Normalized Difference Dust Index (NDDI) values for dust and cloud; (c) BTD (3.7, 11) values for dust and clear pixels over bright surfaces; (d) Ln (R1) values for dust and clear pixels over bright surfaces; (e) BTD (3.7, 11) values for dust and clear pixels over dark surfaces; and (f) Ln (R1) values for dust and clear pixels over dark surfaces.
Figure 5. The flowchart of the multi-spectral dust storm detection technique.
Figure 6. Dust storm over the Taklimakan Desert on 25 June 2005. (a) MODIS true color image; (b) dust image; and (c) Aerosol Optical Depth (AOD) image.
Figure 7. Dust storm in Northeastern China on 7 April 2001. (a) MODIS true color image; (b) dust image; and (c) Aerosol Optical Depth (AOD) image.
Figure 8. Dust storm across Pakistan and Afghanistan on 10 August 2008. (a) MODIS true color image; (b) dust image; and (c) Aerosol Optical Depth (AOD) image.
Figure 9. Validation of the dust image with the OMI UVAI (ultraviolet Aerosol Index) on 23 October 2007 at 04:55 UTC. (a) RGB (Red Green Blue) image; (b) dust image; (c) OMI UVAI image; and (d) the difference between the UVAI and dust images.
Figure 10. (a) The MODIS true color image of the dust storm over the Taklimakan Desert on 26 July 2006. The blue solid line is the footprint of CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization). (b) The dust image of the same storm.
Figure 11. The CALIOP Vertical Feature Mask (VFM) data product on 26 July 2006 at 07:30 UTC. The colors stand for different scene features: 1 = invalid (bad or missing data); 2 = clear air; 3 = cloud; 4 = aerosol; 5 = stratospheric feature (polar stratospheric cloud or stratospheric aerosol); 6 = surface; and 7 = no signal.
Figure 12. The error statistics from validating the MODIS dust aerosol detection results against the CALIOP Vertical Feature Mask (VFM) data product.
Figure 13. The profile of the dust storm along the CALIOP track shown in Figure 10.
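A minimal sketch of the kind of step-by-step threshold test described in the abstract and in Figure 4, combining brightness temperature differences with a normalized difference dust index. The band combinations follow the figure, but every threshold value below is an illustrative placeholder, since the paper derives its thresholds from training data.

```python
# Sketch of a per-pixel multi-spectral dust test.
# All thresholds are illustrative placeholders, NOT the paper's values.
import numpy as np

def detect_dust(bt11, bt12, bt37, r_blue, r_swir,
                btd_12_11_min=0.0, nddi_min=0.0, btd_37_11_min=20.0):
    """Return a boolean dust mask from MODIS-like bands.

    bt11, bt12, bt37 : brightness temperatures (K) at 11, 12, 3.7 um
    r_blue, r_swir   : reflectances at roughly 0.47 and 2.1 um
    """
    btd_12_11 = bt12 - bt11   # dust tends to be >= 0, cloud < 0
    btd_37_11 = bt37 - bt11   # large for dust over dark surfaces
    nddi = (r_swir - r_blue) / (r_swir + r_blue + 1e-6)
    return (btd_12_11 > btd_12_11_min) & (nddi > nddi_min) \
        & (btd_37_11 > btd_37_11_min)

# Example with two pixels: a dusty one and a cloudy one.
dust = detect_dust(bt11=np.array([285.0, 250.0]),
                   bt12=np.array([286.5, 248.0]),
                   bt37=np.array([310.0, 255.0]),
                   r_blue=np.array([0.12, 0.40]),
                   r_swir=np.array([0.25, 0.30]))
print(dust)   # -> [ True False]
```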
Article
Optimal Decision Fusion for Urban Land-Use/Land-Cover Classification Based on Adaptive Differential Evolution Using Hyperspectral and LiDAR Data
by Yanfei Zhong, Qiong Cao, Ji Zhao, Ailong Ma, Bei Zhao and Liangpei Zhang
Remote Sens. 2017, 9(8), 868; https://doi.org/10.3390/rs9080868 - 22 Aug 2017
Cited by 57 | Viewed by 6412
Abstract
Hyperspectral images and light detection and ranging (LiDAR) data provide, respectively, the high spectral resolution and the accurate elevation information required for urban land-use/land-cover (LULC) classification. To combine the respective advantages of hyperspectral and LiDAR data, this paper proposes an optimal decision fusion method based on adaptive differential evolution, namely ODF-ADE, for urban LULC classification. In the ODF-ADE framework, the normalized difference vegetation index (NDVI), gray-level co-occurrence matrix (GLCM) and digital surface model (DSM) are extracted to form the feature map. Three different classifiers, the maximum likelihood classifier (MLC), the support vector machine (SVM) and multinomial logistic regression (MLR), are used to classify the extracted features. To find the optimal weights for the different classification maps, weighted voting is used to obtain the classification result, and the weights of each classification map are optimized by a differential evolution algorithm that uses a self-adaptive strategy to set its parameters. The final classification map is obtained after post-processing based on conditional random fields (CRF). The experimental results confirm that the proposed algorithm is very effective in urban LULC classification.
Graphical abstract
Figure 1. Framework of the differential evolution algorithm.
Figure 2. Framework of the proposed methodology.
Figure 3. Initial population.
Figure 4. The objective function based on the minimum distance.
Figure 5. The self-adaptive encoding strategy.
Figure 6. Experimental data. (a) False-color image of the hyperspectral image. (b) LiDAR-derived DSM.
Figure 7. Location and distribution of the training and validation samples. (a) Location and distribution of the training samples. (b) Location and distribution of the validation samples.
Figure 8. Final classification map.
Figure 9. The extracted viaduct.
Figure 10. Sensitivity to the parameters of ODF-ADE. (a) Sensitivity of ODF-ADE in relation to F. (b) Sensitivity of ODF-ADE in relation to CR. (c) Sensitivity of ODF-ADE in relation to NP.
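To make the weighted-voting step concrete, the sketch below uses scipy's generic differential evolution to search for per-classifier weights that minimize disagreement with a validation sample; it stands in for the paper's self-adaptive DE and omits the CRF post-processing. The three "classifier" outputs are random toy data.

```python
# Sketch of decision fusion: optimize classifier weights with
# differential evolution so the weighted vote matches validation labels.
# Generic DE via scipy; self-adaptive parameters and CRF step omitted.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(42)
n_pixels, n_classes, n_classifiers = 500, 4, 3
labels = rng.integers(0, n_classes, n_pixels)        # validation labels
# Toy per-classifier class maps that agree with the truth ~70% of the time.
maps = np.where(rng.random((n_classifiers, n_pixels)) < 0.7,
                labels, rng.integers(0, n_classes, (n_classifiers, n_pixels)))

def fused_prediction(weights):
    """Weighted vote: accumulate each classifier's weight on its class."""
    votes = np.zeros((n_pixels, n_classes))
    for w, m in zip(weights, maps):
        votes[np.arange(n_pixels), m] += w
    return votes.argmax(axis=1)

def objective(weights):
    # Minimize the disagreement with the validation labels.
    return np.mean(fused_prediction(weights) != labels)

result = differential_evolution(objective, bounds=[(0, 1)] * n_classifiers,
                                seed=42, maxiter=50)
print("optimal weights:", result.x, "error:", result.fun)
```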
Article
The Effects of Aerosol on the Retrieval Accuracy of NO2 Slant Column Density
by Hyunkee Hong, Jhoon Kim, Ukkyo Jeong, Kyung-soo Han and Hanlim Lee
Remote Sens. 2017, 9(8), 867; https://doi.org/10.3390/rs9080867 - 22 Aug 2017
Cited by 7 | Viewed by 6298
Abstract
We investigate the effects of aerosol optical depth (AOD), single scattering albedo (SSA), aerosol peak height (APH), measurement geometry (solar zenith angle (SZA) and viewing zenith angle (VZA)), relative azimuth angle, and surface reflectance on the accuracy of NO2 slant column density (SCD) retrieval using synthetic radiance. High AOD and APH are found to decrease NO2 SCD retrieval accuracy. In moderately polluted regions (5 × 10^15 molecules cm^−2 < NO2 vertical column density (VCD) < 2 × 10^16 molecules cm^−2) and clean regions (NO2 VCD < 5 × 10^15 molecules cm^−2), the correlation coefficient (R) between true and retrieved NO2 SCDs is 0.88 and 0.79, respectively, when AOD is about 0.1 and APH is 0 km. However, when AOD and APH are about 1.0 and 4 km, respectively, R decreases to 0.84 and 0.53 in moderately polluted and clean regions, respectively. On the other hand, in heavily polluted regions (NO2 VCD > 2 × 10^16 molecules cm^−2), even high AOD and APH values are found to have a negligible effect on NO2 SCD precision. Under high AOD and APH conditions in clean NO2 regions, the R between true and retrieved NO2 SCDs increases from 0.53 to 0.58 when four pixels are co-added spatially, showing the improvement in the accuracy of NO2 SCD retrieval. In addition, high SZA and VZA are also found to decrease the accuracy of the NO2 SCD retrieval.
(This article belongs to the Section Atmospheric Remote Sensing)
Graphical abstract
Figure 1. Flow chart of the synthetic radiance and true air mass factor calculation.
Figure 2. NO2 mixing ratio profiles as a function of NO2 vertical column density (VCD) used to calculate synthetic radiances.
Figure 3. Synthetic radiances as a function of wavelength with (a) AOD of 0.1, 0.5, and 1.0 (SZA = 20°; VZA = 20°; RAA = 50°; SSA = 0.95; APH = 0 km), and (b) APH of 0, 2, and 4 km (SZA = 20°; VZA = 20°; RAA = 50°; SSA = 0.95; AOD = 1.0). (Aerosol optical depth (AOD), solar zenith angle (SZA), viewing zenith angle (VZA), relative azimuth angle (RAA), single scattering albedo (SSA), and aerosol peak height (APH).)
Figure 4. Example of a spectral fit for NO2 in the range 432 to 450 nm when NO2 VCD is 1 × 10^15 molecules cm^−2, surface reflectance is 0.04, SZA and VZA are 20°, APH is 0 km, AOD is 0.1, and SSA is 0.999. The blue line is the NO2 optical density (the cross section multiplied by the retrieved NO2 slant column) and the red line is the blue line plus the fit residual. (Vertical column density (VCD), solar zenith angle (SZA), viewing zenith angle (VZA), aerosol peak height (APH), aerosol optical depth (AOD), single scattering albedo (SSA).)
Figure 5. Absolute percentage difference (APD) under no-noise conditions between true and retrieved NO2 SCDs as a function of NO2 VCD with (a) AOD of 0.1, 0.5, and 1.0; (b) SSA of 0.999, 0.900, and 0.820; (c) APH of 0, 2, and 4 km; (d) AMF_G of 2.3, 2.6, and 4.2; (e) RAA of 0°, 90°, and 180°; and (f) SFR of 0.04, 0.08, and 0.12. (Slant column density (SCD), vertical column density (VCD), aerosol optical depth (AOD), single scattering albedo (SSA), aerosol peak height (APH), air mass factor (AMF), relative azimuth angle (RAA), and surface reflectance (SFR).)
Figure 6. Absolute percentage difference (APD) between true and retrieved NO2 SCDs using a spectrum with noise (SNR = 2000) as a function of NO2 VCD with (a) AOD of 0.1, 0.5, and 1.0; (b) SSA of 0.999, 0.900, and 0.820; (c) APH of 0, 2, and 4 km; (d) AMF_G of 2.3, 2.6, and 4.2; (e) RAA of 0°, 90°, and 180°; and (f) SFR of 0.04, 0.08, and 0.12. (Slant column density (SCD), signal-to-noise ratio (SNR), vertical column density (VCD), aerosol optical depth (AOD), single scattering albedo (SSA), aerosol peak height (APH), air mass factor (AMF), relative azimuth angle (RAA), and surface reflectance (SFR).)
Figure 7. Absolute percentage difference (APD) between true and retrieved NO2 SCDs as a function of NO2 VCD under various SNR conditions (SZA = 20°; VZA = 20°; RAA = 50°; SSA = 0.95; AOD = 0.1; APH = 0 km). (Slant column density (SCD), vertical column density (VCD), signal-to-noise ratio (SNR), solar zenith angle (SZA), viewing zenith angle (VZA), relative azimuth angle (RAA), single scattering albedo (SSA), aerosol optical depth (AOD), and aerosol peak height (APH).)
Figure 8. (a) True NO2 SCDs, (b) retrieved NO2 SCDs, and (c) a scatter plot between true and retrieved NO2 SCDs in the Hong Kong-Macau region in December 2011 under low-AOD conditions. Panels (d–f) are the same as (a–c) except under high-AOD conditions. (Slant column density (SCD) and aerosol optical depth (AOD).)
Figure 9. (a) True NO2 SCD, (b) retrieved NO2 SCD, and scatter plots of true and retrieved NO2 SCD in (c) heavily polluted and (d) moderately polluted regions in Japan in December 2011 under low-AOD conditions. (Slant column density (SCD) and aerosol optical depth (AOD).)
Figure 10. (a) True NO2 SCD, (b) retrieved NO2 SCD, and scatter plots of true and retrieved NO2 SCD in (c) heavily polluted and (d) moderately polluted regions in Japan in December 2011 under high-AOD conditions. (Slant column density (SCD) and aerosol optical depth (AOD).)
Figure 11. (a) True NO2 SCDs, (b) retrieved NO2 SCDs, and (c) a scatter plot of true and retrieved NO2 SCDs in Manila in December 2011 under low-AOD conditions. Panels (d–f) are the same as (a–c) except under high-AOD conditions. (Slant column density (SCD) and aerosol optical depth (AOD).)
Figure 12. (a) True NO2 SCDs, (b) retrieved NO2 SCDs, and (c) a scatter plot of true and retrieved NO2 SCDs using the pixel co-adding method under high-AOD conditions. (Slant column density (SCD) and aerosol optical depth (AOD).)
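The SCD retrieval that this sensitivity study exercises is, at its core, a DOAS-style linear fit: the optical density is regressed onto the NO2 absorption cross section plus a low-order polynomial for broadband effects, and the cross-section coefficient is the slant column. A sketch with a synthetic cross section and spectrum, both made up for illustration:

```python
# Sketch of a DOAS-type slant column fit in the 432-450 nm window:
# ln(I0/I) = SCD * sigma_NO2(lambda) + polynomial(lambda) + residual.
# The cross section and spectrum here are synthetic stand-ins.
import numpy as np

wl = np.linspace(432.0, 450.0, 200)                  # wavelength grid (nm)
rng = np.random.default_rng(7)
sigma = 1e-19 * (1 + 0.5 * np.sin(2 * np.pi * (wl - 432) / 3))  # fake xsec

true_scd = 4e15                                      # molecules cm^-2
poly_true = 0.01 - 0.0005 * (wl - 441)               # broadband term
od = true_scd * sigma + poly_true + rng.normal(0, 1e-4, wl.size)

# Linear least squares: design columns = [sigma, 1, x, x^2].
x = wl - wl.mean()                                   # centered for conditioning
A = np.column_stack([sigma, np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(A, od, rcond=None)
print(f"retrieved SCD = {coef[0]:.3e} molecules cm^-2 (true {true_scd:.1e})")
```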
Article
Azimuth Ambiguities Removal in Littoral Zones Based on Multi-Temporal SAR Images
by Xiangguang Leng, Kefeng Ji, Shilin Zhou and Huanxin Zou
Remote Sens. 2017, 9(8), 866; https://doi.org/10.3390/rs9080866 - 22 Aug 2017
Cited by 17 | Viewed by 8241
Abstract
Synthetic aperture radar (SAR) is one of the most important techniques for ocean monitoring. Azimuth ambiguities remain a real problem in SAR images today and can cause performance degradation in SAR ocean applications. Littoral zones in particular can be strongly affected by land-based sources, even though they are usually the regions of interest (ROIs). Given the complexity and diversity of littoral zones, azimuth ambiguity removal there is a difficult problem. Since SAR sensors have a repeat cycle, multi-temporal SAR images provide new insight into this problem. A method for azimuth ambiguity removal in littoral zones based on multi-temporal SAR images is proposed in this paper. The proposed processing chain includes co-registration, local correlation, binarization, masking, and restoration steps. It is designed to remove azimuth ambiguities caused by fixed land-based sources. The idea underlying the proposed method is that the sea surface is dynamic, whereas azimuth ambiguities caused by land-based sources are constant; the temporal consistency of azimuth ambiguities is therefore higher than that of sea clutter, which opens up the possibility of using multi-temporal SAR data to remove them. The design of the method and the experimental procedure are based on images from the Sentinel data hub of the European Space Agency (ESA). Both Interferometric Wide Swath (IW) and Stripmap (SM) mode images are used to validate the proposed method. This paper also presents two RGB composition methods for better visualization of azimuth ambiguities. Experimental results show that the proposed method can remove azimuth ambiguities in littoral zones effectively.
(This article belongs to the Special Issue Ocean Remote Sensing with Synthetic Aperture Radar)
Graphical abstract
Figure 1. Illustration of azimuth ambiguity formation in SAR images; targets A and B have equal Doppler histories due to aliasing [20].
Figure 2. Examples of azimuth ambiguities in current synthetic aperture radar (SAR) imagery: (a) TerraSAR-X StripMap image, azimuth displacement ~4.4 km; (b) COSMO-SkyMed StripMap image, azimuth displacement ~4.4 km; (c) RADARSAT-2 Standard mode image, azimuth displacement ~5.2 km; (d) Sentinel-1 IW image, azimuth displacement ~4.7 km (beam IW2).
Figure 3. The flowchart of the proposed method, which includes co-registration, local correlation, binarization, masking, and restoration steps. The idea underlying the proposed method is that the sea surface is dynamic whereas azimuth ambiguities caused by land-based sources are constant. It is designed to remove azimuth ambiguities caused by fixed land-based sources.
Figure 4. The Sentinel-1A multi-temporal dataset. The similar clusters of dots at the same location in T1, T2, and T3 are prominent azimuth ambiguities from the strong source targets on land. T1: S1A_IW_GRDH_1SDV_20170206T165649_20170206T165714_015166_018D05_8064; T2: S1A_IW_GRDH_1SDV_20170218T165649_20170218T165714_015341_01927E_D691; T3: S1A_IW_GRDH_1SDV_20170302T165648_20170302T165713_015516_0197CF_75D4.
Figure 5. Google Earth image corresponding to the Sentinel-1A images. The red rectangle in (a) indicates the subimage location in Figure 4; (b) is the corresponding optical image of the subimage; yellow rectangles 1 and 2 mark the typical ports (c) and (d), respectively, which can cause azimuth ambiguities.
Figure 6. RGB composition results of the Sentinel-1A multi-temporal data. Azimuth ambiguities appearing white in (a–c) can be recognized more easily than in the original images. A ship target in (b) appears white because there is a blue target near it. (c) has a smaller valid size because of the shift operation.
Figure 7. Azimuth ambiguity mask: (a) local correlation coefficient map; (b) binarization result; (c) T3 ambiguity mask.
Figure 8. Restored images: (a) T2 restored image; (b) T3 restored image. The zoomed patches before and after restoration in the yellow rectangles are shown on the side.
Figure 9. Inversion to mask strong land sources: (a) T2 image; (b) T3 image. The strong land sources are masked in red. Note that the azimuth displacement differs between beams: ~5.3 km in beam IW1 and 4.8 km in beam IW2. Red masks on the land can be recognized as 1st order azimuth ambiguities, whereas green masks can be recognized as 2nd order azimuth ambiguities or fixed artifacts on the sea.
Figure 10. Azimuth ambiguity removal results: (a,b) original Sentinel-1B images; (c,d) restored Sentinel-1B images. The zoomed patches before and after restoration in the yellow rectangles are shown on the side. T1: S1B_S6_GRDH_1SDH_20170320T002541_20170320T002604_004785_0085BB_9204; T2: S1B_S6_GRDH_1SDH_20170401T002541_20170401T002605_004960_008AC4_D8E2.
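A compressed sketch of the processing chain in Figure 3: compute a local correlation coefficient between co-registered acquisitions, binarize it into an ambiguity mask, and restore the masked pixels from another acquisition. Filling from the temporal minimum is an illustrative restoration rule, not necessarily the paper's, and only the first two acquisitions are correlated here for brevity.

```python
# Sketch of the multi-temporal chain: local correlation -> binarization
# -> masking -> restoration. Images (float arrays) are assumed already
# co-registered; the fill rule below is an illustrative choice.
import numpy as np
from scipy.ndimage import uniform_filter

def local_correlation(img_a, img_b, size=9):
    """Pearson correlation of two images inside a sliding window."""
    mu_a, mu_b = uniform_filter(img_a, size), uniform_filter(img_b, size)
    cov = uniform_filter(img_a * img_b, size) - mu_a * mu_b
    var_a = uniform_filter(img_a**2, size) - mu_a**2
    var_b = uniform_filter(img_b**2, size) - mu_b**2
    return cov / np.sqrt(np.clip(var_a * var_b, 1e-12, None))

def remove_ambiguities(stack, corr_threshold=0.5):
    """stack: (T, H, W) co-registered intensity images over the sea.

    Ambiguities from fixed land sources repeat across acquisitions, so
    they correlate in time; dynamic sea clutter does not.
    """
    corr = local_correlation(stack[0], stack[1])   # first two epochs only
    mask = corr > corr_threshold                   # binarization step
    restored = stack.copy()
    fill = stack.min(axis=0)                       # restoration source
    restored[:, mask] = fill[mask]
    return restored, mask
```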
Article
Erosion Associated with Seismically-Induced Landslides in the Middle Longmen Shan Region, Eastern Tibetan Plateau, China
by Zhikun Ren, Zhuqi Zhang and Jinhui Yin
Remote Sens. 2017, 9(8), 864; https://doi.org/10.3390/rs9080864 - 21 Aug 2017
Cited by 11 | Viewed by 5691
Abstract
The 2008 Wenchuan earthquake and the associated co-seismic landslides were the most recent expression of the rapid deformation and erosion occurring in the eastern Tibetan Plateau. The erosion associated with co-seismic landslides balances the long-term tectonic uplift in the topographic evolution of the region; however, the quantitative relationship between earthquakes, uplift, and erosion is still unknown. In order to quantitatively distinguish the seismically-induced component of the total erosion, we quantify the Wenchuan earthquake-induced erosion using the differential digital elevation model (DEM) method and previously-reported landslide volumes. Our results show that the seismically-induced erosion is comparable with the pre-earthquake short-term erosion. The seismically-induced erosion rate contributes ~50% of the total erosion rate, which suggests that the local topographic evolution of the middle Longmen Shan region may be closely related to tectonic events such as the 2008 Wenchuan earthquake. We propose that seismically-induced erosion is a very important component of the total erosion, particularly in active orogenic regions. Our results demonstrate that the remote sensing technique of differential DEMs provides a powerful tool for evaluating the volume of co-seismic landslides produced in intermountain regions by strong earthquakes.
(This article belongs to the Special Issue Remote Sensing of Landslides)
Graphical abstract
Figure 1. Simplified geological structures of the Longmen Shan region. Thick red lines indicate the co-seismic surface ruptures of the 2008 Mw 7.9 Wenchuan earthquake [16,17,18]. The gray polygons indicate the landslide inventory (courtesy of Joshua West and Gen Li), and the yellow polygon indicates the area covered by the high-resolution DEM.
Figure 2. Simplified comparison of erosion rates according to the residence time of the co-seismic landslide material.
Figure 3. The seismically-induced erosion rates derived using the differential DEM method, shown within the area covered by the high-resolution DEM. Erosion rates are calculated within small catchments by summing the landslide volume using the elevation difference.
Figure 4. The seismically-induced erosion rates within each catchment in the Longmen Shan region. Erosion rates are calculated using the summed landslide volumes within each catchment.
Figure 5. (a) Distribution of short-term erosion rates from previous studies (modified from Reference [24]) and (b) seismically-induced erosion rates from this study.
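The differential DEM estimate reduces to simple bookkeeping: sum the negative elevation changes within each catchment, convert to a volume using the cell area, and normalize by catchment area and the time span represented. A minimal numpy sketch, with all inputs illustrative:

```python
# Sketch of seismically-induced erosion from differential DEMs.
# Inputs are illustrative: two co-registered DEMs (m), a cell size (m),
# a catchment-label raster, and the time window represented (years).
import numpy as np

def erosion_rates(dem_pre, dem_post, cell_size, catchments, years):
    """Per-catchment erosion rate (mm/yr) from negative elevation change."""
    dh = dem_post - dem_pre
    loss = np.where(dh < 0, -dh, 0.0)          # landslide material removed
    rates = {}
    for cid in np.unique(catchments):
        in_cat = catchments == cid
        volume = loss[in_cat].sum() * cell_size**2        # m^3
        area = in_cat.sum() * cell_size**2                # m^2
        rates[cid] = 1000.0 * volume / (area * years)     # mm/yr
    return rates
```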
Technical Note
A Dynamic Landsat Derived Normalized Difference Vegetation Index (NDVI) Product for the Conterminous United States
by Nathaniel P. Robinson, Brady W. Allred, Matthew O. Jones, Alvaro Moreno, John S. Kimball, David E. Naugle, Tyler A. Erickson and Andrew D. Richardson
Remote Sens. 2017, 9(8), 863; https://doi.org/10.3390/rs9080863 - 21 Aug 2017
Cited by 186 | Viewed by 27338
Abstract
Satellite derived vegetation indices (VIs) are broadly used in ecological research, ecosystem modeling, and land surface monitoring. The Normalized Difference Vegetation Index (NDVI), perhaps the most utilized VI, has countless applications across ecology, forestry, agriculture, wildlife, biodiversity, and other disciplines. Calculating satellite derived NDVI is not always straightforward, however, as satellite remote sensing datasets are inherently noisy due to cloud and atmospheric contamination, data processing failures, and instrument malfunction. Readily available NDVI products that account for these complexities are generally at coarse resolution; high resolution NDVI datasets are not conveniently accessible, and developing them often presents numerous technical and methodological challenges. We address this deficiency by producing a Landsat derived, high resolution (30 m), long-term (30+ years) NDVI dataset for the conterminous United States. We use Google Earth Engine, a planetary-scale cloud-based geospatial analysis platform, for processing the Landsat data and distributing the final dataset. We use a climatology driven approach to fill missing data and validate the dataset with established remote sensing products at multiple scales. We provide access to the composites through a simple web application, allowing users to customize key parameters appropriate for their application, question, and region of interest.
(This article belongs to the Collection Google Earth Engine Applications)
Graphical abstract
Figure 1. (a) A 30 m continuous CONUS Landsat NDVI composite for 28 July 2015; our methods produce broad scale composites with minimal data gaps and reduce the effect of scene edges. (b,c) Local scale comparison of (b) Landsat NDVI at 30 m and (c) MODIS MOD13Q1 at 250 m from the same composite period. The Landsat product provides added spatial detail important for measuring certain ecological processes.
Figure 2. (a) A simple 16-day mean NDVI composite from 28 July to 12 August 2015 created from the Landsat 7 and 8 sensors. The composite contains missing data due to cloud cover, and scene edges are apparent due to differing acquisition dates. (b) A 16-day climatology (5-year) gap-filled composite for the same time and location. The climatology is user-defined in order to produce a composite appropriate for the question being asked.
Figure 3. A timeline showing data availability for Landsat NDVI, based upon the Landsat surface reflectance products, and for MOD13Q1. The extended Landsat record provides a longer continuous record of high resolution NDVI.
Figure 4. A flow chart demonstrating the NDVI compositing process, in which the best available pixels from all available Landsat sensors are selected and combined to produce the final NDVI composite value.
Figure 5. A screenshot of the NDVI web application (https://ndvi.ntsg.umt.edu). To download a composite, users set their desired parameters in the left panel. The region of interest can either be an uploaded shapefile or a polygon drawn directly on the map. The composite is processed on the fly and users are notified via email when it is ready to download.
Figure 6. The distribution of Pearson correlation coefficients between MOD13Q1 NDVI and Landsat NDVI for each land cover class. Asterisks (*) represent suspected outliers (observations that fall outside the upper or lower quartiles plus or minus 1.5 times the interquartile distance).
Figure 7. Time series of 30 m Landsat NDVI and 250 m MOD13Q1 NDVI from 2013 to 2015, separated by land cover class. After April 2013, the Landsat NDVI time series includes data from both Landsat 7 and 8; before April 2013 it includes only Landsat 7 data. Each time series is from a single point within a homogeneous area (i.e., pixels where both Landsat and MOD13Q1 represent the same land cover), sampled at a location indicative of the major land cover classes.
Figure 8. (a) Pixel locations in central Washington, USA. Landsat derived NDVI can provide increased detail in heterogeneous landscapes. The difference in pixel shape is due to native projections being transformed to a common projection. (b) Chart of the Landsat derived NDVI and MOD13Q1 NDVI time series for 2015.
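The compositing logic of Figure 4 can be written as a per-pixel reduction: average the good-quality observations in the 16-day window and, where none exist, fall back to a multi-year climatology for the same period. The numpy sketch below is a stand-in for the paper's Google Earth Engine implementation, with synthetic scenes.

```python
# Sketch of 16-day NDVI compositing with climatology gap fill.
# obs:  (N, H, W) NDVI scenes in the window, NaN where masked (cloud/QA)
# clim: (Y, H, W) composites for the same 16-day period in prior years
# A numpy stand-in for the paper's Google Earth Engine implementation.
import numpy as np

def composite(obs, clim):
    count = np.sum(~np.isnan(obs), axis=0)
    total = np.nansum(obs, axis=0)
    mean_obs = np.where(count > 0, total / np.maximum(count, 1), np.nan)
    clim_count = np.maximum(np.sum(~np.isnan(clim), axis=0), 1)
    fill = np.nansum(clim, axis=0) / clim_count       # multi-year mean
    return np.where(np.isnan(mean_obs), fill, mean_obs)

# Toy example: 3 scenes with a persistent gap, 5 climatology years.
rng = np.random.default_rng(3)
obs = rng.uniform(0.2, 0.8, (3, 4, 4))
obs[:, 0, 0] = np.nan                       # a pixel missing in every scene
clim = rng.uniform(0.2, 0.8, (5, 4, 4))
print(composite(obs, clim)[0, 0])           # filled from the climatology
```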
Article
Mapping Regional Urban Extent Using NPP-VIIRS DNB and MODIS NDVI Data
by Run Wang, Bo Wan, Qinghua Guo, Maosheng Hu and Shunping Zhou
Remote Sens. 2017, 9(8), 862; https://doi.org/10.3390/rs9080862 - 21 Aug 2017
Cited by 41 | Viewed by 8789
Abstract
The accurate and timely monitoring of regional urban extent is helpful for analyzing urban sprawl and studying environmental issues related to urbanization. This paper proposes a classification scheme for large-scale urban extent mapping by combining the Day/Night Band of the Visible Infrared Imaging Radiometer Suite on the Suomi National Polar-orbiting Partnership Satellite (NPP-VIIRS DNB) and the Normalized Difference Vegetation Index from the Moderate Resolution Imaging Spectroradiometer products (MODIS NDVI). A Back Propagation (BP) neural network based one-class classification method, the Present-Unlabeled Learning (PUL) algorithm, is employed to classify images into urban and non-urban areas. Experiments are conducted in mainland China (excluding surrounding islands) to detect urban areas in 2012. Results show that the proposed model can successfully map urban areas with a kappa of 0.842 at the pixel level. Most of the urban area is identified, with a producer's accuracy of 79.63%, and only 10.42% of the generated urban area is misclassified, giving a user's accuracy of 89.58%. At the city level, among 647 cities, only four county-level cities are omitted. To evaluate the effectiveness of the proposed scheme, three contrastive analyses are conducted: (1) comparing the urban map obtained in this paper with that generated from the Defense Meteorological Satellite Program/Operational Linescan System Nighttime Light Data (DMSP/OLS NLD) and MODIS NDVI, and with that extracted from MCD12Q1 in the MODIS products; (2) comparing the performance of the integration of NPP-VIIRS DNB and MODIS NDVI with that of single input data; and (3) comparing the classification method used in this paper (PUL) with a linear method (the Large-scale Impervious Surface Index (LISI)). According to our analyses, the proposed classification scheme shows great potential for mapping regional urban extents in an effective and efficient manner.
(This article belongs to the Special Issue Recent Advances in Remote Sensing with Nighttime Lights)
Graphical abstract
Figure 1. The study area, containing 30 provinces of mainland China, and the locations of the eight typical cities used in Section 5.
Figure 2. Pre-processing procedure for the original nighttime light image composited from the Day/Night Band of the Visible Infrared Imaging Radiometer Suite on the Suomi National Polar-orbiting Partnership Satellite (the NPP-VIIRS DNB data).
Figure 3. Flowchart of extracting the urban extent in the study area.
Figure 4. Urban map obtained using the proposed classification scheme.
Figure 5. Kappa coefficient distribution at the provincial level. The values, ranging from 0.73 to 0.99, are divided into six classes by equal intervals.
Figure 6. The user's and producer's accuracies of the 30 provinces in ascending order of kappa coefficient.
Figure 7. The distribution of the 647 cities, represented by the locations of their administrative departments.
Figure 8. Urban extent of five cities (Beijing, Wuhan, Hohhot, Datong and Suihua) in different products. The Day/Night Band of the Visible Infrared Imaging Radiometer Suite on the Suomi National Polar-orbiting Partnership Satellite (NPP-VIIRS DNB), the Defense Meteorological Satellite Program/Operational Linescan System Nighttime Light Data (DMSP/OLS NLD), the Normalized Difference Vegetation Index from the Moderate Resolution Imaging Spectroradiometer products (MODIS NDVI) and the public land cover dataset in the MODIS products (MCD12Q1) were collected for 2012. Landsat images were collected between October 2012 and August 2013 for the best display effect.
Figure 9. Latitudinal transects of the normalized NPP-VIIRS DNB, the normalized DMSP/OLS NLD and MODIS NDVI along one transect (the black guide line) in Beijing.
Figure 10. The distribution frequencies of the NDVI values of four easily confused land cover types: (a) artificial surfaces, (b) cultivated land, (c) water bodies and (d) bare land. The x-axis is the maximum NDVI value in 2012, and the y-axis is the distribution frequency.
Figure 11. The distribution frequencies of the NPP-VIIRS DNB values of four easily confused land cover types: (a) artificial surfaces, (b) cultivated land, (c) water bodies and (d) bare land. The x-axis is the original NPP-VIIRS DNB value in 2012, and the y-axis is the distribution frequency. Pixels with DNB values greater than 50 nW·cm^−2·sr^−1 are not shown.
Figure 12. The joint distribution of the NPP-VIIRS DNB value and the MODIS NDVI value for four easily confused land cover types (200 random samples): water bodies, cultivated land, artificial surfaces and bare land. The x-axis is the original NPP-VIIRS DNB value in 2012, and the y-axis is the maximum NDVI in 2012. Points with DNB values greater than 50 nW·cm^−2·sr^−1 are not displayed.
Figure 13. The joint distribution of the NPP-VIIRS DNB value and the MODIS NDVI value for two types of artificial surfaces. The x-axis is the original NPP-VIIRS DNB value in 2012, and the y-axis is the maximum NDVI in 2012. Points with DNB values greater than 50 nW·cm^−2·sr^−1 are not displayed.
Figure 14. The standard deviations of Present-Unlabeled Learning (PUL) and the Large-scale Impervious Surface Index (LISI) in three regions.
Figure 15. (a) The urban pixel count distributions of PUL in three regions; and (b) the urban pixel count distributions of LISI in three regions. The x-axis is the threshold value, and the y-axis is the pixel count of the urban area.
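One standard way to realize positive-unlabeled classification (the role the BP network plays in the PUL algorithm) is Elkan and Noto's trick: train a probabilistic classifier to separate the labeled urban pixels from unlabeled pixels, then rescale its scores by the mean score on known positives. The sketch below uses scikit-learn's MLPClassifier on made-up DNB/NDVI features; it is an assumption about one plausible implementation, not the paper's exact algorithm.

```python
# Sketch of positive-unlabeled (PU) urban classification on stacked
# NPP-VIIRS DNB + MODIS NDVI features, after Elkan & Noto (2008).
# Stand-in for the paper's BP-network PUL algorithm; data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Feature vector per pixel: [normalized DNB, max NDVI].
# Urban pixels are assumed bright at night with low vegetation.
urban = np.column_stack([rng.uniform(0.5, 1.0, 300),
                         rng.uniform(0.0, 0.4, 300)])
unlabeled = np.column_stack([rng.uniform(0.0, 1.0, 2000),
                             rng.uniform(0.0, 1.0, 2000)])

X = np.vstack([urban, unlabeled])
s = np.concatenate([np.ones(len(urban)), np.zeros(len(unlabeled))])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, s)                               # labeled vs. unlabeled

# Elkan-Noto correction: p(urban|x) = p(labeled|x) / c, where c is
# estimated as the mean classifier score over the known positives.
c = clf.predict_proba(urban)[:, 1].mean()
p_urban = clf.predict_proba(unlabeled)[:, 1] / c
urban_mask = p_urban > 0.5
print(f"{urban_mask.mean():.1%} of unlabeled pixels classified as urban")
```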
Article
Effects of Small-Scale Gold Mining Tailings on the Underwater Light Field in the Tapajós River Basin, Brazilian Amazon
by Felipe De Lucia Lobo, Maycira Costa, Evlyn Márcia Leão De Moraes Novo and Kevin Telmer
Remote Sens. 2017, 9(8), 861; https://doi.org/10.3390/rs9080861 - 21 Aug 2017
Cited by 17 | Viewed by 8553
Abstract
Artisanal and Small-scale Gold Mining (ASGM) within the Amazon region has created several environmental impacts, such as mercury contamination and changes in water quality due to increased siltation. This paper describes the effects of water siltation on the underwater light environment of rivers under different levels of gold mining activity in the Tapajós River Basin, and investigates possible impacts on the phytoplankton community. Two field campaigns were conducted in the Tapajós River Basin, during the high water level and low water level seasons, to measure Inherent and Apparent Optical Properties (IOPs, AOPs), including the scattering (b) and absorption (a) coefficients, and biogeochemical data (sediment content, pigments, and phytoplankton quantification). The biogeochemical data were separated into five classes according to the concentration of total suspended solids (TSS), ranging from 1.8 mg·L^−1 to 113.6 mg·L^−1. The in-water light environment varied among those classes due to a wide range of concentrations of inorganic TSS originating from different levels of mining activity. For tributaries with low or no influence of mining tailings (TSS up to 6.8 mg·L^−1), waters are relatively more absorbing, with a b:a ratio of 0.8 at 440 nm and a b(660) magnitude of 2.1 m^−1. With increased TSS loadings from mining operations (TSS over 100 mg·L^−1), the scattering process prevails over absorption (b:a ratio of 10.0 at 440 nm), and b(660) increases to 20.8 m^−1. Non-impacted tributaries presented a critical depth for phytoplankton productivity of up to 6.0 m, with the available light evenly distributed throughout the spectrum. For greatly impacted waters, in contrast, light attenuation was faster, reducing the critical depth to about 1.7 m, with most of the available light comprising red wavelengths. Overall, a dominance of diatoms was observed for the upstream rivers, whereas cyanobacteria prevailed in the lower section of the Tapajós River. The results suggest that the spatial and temporal distribution of phytoplankton in the Tapajós River Basin is not only a function of light availability, but rather depends on the interplay of factors including flood pulse, water velocity, nutrient availability, and seasonal variation of incoming irradiance. Ongoing research indicates that the effects of mining tailings on the aquatic environment described here are occurring in several rivers within the Amazon River Basin.
(This article belongs to the Special Issue Remote Sensing of Water Quality)
Graphical abstract
Figure 1. Study area. (a) Overview of the Brazilian Amazon. (b) Tapajós River Basin in the Brazilian Amazon showing the main tributaries, sample sites (see Section 4.1), deforestation [22] (light grey), and mines [23]. (c) Water flow (Q) and water speed (v) for the Tapajós River at the Itaituba City region during high and low water levels.
Figure 2. The methodology comprises (a) field campaigns for acquisition of optical properties and biogeochemical measurements to quantify underwater light field changes from non-impacted to impacted rivers; and (b) assessment of light availability for phytoplankton, including critical depth analyses.
Figure 3. Schematic representation of TSS concentrations along the Tapajós River and its tributaries for high (April 2011) and low (September 2012) water levels. The river and its tributaries were classified according to TSS concentrations (mg·L^−1). TSS for the Tocantins, Novo, and Jamanxim rivers during the low water season was retrieved from Landsat surface reflectance (red band) [51]. The level of mining impact is an arbitrary classification considering the intensity of mining and the distribution of mining areas. Not to scale.
Figure 4. (a) Spatial distribution of phytoplankton groups (mm^3·L^−1) along the Tapajós River for the low and high water level periods. (b) Spatial distribution of pigment concentrations (μg·L^−1) for the same seasons. The corresponding TSS class is indicated in parentheses for each sample point, except for point stations with no data available, which are indicated by asterisks.
Figure 5. Spectral distribution of in situ IOPs for the different water classes: absorption by particles (a) and CDOM (b); and particulate scattering (c) and backscattering (d). Gray scale curves represent the five classes of water as explained in the section above. Note that Class 5 is not shown in (b–d) due to lack of data.
Figure 6. Diffuse attenuation coefficient Kd(λ) for both field campaigns for classes under different mining impacts, from low impact (Class 1) to very high (Class 5).
Figure 7. Normalized scalar irradiance at (a) 0.3 m and (b) 2.0 m for all samples grouped by TSS. (c) Spectral profile of Z1% averaged for each class, and (d) Eo(PAR) availability from the surface to the bottom with depth. The compensation irradiance Ec(PAR) for Chlamydomonas sp. is indicated (thick vertical black line). The corresponding critical depth, Zc(PAR), for each class can be drawn from the intersection of Eo(PAR) with the Ec(PAR) line. Emax is the maximum level of irradiance that yields photosynthesis; above this point, the incident light combined with upwelling light can be harmful to phytoplankton cells. The optimum irradiance level, Ek, lies between Ec (critical) and Emax.
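The critical depth behind Figure 7d follows from exponential attenuation of scalar irradiance, E(z) = E(0)·exp(−Kd·z): setting E(Zc) equal to the compensation irradiance Ec gives Zc = ln(E0/Ec)/Kd. A small sketch with illustrative numbers; the Kd values below are chosen only to mimic the contrast between water classes.

```python
# Sketch: critical depth from the diffuse attenuation coefficient.
# E(z) = E0 * exp(-Kd * z); at the critical depth, E(Zc) = Ec, so
# Zc = ln(E0 / Ec) / Kd. All numbers below are illustrative.
import numpy as np

def critical_depth(e0_par, ec_par, kd_par):
    """Depth (m) where scalar irradiance falls to the compensation level."""
    return np.log(e0_par / ec_par) / kd_par

e0 = 1500.0   # surface scalar irradiance, umol photons m^-2 s^-1 (made up)
ec = 10.0     # compensation irradiance for the species (made up)
for kd, label in [(0.85, "low-impact tributary"), (3.0, "heavily impacted")]:
    print(f"{label}: Zc = {critical_depth(e0, ec, kd):.1f} m")
```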
Article
Wave Height Estimation from Shadowing Based on the Acquired X-Band Marine Radar Images in Coastal Area
by Yanbo Wei, Zhizhong Lu, Gen Pian and Hong Liu
Remote Sens. 2017, 9(8), 859; https://doi.org/10.3390/rs9080859 - 21 Aug 2017
Cited by 23 | Viewed by 5115
Abstract
In this paper, the retrieval of significant wave height from X-band marine radar images based on shadow statistics is investigated, since the retrieval accuracy is not seriously affected by environmental factors and the method has the advantage of requiring no external reference for calibration. However, the accuracy of the significant wave height estimated from radar images acquired in near-shore areas is not ideal. To solve this problem, the effect of water depth is considered in the theoretical derivation of the estimated wave height based on the sea surface slope, and an improved retrieval algorithm suitable for both deep and shallow water areas is developed. In addition, the radar data are sparsely processed in advance to obtain the high-quality edge images required by the shadow statistics algorithm, since high-resolution radar images lead to angle blurring in image edge detection and high computation time in the estimation of the sea surface slope. Data acquired from the Pingtan Test Base in Fujian Province were used to verify the effectiveness of the proposed algorithm. The experimental results demonstrate that the improved method, which takes the water depth into account, is more efficient and effective and performs better at retrieving significant wave height in shallow water areas, with in situ buoy data serving as the ground truth, than the existing shadow statistics method. Full article
(This article belongs to the Section Ocean Remote Sensing)
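The shadow-statistics idea in the abstract can be sketched as follows: threshold the gray-level image into illuminated and shadowed pixels, then compute the illumination ratio as a function of grazing angle; the paper fits Smith's function to this curve to obtain the RMS sea-surface slope. The sketch below covers only the binning step, with simplified flat-sea geometry and hypothetical variable names:

```python
import numpy as np

def illumination_ratio(image, antenna_height, ranges, threshold, n_bins=20):
    """Fraction of illuminated pixels per grazing-angle bin for a polar
    radar image (rows = azimuth, columns = range). Pixels brighter than
    the gray-level threshold are treated as illuminated, the rest as
    shadowed; a flat-sea geometry is assumed for the grazing angle."""
    grazing = np.degrees(np.arctan2(antenna_height, ranges))  # one per column
    illuminated = image > threshold
    edges = np.linspace(grazing.min(), grazing.max(), n_bins + 1)
    idx = np.clip(np.digitize(grazing, edges), 1, n_bins)
    ratio = np.array([illuminated[:, idx == k].mean() if (idx == k).any() else np.nan
                      for k in range(1, n_bins + 1)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, ratio

# The paper fits Smith's shadowing function to this ratio-vs-angle curve to
# estimate the RMS sea-surface slope, from which the significant wave height
# follows (with the water-depth correction applied in shallow areas).
```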
Figure 1">
Figure 1: The relationship between the function tanh(kd) and the approximate function.
Figure 2: The experimental site and radar image. (a) The installed location of the marine radar and the wave buoy. The yellow triangle and the red star represent the radar position and buoy position, respectively; the black circle indicates the coverage area of the radar. (b) The acquired shore-based marine radar image.
Figure 3: The selected analysis area of the radar image in the Cartesian coordinate system.
Figure 4: The radar image after sparse processing.
Figure 5: The obtained edge image after superimposing the eight directional edge images.
Figure 6: The edge image after thresholding.
Figure 7: The edge image after thresholding and filtering out the single-point noise.
Figure 8: The distribution of gray-scale statistics.
Figure 9: The shadow image after thresholding.
Figure 10: The three-dimensional illumination curve.
Figure 11: The calculated illumination as a function of grazing angle and the fitted Smith's function.
Figure 12: The estimated sea surface slope in azimuth.
Figure 13: The time sequences of significant wave height. The horizontal and vertical axes represent time sequence and significant wave height, respectively. The black squares denote the radar-retrieved significant wave height based on the SVR method and shadowing information; the green circles, that of the traditional method; the blue triangles, that of the modified method; and the red crosses, the buoy-recorded significant wave height.
Figure 14: Scatter plots of the significant wave height between the radar-derived and buoy-derived values: (a) the original algorithm; (b) the modified algorithm; (c) the SVR algorithm.
6002 KiB  
Article
Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection
by Miao Kang, Kefeng Ji, Xiangguang Leng and Zhao Lin
Remote Sens. 2017, 9(8), 860; https://doi.org/10.3390/rs9080860 - 20 Aug 2017
Cited by 323 | Viewed by 13601
Abstract
Synthetic aperture radar (SAR) ship detection has been playing an increasingly essential role in marine monitoring in recent years. The lack of detailed information about ships in wide swath SAR imagery poses difficulty for traditional methods in exploring effective features for ship discrimination. Being capable of feature representation, deep neural networks have achieved dramatic progress in object detection recently. However, most of them suffer from missed detection of small-sized targets, which means that few of them can be employed directly in SAR ship detection tasks. This paper presents an elaborately designed deep hierarchical network, namely a contextual region-based convolutional neural network with multilayer fusion, for SAR ship detection, which is composed of a region proposal network (RPN) with high network resolution and an object detection network with contextual features. Instead of using low-resolution feature maps from a single layer for proposal generation in the RPN, the proposed method employs an intermediate layer combined with a downscaled shallow layer and an up-sampled deep layer to produce region proposals. In the object detection network, the region proposals are projected onto multiple layers with region of interest (ROI) pooling to extract the corresponding ROI features and contextual features around the ROI. After normalization and rescaling, they are subsequently concatenated into an integrated feature vector for the final outputs. The proposed framework fuses the deep semantic and shallow high-resolution features, improving the detection performance for small-sized ships. The additional contextual features provide complementary information for classification and help to rule out false alarms. Experiments based on the Sentinel-1 dataset, which contains twenty-seven SAR images with 7986 labeled ships, verify that the proposed method achieves an excellent performance in SAR ship detection. Full article
(This article belongs to the Special Issue Ocean Remote Sensing with Synthetic Aperture Radar)
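The multilayer fusion described in the abstract (down-scaling a shallow layer, up-sampling a deep layer, and concatenating both with an intermediate layer) can be illustrated with a short PyTorch sketch; the channel counts and spatial scales are illustrative stand-ins, not the paper's exact VGG16 configuration:

```python
import torch
import torch.nn as nn

class MultilayerFusion(nn.Module):
    """Pool a shallow feature map down, deconvolve a deep map up, and
    concatenate both with an intermediate map before region-proposal
    generation. Channel sizes here are assumptions for illustration."""
    def __init__(self, c_shallow=128, c_mid=256, c_deep=512, c_out=256):
        super().__init__()
        self.down = nn.MaxPool2d(kernel_size=2, stride=2)  # shallow -> mid scale
        self.up = nn.ConvTranspose2d(c_deep, c_deep, 4, stride=2, padding=1)
        self.fuse = nn.Conv2d(c_shallow + c_mid + c_deep, c_out, kernel_size=1)

    def forward(self, shallow, mid, deep):
        x = torch.cat([self.down(shallow), mid, self.up(deep)], dim=1)
        return self.fuse(x)

# VGG16-like scales: shallow at 1/4, intermediate at 1/8, deep at 1/16.
m = MultilayerFusion()
out = m(torch.randn(1, 128, 100, 100),   # shallow
        torch.randn(1, 256, 50, 50),     # intermediate
        torch.randn(1, 512, 25, 25))     # deep
print(out.shape)  # torch.Size([1, 256, 50, 50])
```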
Figure 1: The architecture of the proposed network. SAR images are fed into VGG16, which has five sets of layers for feature extraction. The upper part is the RPN of the network. The white blocks and light blue blocks represent contextual features and ROI features, respectively, which are processed concurrently in the object detection network.
Figure 2: The feature maps of different layers in VGG16. With increasing network depth, the feature maps become smaller and more semantic.
Figure 3: An illustration of deconvolution. It consists of inserting zeros and a convolution operation [32].
Figure 4: An illustration of context information. The purple bounding box represents the region proposal of the network, and the outer orange bounding box is the boundary of the context information.
Figure 5: The distribution of ship areas in the Sentinel-1 dataset (only the ships with Automatic Identification System (AIS) information). More than 85% of the ships have an area smaller than 80 pixels on SAR images.
Figure 6: Comparisons of detection results with different layer combination strategies. The red and yellow rectangles represent the detection results and the missed ships of the detectors, respectively. (a) conv5; (b) conv 3+4+5; (c) conv 1+3+5; (d) conv 1+2+3.
Figure 7: The performance curves of different methods. The yellow, green and red curves represent the performance of Faster RCNN, CMS-RCNN and the proposed method, respectively. The x and y axes represent p_f and p_d, respectively.
Figure 8: The detection results of the proposed method near the harbor area. The red, yellow and purple boxes represent the detected targets, the false alarms and the missed targets, respectively.
Figure 9: Some typical false alarms and missed targets among the detection results of the proposed method. The chips in the blue box and the purple box are false alarms and missed ships, respectively.
3859 KiB  
Article
Deriving Hourly PM2.5 Concentrations from Himawari-8 AODs over Beijing–Tianjin–Hebei in China
by Wei Wang, Feiyue Mao, Lin Du, Zengxin Pan, Wei Gong and Shenghui Fang
Remote Sens. 2017, 9(8), 858; https://doi.org/10.3390/rs9080858 - 19 Aug 2017
Cited by 115 | Viewed by 12092
Abstract
Monitoring fine particulate matter with diameters of less than 2.5 μm (PM2.5) is a critical endeavor in the Beijing–Tianjin–Hebei (BTH) region, which is one of the most polluted areas in China. Polar-orbiting satellites are limited by observation frequency, which is insufficient for understanding PM2.5 evolution. As a geostationary satellite, Himawari-8 can obtain hourly aerosol optical depths (AODs) and thus overcome the low temporal resolution of estimated PM2.5 concentrations. In this study, an evaluation of Himawari-8 AODs against Aerosol Robotic Network (AERONET) measurements showed that Himawari-8 retrievals (Level 3) have a mild underestimate of about −0.06, with approximately 57% of AODs falling within the expected error envelope established for the Moderate-resolution Imaging Spectroradiometer (MODIS) (±(0.05 + 0.15AOD)). Furthermore, an improved linear mixed-effects model was proposed to derive hourly surface PM2.5 from Himawari-8 AODs from July 2015 to March 2017. The estimated hourly PM2.5 concentrations agreed well with the surface PM2.5 measurements, with a high R2 (0.86) and low RMSE (24.5 μg/m3). The average estimated PM2.5 in the BTH region during the study period was about 55 μg/m3. The estimated hourly PM2.5 concentrations varied widely across the day, from 35.2 ± 26.9 μg/m3 (1600 local time) to 65.5 ± 54.6 μg/m3 (1100 local time). Full article
(This article belongs to the Special Issue Remote Sensing of Atmospheric Pollution)
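A linear mixed-effects model of the kind the abstract builds on can be expressed compactly with statsmodels; the formula below is the generic PM2.5–AOD LME form with day-specific random effects, not the paper's exact predictor set, and the file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed matchup table with columns: 'pm25' (ug/m3), 'aod' (unitless),
# and 'day' (date of the Himawari-8 observation).
df = pd.read_csv("matchups.csv")

# Day-specific random intercepts and AOD slopes capture the day-to-day
# variability of the PM2.5-AOD relationship; the fixed effects give the
# average relationship across the study period.
model = smf.mixedlm("pm25 ~ aod", df, groups=df["day"], re_formula="~aod")
result = model.fit()
print(result.summary())
```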
Figure 1">
Figure 1: Elevation map of (a) China and (b) the Beijing–Tianjin–Hebei region. Panel (b) also shows the spatial distributions of the fine particulate matter (PM) and AERONET sites in the Beijing–Tianjin–Hebei region.
Figure 2: Collocation scatterplots of Himawari-8 and AERONET AODs at five sites: (a) Beijing_CAMS; (b) Beijing; (c) Beijing_PKU; (d) Beijing_RADI; and (e) Xianghe. The study period is from July 2015 to March 2017. The width of each pixel is 0.04 AOD, and the numbers of collocations falling within/above/below the EE are given in each panel. The yellow line is the regression line, the gray solid line is the 1:1 line, and the gray dashed lines are the expected error (EE) envelopes.
Figure 3: Time series of hourly AODs of Himawari-8 and AERONET, and the hourly AOD difference between Himawari-8 and AERONET from the collocated matchups, with standard deviations (shading), over (a) Beijing_CAMS; (b) Beijing; (c) Beijing_PKU; (d) Beijing_RADI; and (e) Xianghe. The study period is from July 2015 to March 2017.
Figure 4: Spatial distributions of the averaged AOD derived from Himawari-8 for (a) all available data and (b–i) different hours (0900–1600 local time). The study period is from July 2015 to March 2017.
Figure 5: 10-fold cross-validation of estimated PM2.5 concentrations against measured PM2.5 for (a) all available data and (b–i) different hours (0900–1600 local time). The number of samples (N), correlation coefficients (R), and linear regressions are included in each plot.
Figure 6: Differences between estimated and measured PM2.5 for individual PM monitoring sites: (a) all available data; (b–i) different hours (0900–1600 local time).
Figure 7: Spatial distribution of averaged PM2.5 estimates obtained from the improved LME model for (a) all available data and (b–i) different hours (0900–1600 local time). The study period is from July 2015 to March 2017.
Figure 8: Spatial distribution of seasonally-averaged PM2.5 estimates obtained from the improved LME model.
5235 KiB  
Article
A Robust Algorithm for Estimating Surface Fractional Vegetation Cover from Landsat Data
by Linqing Yang, Kun Jia, Shunlin Liang, Xiangqin Wei, Yunjun Yao and Xiaotong Zhang
Remote Sens. 2017, 9(8), 857; https://doi.org/10.3390/rs9080857 - 19 Aug 2017
Cited by 42 | Viewed by 6429
Abstract
Fractional vegetation cover (FVC) is an essential land surface parameter for Earth surface process simulations and global change studies. The currently existing FVC products are mostly obtained from low or medium resolution remotely sensed data, while many applications require FVC products at fine spatial resolution. The availability of well-calibrated Landsat imagery with coverage over large areas offers an opportunity for the production of FVC at fine spatial resolution. Therefore, the objective of this study is to develop a general and reliable land surface FVC estimation algorithm for Landsat surface reflectance data under various land surface conditions. Two machine learning methods, the multivariate adaptive regression splines (MARS) model and back-propagation neural networks (BPNNs), were trained using samples from simulations of the PROSPECT leaf optical properties model coupled with the scattering by arbitrarily inclined leaves (SAIL) model, which included Landsat reflectances and corresponding FVC values, and were evaluated to choose the better-performing method. Thereafter, the MARS model, which performed better in the independent validation, was evaluated using ground FVC measurements from two case study areas. The direct validation of the FVC estimated using the proposed algorithm (Heihe: R2 = 0.8825, RMSE = 0.097; Chengde using Landsat 7 ETM+: R2 = 0.8571, RMSE = 0.078; Chengde using Landsat 8 OLI: R2 = 0.8598, RMSE = 0.078) showed that the proposed method had good performance. Spatio-temporal assessment of the estimated FVC from Landsat 7 ETM+ and Landsat 8 OLI data confirmed the robustness and consistency of the proposed method. All these results indicate that the proposed algorithm can obtain satisfactory accuracy and has the potential for the production of high-quality FVC estimates from Landsat surface reflectance data. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
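The training step described in the abstract (learning a reflectance-to-FVC mapping from model simulations) might look like the following sketch. The file names and network size are hypothetical, and scikit-learn's MLPRegressor stands in for the paper's BPNNs (a MARS implementation would require a third-party package such as py-earth):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Assumed inputs generated beforehand with a PROSAIL-type simulator:
# X holds simulated Landsat band reflectances, y the driving FVC values.
X = np.load("prosail_reflectance.npy")   # e.g., shape (50000, 6)
y = np.load("prosail_fvc.npy")           # e.g., shape (50000,)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A back-propagation network similar in spirit to the paper's BPNNs.
bpnn = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
bpnn.fit(X_tr, y_tr)
print("R^2 on held-out simulations:", bpnn.score(X_te, y_te))
```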
Figure 1">
Figure 1: Flowchart of the fractional vegetation cover (FVC) estimation method for Landsat data. Notes: PROSAIL: PROSPECT leaf optical properties model coupled with the scattering by arbitrarily inclined leaves (SAIL) model; Ref: reflectance; BPNNs: back-propagation neural networks; MARS: multivariate adaptive regression splines; Fmask: function of mask; ETM+: Enhanced Thematic Mapper Plus; SLC: Scan Line Corrector; GNSPI: Geostatistical Neighborhood Similar Pixel Interpolator.
Figure 2: Reflectance of the 40 soils used to represent the possible range of spectral shapes.
Figure 3: The geographic location of the Heihe test area. The yellow points represent the locations of the sample plots.
Figure 4: The geographic location of the Chengde test area; the locations of the ground measurements are displayed as yellow dots.
Figure 5: Theoretical performances of back-propagation neural networks (BPNNs) (left) and multivariate adaptive regression splines (MARS) (right) using Landsat 7 ETM+ simulated data.
Figure 6: Performance in the test set, averaged over the Landsat 7 ETM+ training-test data splitting, as a function of the number of used predictors.
Figure 7: Comparison of the FVC estimates from Landsat 7 ETM+ reflectance data using the proposed algorithm and those extracted from the field photos in the Heihe region.
Figure 8: Scatter plots of the estimated FVC from Landsat 7 ETM+ (left) and Landsat 8 OLI (right) reflectance data using the proposed algorithm against the FVC extracted from the field photos in the Chengde area.
Figure 9: The FVC maps derived from Landsat 7 ETM+ and Landsat 8 OLI reflectance data using the proposed algorithm (c1–c6). (a1–a6) The original standard false color composited Landsat images; (b1–b6) the corresponding standard false color composited Landsat images with cloud and cloud shadow removed as well as gaps filled.
Figure 10: The FVC maps (a2,b2) derived from Landsat 7 ETM+ and Landsat 8 OLI reflectance data using the proposed algorithm. (a1,b1) The corresponding standard false color composited Landsat images with cloud and cloud shadow removed as well as gaps filled.
Figure 11: Comparison of the relationship between FVC values estimated from Landsat 7 ETM+ and from Landsat 8 OLI. The density map presents the densities of the points. The FVC values estimated from Landsat 7 ETM+: (a) from actual scanned pixels; (b) from predicted values of un-scanned pixels.
Figure 12: Temporal dynamics of the Landsat 7 ETM+ and Landsat 8 OLI FVC estimates within six representative vegetation types in the Heihe River area.
41986 KiB  
Article
A Hierarchical Extension of General Four-Component Scattering Power Decomposition
by Sinong Quan, Deliang Xiang, Boli Xiong, Canbin Hu and Gangyao Kuang
Remote Sens. 2017, 9(8), 856; https://doi.org/10.3390/rs9080856 - 18 Aug 2017
Cited by 19 | Viewed by 4008
Abstract
The overestimation of volume scattering (OVS) is an intrinsic drawback in model-based polarimetric synthetic aperture radar (PolSAR) target decomposition. It severely impacts the accurate measurement of scattering power and leads to scattering mechanism ambiguity. In this paper, a hierarchical extension of the general four-component scattering power decomposition method (G4U) is presented. The conventional G4U was first proposed by Singh et al., and it has advantages in making full use of the polarimetric information and in volume scattering characterization. However, the OVS still exists in the G4U, and it causes scattering mechanism ambiguity in some oriented urban areas. In the proposed method, matrix rotations by the orientation angle and the helix angle are applied. Afterwards, the transformed coherency matrix is applied to the four-component decomposition scheme with two refined models. Moreover, the branch condition applied in the G4U is substituted by the ratio of correlation coefficients (RCC), which is used as a criterion for hierarchically implementing the decomposition. The performance of this approach is demonstrated and evaluated with Airborne Synthetic Aperture Radar (AIRSAR), Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), Radarsat-2, and Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) fully polarimetric data over different test sites. Comparison studies were carried out and demonstrate that the proposed method exhibits promising improvements in reducing the OVS and in scattering mechanism characterization. Full article
(This article belongs to the Section Remote Sensing Image Processing)
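The first of the two rotations mentioned in the abstract, deorientation by the polarization orientation angle, is a standard unitary transformation of the coherency matrix that zeroes the real part of T23. A sketch with an arbitrary Hermitian example (not data from the paper):

```python
import numpy as np

def deorient(T):
    """Rotate a 3x3 PolSAR coherency matrix by the orientation angle
    theta = (1/4) * atan2(2 Re(T23), T22 - T33) so that Re(T23') = 0.
    The helix-angle rotation used in the paper is analogous."""
    theta = 0.25 * np.arctan2(2.0 * T[1, 2].real, T[1, 1].real - T[2, 2].real)
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    R = np.array([[1, 0, 0],
                  [0, c, s],
                  [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T

# Arbitrary Hermitian coherency matrix for illustration.
T = np.array([[2.0, 0.1 + 0.2j, 0.3 - 0.1j],
              [0.1 - 0.2j, 1.0, 0.4 + 0.3j],
              [0.3 + 0.1j, 0.4 - 0.3j, 0.8]])
print(np.round(deorient(T)[1, 2].real, 10))  # ~0 after rotation
```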
Figure 1">
Figure 1: Division of the fourth component.
Figure 2: General scheme of the proposed method.
Figure 3: Pauli decomposition of PolSAR data and selected patches in the test site: (a) C-band data; (b) L-band data; and (c) ground reference of San Francisco.
Figure 4: Color-coded scattering power decomposition with red (urban scattering), green (volume scattering), and blue (surface scattering): (a–d) decomposition results for C-band data using the conventional general four-component scattering power decomposition (G4U), the adaptive G4U (AG4U), the extended G4U (ExG4UCdr), and the hierarchical extended G4U (ExG4URcc) methods, respectively; and (e–h) decomposition results for L-band data using the G4U, AG4U, ExG4UCdr, and ExG4URcc methods, respectively.
Figure 5: Radar map of averaged power contribution: (a,b) results of the white box areas derived from C- and L-band data, respectively; and (c,d) results of the white box areas derived from C- and L-band data, respectively.
Figure 6: Enlarged fragments of color-coded representations of the decomposed red box areas in Figure 4 using the (a) G4U; (b) AG4U; (c) ExG4UCdr; and (d) ExG4URcc methods; and (e) the corresponding optical image. Courtesy: Google Earth.
Figure 7: Decrements of the T33 term via different angle compensations on the selected lines: (a,b) decrements along the white line for C- and L-band data, respectively; and (c,d) decrements along the black line for C- and L-band data, respectively.
Figure 8: Second study area and UAVSAR data: (a) ground reference (NLCD 2011); and (b) Pauli-coded SAR image with L-band.
Figure 9: Magnitudes of different criteria and histograms of selected patches: (a–c) the magnitudes of C1, Cdr, and Rcc, respectively; and (d–f) the fitting histograms of the four patches.
Figure 10: Decomposition results of UAVSAR data: (a–d) surface, double-bounce, volume, and oriented dihedral scattering powers, respectively; (e) urban scattering power; and (f) color-coded decomposition result (blue, surface scattering; red, urban scattering; and green, volume scattering).
Figure 11: Bar chart of decomposition power contribution of different land covers: (a) orthogonal buildings; (b) oriented buildings; (c) woods; and (d) ocean.
Figure 12: Proposed decomposition results (blue, surface scattering; red, urban scattering; and green, volume scattering) and ground references for spaceborne data: (a,b) Radarsat-2 C-band data; and (c,d) ALOS PALSAR L-band data.
Figure 13: Profile of decomposition scattering power: (a) Radarsat-2 C-band data; and (b) ALOS PALSAR L-band data.
6956 KiB  
Article
Multi-Year Mapping of Maize and Sunflower in Hetao Irrigation District of China with High Spatial and Temporal Resolution Vegetation Index Series
by Bing Yu and Songhao Shang
Remote Sens. 2017, 9(8), 855; https://doi.org/10.3390/rs9080855 - 18 Aug 2017
Cited by 45 | Viewed by 8167
Abstract
Crop identification in large irrigation districts is important for crop yield estimation, hydrological simulation, and agricultural water management. Remote sensing provides an opportunity to observe crops at the regional scale. However, the use of coarse resolution remote sensing images for crop identification usually causes large errors due to the presence of mixed pixels in regions with a complex crop planting structure. Therefore, it is preferable to use remote sensing data with high spatial and temporal resolutions for crop identification. This study aimed to map multi-year distributions of the major crops (maize and sunflower) in Hetao Irrigation District, the third largest irrigation district in China, using HJ-1A/1B CCD images with high spatial and temporal resolutions. The Normalized Difference Vegetation Index (NDVI) series obtained from HJ-1A/1B CCD images was fitted with an asymmetric logistic curve to derive the NDVI characteristics and phenological metrics for both maize and sunflower. Nine combinations of NDVI characteristics and phenological metrics were compared to obtain the optimal classifier for mapping maize and sunflower from 2009 to 2015. Results showed that the classification ellipse using the NDVI value at the left inflection point of the NDVI curve and the phenological metric from the left inflection point to the peak point, both normalized by the mean values of the corresponding grassland indexes, achieved the minimum mean relative error of 10.82% for maize and 4.38% for sunflower. The corresponding Kappa coefficient was 0.62. These results indicated that the vegetation- and phenology-based classifier using HJ-1A/1B data could effectively identify the multi-year distribution of maize and sunflower in the study region. It was found that maize was mainly distributed in the middle part of the irrigation district (Hangjinhouqi and Linhe), while sunflower was mainly in the east part (Wuyuan). The planting sites of sunflower gradually expanded from Wuyuan to the northern parts of Hangjinhouqi and Linhe. These results are in agreement with the local economic policy. Results also revealed increasing trends in both maize and sunflower planting areas during the study period. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
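Fitting an asymmetric logistic curve to an NDVI time series, as described in the abstract, can be sketched with scipy; the double-logistic form and the sample values below are illustrative stand-ins, not the paper's exact parameterization or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """A commonly used asymmetric logistic form for crop NDVI series:
    a rise centered at t1 (green-up) and a fall centered at t2
    (senescence). Assumed shape, not the paper's exact equation."""
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                         - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

# Day-of-year and NDVI samples for one pixel (synthetic values).
doy = np.array([120, 140, 160, 180, 200, 220, 240, 260, 280])
ndvi = np.array([0.15, 0.20, 0.38, 0.62, 0.78, 0.80, 0.65, 0.40, 0.22])

p0 = [0.15, 0.65, 0.1, 165, 0.1, 255]  # rough initial guess
popt, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
print("green-up inflection (DOY): %.1f, senescence (DOY): %.1f"
      % (popt[3], popt[5]))
```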
Figure 1">
Figure 1: Location of the study area and sampling points in 2012 and verification points in 2014 and 2015.
Figure 2: Land use map for the study area.
Figure 3: Variations of major crop planting areas in the Hetao Irrigation District from 2009 to 2015.
Figure 4: The asymmetric logistic curve and characteristic points.
Figure 5: Average Normalized Difference Vegetation Index (NDVI) series of sampling points for maize and sunflower and logistic curve fitting results in 2012.
Figure 6: Relationships between normalized NDVI characteristics and normalized phenological metrics selected for the identification of maize and sunflower ((a1–c3) correspond to the classifiers in Table 5).
Figure 7: Performance of different classifiers for (a) maize and (b) sunflower.
Figure 8: Comparisons of the total area of (a) maize and (b) sunflower from official statistics and identified maps.
Figure 9: Crop maps of maize and sunflower from 2009 to 2015 (a–g).
12254 KiB  
Article
Technical Evaluation of Sentinel-1 IW Mode Cross-Pol Radar Backscattering from the Ocean Surface in Moderate Wind Condition
by Lanqing Huang, Bin Liu, Xiaofeng Li, Zenghui Zhang and Wenxian Yu
Remote Sens. 2017, 9(8), 854; https://doi.org/10.3390/rs9080854 - 17 Aug 2017
Cited by 28 | Viewed by 7577
Abstract
The Sentinel-1 synthetic aperture radar (SAR) provides sufficient resources for cross-pol wind speed retrievals over the ocean. In this paper, we present a technical evaluation of wind retrieval from both Sentinel-1A and Sentinel-1B IW cross-pol images. The algorithms are based on the existing theoretical and empirical ones derived from RADARSAT-2 cross-pol data. First, to better understand the Sentinel-1 observed normalized radar cross section (NRCS) values under various environmental conditions, we constructed a dataset that integrates SAR images with wind field information from scatterometer measurements; the experimental dataset contains 11,883 matchups. We then calculated the systemic noise floor (noise-equivalent sigma zero, NESZ) of the Sentinel-1 IW mode and present its distinctive noise characteristics across the sub-bands. Based on the calculated NESZ measurements, the noise is removed from all matchup data. Empirical relationships among the noise-free NRCS σ⁰VH, wind speed, wind direction, and radar incidence angle are analyzed for each sub-band, and a piecewise model is proposed. We show that a larger correlation coefficient, r, is achieved by including both wind direction and incidence angle terms in the model. Validation against scatterometer measurements shows the suitability of the proposed model. Full article
(This article belongs to the Special Issue Ocean Remote Sensing with Synthetic Aperture Radar)
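A regression of the kind evaluated in the abstract, relating the noise-free cross-pol NRCS to wind speed, wind direction, and incidence angle, might be sketched as a least-squares fit. The linear form below is generic (the paper's model is piecewise and fitted per sub-band), and the input files are hypothetical:

```python
import numpy as np

# Assumed matchup arrays: noise-free VH NRCS in dB, scatterometer wind
# speed (m/s), relative wind direction (deg), and incidence angle (deg).
sigma0 = np.load("sigma0_vh_db.npy")
u10    = np.load("wind_speed.npy")
phi    = np.load("wind_direction.npy")
theta  = np.load("incidence_angle.npy")

# Generic linear form with wind-speed, incidence, and direction terms;
# the paper fits such relationships per sub-band (IW1/IW2/IW3) and
# piecewise in wind speed, which is omitted here for brevity.
A = np.column_stack([np.ones_like(u10), u10, theta,
                     np.cos(np.radians(phi))])
coef, *_ = np.linalg.lstsq(A, sigma0, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, sigma0)[0, 1]
print("coefficients:", coef, " r =", round(r, 3))
```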
Figure 1">
Figure 1: NESZ measures for IW mode VH images.
Figure 2: NRCS in the radar range direction with (blue) and without (green) noise removal.
Figure 3: Wind speed histogram of the matchup dataset.
Figure 4: Wind direction histogram of the matchup dataset.
Figure 5: Incidence angle histogram of the matchup dataset.
Figure 6: Spatial distance error histogram of the matchup dataset.
Figure 7: Relationship between NRCS and incidence angle (different colors represent different wind speeds).
Figure 8: Relationship between NRCS and incidence angle (different colors represent different wind directions).
Figure 9: Relationship between NRCS and wind speed for the IW1 band.
Figure 10: Relationship between NRCS and wind speed for the IW2 band.
Figure 11: Relationship between NRCS and wind speed for the IW3 band.
Figure 12: Dependence of the NRCS on wind direction. (a–f) show wind speed intervals of ±1 m/s at 3, 5, 7, 9, 11, and 13 m/s, respectively. The red line represents the trendline.
Figure 13: Incidence angle function for the IW1 band.
Figure 14: Incidence angle function for the IW2 band.
Figure 15: Estimated NRCS of the proposed model for the IW1 band.
Figure 16: Estimated NRCS of the proposed model for the IW2 band.
Figure 17: Comparison of the estimated wind speed with the ASCAT wind speed.
3615 KiB  
Article
Specular Reflection Effects Elimination in Terrestrial Laser Scanning Intensity Data Using Phong Model
by Kai Tan and Xiaojun Cheng
Remote Sens. 2017, 9(8), 853; https://doi.org/10.3390/rs9080853 - 17 Aug 2017
Cited by 36 | Viewed by 14763
Abstract
The intensity value recorded by terrestrial laser scanning (TLS) systems is significantly influenced by the incidence angle. The incidence angle effect is an object property, mainly related to target scattering properties, surface structures, and even some instrumental effects. Most existing models focus on the diffuse reflections of rough surfaces and ignore specular reflections, even though both reflections exist simultaneously in all natural surfaces. Due to the coincidence of the emitter and receiver in TLS, specular reflections can be ignored at large incidence angles. On the contrary, at small incidence angles, TLS detectors receive a portion of the specular reflections. The received specular reflections can trigger highlight phenomena (hot-spot effects) in the intensity data of the scanned targets, particularly those with relatively smooth or highly-reflective surfaces. In this study, a new method that takes diffuse and specular reflections, as well as the instrumental effects, into consideration is proposed to eliminate the specular reflection effects in TLS intensity data. Diffuse reflections and instrumental effects are modeled by a polynomial based on Lambertian reference targets, whereas specular reflections are modeled by the Phong model. The proposed method is tested and validated on different targets scanned by a Faro Focus3D 120 terrestrial scanner. Results show that the coefficient of variation of the intensity data from a homogeneous surface is reduced by approximately 38% when specular reflections are considered. Compared with existing methods, the proposed method exhibits good feasibility and high accuracy in eliminating the specular reflection effects for intensity image interpretation and 3D point cloud representation by intensity. Full article
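The combination of a diffuse/instrumental term and a Phong-type specular lobe described in the abstract can be sketched as a curve fit against incidence angle. The functional form, parameter names, and data below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def phong_intensity(theta_deg, a0, a1, a2, ks, n):
    """Distance-corrected TLS intensity vs. incidence angle: a low-order
    polynomial in cos(theta) for the diffuse + instrumental part, plus a
    Phong-type specular lobe. For a coincident emitter/receiver the angle
    between the specular direction and the viewing direction is 2*theta,
    so the lobe contributes only while 2*theta < 90 deg (theta < 45 deg)."""
    t = np.radians(theta_deg)
    diffuse = a0 + a1 * np.cos(t) + a2 * np.cos(t) ** 2
    specular = ks * np.clip(np.cos(2.0 * t), 0.0, None) ** n
    return diffuse + specular

# Synthetic measurements generated from assumed parameters plus noise.
theta = np.linspace(0, 70, 15)
intensity = phong_intensity(theta, 800, 400, 100, 600, 8) \
            + np.random.normal(0, 10, theta.size)

p0 = [700, 300, 50, 400, 5]
popt, _ = curve_fit(phong_intensity, theta, intensity, p0=p0, maxfev=20000)
print("fitted specular weight ks = %.1f, exponent n = %.1f"
      % (popt[3], popt[4]))
```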
Figure 1">
Figure 1: (a) At incidence angles larger than 45°, only diffuse reflections reach the receiver. (b) At incidence angles smaller than 45°, both diffuse and specular reflections can be received. The red dotted lines are perpendicular to the reflection directions.
Figure 2: Phong model with different parameters. (a) Specular reflections are dominant at incidence angles smaller than 45° and must be considered (smooth surfaces). (b) Specular reflections are subtle (rough surfaces).
Figure 3: Original intensity images created by Faro SCENE. (a) Scan 1; (b) Scan 2; (c) Scan 3; (d) Scan 4; (e) Scan 5. Highlights exist in Scans 2 and 3 because the surface of the door is smooth. Scans 1, 4, and 5 do not have highlight regions, as the surface of the wall is relatively rough.
Figure 4: (a) Distance-corrected intensity values of the door and the curves of the fitted polynomial and Phong model. (b) Distance-corrected intensity values of the white lime wall and the curve of the fitted polynomial.
Figure 5: Point cloud of the door colored by intensity. (a) Original intensities in Scan 2; (b) intensities corrected by the polynomial method in Scan 2; (c) intensities corrected by the reference targets method in Scan 2; (d) intensities corrected by the proposed method in Scan 2; (e) original intensities in Scan 3; (f) intensities corrected by the polynomial method in Scan 3; (g) intensities corrected by the reference targets method in Scan 3; (h) intensities corrected by the proposed method in Scan 3.
Figure 6: Original intensity images. Specular reflection effects exist in the red dotted rectangles. (a) A plastic curtain; (b) a building facade; (c) a plywood door; (d) a marble wall; (e) an iron bookcase; (f) a rubber decorative board.
Figure 7: Distance-corrected intensities and the curves of the fitted polynomial and Phong model. (a) Curtain; (b) building facade; (c) plywood; (d) marble; (e) bookcase; (f) rubber board.
Figure 8: Point cloud colored by intensity. (I) Original intensities; (II) intensities corrected by the polynomial method; (III) intensities corrected by the reference targets method; (IV,V) intensities corrected by the proposed method with different parameters. (a) Curtain, (IV,V): parameters of the curtain and door; (b) building facade, (IV,V): parameters of the building facade and curtain; (c) plywood, (IV,V): parameters of the plywood and rubber; (d) marble, (IV,V): parameters of the marble and bookcase; (e) bookcase, (IV,V): parameters of the bookcase and rubber; (f) rubber, (IV,V): parameters of the rubber and bookcase.
3732 KiB  
Article
Influence of Droughts on Mid-Tropospheric CO2
by Xun Jiang, Angela Kao, Abigail Corbett, Edward Olsen, Thomas Pagano, Albert Zhai, Sally Newman, Liming Li and Yuk Yung
Remote Sens. 2017, 9(8), 852; https://doi.org/10.3390/rs9080852 - 17 Aug 2017
Cited by 3 | Viewed by 4861
Abstract
Using CO2 data from the Atmospheric Infrared Sounder (AIRS), it is found for the first time that the mid-tropospheric CO2 concentration is ~1 part per million by volume higher during dry years than wet years over the southwestern USA from June to September. The mid-tropospheric CO2 differences between dry and wet years are related to circulation and CO2 surface fluxes. During drought conditions, vertical pressure velocity from NCEP2 suggests that there is more rising air over most regions, which can help bring high surface concentrations of CO2 to the mid-troposphere. In addition to the circulation, there is more CO2 emitted from the biosphere to the atmosphere during droughts in some regions, which can contribute to higher concentrations of CO2 in the atmosphere. Results obtained from this study demonstrate the significant impact of droughts on atmospheric CO2 and therefore on a feedback cycle contributing to greenhouse gas warming. It can also help us better understand atmospheric CO2, which plays a critical role in our climate system. Full article
(This article belongs to the Special Issue Remote Sensing of Greenhouse Gases)
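The dry-minus-wet compositing behind these results can be sketched as follows; the input array, the detrending assumption, and the grid are hypothetical:

```python
import numpy as np
from scipy import stats

# Assumed input co2[year, lat, lon]: JJAS-mean mid-tropospheric CO2 with
# the annual growth trend already removed (detrending is needed before
# compositing years against one another).
co2 = np.load("airs_co2_jjas.npy")
years = np.arange(2003, 2014)
dry = np.isin(years, [2003, 2007, 2010])
wet = np.isin(years, [2006, 2008, 2011, 2012])

diff = co2[dry].mean(axis=0) - co2[wet].mean(axis=0)   # dry-minus-wet map
# Two-sample t-test per grid cell to flag the differences significant at 5%.
t, p = stats.ttest_ind(co2[dry], co2[wet], axis=0)
print("mean dry-wet difference (ppm):", np.nanmean(diff))
print("fraction of cells significant at 5%:", (p < 0.05).mean())
```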
Figure 1">
Figure 1: TRMM precipitation (black solid line) averaged over the southwestern USA (32°N–42°N, 235°E–249°E) from June to September in 2003–2013. Red dashed lines represent the mean precipitation ±0.5 times the standard deviation of precipitation. Units for precipitation are mm/month. Red dots, green dots, and black dots indicate wet years, dry years, and normal years, respectively.
Figure 2: (a) The mean AIRS CO2 concentration in dry years (JJAS of 2003, 2007, and 2010); (b) the mean AIRS CO2 concentration in wet years (JJAS of 2006, 2008, 2011, and 2012); (c) CO2 differences between dry and wet years; (d) CO2 differences within the 5% and 1% significance levels are highlighted in light green and dark green, respectively. Units for CO2 are ppm in (a–c).
Figure 3: (a) The mean 500 hPa vertical pressure velocity, from NCEP2, in dry years (JJAS of 2003, 2007, and 2010); (b) the mean 500 hPa vertical pressure velocity in wet years (JJAS of 2006, 2008, 2011, and 2012); (c) 500 hPa vertical pressure velocity differences between the dry and wet years; (d) vertical pressure velocity differences within the 5% and 1% significance levels are highlighted in light green and dark green, respectively. Units for vertical pressure velocity are 10⁻² Pa/s in (a–c).
Figure 4: (a) The mean Net Ecosystem CO2 Exchange (NEE), from the CASA model, in dry years (JJAS of 2003, 2007, and 2010); (b) the mean NEE in wet years (JJAS of 2006, 2008, 2011, and 2012); (c) NEE differences between the dry and wet years; (d) NEE differences within the 5% and 1% significance levels are highlighted in light green and dark green, respectively. Units for NEE are g C m⁻² mon⁻¹ in (a–c).
34337 KiB  
Article
Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
by Weimin Wang, Ken Sakurada and Nobuo Kawaguchi
Remote Sens. 2017, 9(8), 851; https://doi.org/10.3390/rs9080851 - 16 Aug 2017
Cited by 71 | Viewed by 14231
Abstract
This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally printed chessboard. The proposed method is based on the 3D corner estimation of the chessboard from the sparse point cloud generated by one frame scan of the LiDAR. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing the cost function under constraints of correlation between the reflectance intensity of laser and the color of the chessboard’s patterns. Powell’s method is introduced for resolving the discontinuity problem in optimization. The corners of the fitted model are considered as the 3D corners of the chessboard. Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with Unified Perspective-n-Point (UPnP). Further, the calculated parameters are regarded as initial values and are refined using the Levenberg-Marquardt method. The performance of the proposed corner detection method from the 3D point cloud is evaluated using simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera under the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters. Full article
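The role of Powell's method in the abstract, minimizing a discontinuous pattern-mismatch cost over the chessboard pose, can be illustrated with a simplified stand-in for the paper's full cost function; the 2D pose parameterization, thresholds, and synthetic points are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def chessboard_cost(params, pts_xy, intensity, square=0.1, low=30.0, high=120.0):
    """Count points whose laser reflectance disagrees with the chessboard
    pattern they fall on under the pose hypothesis 'params' (a bright
    return on a black square, or a dark return on a white one). pts_xy
    are segmented chessboard points already projected onto the board
    plane; 'low'/'high' stand in for the per-scan intensity peaks R_L
    and R_H. A simplified stand-in for the paper's full cost function."""
    tx, ty, yaw = params
    c, s = np.cos(yaw), np.sin(yaw)
    x = c * pts_xy[:, 0] - s * pts_xy[:, 1] + tx
    y = s * pts_xy[:, 0] + c * pts_xy[:, 1] + ty
    cell = (np.floor(x / square) + np.floor(y / square)).astype(int)
    on_white = cell % 2 == 0
    wrong = np.where(on_white, intensity < low, intensity > high)
    return float(wrong.sum())

# Synthetic board: points colored by a chessboard shifted by (0.03, -0.02).
rng = np.random.default_rng(0)
pts = rng.uniform(-0.3, 0.3, (2000, 2))
cell = (np.floor((pts[:, 0] + 0.03) / 0.1)
        + np.floor((pts[:, 1] - 0.02) / 0.1)).astype(int)
inten = np.where(cell % 2 == 0, 140.0, 10.0)

# Powell's method is chosen because the cost is piecewise constant
# (discontinuous), which defeats gradient-based optimizers; a reasonable
# initialization, as provided by the segmentation step, still helps.
res = minimize(chessboard_cost, x0=[0.02, -0.01, 0.0],
               args=(pts, inten), method="Powell")
print(res.x, res.fun)  # pose near (0.03, -0.02, 0) up to pattern symmetry
```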
Figure 1">
Figure 1: Data from an identical scene captured by the LiDAR sensor and the panoramic camera. (a) The points colored by reflectance intensity (blue indicates low intensity, red indicates high intensity); (b) the zoomed chessboard, where the changes in reflectance intensity of the point cloud between the white and black patterns are visible; (c) the panoramic image of the same scene.
Figure 2: Overview of the proposed method.
Figure 3: Angular resolution of the LiDAR used in this work. The left figure is the top view and the right one is the side view of the LiDAR and the chessboard.
Figure 4: Uniformity of the point distribution. (a) shows an example of a point cloud with better uniformity than (b).
Figure 5: The principle used to estimate corners in the points. (a) The chessboard model; (b) the scanned point cloud of the chessboard, with colors indicating intensity (blue for low and red for high reflectance intensity); (c) finding a matrix that translates the most 3D points onto the corresponding patterns, with green points as the estimated corners; (d) the corners of the chessboard model are taken as the corners of the point cloud.
Figure 6: Directions of the basis vectors relative to the LiDAR coordinate system. The blue arrow lines in the left figure represent the basis vectors decomposed by PCA. After transformation with the basis vectors, the chessboard's points are mapped to the X^P O Y^P plane (the chessboard plane). Then the model described in Figure 5 can be applied.
Figure 7: Estimated parameters for each frame. (a) Scatter diagram of the intensity for all points on the chessboard; (b) the histogram of the intensity. R_L and R_H can be found at the peaks on the two sides.
Figure 8: Cost definition for corner detection in the point cloud. (a) An example of a point falling into the wrong pattern; the square represents a white pattern of the chessboard. (b) An example of a point falling outside the chessboard model. (a) describes the first term and (b) the second term of the cost function (Equation (5)).
Figure 9: Setup. (a) The image of the setup for the two sensors; (b) a scene for the data acquisition.
Figure 10: Distribution of the 20 chessboard positions. The chessboard is captured by the LiDAR from different heights and angles. The length of the coordinate axis is 1 m. Four groups of colors represent four positions of the chessboard for each horizontal camera. (a) Top view of the point clouds of the chessboard; (b) side view of the point clouds of the chessboard.
Figure 11: Vertical field of view of the Velodyne HDL-32e and the relationship between interval and distance. (a) Vertical angles of the Velodyne HDL-32e; (b) vertical field of view; (c) relationship between the horizontal interval of two adjacent lasers and the noise of the point cloud; (d) relationship between the interval of two successive points of a scanline and the distance of the chessboard. Red lines in (c,d) show the range of chessboard distances used in this work.
Figure 12: Front and side views of the chessboard's point clouds from simulation results and real data. (a–c) Simulated point clouds with noise multiplier 1 at different distances; (d–f) simulated point clouds with multiplier 2 at different distances; (g–i) simulated point clouds with multiplier 3 at different distances; (j–l) real point clouds at different distances.
Figure 13: Corner detection error by simulation. The horizontal axes represent different simulation conditions and the vertical axes the relative error. Red points represent the mean value and the vertical lines the 3σ range of the results simulated 100 times at each condition. (a) Relationship between the error and the noise of the point cloud at 1 m; the x axis represents the multiplier for the noise baseline. (b) Relationship between the error and the distance of the chessboard with the baseline noise.
Figure 14: Detected corners from the panoramic images. (a–c) Example results of detected corners from images with different poses and distances.
Figure 15: Estimated corners of the chessboard. (a) The fitted chessboard model of the point cloud in the real Velodyne LiDAR coordinate system; (b) the front view of the zoomed checkerboard; (c) the side view of the zoomed checkerboard.
Figure 16: Estimated parameters by the proposed method and Pandey's Mutual Information (MI) method [8] with different initial values as the number of frames increases. (a–c) Rotation angle along each axis; (d–f) translation along each axis.
Figure 17: Re-projection error calculation and results. (a,b) Shaded quadrilaterals show the regions of the black and white patterns, respectively; points mapped into these regions are counted for the error calculation. (c) The errors for parameters estimated from different numbers of frames. The point and vertical line represent the mean and 3σ range of all errors calculated by applying the estimated parameters to the different numbers of frames.
Figure 18: Re-projected corners and points of the chessboard (best viewed when zoomed in). Big green circles and cyan lines indicate the detected corners. Small pink circles within the big green circles and pink lines indicate re-projected corners estimated from the point cloud. Big blue and red circles represent the start and end for counting the corners. Blue points indicate low reflectance intensity and red points indicate high reflectance intensity.
Figure 19: Re-projection results. (a) All re-projected points on the color panoramic image, colored by intensity; (b) all re-projected points on the color panoramic image, colored by distance; (c) re-projected result on the edge-extracted image with all points colored by intensity; (d) re-projected result on the edge-extracted image with all points colored by distance.
Figure 20: Zoomed details of re-projected points. (a–d) Re-projected results on the chessboard, a human, a pillar and a car, respectively. Each column shows re-projected points colored by intensity and distance on the original RGB images and the edge-extracted images. Blue indicates low values and red indicates high values.
Figure 21: Projection of the RGB information from the image to the point cloud with the estimated extrinsic parameters. (a) An example of the colored point cloud; (b) the zoomed view of the chessboard in (a); the red points in (a,b) are the occluded region caused by the chessboard; (c) the zoomed view of the car in (a).
3292 KiB  
Letter
What is the Direction of Land Change? A New Approach to Land-Change Analysis
by Mingde You, Anthony M. Filippi, İnci Güneralp and Burak Güneralp
Remote Sens. 2017, 9(8), 850; https://doi.org/10.3390/rs9080850 - 16 Aug 2017
Cited by 4 | Viewed by 6085
Abstract
Accurate characterization of the direction of land change is a neglected aspect of land dynamics. Knowledge of the direction of historical land change is useful when the relative influence of different land-change drivers is of interest. In this study, we present a novel perspective on land-change analysis by focusing on the directionality of change. To this end, we employed the Maximum Cross-Correlation (MCC) approach to estimate the directional change in land cover in a dynamic river floodplain environment using Landsat 5 Thematic Mapper (TM) images. This approach has previously been used for detecting and measuring fluid and ice motions, but not for studying directional changes in land cover. We applied the MCC approach to land-cover class membership layers derived from fuzzy remote-sensing image classification. We tested the sensitivity of the resulting displacement vectors to three user-defined parameters (template size, search window size, and a threshold for determining valid, non-noisy displacement vectors) that directly affect the generation of the change, or displacement, vectors; this sensitivity has not previously been thoroughly investigated in any application domain. The results demonstrate that it is possible to quantitatively measure the rate of directional change in land cover in this floodplain environment using this approach. Sensitivity analyses indicate that template size and the MCC threshold parameter influence the displacement vectors more than search window size does. The results vary by land-cover class, suggesting that the spatial configuration of land-cover classes should be taken into consideration when implementing the method. Full article
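The core MCC computation described in the abstract, finding for each template the displacement that maximizes the normalized cross-correlation within a search window, can be sketched directly in numpy; the array contents here are synthetic:

```python
import numpy as np

def mcc_displacement(img_t0, img_t1, r0, c0, tpl=13, search=31, thresh=0.6):
    """Maximum cross-correlation for a single template: slide the time-t0
    template across the time-t1 search window, keep the peak normalized
    correlation, and return the (row, col) displacement if the peak
    exceeds the validity threshold. Template and search-window sizes
    default to the paper's 13x13 and 31x31 pixels."""
    h = tpl // 2
    t = img_t0[r0 - h:r0 + h + 1, c0 - h:c0 + h + 1]
    t = t - t.mean()
    span = (search - tpl) // 2
    best, best_d = -1.0, None
    for dr in range(-span, span + 1):
        for dc in range(-span, span + 1):
            p = img_t1[r0 + dr - h:r0 + dr + h + 1,
                       c0 + dc - h:c0 + dc + h + 1]
            p = p - p.mean()
            denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
            if denom == 0:
                continue
            corr = (t * p).sum() / denom
            if corr > best:
                best, best_d = corr, (dr, dc)
    return best_d if best >= thresh else None  # None marks a noisy vector

# A membership "feature" moved 3 rows down and 2 columns right:
a = np.zeros((60, 60)); a[20:30, 20:30] = 1.0
b = np.roll(np.roll(a, 3, axis=0), 2, axis=1)
print(mcc_displacement(a, b, 25, 25))  # -> (3, 2)
```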
Figure 1">
Figure 1: Depiction of the Maximum Cross-Correlation (MCC) method, including a template, search window, and displacement vector. An example feature (in red) within a given template extent at time t is shown, as are the direction and magnitude of the shift in the feature position at time t + 1 relative to that at time t (after [6]).
Figure 2: (a) Landsat 5 Thematic Mapper (TM) image of the study area, acquired on 2 August 1987, with bands 4,5,1 as R,G,B (pixel size = 30 m); (b) fuzzy membership image for the forest class, with an enlarged spatial subset in the upper left illustrating floodplain complexity; the location of the subset is marked by the red box. The study area extends between approximately 12°31′34′′ and 13°49′19′′ South latitude and between 66°54′04′′ and 67°32′05′′ West longitude, an area of ~2060 km2.
Figure 3: Sensitivity of the MCC method to template size. Average valid displacement vector length for the periods (a) 1987–1990 and (b) 1987–2006, and the ratio of the number of valid displacement vectors (NVDV) to the number of possible templates (NPT) for the periods (c) 1987–1990 and (d) 1987–2006, as a function of template size. Search window size = 50 × 50 pixels; MCC threshold = 0.6.
Figure 4: Sensitivity of the MCC method to search window size. Average valid displacement vector length for the periods (a) 1987–1990 and (b) 1987–2006, and the NVDV/NPT ratio for the periods (c) 1987–1990 and (d) 1987–2006, as a function of search window size. Template size = 13 × 13 pixels; MCC threshold = 0.6.
Figure 5: Sensitivity of the MCC method to the MCC threshold. Average valid displacement vector length for the periods (a) 1987–1990 and (b) 1987–2006, and the NVDV/NPT ratio for the periods (c) 1987–1990 and (d) 1987–2006, as a function of MCC threshold. Template size = 13 × 13 pixels; search window size = 31 × 31 pixels.
Figure 6: River class-membership images for 1987 and 1990, and the MCC vectors (in red) derived from this image pair. The 1987 and 1990 (after change) river layers are shown in blue and black, respectively (blue is 30% transparent and overlaid on the black layer). Four inset zoom-in windows along the figure margins correspond to specific areas along the river reach, identified by labels (a–d).
Figure 7: River class-membership images for 1987 and 2006, and the MCC vectors (in red) derived from this image pair. The 1987 and 2006 (after change) river layers are shown in blue and black, respectively (blue is 30% transparent and overlaid on the black layer). Four inset zoom-in windows along the figure margins correspond to specific areas along the river reach, identified by labels (a–d).
Figure 8: CVA result associated with the river class for the period 1987 to 2006, with the MCC vectors (in red) derived from this image pair. As determined by CVA, pixels that changed from the river class over this period are green, whereas pixels that changed to the river class are blue. Four inset zoom-in windows along the figure margins correspond to specific areas along the river reach, identified by labels (a–d).
Figure 9: Moving average rose diagrams (MARDs) summarizing the MCC change/displacement vectors for the river class within the example river reach of Figures 7 and 8, for the periods (a) 1987–1990 and (b) 1987–2006. Values on the radar plot axes indicate average weighted frequencies of azimuths.
143 KiB  
Correction
Correction: Yao, P. et al. Rebuilding Long Time Series Global Soil Moisture Products Using the Neural Network Adopted the Microwave Vegetation Index. Remote Sens. 2017, 9, 35
by Panpan Yao, Jiancheng Shi, Tianjie Zhao, Hui Lu and Amen Al-Yaari
Remote Sens. 2017, 9(8), 849; https://doi.org/10.3390/rs9080849 - 16 Aug 2017
Cited by 4 | Viewed by 3600
Abstract
After publication of the research paper [1], the authors wish to make the following correction to this paper. In the fourth line from the bottom of the abstract, due to a typing error, “RMSE = 0.84 m3/m3” should be replaced with “RMSE = 0.084 m3/m3”. [...] Full article
9400 KiB  
Article
Pre-Trained AlexNet Architecture with Pyramid Pooling and Supervision for High Spatial Resolution Remote Sensing Image Scene Classification
by Xiaobing Han, Yanfei Zhong, Liqin Cao and Liangpei Zhang
Remote Sens. 2017, 9(8), 848; https://doi.org/10.3390/rs9080848 - 16 Aug 2017
Cited by 275 | Viewed by 38554
Abstract
The rapid development of high spatial resolution (HSR) remote sensing imagery techniques not only provides a considerable number of datasets for scene classification tasks but also demands an appropriate scene classification choice when faced with finite labeled samples. AlexNet, as a relatively simple convolutional neural network (CNN) architecture, has achieved great success in scene classification tasks and has proven to be an excellent foundation for hierarchical and automatic scene classification. However, current HSR remote sensing imagery scene classification datasets are typically small and contain simple categories, and the limited annotated samples easily cause non-convergence. For HSR remote sensing imagery, multi-scale information of the same scenes can represent the scene semantics to a certain extent, but an efficient way of fusing this information has been lacking. Meanwhile, the pre-trained AlexNet architecture lacks appropriate supervision for enhancing the performance of the model, which easily causes overfitting. In this paper, an improved pre-trained AlexNet architecture named pre-trained AlexNet-SPP-SS is proposed, which incorporates scale pooling—spatial pyramid pooling (SPP)—and side supervision (SS) to address these two issues. Extensive experiments conducted on the UC Merced dataset and the Google Image dataset of SIRI-WHU demonstrate that the proposed pre-trained AlexNet-SPP-SS model is superior to the original AlexNet architecture as well as to traditional scene classification methods. Full article
(This article belongs to the Special Issue Remote Sensing Big Data: Theory, Methods and Applications)
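The SPP idea referenced in the abstract pools a convolutional feature map at several grid sizes and concatenates the results into one fixed-length vector regardless of input size. The NumPy sketch below assumes max pooling and pyramid levels (1, 2, 4); these choices, and the AlexNet-like 256 × 13 × 13 example map, are illustrative, not the paper's exact configuration.

```python
# Illustrative spatial pyramid pooling over a (channels, H, W) feature map.
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Return a 1-D vector of length channels * sum(n*n for n in levels)."""
    channels, height, width = fmap.shape
    pooled = []
    for n in levels:
        # Bin edges that cover the map as evenly as possible.
        r_edges = np.linspace(0, height, n + 1).astype(int)
        c_edges = np.linspace(0, width, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))  # max pool per channel
    return np.concatenate(pooled)

# Example: a 256-channel 13x13 map pools to a fixed
# 256 * (1 + 4 + 16) = 5376-dimensional vector.
vec = spatial_pyramid_pool(np.random.rand(256, 13, 13))
assert vec.shape == (256 * 21,)
```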
Show Figures
Figure 1: The AlexNet architecture.
Figure 2: The AlexNet architecture with spatial pyramid pooling.
Figure 3: The AlexNet architecture with side supervision.
Figure 4: The pre-trained AlexNet architecture with spatial pyramid pooling and side supervision.
Figure 5: Representative images of the 21 land-use categories in the UC Merced dataset: (a) agriculture; (b) airplane; (c) baseball diamond; (d) beach; (e) buildings; (f) chaparral; (g) dense residential; (h) forest; (i) freeway; (j) golf course; (k) harbor; (l) intersection; (m) medium residential; (n) mobile home park; (o) overpass; (p) parking lot; (q) river; (r) runway; (s) sparse residential; (t) storage tanks; (u) tennis court.
Figure 6: Representative images of the Google Image dataset of SIRI-WHU: (a) meadow; (b) pond; (c) harbor; (d) industrial; (e) park; (f) river; (g) residential; (h) overpass; (i) agriculture; (j) commercial; (k) water; (l) idle land.
Figure 7: Representative images of the WHU-RS dataset: (a) airport; (b) beach; (c) bridge; (d) commercial; (e) desert; (f) farmland; (g) football field; (h) forest; (i) industrial; (j) meadow; (k) mountain; (l) park; (m) parking; (n) pond; (o) port; (p) railway station; (q) residential; (r) river; (s) viaduct.
Figure 8: Confusion matrix for the pre-trained AlexNet-SPP-SS model with the UC Merced dataset.
Figure 9: Classification accuracies of each category for the different algorithms with the UC Merced dataset.
Figure 10: Confusion matrix for the pre-trained AlexNet-SPP-SS model with the Google Image dataset of SIRI-WHU.
Figure 11: Classification accuracies of each category for the different algorithms with the Google Image dataset of SIRI-WHU.
Figure 12: Confusion matrix for the pre-trained AlexNet-SPP-SS model with the WHU-RS dataset.
Figure 13: Classification accuracies of each category for the different algorithms with the WHU-RS dataset.
Figure 14: The influence of the number of spatial pyramid levels for the pre-trained AlexNet-SPP-SS model with the UC Merced dataset.
Figure 15: The influence of the number of spatial pyramid levels for the pre-trained AlexNet-SPP-SS model with the Google Image dataset of SIRI-WHU.
Figure 16: The influence of the number of spatial pyramid levels for the pre-trained AlexNet-SPP-SS model with the WHU-RS dataset.
Figure 17: The influence of the training sample ratio for the different algorithms with the UC Merced dataset.
Figure 18: The influence of the training sample ratio for the different algorithms with the Google Image dataset of SIRI-WHU.
Figure 19: The influence of the training sample ratio for the different algorithms with the WHU-RS dataset.
2342 KiB  
Review
Stochastic Bias Correction and Uncertainty Estimation of Satellite-Retrieved Soil Moisture Products
by Ju Hyoung Lee, Chuanfeng Zhao and Yann Kerr
Remote Sens. 2017, 9(8), 847; https://doi.org/10.3390/rs9080847 - 15 Aug 2017
Cited by 13 | Viewed by 5168
Abstract
To apply satellite-retrieved soil moisture to short-range weather prediction, we review a stochastic approach for reducing footprint-scale biases and estimating their uncertainties. First, we discuss the challenge of representativeness errors. Before describing retrieval errors in more detail, we clarify the conceptual difference between error and uncertainty in the basic metrological terms of the International Organization for Standardization (ISO), and briefly summarize how current retrieval algorithms deal with the challenge of land surface heterogeneity. In contrast to relative approaches such as Triple Collocation or cumulative distribution function (CDF) matching, which target climatologically stationary errors at time scales of years, we address a stochastic approach for reducing instantaneous retrieval errors at time scales of several hours to days. The stochastic approach has potential as a global scheme for resolving systematic errors introduced by instrumental measurements, geophysical parameters, and surface heterogeneity across the globe, because it does not rely on ground measurements or reference data for comparison. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
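CDF matching, the relative rescaling baseline that this review contrasts with the stochastic approach, amounts to rank-based quantile mapping of the satellite record onto a reference climatology. A minimal NumPy sketch follows; variable names and the piecewise-linear interpolation are illustrative assumptions, not a specific operational implementation.

```python
# Illustrative CDF matching (quantile mapping) of satellite soil moisture.
import numpy as np

def cdf_match(satellite, reference):
    """Map each satellite value to the reference value with the same
    empirical non-exceedance probability."""
    # Empirical CDF position (rank-based) of each satellite sample.
    probs = (np.argsort(np.argsort(satellite)) + 0.5) / satellite.size
    # Reference quantile function, evaluated at those probabilities.
    ref_sorted = np.sort(reference)
    ref_probs = (np.arange(reference.size) + 0.5) / reference.size
    return np.interp(probs, ref_probs, ref_sorted)
```

By construction this removes climatological (long-record) bias only; as the abstract notes, it cannot correct instantaneous retrieval errors at hourly-to-daily time scales.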
Show Figures
Figure 1: Relationships between the mean and standard deviation of point-scale soil moisture measurements sampled over different spatial extents (2.5 × 2.5 m, 100 × 100 m, 800 × 800 m, and 50 × 50 km) [20].
Figure 2: Penetration depth of microwaves and radio waves in various types of soil [31].
Figure 3: Nonlinear error propagation of roughness into SAR soil moisture [61]: soil moisture products retrieved from ASAR backscattering under the four roughness conditions indicated in the table. Only scheme #4 lies outside the optimal roughness range.
Figure 4: A histogram of retrieved soil moisture content (mv_retr) [102].
Figure 5: Time-averaged RMSEs (m3/m3) of the original SMOS data, CDF matching, and retrieval ensemble approaches [52].
Figure 6: Spatial distribution of surface soil moisture in m3/m3 (red is wet, blue is dry): (a) SMOS product; (b) CDF matching; (c) ensemble method; and (d) difference before and after bias correction [52].
Figure 7: Different retrieval ensembles produced by different perturbation regimes: GH stands for random perturbation of geophysical parameters, TB for brightness temperature, and FR for land cover fraction (or sub-pixel land surface information) [18].
5576 KiB  
Article
A Novel Method of Change Detection in Bi-Temporal PolSAR Data Using a Joint-Classification Classifier Based on a Similarity Measure
by Jinqi Zhao, Jie Yang, Zhong Lu, Pingxiang Li, Wensong Liu and Le Yang
Remote Sens. 2017, 9(8), 846; https://doi.org/10.3390/rs9080846 - 15 Aug 2017
Cited by 17 | Viewed by 5726
Abstract
Accurate and timely change detection of the Earth’s surface features is extremely important for understanding the relationships and interactions between people and natural phenomena. Owing to its all-weather imaging capability, polarimetric synthetic aperture radar (PolSAR) has become a key tool for change detection. Change detection methods include both unsupervised and supervised approaches. Unsupervised change detection is simple and effective but cannot identify the type of land cover change, while supervised change detection can identify the change type but is easily affected by human intervention. To address these problems, a novel change detection method using a joint-classification classifier (JCC) based on a similarity measure is introduced. The similarity measure is obtained by a test statistic and the Kittler and Illingworth (TSKI) minimum-error thresholding algorithm, which is used to control the JCC automatically. The efficiency of the proposed method is demonstrated using bi-temporal PolSAR images acquired by RADARSAT-2 over Wuhan, China. The experimental results show that the proposed method can identify the different types of land cover change and can reduce both the false detection rate and the false alarm rate of the change detection. Full article
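The thresholding component named in the abstract builds on the classic Kittler–Illingworth minimum-error criterion, which fits a two-Gaussian mixture to the histogram of a similarity image and picks the threshold minimising the classification cost. Below is a self-contained NumPy sketch; the histogram binning and function names are illustrative assumptions, not the paper's implementation of TSKI.

```python
# Illustrative Kittler-Illingworth minimum-error thresholding.
import numpy as np

def kittler_illingworth(values, bins=256):
    """Return the threshold minimising the KI criterion on a histogram
    of the 1-D similarity values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_j, best_t = np.inf, centers[0]
    for t in range(1, bins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (p[:t] * centers[:t]).sum() / p1
        m2 = (p[t:] * centers[t:]).sum() / p2
        v1 = (p[:t] * (centers[:t] - m1) ** 2).sum() / p1
        v2 = (p[t:] * (centers[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        # Minimum-error cost of the two-Gaussian fit split at threshold t.
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_j, best_t = j, centers[t]
    return best_t
```

Pixels whose similarity falls below the returned threshold would be flagged as changed, which is what lets the JCC be controlled automatically rather than by a hand-picked cutoff.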
Show Figures

Graphical abstract
Figure 1">
Figure 1: Flow chart of the joint-classification classifier based on a test statistic and the Kittler and Illingworth minimum-error thresholding algorithm (JCC-TSKI).
Figure 2: Location of the study areas.
Figure 3: Pauli-RGB images of Wuhan after preprocessing, for (a) 25 June 2015 and (b) 6 July 2016; (c) the ground reference (white denotes change and black denotes non-change). In (a), region 1 is YanDong Lake, region 2 is LiangZi Lake, region 3 is YanXi Lake, and region 4 is Nan Lake.
Figure 4: (a) The result of S; (b) training samples.
Figure 5: Change detection results of (a) the test statistic and Kittler and Illingworth method (TSKI); (b) post-classification comparison (PCC); and (c) the proposed method.
Figure 6: RADARSAT-2 PolSAR images of the red-box-labeled area 1 acquired on (a) 25 June 2015 and (b) 6 July 2016; (c) the ground truth; change detection results of (d) TSKI, (e) PCC, and (f) JCC-TSKI.
Figure 7: RADARSAT-2 PolSAR images of the red-box-labeled area 2 acquired on (a) 25 June 2015 and (b) 6 July 2016; (c) the ground truth; change detection results of (d) TSKI, (e) PCC, and (f) JCC-TSKI.
Figure 8: RADARSAT-2 PolSAR images of the red-box-labeled area 3 acquired on (a) 25 June 2015 and (b) 6 July 2016; (c) the ground truth; change detection results of (d) TSKI, (e) PCC, and (f) JCC-TSKI.
Figure 9: RADARSAT-2 PolSAR images of the red-box-labeled area 4 acquired on (a) 25 June 2015 and (b) 6 July 2016; (c) the ground truth; change detection results of (d) TSKI, (e) PCC, and (f) JCC-TSKI.
Figure 10: The type of land cover change detected by (a) PCC and (b) the proposed method.
11999 KiB  
Article
Assimilation of Sentinel-1 Derived Sea Surface Winds for Typhoon Forecasting
by Yi Yu, Xiaofeng Yang, Weimin Zhang, Boheng Duan, Xiaoqun Cao and Hongze Leng
Remote Sens. 2017, 9(8), 845; https://doi.org/10.3390/rs9080845 - 14 Aug 2017
Cited by 12 | Viewed by 5012
Abstract
High-resolution synthetic aperture radar (SAR) wind observations provide fine structural information on tropical cyclones and can be assimilated into numerical weather prediction (NWP) models. However, in the conventional method of assimilating the u and v components of SAR wind observations (SAR_uv), the wind direction is not a state vector and its observational error is not considered in the assimilation calculation. In this paper, an improved method directly assimilates the SAR wind observations in the form of speed and direction (SAR_sd). This method was implemented to assimilate the sea surface wind retrieved from Sentinel-1 synthetic aperture radar (SAR) in the basic three-dimensional variational system of the Weather Research and Forecasting Model (WRF 3DVAR). Furthermore, a new quality control scheme for wind observations is also presented. Typhoon Lionrock in August 2016 is chosen as a case study to investigate and compare both assimilation methods. The experimental results show that the SAR wind observations can increase the number of effective observations in the typhoon area and have a positive impact on the assimilation analysis. The numerical forecasts for this case are better with the SAR_sd method than with the SAR_uv method. The SAR_sd method looks very promising for wind assimilation under typhoon conditions, but more cases need to be considered before final conclusions can be drawn. Full article
(This article belongs to the Special Issue Ocean Remote Sensing with Synthetic Aperture Radar)
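The contrast between the two observation forms can be sketched in a few lines: SAR_uv innovations are plain component differences, whereas the SAR_sd form needs the direction innovation wrapped into (−180, 180] degrees so a 350° observation against a 10° background does not produce a spurious 340° misfit. The meteorological direction convention and all numbers below are illustrative assumptions, not the paper's operators.

```python
# Illustrative (u, v) vs. (speed, direction) innovation computation.
import numpy as np

def uv_to_speed_dir(u, v):
    """Wind speed and meteorological direction (degrees FROM which it blows)."""
    speed = np.hypot(u, v)
    direction = (270.0 - np.degrees(np.arctan2(v, u))) % 360.0
    return speed, direction

def direction_innovation(obs_dir, bkg_dir):
    """Observation-minus-background for direction, wrapped to (-180, 180]."""
    d = (obs_dir - bkg_dir + 180.0) % 360.0 - 180.0
    return np.where(d == -180.0, 180.0, d)

# SAR_uv innovations: simple component differences (assumed values).
du, dv = 18.0 - 15.0, -2.0 - 1.0
# SAR_sd innovations: speed difference plus wrapped direction difference.
spd_o, dir_o = uv_to_speed_dir(18.0, -2.0)
spd_b, dir_b = uv_to_speed_dir(15.0, 1.0)
print(du, dv, spd_o - spd_b, direction_innovation(dir_o, dir_b))
```

Treating speed and direction as separate state variables is what allows the direction observational error to enter the 3DVAR cost function explicitly.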
Show Figures

Graphical abstract
Figure 1">
Figure 1: Wind field retrieved from Sentinel-1A synthetic aperture radar (SAR) Extra-Wide Swath Mode data on 29 August 2016 at 08:32 UTC. (A) Sea surface radar backscattering map; (B) the SAR-derived sea surface wind map.
Figure 2: Diagram of the background wind vector BKG and the observation wind vectors (OBS1, OBS2, OBS3, and OBS4) used to illustrate the difference between the quality control procedures, QC_co and QC_al, of SAR_uv (the standard assimilation method in the Weather Research and Forecasting Model Data Assimilation system (WRFDA), assimilating SAR wind observations in the form of u and v components) and SAR_sd (the new assimilation method, assimilating SAR wind observations in the form of wind speed and direction).
Figure 3: The accepted observation wind vectors in (a) the SAR_sd and (b) the SAR_uv experiments. The blue points are the wind vectors.
Figure 4: The domain area for the Weather Research and Forecasting (WRF) model simulation. The coverage of the SAR observations is shown in the red frame.
Figure 5: The 10-m wind field at 0900 UTC 29 August 2016 from (a) the NCEP FNL data and (b) the background field; the 10-m wind analysis field from (c) SAR_sd and (d) SAR_uv. The bottom two panels are the analysis bias derived by subtracting the FNL data from the analysis for (e) SAR_sd and (f) SAR_uv. The red frames mark the area of the SAR wind observations. Note the different spatial scales of (e,f) as compared to (a–d).
Figure 6: The analysis bias of the SAR_sd (left panels) and the SAR_uv (right panels) experiments (a,b) at 10 m, (c,d) at 850 hPa, and (e,f) at 700 hPa.
Figure 7: Root mean square error (RMSE) profiles of the analysis increments from the SAR_sd experiment (round points) and the SAR_uv experiment (triangle points) for (a) u wind, (b) v wind, (c) temperature, and (d) relative humidity.
Figure 8: For Typhoon Lionrock: (a) 33-h track forecast initialized at 0900 UTC 29 August 2016; (b) mean absolute track errors; (c) mean absolute maximum wind speed errors; (d) minimum sea level pressure as a function of forecast lead time.
11236 KiB  
Article
Extension of a Fast GLRT Algorithm to 5D SAR Tomography of Urban Areas
by Alessandra Budillon, Angel Caroline Johnsy and Gilda Schirinzi
Remote Sens. 2017, 9(8), 844; https://doi.org/10.3390/rs9080844 - 14 Aug 2017
Cited by 34 | Viewed by 5721
Abstract
This paper analyzes a method for Synthetic Aperture Radar (SAR) Tomographic (TomoSAR) imaging, allowing the detection of multiple scatterers that can exhibit time deformation and thermal dilation by using a CFAR (Constant False Alarm Rate) approach. In the last decade, several methods for TomoSAR have been proposed. The objective of this paper is to present the results obtained on high resolution tomographic SAR data of urban areas, by using a statistical test for detecting multiple scatterers that takes into account phase variations due to possible deformations and/or thermal dilation. The test can be evaluated in terms of probability of detection (PD) and probability of false alarm (PFA), and is based on an approximation of a Generalized Likelihood Ratio Test (GLRT), denoted as Fast-Sup-GLRT. It was already applied and validated by the authors in the 3D case, while here it is extended and experimented in the 5D case. Numerical experiments on simulated and on StripMap TerraSAR-X (TSX) data have been carried out. The presented results show that the adopted method allows the detection of a large number of scatterers and the estimation of their position with a good accuracy, and that the consideration of the thermal dilation and surface deformation helps in recovering more single and double scatterers, with respect to the case in which these contributions are not taken into account. Moreover, the capability of method to provide reliable estimates of the deformations in urban structure suggests its use in structure stress monitoring. Full article
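The detection idea behind a sequential GLRT of this kind can be sketched numerically: nested hypotheses with k and k + 1 scatterers are compared through the residual energies left after least-squares projection of the multi-baseline data vector onto candidate steering-matrix columns (which, in the 5D case, span elevation, deformation velocity, and thermal coefficient). The greedy column search, the energy-ratio statistic, and the thresholds below are loose illustrative assumptions; the paper's actual Fast-Sup-GLRT statistic and CFAR thresholds differ.

```python
# Heavily simplified sketch of sequential scatterer detection via
# residual-energy ratios (not the authors' Fast-Sup-GLRT).
import numpy as np

def residual_energy(g, A, cols):
    """Energy of g after projection onto the selected steering columns."""
    if not cols:
        return float(np.vdot(g, g).real)
    B = A[:, cols]
    coef, *_ = np.linalg.lstsq(B, g, rcond=None)
    r = g - B @ coef
    return float(np.vdot(r, r).real)

def detect_scatterers(g, A, k_max=2, thresholds=(0.8, 0.9)):
    """Greedily add the column that most reduces the residual; accept the
    extra scatterer only if the residual ratio drops below the threshold."""
    cols = []
    for k in range(k_max):
        scores = [residual_energy(g, A, cols + [j]) for j in range(A.shape[1])]
        j_best = int(np.argmin(scores))
        # GLRT-like statistic: residual ratio between nested hypotheses.
        ratio = scores[j_best] / residual_energy(g, A, cols)
        if ratio > thresholds[k]:   # not enough evidence for another scatterer
            break
        cols.append(j_best)
    return cols  # indices of detected grid points (elevation/velocity/dilation)
```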
Show Figures

Graphical abstract
Figure 1">
Figure 1: (a) Distribution of the spatial baselines (orthogonal components) vs. the temporal baselines; (b) distribution of the temperature values vs. the temporal baselines; (c) distribution of the spatial baselines (orthogonal components) vs. the temperature values.
Figure 2: Probability of detection PD2 for Fast-Sup-GLRT, with fixed PFA = PFD2 = 10^-3, M = 38, and different values of SNR, for two scatterers at distance Ds = 3.1 m, for the cases of fully correlated scatterers (solid, red) and partially correlated scatterers with average coherence 0.93 (circle, blue), 0.85 (square, green), and 0.8 (diamond, black), using the simulated thresholds obtained for the no-decorrelation case.
Figure 3: Probabilities of detection PD1 and PD2 for Fast-Sup-GLRT, with fixed PFA = PFD2 = 10^-3, M = 38, and different values of SNR, for two scatterers at distance Ds = 3.1 m exhibiting thermal dilation. PD1 (dashed) and PD2 (solid) for the 3D detector are shown in red for k = 0.3 mm/°C and in blue for k = 0.4 mm/°C. PD1 (dashed) and PD2 (solid) for the 5D detector are shown in black and are the same for both thermal coefficient values.
Figure 4: Intensity SAR image of the Art Hotel Tower, Barcelona, Spain (Copyright German Aerospace Centre DLR 2007–2010).
Figure 5: Single scatterers detected and positioned on the 3D Google Earth optical image, using 3D Fast-Sup-GLRT (a); 5D Fast-Sup-GLRT (b); and 5D SGLRTC (c).
Figure 6: Double scatterers (first in blue, second in yellow) detected and positioned on the 3D Google Earth optical image, using 3D Fast-Sup-GLRT (a); 5D Fast-Sup-GLRT (b); and 5D SGLRTC (c).
Figure 7: Single (red) and double scatterers (lower scatterer in blue, higher scatterer in yellow) detected using 5D Fast-Sup-GLRT, in the ground range–height plane, with the three scatterers used for the phase history outlined in cyan.
Figure 8: Scatterers detected and positioned on the 3D Google Earth optical image, with height values used to colorize the points (single: circle; double: square): using 3D Fast-Sup-GLRT (a); 5D Fast-Sup-GLRT (b); and 5D SGLRTC (c).
Figure 9: Scatterers used to assess height estimation errors, shown on the SAR intensity image for 3D Fast-Sup-GLRT (a); 5D Fast-Sup-GLRT (b); and 5D SGLRTC (c); and on the 3D Google Earth optical image for 5D Fast-Sup-GLRT (d).
Figure 10: Estimated average deformation velocity map, with velocity expressed in cm/year, using 5D Fast-Sup-GLRT (a) and 5D SGLRTC (b).
Figure 11: Estimated thermal dilation map, with the dilation coefficient expressed in mm/°C, using 5D Fast-Sup-GLRT (a) and 5D SGLRTC (b).
Figure 12: Two single scatterers (blue and red circles) detected on the tower at heights of 78 m and 83 m, and one double scatterer (yellow square) at heights of 45 m and 10 m.
Figure 13: (a) Temperature history vs. temporal baseline; (b) residual phase history for the blue scatterer in Figure 12; (c) residual phase history for the red scatterer in Figure 12.
Figure 14: (a) Residual phase vs. temperature for the blue scatterer in Figure 12 (blue asterisks) and the linear behavior of its estimated thermal dilation (black, solid); (b) residual phase vs. temperature for the red scatterer in Figure 12 (red asterisks) and the linear behavior of its estimated thermal dilation (black, solid).
Figure 15: Residual phases vs. temperature for the yellow double scatterer in Figure 12: (a) residual phase for the higher scatterer (yellow squares) and the linear behavior of its estimated thermal dilation (black, solid); (b) residual phase for the lower scatterer (yellow circles) and the linear behavior of its estimated thermal dilation (black, solid); (c) residual phase for the single scatterer detected using Kmax = 1 (black asterisks) and the linear behavior of its estimated thermal dilation (black, solid).
3181 KiB  
Article
An Accuracy Assessment of Derived Digital Elevation Models from Terrestrial Laser Scanning in a Sub-Tropical Forested Environment
by Jasmine Muir, Nicholas Goodwin, John Armston, Stuart Phinn and Peter Scarth
Remote Sens. 2017, 9(8), 843; https://doi.org/10.3390/rs9080843 - 14 Aug 2017
Cited by 12 | Viewed by 5489
Abstract
Forest structure attributes produced from terrestrial laser scanning (TLS) rely on normalisation of the point cloud from sensor coordinates to height above ground. One way to do this is to derive an accurate and repeatable digital elevation model (DEM) from the TLS point cloud and use it to adjust the heights. The primary aim of this paper was to test a number of TLS scan configurations, filtering options, and output DEM grid resolutions (from 0.02 m to 1.0 m) to define a best-practice method for DEM generation in sub-tropical forest environments. The generated DEMs were compared to both total station (TS) spot heights and a 1-m DEM generated from airborne laser scanning (ALS) to assess accuracy. The comparison with TS spot heights found that a DEM produced using the minimum elevation (minimum Z value) of a point cloud from a single scan had mean errors >1 m for DEM grid resolutions <0.2 m at a 25-m plot radius; at a 1-m grid resolution, the mean error was 0.19 m. Adding a filtering approach that combined a median filter with a progressive morphological filter and a global percentile filter reduced the mean error of the 0.02-m grid resolution DEM to 0.31 m at a 25-m plot radius using all returns. Using multiple scan positions to derive the DEM reduced the mean error for all DEM methods. Our results suggest that a simple minimum-Z filtering method applied to a single scan at a 1-m grid resolution can produce mean errors <0.2 m, but at a small grid resolution, such as 0.02 m, a more complex filtering approach and multiple scan positions are required to reduce mean errors. The additional validation provided by the 1-m ALS DEM showed that, when using the combined filtering method on a point cloud from a single scan at the plot centre, TLS DEM errors of 0.1–0.5 m occurred for all tested grid resolutions at a 25-m plot radius. These findings present a protocol for DEM production from TLS data at a range of grid resolutions and provide an overview of the factors affecting DEMs produced from single and multiple TLS scan positions. Full article
(This article belongs to the Section Forest Remote Sensing)
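The baseline "minimum Z" gridding evaluated in the paper reduces to binning returns into grid cells and keeping the lowest elevation per cell. A minimal NumPy sketch is given below; the grid-origin handling is an assumption for illustration, and the paper's additional median/morphological/percentile filtering stage is deliberately omitted.

```python
# Illustrative minimum-Z DEM gridding of a TLS point cloud.
import numpy as np

def min_z_dem(x, y, z, resolution=1.0):
    """Grid a point cloud (1-D arrays x, y, z in metres) to a min-Z DEM;
    cells containing no returns are left as NaN."""
    col = ((x - x.min()) / resolution).astype(int)
    row = ((y - y.min()) / resolution).astype(int)
    dem = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, h in zip(row, col, z):
        # Keep the lowest return per cell as the candidate ground height.
        if np.isnan(dem[r, c]) or h < dem[r, c]:
            dem[r, c] = h
    return dem
```

At coarse resolutions each cell contains many returns and the minimum is usually a true ground hit, which matches the paper's finding that the simple method works at 1 m but needs extra filtering at 0.02 m.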
Show Figures

Graphical abstract
Figure 1">
Figure 1: GOLD0101 site location (top left), photo of the study site (top right), and DEM (bottom left) and slope (bottom right) derived from ALS captured in 2014. A white circle shows the 25-m radius from the centre TLS scan position (Scan Position 1).
Figure 2: Position of the reflective targets used in the Scan Position 1 (centre scan) registration (red cross on white circle) and the permanent control points (PCPs) used for registration of the scans to real-world coordinates (blue circle) for the 27 November 2015 TLS data acquisition. The locations of all seven scan positions are shown in Figure 1.
Figure 3: Position of the virtual transects used to test the effect of range on the elevation difference between the filtered minimum-Z images produced from all returns and from single/last returns at a 0.2-m grid resolution for the single centre scan (Scan Position 1). The outer circle is the 50-m plot radius and the inner circle is the 25-m plot radius.
Figure 4: DEMs produced using the minimum Z value for each of the tested grid resolutions, for the single centre scan position at a 25-m plot radius. The display stretch is the minimum and maximum value of the airborne LiDAR DEM within the plot. Dark blue shows areas of the TLS DEM with elevation values higher than the ALS DEM.
Figure 5: Mean elevation difference of the filtered elevation values (0.2-m grid resolution) between all returns and single/last returns only (single/last returns minus all returns), binned at 1-m intervals of range from the scan origin for centre Scan Position 1.
Figure 6: Mean residual error (m) of the TS spot heights for the different methods of generating DEMs from the single centre scan at a 25-m plot radius (n = 71).
Figure 7: DEMs produced using each of the tested filtering methods at a 0.02-m pixel size, for the single centre scan position at a 25-m plot radius. The display stretch is the minimum and maximum value of the airborne LiDAR DEM within the plot. Dark blue shows areas of the TLS DEM with elevation values higher than the ALS DEM.
Figure 8: Difference between the single centre scan, four scan positions (1, 2, 4, 6), and all scan positions, relative to the 2014 ALS dataset (1-m pixel resolution), at a 25-m range (inner circle) and a 50-m range (outer circle) from the plot centre. Note that there was too much space between ground points at the 0.02-m resolution for the interpolation to work across the entire 25–50-m plot radius for the single centre scan (Scan Position 1).
6364 KiB  
Article
Long-Term Water Storage Changes of Lake Volta from GRACE and Satellite Altimetry and Connections with Regional Climate
by Shengnan Ni, Jianli Chen, Clark R. Wilson and Xiaogong Hu
Remote Sens. 2017, 9(8), 842; https://doi.org/10.3390/rs9080842 - 14 Aug 2017
Cited by 29 | Viewed by 8891
Abstract
Satellite gravity data from the Gravity Recovery and Climate Experiment (GRACE) provide a quantitative measure of terrestrial water storage (TWS) change at different temporal and spatial scales. In this study, we investigate the ability of GRACE to quantitatively monitor long-term hydrological characteristics over the Lake Volta region. Principal component analysis (PCA) is employed to study the temporal and spatial variability of long-term TWS changes. Long-term Lake Volta water storage change appears to be the dominant long-term TWS signal in the Volta basin. GRACE-derived TWS changes and precipitation variations compiled by the Global Precipitation Climatology Centre (GPCC) are related both temporally and spatially, but spatial leakage attenuates the magnitude of the GRACE estimates, especially at small regional scales. Using constrained forward modeling, we successfully remove the leakage error in the GRACE estimates. After this leakage correction, GRACE-derived Lake Volta water storage changes agree remarkably well with independent estimates from satellite altimetry at interannual and longer time scales. This demonstrates the value of GRACE estimates for monitoring and quantifying water storage changes in lakes, especially in relatively small regions with complicated topography. Full article
(This article belongs to the Special Issue Remote Sensing of Climate Change and Water Resources)
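The PCA used here to separate spatial and temporal TWS patterns is a standard empirical orthogonal function (EOF) decomposition, which can be computed with one SVD. The sketch below assumes the deseasoned TWS grids have been flattened to a (months × grid cells) matrix; array shapes and names are illustrative.

```python
# Illustrative SVD-based PCA/EOF decomposition of GRACE TWS anomalies.
import numpy as np

def tws_pca(tws, n_modes=2):
    """tws: (n_months, n_gridcells) matrix of TWS anomalies.
    Returns (spatial modes, temporal PCs, explained-variance fractions)."""
    anomalies = tws - tws.mean(axis=0)        # remove the time mean per cell
    # Economy-size SVD: U holds temporal patterns, Vt the spatial ones.
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance = s**2 / (s**2).sum()
    spatial = Vt[:n_modes]                    # EOFs, one map per mode
    temporal = U[:, :n_modes] * s[:n_modes]   # PC time series, scaled
    return spatial, temporal, variance[:n_modes]
```

The first two modes returned by such a decomposition correspond to the spatial/temporal pattern pairs shown in Figure 3 below, which explain 30.9% and 20.7% of the variance.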
Show Figures

Graphical abstract
Figure 1">
Figure 1: Map of the Volta River basin in West Africa. Original map adapted from http://www.zef.de/publ_maps.html.
Figure 2: Total water storage changes from the Gravity Recovery and Climate Experiment (GRACE) over the Volta River basin as outlined in Figure 1.
Figure 3: Principal component analysis (PCA)-derived spatial and temporal patterns of terrestrial water storage (TWS) variability (with annual and semiannual signals removed) over the Volta River basin. (a,b) Spatial patterns of the first two modes derived from PCA; (c,d) the corresponding temporal patterns. The percentages of the total variance explained by the first two principal components are 30.9% and 20.7%, respectively.
Figure 4: GRACE water storage changes and satellite altimetry water level changes for Lake Volta. Both time series in (a,b) have an increasing (2007–2010) and a declining (2011–2015) rate; (c) the comparison between GRACE and satellite altimetry at long-term time scales. Annual and semi-annual signals have been removed using least squares fitting. Note the different y-axis scales used in (c).
Figure 5: Global Precipitation Climatology Centre (GPCC) monthly precipitation over the Volta River basin with the climatological average removed. The climatological precipitation is calculated by averaging the monthly precipitation of all the same months over a certain period (e.g., the 20-year period from January 1996 to December 2015). The red line is the nonseasonal precipitation anomaly smoothed with a Butterworth low-pass (below 0.5 cpy) filter.
Figure 6: GRACE TWS long-term change rates and GPCC mean precipitation anomalies over the Volta River basin during the periods 2007–2010 and 2011–2015. (a) TWS long-term change rates from 2007 to 2010 after P4M6 decorrelation filtering and 300-km Gaussian smoothing; (b) mean precipitation anomalies from 2007 to 2010 without any smoothing filter; (c) TWS long-term change rates from 2011 to 2015 after P4M6 decorrelation filtering and 300-km Gaussian smoothing; (d) mean precipitation anomalies from 2011 to 2015 without any smoothing filter. Mean precipitation anomalies are the average values of precipitation with the climatology of a certain period (e.g., 20 years) removed.
Figure 7: Mass rates (January 2007–December 2010) in cm/year of equivalent water height. (a) Apparent long-term TWS change rates from GRACE after P4M6 decorrelation filtering and 300-km Gaussian smoothing; (b) restored “true” long-term TWS change rates from constrained forward modeling after 300 iterations; (c) predicted TWS change rates from the model rates of (b); (d) difference between the observed and modeled apparent mass rates (i.e., (a) minus (c)). Note the different color scales used in the four panels.
Figure 8: Mass rates (January 2011–December 2015) in cm/year of equivalent water height. (a) Apparent long-term TWS change rates from GRACE after P4M6 decorrelation filtering and 300-km Gaussian smoothing; (b) restored “true” long-term TWS change rates from constrained forward modeling after 200 iterations; (c) predicted TWS change rates from the model rates of (b); (d) difference between the observed and modeled apparent mass rates (i.e., (a) minus (c)). Note the different color scales used in the four panels.
Figure 9: Residuals between the observed and apparent mass rates in the constrained forward modeling. The residual is computed as the root mean square (RMS) value of the difference between the observed and modeled data at each grid point over the entire rectangular region shown in Figure 7.
Figure 10: Residuals between the observed and apparent mass rates in Figure 8.
Figure 11: GRACE water storage changes (equivalent water volume) with leakage correction, and satellite altimetry water volume changes for Lake Volta. The red curve is obtained by multiplying the red curve in Figure 4c by both the scale factor (~41.3) and the area of the lake mask. The blue curve is the product of the altimetry water level change (blue curve in Figure 4c) and the estimated lake area. Note that the y-axis scale in this figure is the same for the GRACE and satellite altimetry data.
3545 KiB  
Article
A Probabilistic Weighted Archetypal Analysis Method with Earth Mover’s Distance for Endmember Extraction from Hyperspectral Imagery
by Weiwei Sun, Dianfa Zhang, Yan Xu, Long Tian, Gang Yang and Weiyue Li
Remote Sens. 2017, 9(8), 841; https://doi.org/10.3390/rs9080841 - 14 Aug 2017
Cited by 6 | Viewed by 5081
Abstract
A Probabilistic Weighted Archetypal Analysis method with Earth Mover’s Distance (PWAA-EMD) is proposed to extract endmembers from hyperspectral imagery (HSI). The PWAA-EMD first utilizes the EMD dissimilarity matrix to weight the coefficient matrix in regular Archetypal Analysis (AA). The EMD metric considers the manifold structure of spectral signatures in the HSI data and better quantifies the dissimilarity between pairs of pixels. Second, the PWAA-EMD adopts the Bayesian framework and formulates the improved AA as a probabilistic inference problem by maximizing a joint posterior density. Third, the optimization problem is solved by an iterative multiplicative update scheme with careful initialization from a two-stage algorithm, and the proper endmembers are finally obtained. Synthetic data and the real Cuprite hyperspectral dataset are utilized to verify the performance of PWAA-EMD, and five popular methods are implemented for comparison. The results show that PWAA-EMD surpasses all five methods in the average spectral angle distance (SAD) and root-mean-square error (RMSE). In particular, the PWAA-EMD obtains more accurate estimates than AA in almost all endmember classes, including two similar ones. Therefore, the PWAA-EMD could be an alternative choice for endmember extraction from hyperspectral data. Full article
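One way the EMD dissimilarity matrix that weights the AA coefficients might be assembled is sketched below: each spectrum is treated as unit mass distributed over the band axis and compared with SciPy's 1-D Wasserstein distance (the 1-D form of EMD). The non-negativity clipping and unit-mass normalisation are illustrative assumptions, not necessarily the authors' preprocessing.

```python
# Illustrative EMD dissimilarity matrix between hyperspectral pixel spectra.
import numpy as np
from scipy.stats import wasserstein_distance

def emd_matrix(spectra):
    """spectra: (n_pixels, n_bands) array of reflectances ->
    symmetric (n_pixels, n_pixels) EMD dissimilarity matrix."""
    n_pixels, n_bands = spectra.shape
    bands = np.arange(n_bands, dtype=float)
    # Treat each spectrum as a probability mass over the band axis
    # (assumes every spectrum has some positive reflectance).
    mass = np.clip(spectra, 0, None)
    mass = mass / mass.sum(axis=1, keepdims=True)
    D = np.zeros((n_pixels, n_pixels))
    for i in range(n_pixels):
        for j in range(i + 1, n_pixels):
            D[i, j] = D[j, i] = wasserstein_distance(bands, bands, mass[i], mass[j])
    return D
```

Unlike a per-band Euclidean distance, this transport-based measure stays small when two spectra differ only by a slight shift along the band axis, which is the manifold-aware behaviour the abstract attributes to EMD.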
Show Figures
Figure 1: The necessity of adding dissimilarity information between two pixels into the AA.
Figure 2: The procedure of PWAA-EMD for endmember extraction.
Figure 3: The synthetic hyperspectral data. (a) Spectrum plots of the six endmembers in the synthetic data; (b) the six abundance images of the synthetic HSI data.
Figure 4: Comparison of all six methods at different noise levels in terms of (a) SAD and (b) RMSE.
Figure 5: The PWAA-EMD endmembers on the synthetic data with SNR = 30 dB. (a) asphalt-gds367; (b) brick-gds350; (c) cedar-gds360; (d) particleboard-gds364; (e) plastic-gds394; and (f) woodbeam-gds363.
Figure 6: The Cuprite HSI data. (a) The ground truth of the different mineral materials in the Cuprite data; (b) the image scene of our experimental dataset; and (c) spectrum plots of the twelve materials in our experimental dataset.
Figure 7: The PWAA-EMD endmembers on the Cuprite data. (a) Alunite1; (b) Alunite2; (c) Pyrophyllite; (d) Buddingtonite; (e) Chalcedony; (f) Jarosite; (g) Kaolinite1; (h) Kaolinite2; (i) Montmorillonite; (j) Muscovite1; (k) Muscovite2; and (l) Nontronite.
Previous Issue
Next Issue
Back to Top