Remote Sens., Volume 11, Issue 14 (July-2 2019) – 106 articles

Cover Story (view full-size image): In recent years, there has been a large focus on the Arctic due to the rapid changes in the region. Arctic sea level determination is challenging due to the seasonal to permanent sea-ice cover, lack of regional coverage of satellites, satellite instruments’ ability to measure ice, insufficient geophysical models, residual orbit errors, and challenging retracking of satellite altimeter data. This study presents the ESA CCI DTU/TUM Sea Level Anomaly (SLA) record based on radar satellite altimetry data in the Arctic Ocean from ERS-1 (1991) to CryoSat-2 (2018). This is the longest time series available to date. The study focuses on the transition between conventional and synthetic aperture radar altimeter data to make a smooth time series regarding the measurement method. The SLA record was validated against tide gauges and shows good results. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 3651 KiB  
Article
Improvement of Remote Sensing-Based Assessment of Defoliation of Pinus spp. Caused by Thaumetopoea pityocampa Denis and Schiffermüller and Related Environmental Drivers in Southeastern Spain
by Javier Pérez-Romero, Rafael María Navarro-Cerrillo, Guillermo Palacios-Rodriguez, Cristina Acosta and Francisco Javier Mesas-Carrascosa
Remote Sens. 2019, 11(14), 1736; https://doi.org/10.3390/rs11141736 - 23 Jul 2019
Cited by 10 | Viewed by 5009
Abstract
This study used Landsat temporal series to describe defoliation levels due to the Pine Processionary Moth (PPM) in Pinus forests of southeastern Andalusia (Spain), utilizing Google Earth Engine. A combination of remotely sensed data and field survey data was used to detect the defoliation levels of different Pinus spp. and the main environmental drivers of the defoliation due to the PPM. Four vegetation indexes were also calculated for remote sensing defoliation assessment, both inside the stand and in a 60-m buffer area. In the area of study, all Pinus species are affected by defoliation due to the PPM, with a cyclic behavior that has been increasing in frequency in recent years. Defoliation levels were practically equal for all species, with a high increase in defoliation levels 2 and 3 since 2014. The Moisture Stress Index (MSI) and Normalized Difference Infrared Index (NDII) exhibited similar overall (p < 0.001) accuracy in the assessment of defoliation due to the PPM. The synchronization of NDII-defoliation data had a similar pattern for all Pinus species combined and for individual species, showing the ability of this index to adjust the model parameters based on the characteristics of specific defoliation levels. Using Landsat-based NDII-defoliation maps and interpolated environmental data, we have shown that the PPM defoliation in southeastern Spain is driven by the minimum temperature in February and the precipitation in June, March, September, and October. Therefore, the NDII-defoliation assessment seems to be a general index that can be applied to forests in other areas. The trends of NDII-defoliation related to environmental variables showed the importance of summer drought stress in the expansion of the PPM on Mediterranean Pinus species. Our results confirm the potential of Landsat time-series data in the assessment of PPM defoliation and the spatiotemporal patterns of the PPM; hence, these data are a powerful tool that can be used to develop a fully operational system for the monitoring of insect damage. Full article
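For reference, the two indices that performed best in this study, MSI and NDII, are simple combinations of near-infrared (NIR) and shortwave-infrared (SWIR1) reflectance. The short NumPy sketch below illustrates the standard formulas; the band arrays and the Landsat 8 band numbers in the comment are illustrative assumptions, not the authors' Google Earth Engine pipeline.

import numpy as np

def ndii(nir, swir1):
    # Normalized Difference Infrared Index: (NIR - SWIR1) / (NIR + SWIR1);
    # lower values are expected for drier, more defoliated canopies.
    return (nir - swir1) / (nir + swir1)

def msi(nir, swir1):
    # Moisture Stress Index: SWIR1 / NIR; higher values indicate more water stress.
    return swir1 / nir

# Illustrative surface reflectance values (for Landsat 8 OLI, NIR = band 5,
# SWIR1 = band 6); these numbers are synthetic stand-ins, not study data.
nir = np.array([0.35, 0.30, 0.22])
swir1 = np.array([0.18, 0.21, 0.20])
print(ndii(nir, swir1), msi(nir, swir1))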
Figure 1: Flowchart describing the methodological steps of Landsat image collection processing to describe Pine Processionary Moth defoliation levels for the southeastern Andalusia Pinus forests utilizing Google Earth Engine.
Figure 2: (a) Total area of Pinus forests affected by the processionary moth (Thaumetopoea pityocampa Denis and Schiffermüller) in eastern Andalusia (Spain); (b) map of the frequency of moderate to severe PPM defoliation within the southern Andalusia study area between 1994 and 2016. The key indicates the number of years in which defoliation was recorded, on a scale from green (no defoliation) to red (severe defoliation). Three zones of high frequency (northern: 1, central: 2, southern: 3) are separated by two corridors of less frequent defoliation.
Figure 3: Mean values of (a) Moisture Stress Index (MSI), (b) Normalized Difference Infrared Index (NDII), (c) Normalized Difference Vegetation Index (NDVI), and (d) Ratio Vegetation Index (RVI) according to processionary moth defoliation grades for the stand buffer area (upper) and the inside-stand area (lower). (*** p-value < 0.001).
Figure 4: Mean values of NDII-defoliation index synchronization for all Pinus species studied (left) and for the five Pinus species (right) affected by the processionary moth in the southeastern Andalusia Pinus forests.
Figure 5: Importance of environmental predictors in predicting the presence or absence of processionary moth defoliation in southeastern Andalusia (Spain). Importance is measured by Random Forest (variable descriptions: Table S4, climate variables from REDIAM and GEE, Supplementary Material).
Figure 6: Comparison between observed and predicted Landsat-derived NDII index values based on a Random Forest model with the most significant environmental variables (minimum temperature of February and precipitation of June, September, and March) to describe Pine Processionary Moth defoliation levels for the southeastern Andalusia Pinus forests. Red line: 1:1; blue line: Random Forest model.
Figure 7: Trend of selected environmental variables according to NDII-defoliation values of Pinus forests affected by the Pine Processionary Moth in eastern Andalusia (Spain): (a) February minimum temperature, (b) June precipitation, (c) September precipitation, and (d) March precipitation. *** p < 0.001.
19 pages, 4876 KiB  
Article
Giving Ecological Meaning to Satellite-Derived Fire Severity Metrics across North American Forests
by Sean A. Parks, Lisa M. Holsinger, Michael J. Koontz, Luke Collins, Ellen Whitman, Marc-André Parisien, Rachel A. Loehman, Jennifer L. Barnes, Jean-François Bourdon, Jonathan Boucher, Yan Boucher, Anthony C. Caprio, Adam Collingwood, Ron J. Hall, Jane Park, Lisa B. Saperstein, Charlotte Smetanka, Rebecca J. Smith and Nick Soverel
Remote Sens. 2019, 11(14), 1735; https://doi.org/10.3390/rs11141735 - 23 Jul 2019
Cited by 72 | Viewed by 12494
Abstract
Satellite-derived spectral indices such as the relativized burn ratio (RBR) allow fire severity maps to be produced in a relatively straightforward manner across multiple fires and broad spatial extents. These indices often have strong relationships with field-based measurements of fire severity, thereby justifying their widespread use in management and science. However, satellite-derived spectral indices have been criticized because their non-standardized units render them difficult to interpret relative to on-the-ground fire effects. In this study, we built a Random Forest model describing a field-based measure of fire severity, the composite burn index (CBI), as a function of multiple spectral indices, a variable representing spatial variability in climate, and latitude. CBI data primarily representing forested vegetation from 263 fires (8075 plots) across the United States and Canada were used to build the model. Overall, the model performed well, with a cross-validated R2 of 0.72, though there was spatial variability in model performance. The model we produced allows for the direct mapping of CBI, which is more interpretable compared to spectral indices. Moreover, because the model and all spectral explanatory variables were produced in Google Earth Engine, predicting and mapping of CBI can realistically be undertaken on hundreds to thousands of fires. We provide all necessary code to execute the model and produce maps of CBI in Earth Engine. This study and its products will be extremely useful to managers and scientists in North America who wish to map fire effects over large landscapes or regions. Full article
(This article belongs to the Section Forest Remote Sensing)
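For readers unfamiliar with the relativized burn ratio, the sketch below shows how RBR is commonly derived from pre- and post-fire NBR (following Parks et al., 2014); the NumPy arrays are illustrative placeholders rather than the paper's Earth Engine implementation.

import numpy as np

def nbr(nir, swir2):
    # Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2).
    return (nir - swir2) / (nir + swir2)

def rbr(nbr_pre, nbr_post):
    # Relativized Burn Ratio: dNBR relativized by pre-fire NBR; the 1.001
    # offset keeps the denominator away from zero (Parks et al., 2014).
    return (nbr_pre - nbr_post) / (nbr_pre + 1.001)

# Illustrative pixel: vegetated before the fire, partially burned afterwards.
nbr_pre = nbr(np.array([0.40]), np.array([0.10]))
nbr_post = nbr(np.array([0.15]), np.array([0.25]))
print(rbr(nbr_pre, nbr_post))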
Figure 1: Locations of the 263 fires representing 8075 plots in the US and Canada used in our model describing CBI.
Figure 2: Fitted relationship of a Random Forest model describing CBI as a function of RBR using the default parameter of minLeafPopulation = 1 (a). Fitted relationship of a Random Forest model using Equation (1) to define minLeafPopulation (b). To reduce overfitting, Equation (1) was used to define minLeafPopulation in all models.
Figure 3: Fitted splines of the residuals (predicted CBI minus observed CBI) represent bias in model predictions. The native model overpredicts at low CBI values and underpredicts at high CBI values (dashed red line). We applied a bias correction to address this issue (Equation (1)). This bias correction only applies to mapped CBI predictions (Figure 9), though the code distributed with this paper produces predictions with and without the bias correction. Except for Figure 9, all tables and figures refer to the predictions without bias correction.
Figure 4: Observed vs. predicted CBI for the full Random Forest model. 1:1 line shown in red.
Figure 5: Observed vs. predicted CBI for the Random Forest model for each state, province, and territory. 1:1 line shown in red. North Carolina not shown due to the low sample size (n = 4).
Figure 6: Observed vs. predicted CBI for the Random Forest model for large geopolitical regions. 1:1 line shown in red. NW US: Idaho, Oregon, Montana, South Dakota, Washington, Wyoming; SE US: Arkansas, Florida, Kentucky, Missouri, North Carolina, Tennessee, Virginia; SW US: Arizona, California, Colorado, New Mexico, Utah.
Figure 7: Observed vs. predicted CBI for the Random Forest model for ecoregions. 1:1 line shown in red.
Figure 8: Satellite-derived fire severity values (in this case, RBR) associated with specific composite burn index (CBI) values increase with latitude (a) and decrease with climatic water deficit (b). This indicates that spectral fire severity indices such as the relativized burn ratio have slightly different meanings across geographic and climatic gradients. For example, at southern latitudes, a given RBR value corresponds to lower CBI compared to northern latitudes, and in regions that are less moisture limited (low CWD), a given RBR value corresponds to higher CBI compared to moisture-limited regions with high CWD.
Figure 9: Example of predicted CBI for select fires. Note that we do not have CBI data for all of these fires, illustrating that CBI predictions can be produced for fires that have occurred since 1984, provided pre- and post-fire TM and OLI imagery are available.
16 pages, 3401 KiB  
Article
Advantages of Geostationary Satellites for Ionospheric Anomaly Studies: Ionospheric Plasma Depletion Following a Rocket Launch
by Giorgio Savastano, Attila Komjathy, Esayas Shume, Panagiotis Vergados, Michela Ravanelli, Olga Verkhoglyadova, Xing Meng and Mattia Crespi
Remote Sens. 2019, 11(14), 1734; https://doi.org/10.3390/rs11141734 - 23 Jul 2019
Cited by 30 | Viewed by 6070
Abstract
In this study, we analyzed signals transmitted by the U.S. Wide Area Augmentation System (WAAS) geostationary (GEO) satellites using the Variometric Approach for Real-Time Ionosphere Observation (VARION) algorithm in a simulated real-time scenario, to characterize the ionospheric response to the 24 August 2017 Falcon 9 rocket launch from Vandenberg Air Force Base in California. VARION is a real-time Global Navigation Satellite Systems (GNSS)-based algorithm that can be used to detect various ionospheric disturbances associated with natural hazards, such as tsunamis and earthquakes. A noise reduction algorithm was applied to the VARION-GEO solutions to remove the satellite-dependent noise term. Our analysis showed that the interactions of the exhaust plume with the ionospheric plasma depleted the total electron content (TEC) to a level comparable with nighttime TEC values. During this event, the geometry of the satellite-receiver link was such that GEO satellites measured the depleted plasma hole before any GPS satellites. We estimated that the ionosphere relaxed back to a pre-perturbed state after about 3 h, and the hole propagated with a mean speed of about 600 m/s over a region of 700 km in radius. We conclude that the VARION-GEO approach can provide important ionospheric TEC real-time measurements, which are not affected by the motion of the ionospheric pierce points (IPPs). Furthermore, the VARION-GEO measurements experience a steady noise level throughout the entire observation period, making this technique particularly useful to augment and enhance the capabilities of well-established GNSS-based ionosphere remote sensing techniques and future ionospheric-based early warning systems. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
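The variometric idea behind VARION is to time-difference the geometry-free combination of dual-frequency carrier phases, which removes the constant phase ambiguity and leaves epoch-to-epoch sTEC variations. The sketch below shows that conversion for GPS L1/L2 under simplifying assumptions (no cycle-slip handling, synthetic phase values); it is a conceptual illustration, not the VARION implementation itself.

import numpy as np

F1, F2 = 1575.42e6, 1227.60e6   # GPS L1 and L2 frequencies (Hz)
K = 40.308                      # ionospheric constant (m^3 s^-2)
GF_TO_EL = F1**2 * F2**2 / (K * (F1**2 - F2**2))   # metres of L1-L2 -> electrons/m^2

def delta_stec(l1_m, l2_m, dt=30.0):
    # Epoch-to-epoch sTEC rate (TECU/s) from the time-differenced
    # geometry-free combination L4 = L1 - L2 (both in metres).
    l4 = l1_m - l2_m
    return np.diff(l4) * GF_TO_EL / 1.0e16 / dt

# Synthetic carrier-phase series in metres; integrating the rates gives a
# relative sTEC time series, the quantity shown in the dsTEC figures below.
l1 = np.array([0.000, 0.004, 0.011, 0.021])
l2 = np.array([0.000, 0.006, 0.016, 0.030])
print(np.cumsum(delta_stec(l1, l2) * 30.0))   # relative sTEC in TECU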
Figure 1: (left) Map showing the IPP locations for satellites S35 (blue dots) and S38 (yellow dots) seen from the 62 GNSS stations. The IPPs for GEO satellites can be considered to be fixed over time. The red dot represents the location of the ionosonde site PA836. (right) Two maps representing the Earth as seen from WAAS-GEO satellites S35 and S38.
Figure 2: VARION-GEO δsTEC results for satellite S35 on the day before (left column) and the day of the event (right column). The first row (a,b) shows the unfiltered VARION-GEO δsTEC solutions far from the ionospheric hole (distance greater than 700 km). Time zero represents the time of the Falcon 9 launch (11:51 PDT). The second row (c,d) shows the unfiltered VARION-GEO δsTEC solutions close to the ionospheric hole (distance smaller than 700 km).
Figure 3: Unfiltered VARION-GPS δsTEC solutions (red curves) for satellites G12, G25, G02, and G05. The blue curves show the elevation angle for each satellite-receiver link. The δsTEC solutions show lower high-frequency noise than the VARION-GEO solutions (Figure 2). However, the larger long-period trends in the unfiltered VARION-GPS solutions do not allow the ionospheric response to the rocket launch to be fully captured.
Figure 4: Filtered VARION-GEO ΔsTEC results for satellite S35 on the day before (left column) and the day of the event (right column). The first row (a,b) shows the solutions far from the ionospheric hole (blue curves) and close to the ionospheric hole (red curves). Time zero represents the time of the Falcon 9 launch (11:51 PDT). The second row (c,d) is a zoom-in from 10 min before to 60 min after the launch. The ionospheric depletion is clearly captured by the filtered VARION-GEO solutions near the ionospheric hole on the day of the event; no depletion is shown either the day before or far from the hole.
Figure 5: (a) The VARION-GEO ΔsTEC solutions obtained from station p215, satellite S38; (b) the NmF2 time variability obtained from ionosonde PA836; and (c) the down-sampled and normalized ΔsTEC solutions (red curve) and the normalized NmF2 time series (blue curve) plotted using a common scale [0, 1]. This figure shows a high correlation between the VARION-GEO ΔsTEC solutions and the ionosonde data; the correlation coefficient between the two curves is 0.97.
Figure 6: (a,c) The filtered VARION-GEO ΔsTEC solutions for satellite S35. The black dots represent 50% of the minimum of ΔsTEC. (c) The ΔsTEC solutions plotted on a time vs. distance plot. We interpolated the black dots to estimate a mean expansion velocity of the ionospheric hole. (b,d) The results of the diffusion simulation at 450 km altitude. (b) The density time series at a fixed distance from the diffusion point. Following the black arrow, we computed the curves from 100 to 600 km, every 5 km. (d) The time vs. distance plot for the simulation. We computed the mean diffusion velocity of the number density maxima.
Figure 7: Space-time ΔsTEC variations for 30 min after the launch (one frame every 5 min) at the SIPs (same positions as the corresponding IPPs on the map) for the two GEO satellites (squares) and the six GPS satellites (circles) seen from the 62 GNSS permanent stations. The ionospheric hole is detected by both GEO satellites 5 min after the rocket launch. The coordinates are expressed in geodetic latitude (degrees north) and longitude (degrees west).
29 pages, 9733 KiB  
Article
Quantifying the Impacts of Land-Use and Climate on Carbon Fluxes Using Satellite Data across Texas, U.S.
by Ram L. Ray, Ademola Ibironke, Raghava Kommalapati and Ali Fares
Remote Sens. 2019, 11(14), 1733; https://doi.org/10.3390/rs11141733 - 23 Jul 2019
Cited by 8 | Viewed by 6124
Abstract
Climate change and variability, soil types and soil characteristics, animal and microbial communities, and photosynthetic plants are the major components of the ecosystem that affect carbon sequestration potential of any location. This study used NASA’s Soil Moisture Active Passive (SMAP) Level 4 carbon products, gross primary productivity (GPP), and net ecosystem exchange (NEE) to quantify their spatial and temporal variabilities for selected terrestrial ecosystems across Texas during the 2015–2018 study period. These SMAP carbon products are available at 9 km spatial resolution on a daily basis. The ten selected SMAP grids are located in seven climate zones and dominated by five major land uses (developed, crop, forest, pasture, and shrub). Results showed CO2 emissions and uptake were affected by land-use and climatic conditions across Texas. It was also observed that climatic conditions had more impact on CO2 emissions and uptake than land-use in this state. On average, South Central Plains and East Central Texas Plains ecoregions of East Texas and Western Gulf Coastal Plain ecoregion of Upper Coast climate zones showed higher GPP flux and potential carbon emissions and uptake than other climate zones across the state, whereas shrubland on the Trans Pecos climate zone showed lower GPP flux and carbon emissions/uptake. Comparison of GPP and NEE distribution maps between 2015 and 2018 confirmed substantial changes in carbon emissions and uptake across Texas. These results suggest that SMAP carbon products can be used to study the terrestrial carbon cycle at regional to global scales. Overall, this study helps to understand the impacts of climate, land-use, and ecosystem dynamics on the terrestrial carbon cycle. Full article
(This article belongs to the Special Issue Terrestrial Carbon Cycle)
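As a reading aid for the sign convention and the monthly aggregation used throughout the analysis, the snippet below builds a synthetic daily GPP series, derives NEE from the standard relation NEE = Reco - GPP (negative values indicate net uptake), and averages to monthly values; the numbers are placeholders, not SMAP Level 4 retrievals.

import numpy as np
import pandas as pd

# Synthetic daily carbon fluxes (gC m-2 day-1) for one 9 km grid cell.
days = pd.date_range("2015-01-01", "2015-12-31", freq="D")
rng = np.random.default_rng(0)
gpp = np.clip(3.0 + 2.5 * np.sin(2 * np.pi * (days.dayofyear - 100) / 365), 0, None)
reco = 0.7 * gpp + rng.normal(0.3, 0.1, len(days))   # ecosystem respiration
nee = reco - gpp                                     # negative NEE = net uptake

# Monthly means, analogous in spirit to the monthly GPP/NEE distributions per climate zone.
monthly = pd.DataFrame({"GPP": gpp, "NEE": nee}, index=days).resample("MS").mean()
print(monthly.head())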
Figure 1: (a) Climate zones and their corresponding land-use, and (b) ecological regions and climate zones in the study area. White crosses are the locations of the ten selected Soil Moisture Active Passive (SMAP) grids. The inset figure (9 km × 9 km, not to scale) is a selected SMAP grid covering the Prairie View A&M University (PVAMU) research farm, used to evaluate SMAP Gross Primary Productivity (GPP) and Net Ecosystem Exchange (NEE).
Figure 2: Cumulative distribution of precipitation (a) and average annual distribution of temperature across Texas (b) using precipitation and temperature data from 2001–2018. The cumulative precipitation plot for each climate zone was developed using National Climatic Data Center (NCDC) daily precipitation data, whereas the annual average temperature (calculated using NCDC daily temperature data) was used to develop the spatial average temperature map across Texas. The x-axis of the cumulative precipitation plot is a fraction of time: 0.5 represents 50% of the time (9 years for this plot) and 1.0 represents 100% of the time (18 years for this plot).
Figure 3: Daily gross primary productivity (GPP) and NEE at Upper Coast (UC) and High Plains (HP) for cropland during 2015 and 2016, used to select the critical month of each year for developing spatial distribution maps. A sequence of time series plots was developed to identify the sensitive months when GPP and NEE values were changing or changed abruptly during the study period. Based on the changes in the seasonal distributions of GPP and NEE, six different months (Apr, May, Jun, Aug, Sep, and Dec) of 2015 and 2018 were used to develop spatial distribution maps of GPP and NEE.
Figure 4: Daily and monthly SMAP and in-situ GPP and NEE at the PVAMU Research Farm (2016–2018). The unit of RMSE is the same as that of GPP and NEE.
Figure 5: Distributions of monthly GPP in each climate zone (2015–2018).
Figure 6: Distributions of monthly NEE in each climate zone (2015–2018).
Figure 7: Spatial distribution of GPP across Texas for selected months of 2015 and 2018. White crosses are the locations of the selected SMAP grids.
Figure 8: Spatial distribution of NEE across Texas for selected months of 2015 and 2018. White crosses are the locations of the selected SMAP grids.
Figure 9: Annual distribution of GPP (a–c) and NEE (d–f) across Texas. White crosses are the locations of the selected SMAP grids.
Figure 10: Temporal distributions of monthly precipitation, GPP (a–e), and NEE (f–j) for five major land-use categories: cropland (crop), forestland (forest), pastureland (pasture), shrubland (shrub), and developed land (developed) at each selected climate zone in Texas. Note: HP = High Plains; LRP = Low Rolling Plains; ET = East Texas; EP = Edwards Plateau; SC = South Central; TP = Trans Pecos; UC = Upper Coast; NC = North Central; S = Southern; LV = Lower Valley.
Figure 11: Box-and-whisker plots of monthly (a) GPP distributions and (b) NEE distributions for five major land uses at each selected climate zone in the state of Texas. Note: C = cropland, F = forestland, P = pastureland, S = shrubland, and D = developed land. The white circles represent the mean, the solid horizontal lines represent the median, and the gray circles represent outliers. Upper horizontal line = maximum, lower horizontal line = minimum, top of the box = upper quartile, bottom of the box = lower quartile, upper quartile to maximum = upper whisker, and lower quartile to minimum = lower whisker.
18 pages, 4971 KiB  
Article
Impacts of Land-Use Changes on Soil Erosion in Water–Wind Crisscross Erosion Region of China
by Jie Wang, Weiwei Zhang and Zengxiang Zhang
Remote Sens. 2019, 11(14), 1732; https://doi.org/10.3390/rs11141732 - 23 Jul 2019
Cited by 12 | Viewed by 5320
Abstract
Soil erosion affects food production, biodiversity, biogeochemical cycles, hydrology, and climate. Land-use changes accelerated by intensive human activities are a dominant anthropogenic factor inducing soil erosion globally. However, the impacts of land-use-type changes on soil erosion dynamics over a continuous period for constructing a sustainable ecological environment have not been systematically quantified. This study investigates the spatial–temporal dynamics of land-use change and soil erosion across a specific area in China with water–wind crisscross erosion during three periods: 1995–1999, 2000–2005, and 2005–2010. We analyzed the impacts of each land-use-type conversion on the intensity changes of soil erosion caused by water and wind, respectively. The major findings include: (1) land-use change in the water–wind crisscross erosion region of China was characterized as cultivated land expansion at the main cost of grassland during 1995–2010; (2) the strongest land-use change moved westward in space from the central Loess Plateau area in 1995–2005 to the western piedmont alluvial area in 2005–2010; (3) soil erosion area is continuously increasing, but the trend is declining from the late 1990s to the late 2000s; (4) the soil conservation capability of land-use types in water–wind crisscross erosion regions could be ranked from high to low as high coverage grasslands, medium coverage grasslands, paddy, drylands, low coverage grasslands, built-up lands, unused land of sandy lands, the Gobi Desert, and bare soil. These findings could provide some insights for executing reasonable land-use approaches to balance human demands and environmental sustainability. Full article
(This article belongs to the Special Issue Remote Sensing of Human-Environment Interactions)
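The contribution analysis rests on cross-tabulating land-use maps from two dates so that each conversion (for example, grassland to cultivated land) can be paired with the corresponding change in erosion intensity. The sketch below builds such a transition matrix from two synthetic categorical rasters; the class list and the rasters are illustrative assumptions, not the study's data.

import numpy as np
import pandas as pd

classes = ["cultivated", "grassland", "built-up", "unused"]
n = len(classes)

# Two synthetic land-use rasters (class indices) for an earlier and a later date.
rng = np.random.default_rng(1)
lu_1995 = rng.integers(0, n, size=(100, 100))
lu_2010 = lu_1995.copy()
changed = rng.random(lu_1995.shape) < 0.1            # roughly 10% of pixels change
lu_2010[changed] = rng.integers(0, n, size=changed.sum())

# Transition matrix: rows = 1995 class, columns = 2010 class (pixel counts).
pairs = lu_1995.ravel() * n + lu_2010.ravel()
counts = np.bincount(pairs, minlength=n * n).reshape(n, n)
print(pd.DataFrame(counts, index=classes, columns=classes))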
Figure 1: The water–wind crisscross erosion area in China. It consists of four regions from west to east: the western piedmont alluvial area, the central Loess Plateau area, the central grassland sandy area, and the eastern plain sandy area.
Figure 2: The soil erosion map in 1995 for the water–wind crisscross erosion area in China. This map shows the distribution of water erosion (Wa) and wind erosion (Wi) with five intensity levels: Slight (Sl), Moderate (Mo), Intensive (It), Very Intensive (VI), and Severe (Se). For example, "SlWa-MoWi" denotes a region with a water erosion intensity level of slight and a wind erosion intensity level of moderate. The area of each soil erosion intensity level is summarized in Table S2.
Figure 3: The workflow of this study mainly includes three analyses: land-use change, soil erosion intensity by water and wind, and the contributions of land-use changes to soil erosion dynamics. DEM = Digital Elevation Model; NDVI = normalized difference vegetation index.
Figure 4: The distribution of land uses over the water–wind crisscross erosion area of China in (a) 2000, (b) 2005, and (c) 2010. Only the regions with land-use change during each period of 1995–2000, 2000–2005, and 2005–2010 are shown. The area of each land-use type is shown in Table S3. The differences in land-use change between 2000, 2005, and 2010 are shown in Figure S1.
Figure 5: Changed area of each land-use type over the water–wind crisscross erosion region of China during four periods: 1995–2000, 2000–2005, 2005–2010, and 1995–2010.
Figure 6: Distribution of water–wind erosion over the study area in three periods: (a) 1995–2000, (b) 2000–2005, and (c) 2005–2010. The legends are consistent with Figure 2. The area of each soil erosion intensity level is summarized in Table S2.
Figure 7: The net changed areas for each (a) water erosion and (b) wind erosion level during four study periods: 1995–2000, 2000–2005, 2005–2010, and 1995–2010. The light blue and orange shadows indicate the decreasing and increasing trends of changed area for each intensity level during the three study periods.
Figure 8: Percentage of each gradation change of water–wind combined erosion intensity across the water–wind crisscross erosion zone of China during 1995–2010. The accumulated percentage from gradation decrease to increase is shown as the purple line. The light blue and orange shadows highlight the intensity decrease and increase levels, respectively.
Figure 9: The contributions (noted as percentages) of each land-use transformation to every water (a–c) and wind (d–f) soil erosion intensity change during the three periods. The light blue and orange shadows highlight the intensity decrease and increase levels, respectively. Figure 9g summarizes the contributions (percentages noted as red numbers) of each land-use transformation to the intensification and alleviation of water–wind combined soil erosion during 1995–2010. This figure shows the dominant land-use transformations, including high, moderate, and low coverage grasslands (HG, MG, LG), paddy (PD), drylands (DL), built-up lands (BU), unused lands of sandy lands, the Gobi Desert, and bare soil (UL), woodlands (WL), and water body (WB).
Figure 10: The (a) proportions and (b,c) trends of land-use changes and water–wind crisscross erosion across the four regions in the study area: the western piedmont alluvial area (WPAA), the central Loess Plateau area (CLPA), the central grassland sandy area (CGSA), and the eastern plain sandy area (EPSA).
20 pages, 14803 KiB  
Article
Revealing Kunming’s (China) Historical Urban Planning Policies Through Local Climate Zones
by Stéphanie Vandamme, Matthias Demuzere, Marie-Leen Verdonck, Zhiming Zhang and Frieke Van Coillie
Remote Sens. 2019, 11(14), 1731; https://doi.org/10.3390/rs11141731 - 22 Jul 2019
Cited by 24 | Viewed by 6623
Abstract
Over the last decade, Kunming has been subject to a strong urbanisation driven by rapid economic growth and socio-economic, topographical and proximity factors. As this urbanisation is expected to continue in the future, it is important to understand its environmental impacts and the role that spatial planning strategies and urbanisation regulations can play herein. This is addressed by (1) quantifying the city's expansion and intra-urban restructuring using Local Climate Zones (LCZs) for three periods in time (2005, 2011 and 2017) based on the World Urban Database and Access Portal Tool (WUDAPT) protocol, and (2) cross-referencing observed land-use and land-cover changes with existing planning regulations. The results of the surveys on urban development show that, between 2005 and 2011, the city showed spatial expansion, whereas between 2011 and 2017, densification mainly occurred within the existing urban extent. Between 2005 and 2017, the fraction of open LCZs increased, with the largest increase taking place between 2011 and 2017. The largest decrease was seen for the low plants (LCZ D) and agricultural greenhouse (LCZ H) categories. As the potential of LCZs as, for example, a heat stress assessment tool has been shown elsewhere, understanding the relation between policy strategies and LCZ changes is important for adopting rational urban planning strategies toward sustainable city development. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Urban Climatology)
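Workflow B in this study folds neighbourhood information into the per-pixel LCZ classification via a moving window; a majority (modal) filter is one common way to do that, sketched below with SciPy on a synthetic class map. The kernel size and the random map are assumptions for illustration, not the WUDAPT reference implementation.

import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(lcz, kernel):
    # Replace each pixel by the most frequent class in a kernel x kernel window,
    # so isolated misclassified pixels are absorbed by their neighbourhood.
    def _mode(window):
        return np.bincount(window.astype(int)).argmax()
    return generic_filter(lcz, _mode, size=kernel, mode="nearest")

# Synthetic LCZ class map smoothed with a 7 x 7 kernel (the size used for 2017).
rng = np.random.default_rng(2)
lcz_raw = rng.integers(1, 11, size=(60, 60))
lcz_smoothed = majority_filter(lcz_raw, kernel=7)
print(float(np.mean(lcz_raw == lcz_smoothed)))   # fraction of pixels unchanged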
Figure 1: Larger Kunming area, with the Local Climate Zone (LCZ) region of interest in red. A, B, and C are the centre, the airport, and the high-tech industrial development zone, respectively. Figure produced with [45].
Figure 2: Abridged definitions for local climate zones [29]. © American Meteorological Society. Used with permission.
Figure 3: Examples of the different LCZs in Kunming.
Figure 4: Overview of the LCZ classification. Workflow A is the default World Urban Database and Access Portal Tool (WUDAPT) protocol, while Workflow B is the modified workflow that uses a moving window to include neighbourhood information. Redesigned after Figure 3 in Reference [34]. Section numbers indicated in the boxes provide a full overview of what is done in each step. Note that this procedure was repeated for the years 2005, 2011, and 2017.
Figure 5: LCZ maps for: (a) 2005 with kernel size 9 × 9, (b) 2011 with kernel size 5 × 5, and (c) 2017 with kernel size 7 × 7. Note that LCZ classes 7, 9, E, and F are not present in the maps.
Figure 6: LCZ histograms for all years presenting the corresponding surface area of each LCZ (A_LCZ,i) as a percentage of (a) the total surface area (A_LCZ,all) and (b) the total urban area (A_LCZ,urban).
29 pages, 2600 KiB  
Article
Comparing Spectral Characteristics of Landsat-8 and Sentinel-2 Same-Day Data for Arctic-Boreal Regions
by Alexandra Runge and Guido Grosse
Remote Sens. 2019, 11(14), 1730; https://doi.org/10.3390/rs11141730 - 22 Jul 2019
Cited by 21 | Viewed by 6654
Abstract
The Arctic-Boreal regions experience strong changes of air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Despite optical remote sensing offering a variety of techniques to assess and monitor landscape changes, a persistent cloud cover decreases the amount of usable images considerably. However, combining data from multiple platforms promises to increase the number of images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility to use both Landsat and Sentinel-2 images together in time series analyses, achieving a temporally-dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel-pairs, downscaled to 60 m, of corresponding bands and derived the ordinary least squares regression for every band combination. The acquired coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons showed an overall good fit between Landsat-8 and Sentinel-2 images already. The ordinary least squares regression analyses underline the generally good spectral fit with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after spectral bandpass adjustment of Sentinel-2 values to Landsat-8 shows a nearly perfect alignment between the same-day images. The spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to Landsat-8 very well in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be used to create temporally-dense time series and be applied to assess permafrost landscape changes in Eastern Siberia. Remaining differences between the sensors can be attributed to several factors including heterogeneous terrain, poor cloud and cloud shadow masking, and mixed pixels. Full article
(This article belongs to the Section Environmental Remote Sensing)
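The spectral bandpass adjustment reported here boils down to an ordinary least squares fit per band pair, L8 = intercept + slope × S2, whose coefficients (intercepts between 0.0031 and 0.056, slopes between 0.531 and 0.877 in the paper) are then applied to the Sentinel-2 values. The minimal sketch below illustrates that idea with synthetic reflectance pairs; it is not the authors' processing chain.

import numpy as np

def bandpass_adjust(s2_band, l8_band):
    # Fit L8 = intercept + slope * S2 for one pair of corresponding bands and
    # return the adjusted Sentinel-2 values together with the coefficients.
    slope, intercept = np.polyfit(s2_band, l8_band, deg=1)
    return intercept + slope * s2_band, (intercept, slope)

# Synthetic same-day 60 m reflectance pairs for one band (placeholders only).
rng = np.random.default_rng(3)
s2 = rng.uniform(0.02, 0.40, 500)
l8 = 0.01 + 0.85 * s2 + rng.normal(0.0, 0.01, 500)
s2_adjusted, (b0, b1) = bandpass_adjust(s2, l8)
print(f"intercept = {b0:.4f}, slope = {b1:.3f}")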
Graphical abstract
Figure 1: Average number of clear pixels for the Lena Delta study site for the summer periods covering 1999–2018 (blue bars: Landsat; red bars: Sentinel-2).
Figure 2: The three Arctic-to-Boreal study sites, Lena Delta, Batagay region, and Yakutsk region, are located along an approximate north-south transect in Eastern Siberia. The blue frames show the Landsat-8 image footprints, and the red frames show the Sentinel-2 image footprints of the same-day acquisitions used in this study. Sentinel-2 RGB composites of the overlapping area for the Lena Delta (a), Batagay region (b), and Yakutsk region (c).
Figure 3: Data processing workflow used for the spectral band comparison and adjustment. Landsat-8 is abbreviated to L8 and Sentinel-2 to S2.
Figure 4: Comparison of surface reflectance values from the corresponding Sentinel-2 and Landsat-8 bands for the Lena Delta. Left plots: observed surface reflectance values. Right plots: Lena Delta Adjusted (LDA) Sentinel-2 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line. In all plots, the Sentinel-2 values and bands are on the x-axis, while Landsat-8 is depicted on the y-axis.
Figure 5: Comparison of NDVI values for the Lena Delta ((a), upper row) and Yakutsk ((b), lower row). Left plots: NDVI calculated from observed Landsat-8 and Sentinel-2 surface reflectance values. Right plots: NDVI calculated from adjusted Sentinel-2 and observed Landsat-8 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
Figure A1: Comparison of surface reflectance values from the corresponding Sentinel-2 and Landsat-8 bands for Batagay. Left plots: observed surface reflectance values. Right plots: Batagay Adjusted (BatA) Sentinel-2 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
Figure A2: Comparison of surface reflectance values from the corresponding Sentinel-2 and Landsat-8 bands for Yakutsk. Left plots: observed surface reflectance values. Right plots: Yakutsk Adjusted (YukA) Sentinel-2 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
Figure A3: Comparison of surface reflectance values from the corresponding Sentinel-2 and Landsat-8 bands for Eastern Siberia. Left plots: observed surface reflectance values. Right plots: Eastern Siberia Adjusted (esA) Sentinel-2 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
Figure A4: Comparison of NDVI values for Batagay ((a), upper row) and Eastern Siberia ((b), lower row). Left plots: NDVI calculated from observed surface reflectance values. Right plots: NDVI calculated from adjusted Sentinel-2 reflectance values. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
Figure A5: Comparison of the Eastern Siberia Adjusted (esA) Sentinel-2 reflectance values and the corresponding Landsat-8 bands for the individual study sites, from left to right: Lena Delta, Batagay, and Yakutsk. The solid black line is the one-to-one line, and the pink line is the ordinary least squares regression trend line.
19 pages, 6518 KiB  
Article
Mapping Hydrothermal Zoning Pattern of Porphyry Cu Deposit Using Absorption Feature Parameters Calculated from ASTER Data
by Mengjuan Wu, Kefa Zhou, Quan Wang and Jinlin Wang
Remote Sens. 2019, 11(14), 1729; https://doi.org/10.3390/rs11141729 - 22 Jul 2019
Cited by 14 | Viewed by 3821
Abstract
Identifying the hydrothermal zoning pattern associated with a porphyry copper deposit is important for indicating its economic potential. Traditional approaches like systematic sampling and conventional geological mapping are time-consuming and labor-intensive, and have limitations in providing small-scale information. Recent developments suggest that remote sensing is a powerful tool for mapping and interpreting the spatial pattern of porphyry Cu deposits. In this study, we integrated in situ spectral measurements taken at the Yudai copper deposit in the Kalatag district, northwestern China, information obtained by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), as well as the spectra of samples (hand-specimen) measured using an Analytical Spectral Device (ASD) FieldSpec4 high-resolution spectrometer in the laboratory, to map the hydrothermal zoning pattern of the copper deposit. Results proved that the common statistical approaches, such as relative band depth and Principal Component Analysis (PCA), were unable to identify the pattern accurately. To address the difficulty, we introduced a curve-fitting technique for ASTER shortwave infrared data to simulate Al(OH)-bearing, Fe/Mg(OH)-bearing, and carbonate mineral absorption features, respectively. The results indicate that the absorption feature parameters can effectively locate the ore body inside the research region, suggesting the absorption feature parameters have great potential to delineate the hydrothermal zoning pattern of porphyry Cu deposits. We foresee the method being widely used in the future. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
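The relative band depth images used as a baseline here (see Figure 3 below) are simple three-band ratios: RBD 6 = (band 5 + band 7)/band 6 targets Al(OH)-bearing minerals, and RBD 8 = (band 7 + band 9)/band 8 targets Fe/Mg(OH)-bearing and carbonate minerals. The snippet below computes both from illustrative ASTER SWIR reflectances; the values are synthetic, not taken from the Yudai scene.

import numpy as np

def relative_band_depth(shoulder_a, shoulder_b, centre):
    # Relative band depth: sum of the two shoulder bands divided by the centre
    # band; values noticeably above 2 suggest absorption at the centre band.
    return (shoulder_a + shoulder_b) / centre

# Illustrative ASTER SWIR reflectances (synthetic, single pixel).
b5, b6, b7, b8, b9 = (np.array([0.30]), np.array([0.24]), np.array([0.29]),
                      np.array([0.25]), np.array([0.28]))
rbd6 = relative_band_depth(b5, b7, b6)   # sensitive to Al(OH)-bearing minerals
rbd8 = relative_band_depth(b7, b9, b8)   # sensitive to Fe/Mg(OH) and carbonates
print(rbd6, rbd8)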
Graphical abstract
Figure 1: (a) Regional geological map [26] of the Kalatag district and (b) ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) bands 3, 2, 1 in red, green, blue showing the location of the Yudai porphyry copper deposit.
Figure 2: Geological map of the Yudai porphyry copper deposit (modified after [25]). 1: clastic sedimentary rock (D1d); 2: biogenic carbonates (D1d); 3: volcanic breccia; 4: dacitic volcanic and volcaniclastic rocks; 5: basalt; 6: pyrite felsite; 7: mineralized quartz diorite porphyry; 8: diorite porphyry; 9: gabbro intrusion; 10: siderite ore-bodies; 11: orebody, potassic + silication zone (POS); 12: silication + sericitization zone (SIS); 13: propylitization zone (PRO); 14: fault. Locations of the collected rock samples are denoted on the map with sample codes.
Figure 3: (a) RBD (Relative Absorption Band Depth) 6 [(band 5 + band 7)/band 6] and (b) RBD 8 [(band 7 + band 9)/band 8], overlaid with alteration zones and sampling points.
Figure 4: Principal component (PC) images from the ASTER shortwave infrared (SWIR) bands of the Yudai Cu deposit. (a) PC3, (b) PC4.
Figure 5: Mapping the absorption features over the Yudai deposit. Shown are the absorption feature parameters "absorption feature depth" (a,c) and "wavelength of maximum absorption" (b,d) obtained with ASTER bands 5/6/7 and 7/8/9.
Figure 6: Photomicrographs: (a) plagioclase crystals altered to sericite (Ser) and epidote (Ep) (cross-polarized light); (b) chlorite (Chl) produced by biotite alteration and kaolinization (Kln) of potassium feldspar (plane-polarized light); (c) plagioclase crystals altered to epidote (cross-polarized light); (d) alteration recrystallization into calcite (Cal) (plane-polarized light).
Figure 7: Field photograph at sampling point PRO-5 in the propylitization alteration zone.
14 pages, 5346 KiB  
Article
A New Approach to Earth’s Gravity Field Modeling Using GPS-Derived Kinematic Orbits and Baselines
by Xiang Guo and Qile Zhao
Remote Sens. 2019, 11(14), 1728; https://doi.org/10.3390/rs11141728 - 21 Jul 2019
Cited by 7 | Viewed by 3421
Abstract
Earth’s gravity field recovery from GPS observations collected by low earth orbiting (LEO) satellites is a well-established technique, and kinematic orbits are commonly used for that purpose. Nowadays, more and more satellites are flying in close formations. The GPS-derived kinematic baselines between them can reach millimeter precision, which is more precise than the centimeter-level kinematic orbits. Thus, it has long been expected that the more precise kinematic baselines can deliver better gravity field solutions. However, this expectation has not been met yet in practice. In this study, we propose a new approach to gravity field modeling, in which kinematic orbits of the reference satellite and baseline vectors between the reference satellite and its accompanying satellite are jointly inverted. To validate the added value, data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are used. We derive kinematic orbits and inter-satellite baselines of the twin GRACE satellites from the GPS data collected in the year of 2010. Then two sets of monthly gravity field solutions up to degree and order 60 are produced. One is derived from kinematic orbits of the twin GRACE satellites (‘orbit approach’). The other is derived from kinematic orbits of GRACE A and baseline vectors between GRACE A and B (‘baseline approach’). Analysis of observation postfit residuals shows that noise in the kinematic baselines is notably lower than the kinematic orbits by 50, 47 and 43% for the along-track, cross-track and radial components, respectively. Regarding the gravity field solutions, analysis in the spectral domain shows that noise of the gravity field solutions beyond degree 10 can be significantly reduced when the baseline approach is applied, with cumulative errors up to degree 60 being reduced by 34%, when compared to the orbit approach. In the spatial domain, the recovered mass changes with the baseline approach are more consistent with those inferred from the K-Band Ranging based solutions. Our results demonstrate that the proposed baseline approach is able to provide better gravity field solutions than the orbit approach. The findings may facilitate, among others, bridging the gap between GRACE and GRACE Follow-On satellite mission. Full article
(This article belongs to the Special Issue Remote Sensing by Satellite Gravimetry)
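Conceptually, the proposed baseline approach stacks two observation groups, positions of the reference satellite and the more precise inter-satellite baselines, in one weighted least-squares adjustment of the gravity field parameters. The toy example below illustrates only that accumulation of weighted normal equations, with made-up linear design matrices; the real estimator involves dynamic orbit modelling and is far more involved.

import numpy as np

rng = np.random.default_rng(4)
n_par = 5                                # stand-in for gravity field coefficients
x_true = rng.normal(size=n_par)

# Group 1: reference-satellite positions (cm-level noise).
A_orb = rng.normal(size=(200, n_par))
y_orb = A_orb @ x_true + rng.normal(0.0, 0.02, 200)
# Group 2: inter-satellite baseline vectors (mm-level noise).
A_bas = rng.normal(size=(200, n_par))
y_bas = A_bas @ x_true + rng.normal(0.0, 0.002, 200)

def joint_solution(groups):
    # Accumulate weighted normal equations N x = b over all observation groups.
    N = np.zeros((n_par, n_par))
    b = np.zeros(n_par)
    for A, y, sigma in groups:
        w = 1.0 / sigma**2
        N += w * A.T @ A
        b += w * A.T @ y
    return np.linalg.solve(N, b)

x_orbit_only = joint_solution([(A_orb, y_orb, 0.02)])
x_joint = joint_solution([(A_orb, y_orb, 0.02), (A_bas, y_bas, 0.002)])
print(abs(x_orbit_only - x_true).max(), abs(x_joint - x_true).max())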
Graphical abstract
Figure 1: Square roots of the power spectral densities (PSD^1/2) estimated from postfit residuals of the kinematic orbits of GRACE A and the kinematic baselines between GRACE A and B.
Figure 2: Geoid height errors per degree for different gravity field solutions in January 2010.
Figure 3: Geoid height errors per degree for different gravity field solutions (RMS values in 2010).
Figure 4: Periodic annual signals in mass anomalies, in terms of equivalent water heights, inferred from different G750-filtered solutions. From top to bottom, the plots display the results based on the WHU-RL01, orbit, and baseline solutions, respectively.
Figure 5: Same as Figure 4, but for the G1000-filtered solutions.
Figure 6: Geographical distribution of the gridded RMS differences with respect to WHU RL01 for different GPS-based solutions. The top and bottom rows show the results for the orbit and baseline solutions, respectively.
Figure 7: Time series of mean mass anomalies over the Amazon river basin inferred from the G750 (left) and G1000 (right) filtered solutions in 2010.
29 pages, 10244 KiB  
Article
Automatic Building Outline Extraction from ALS Point Clouds by Ordered Points Aided Hough Transform
by Elyta Widyaningrum, Ben Gorte and Roderik Lindenbergh
Remote Sens. 2019, 11(14), 1727; https://doi.org/10.3390/rs11141727 - 21 Jul 2019
Cited by 33 | Viewed by 8079
Abstract
Many urban applications require building polygons as input. However, manual extraction from point cloud data is time- and labor-intensive. Hough transform is a well-known procedure to extract line features. Unfortunately, current Hough-based approaches lack flexibility to effectively extract outlines from arbitrary buildings. We [...] Read more.
Many urban applications require building polygons as input. However, manual extraction from point cloud data is time- and labor-intensive. Hough transform is a well-known procedure to extract line features. Unfortunately, current Hough-based approaches lack flexibility to effectively extract outlines from arbitrary buildings. We found that available point order information is actually never used. Using ordered building edge points allows us to present a novel ordered points–aided Hough Transform (OHT) for extracting high quality building outlines from an airborne LiDAR point cloud. First, a Hough accumulator matrix is constructed based on a voting scheme in parametric line space (θ, r). The variance of angles in each column is used to determine dominant building directions. We propose a hierarchical filtering and clustering approach to obtain accurate lines based on detected hotspots and ordered points. An Ordered Point List matrix consisting of ordered building edge points enables the detection of line segments of arbitrary direction, resulting in high-quality building roof polygons. We tested our method on three datasets with different characteristics: one new dataset in Makassar, Indonesia, and two benchmark datasets in Vaihingen, Germany. To the best of our knowledge, our algorithm is the first Hough method that is highly adaptable, since it works for buildings with edges of different lengths and arbitrary relative orientations. The results prove that our method delivers high completeness (between 90.1% and 96.4%) and correctness percentages (all over 96%). The positional accuracy of the building corners is between 0.2 and 0.57 m RMSE. The quality rate (89.6%) for the Vaihingen-B benchmark outperforms all existing state-of-the-art methods. Other solutions for the challenging Vaihingen-A dataset are not yet available, while we achieve a quality score of 93.2%. Results with arbitrary directions are demonstrated on the complex buildings around the EYE museum in Amsterdam. Full article
(This article belongs to the Section Urban Remote Sensing)
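The accumulator described in the abstract builds on the standard (θ, r) Hough voting scheme. A minimal sketch of that voting step is given below; the bin sizes and toy point set are assumptions, and the ordered-point bookkeeping that distinguishes OHT is omitted.

```python
import numpy as np

def hough_accumulator(points, theta_step_deg=2.0, r_step=0.5):
    """Vote each 2D point into parametric line space (theta, r),
    where r = x*cos(theta) + y*sin(theta)."""
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_step_deg))
    r_max = np.hypot(points[:, 0], points[:, 1]).max()
    r_edges = np.arange(-r_max, r_max + r_step, r_step)
    acc = np.zeros((len(thetas), len(r_edges) - 1), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)      # one r value per theta bin
        cols = np.digitize(r, r_edges) - 1
        acc[np.arange(len(thetas)), cols] += 1
    return acc, thetas, r_edges

# Toy building edge: points along the line y = 2 (theta = 90 deg, r = 2).
pts = np.column_stack([np.linspace(0, 10, 50), np.full(50, 2.0)])
acc, thetas, r_edges = hough_accumulator(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(np.rad2deg(thetas[i]), r_edges[j])   # ~90 deg, left edge of the r bin containing 2
```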
Show Figures

Graphical abstract
Figure 1. The general procedure for extracting high quality straight building outlines.
Figure 2. The proposed Ordered point aided Hough Transform (OHT) workflow for building outline extraction. (a) Building points; (b) concave hull of a building roof; (c) detection of dominant directions using local maxima; (d) detection of initial hotspots along dominant directions yields 14 initial hotspots; (e) reduction to 10 filtered hotspots; (f) 10 lines corresponding to filtered hotspots; (g) Point accumulator analysis yields 12 segments; (h) segment intersection identifies 12 corners.
Figure 3. Hough transform using (θ, r) parameters for detecting lines. (a) In object space, a line represented by an angle-distance (θ, r) passing through three points A, B, and C; (b) In Hough space, this line appears as the point of intersection of the curves Ā, B̄, and C̄.
Figure 4. Peak detection in smoothed data. The yellow graph represents the original variance and the magenta line represents smoothed data obtained after Savitzky-Golay filtering. Vertical dashed lines indicate detected dominant directions.
Figure 5. Cells in HA, OPL, HAm, and OPLm for r = 86–88 and θ = 104°–106°. Red cells correspond to hotspots. (a) HA(θ, r) cell containing accumulated numbers of points voting for a line; (b) OPL(θ, r) cell containing lists of points voting for the same line; (c) HAm(θ, r) cell containing merged accumulated numbers of points from its adjacent cell; (d) OPLm(θ, r) cell containing lists of merged points from its adjacent cell.
Figure 6. Representation of ordered edge point distribution of a building. Lines are separated in segments by gap identification. (a) Line L9 (red) consists of two segments: L9A and L9B; (b) List of points supporting the same line. The line points in L2 and L9 are both divided over two segments, indicated by red and blue numbers.
Figure 7. Test set of Makassar. (a) Test set area (inside red outline) overlaid to Indonesian 1:10.000 base map; (b) Test set areas (inside yellow outline) overlaid to aerial orthoimage; (c) ALS point cloud of the Makassar test set; (d) LiDAR points of the test area; (e) Clustered building points. Different color indicates different segment. ©BIG.
Figure 8. The test sets of Vaihingen. (a) Test set areas (inside red outlines) overlaid to OSM map; (b) Test set areas (inside yellow outlines) overlaid to ISPRS orthoimage. ©OSM and ISPRS.
Figure 9. Two test sets of the Vaihingen benchmark dataset. (a) ALS points of Vaihingen-A; (b) segmented points of Vaihingen-A; (c) ALS points of Vaihingen-B; (d) classified points of Vaihingen-B. ©ISPRS.
Figure 10. EYE-Amsterdam test set. (a) Map of the EYE-Amsterdam; (b) 2017 Aerial image of the EYE-Amsterdam; (c) 2014 AHN3 point cloud; (d) AHN3 classified building points (orange). ©PDOK of the Netherlands and ESRI-NL.
Figure 11. Extraction of buildings with multiple arbitrary directions. (a) Hotspots (red points) of multiple arbitrary directions; (b) Line detection; (c) Building outline results.
Figure 12. Outline extraction of a building with one false dominant direction. (a) Initial hotspots (red) in three dominant directions are detected where the highest peak (yellow line) is a false dominant direction; (b) Representation of all initial lines including five false lines (green); (c) Line segments after filtering.
Figure 13. Building outline results of different segment lengths with collinear segments. (a) Outline extraction of a building that has two collinear line segments (inside the red ellipse); (b) Outline extraction of a building that has four collinear line segments (inside the red ellipse).
Figure 14. Building outline results in case flaws exist in the segmentation results. (a) Outline of a building connected to a tree; (b) Outline of a building roof partially covered by a tree; (c) Outline of a complex building shape connected to a tree.
Figure 15. Experiment to evaluate the influence of the bin size parameter in case of extracting a complex building outline within a 1-m buffer of the reference corners (grey circle). (a) Scatter plot of building corners for different bin sizes. Different plus (+) colors indicate corners of different bin_r; (b) Zoomed in on a part of the building corners.
Figure 16. Illustration of how the proposed method regularizes building edges, especially in case noise exists in the building segmentation input, for the Makassar test set. (a) Building comparison between filtered building points (blue) and base map (red); (b) Building comparison between outlines generated by the proposed method (green) and base map (red).
Figure 17. Comparison of building outline results with ground truth. (a–b) Vaihingen-A test area; (c–d) Vaihingen-B test area. Left: comparison of filtered building points (blue) and base map (red polygons); Right: outlines generated by the proposed method (green) and base map (red polygons).
Figure 18. Vaihingen-B misses some parts of building roofs due to vegetation (inside the brown circle) and low roof elevation (inside the white circle). (a) Overlay of building outline results (magenta) and reference (blue) with orthophoto; (b) Overlay of building outline results (magenta) and reference (blue) with Digital Surface Model (DSM).
Figure 19. Trees connected to a building lead to an incorrect outline result. (a) Building outline result (green); (b) Reference building polygon; (c) Deviations from the mean roof height; (d) Intensity image of the neighboring building area.
19 pages, 77826 KiB  
Article
Automatic Vector-Based Road Structure Mapping Using Multibeam LiDAR
by Junqiao Zhao, Xudong He, Jun Li, Tiantian Feng, Chen Ye and Lu Xiong
Remote Sens. 2019, 11(14), 1726; https://doi.org/10.3390/rs11141726 - 21 Jul 2019
Cited by 12 | Viewed by 4898
Abstract
The high-definition map (HD-map) of road structures is crucial for the safe planning and control of autonomous vehicles. However, generating and updating such maps requires intensive manual work. Simultaneous localization and mapping (SLAM) is able to automatically build and update a map of [...] Read more.
The high-definition map (HD-map) of road structures is crucial for the safe planning and control of autonomous vehicles. However, generating and updating such maps requires intensive manual work. Simultaneous localization and mapping (SLAM) is able to automatically build and update a map of the environment. Nevertheless, there is still a lack of SLAM methods for generating vector-based road structure maps. In this paper, we propose a vector-based SLAM method for road structure mapping using vehicle-mounted multibeam LiDAR. We propose using polylines as the primary mapping element instead of grid maps or point clouds because the vector-based representation is lightweight and precise. We explored the following: (1) the extraction and vectorization of road structures based on multiframe probabilistic fusion; (2) the efficient vector-based matching between frames of road structures; (3) the loop closure and optimization based on the pose-graph; and (4) the global reconstruction of the vector map. One specific road structure, the road boundary, is taken as an example. We applied the proposed mapping method to three road scenes, ranging from hundreds of meters to over ten kilometers, and the results are automatically generated vector-based road boundary maps. The average absolute pose error of the trajectory in the mapping is 1.83 m without the aid of high-precision GPS. Full article
(This article belongs to the Section Urban Remote Sensing)
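Road boundary vectorization and simplification of the kind mentioned in the abstract is commonly done with Douglas–Peucker-style polyline simplification. The sketch below shows that generic algorithm, not the authors' exact implementation; the tolerance and the synthetic boundary are assumptions.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    if np.allclose(a, b):
        return np.linalg.norm(p - a)
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return abs(cross) / np.linalg.norm(b - a)

def douglas_peucker(points, tol):
    """Recursively drop vertices whose deviation from the chord is below tol."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    d = np.array([point_line_distance(p, points[0], points[-1]) for p in points[1:-1]])
    i = d.argmax() + 1
    if d[i - 1] > tol:
        left = douglas_peucker(points[: i + 1], tol)
        right = douglas_peucker(points[i:], tol)
        return np.vstack([left[:-1], right])   # avoid duplicating the split vertex
    return np.vstack([points[0], points[-1]])

# A noisy, nearly straight road boundary collapses to its two endpoints.
x = np.linspace(0, 50, 200)
boundary = np.column_stack([x, 0.05 * np.sin(x)])
print(len(douglas_peucker(boundary, tol=0.2)))   # -> 2
```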
Show Figures

Graphical abstract
Figure 1. The work flow of the proposed framework.
Figure 2. Road boundary extraction. (a) the 3D point cloud of a scene. (b) the 2D projection. (c) the ground elimination. (d) the road boundaries extracted in one frame. (e) the multiframe probabilistic fusion.
Figure 3. The structure of the fused reckoning system.
Figure 4. Road boundary vectorization and simplification. (a) initially generated virtual scans. (b) road boundary candidate calculation (blue and yellow represent hit and miss, respectively). (c) road boundary candidate clustering (red denotes road boundaries and green denotes free spaces). (d) road boundary vectorization. (e) road boundary simplification.
Figure 5. The comparison of the distance metric between polylines adopted by various methods, where the polyline represented by yellow nodes is the target polyline, and the polyline represented by blue nodes is the reference polyline.
Figure 6. The illustration of the extraction of the innermost road boundary point by probe-based sampling, where the yellow nodes represent the trajectory of the vehicle and the blue nodes represent the sampled road boundary.
Figure 7. The road boundaries in openly available datasets. (a) road boundary is heavily occluded in the KITTI dataset. (b) road boundary is missing in many parts of the Ford dataset.
Figure 8. The TiEV autonomous driving platform and the selected areas for mapping. I: a loop road; II: the IV evaluation field; III: the campus.
Figure 9. Demonstration of the results of LGM, LVM extraction, and vector-based matching. (a) the results of ground elimination. (b) the road boundaries segmented in one frame. (c) the fused local grid map of road boundaries (LGM). (d) road boundary candidate clusters (blue and yellow represent the hit road boundary and missed free space, respectively). (e) the raw local vectorization map (raw LVM). (f) the simplified local vectorization map (simplified LVM). (g) original unmatched road boundaries. (h) matched road boundaries using ICP. (i) matched road boundaries using raw-LVM-based matching. (j) matched road boundaries using simplified-LVM-based matching.
Figure 10. The evaluation of the resulting trajectories. (a) the data collecting path where the blue dot indicates the start and the red dot indicates the end. (b) comparison of the ground truth, the reckoning-based trajectory, and the optimized trajectory. (c) the absolute positioning error of the reckoning-based trajectory. (d) the absolute positioning error of the optimized trajectory.
Figure 11. Experimental results of the proposed method on a loop road. (a) resulting road boundary points (blue) and innermost road boundary points (yellow). (b) resulting vectorized map obtained by the proposed method. (c) the superposition of the vectorized map obtained by the proposed method and aerial image of the mapped area.
Figure 12. The evaluation of the resulting trajectories. (a) the data collection path, where the blue dot indicates the start and the red dot indicates the end. (b) comparison of the ground truth, the reckoning-based trajectory, and the optimized trajectory. (c) the absolute positioning error of the reckoning-based trajectory. (d) the absolute positioning error of the optimized trajectory.
Figure 13. Experimental results of the proposed method on a test field. (a) resulting road boundary points (blue) and innermost road boundary points (yellow). (b) resulting vectorized map obtained by the proposed method. (c) the superposition of the vectorized map obtained by the proposed method and the aerial image of the mapped area.
Figure 14. The evaluation of the resulting trajectory. (a) the data collection path where the blue dot indicates the start and the red dot indicates the end. (b) comparison of the ground truth, the reckoning-based trajectory, and the optimized trajectory. (c) the absolute positioning error of the reckoning-based trajectory. (d) the absolute positioning error of the optimized trajectory.
Figure 15. Experimental results of the proposed method on the Tongji campus. (a) resulting road boundary points (blue) and innermost road boundary points (yellow). (b) resulting vectorized map obtained by the proposed method. (c) the superposition of the vectorized map obtained by the proposed method and the aerial image of the mapped area.
14 pages, 24464 KiB  
Article
Deep Fully Convolutional Networks for Cadastral Boundary Detection from UAV Images
by Xue Xia, Claudio Persello and Mila Koeva
Remote Sens. 2019, 11(14), 1725; https://doi.org/10.3390/rs11141725 - 20 Jul 2019
Cited by 43 | Viewed by 6640
Abstract
There is a growing demand for cheap and fast cadastral mapping methods to face the challenge of 70% global unregistered land rights. As traditional on-site field surveying is time-consuming and labor intensive, imagery-based cadastral mapping has in recent years been advocated by fit-for-purpose [...] Read more.
There is a growing demand for cheap and fast cadastral mapping methods to face the challenge of 70% global unregistered land rights. As traditional on-site field surveying is time-consuming and labor intensive, imagery-based cadastral mapping has in recent years been advocated by fit-for-purpose (FFP) land administration. However, owing to the semantic gap between the high-level cadastral boundary concept and low-level visual cues in the imagery, improving the accuracy of automatic boundary delineation remains a major challenge. In this research, we use imagery acquired by Unmanned Aerial Vehicles (UAVs) to explore the potential of deep Fully Convolutional Networks (FCNs) for cadastral boundary detection in urban and semi-urban areas. We test the performance of FCNs against other state-of-the-art techniques, including Multi-Resolution Segmentation (MRS) and Globalized Probability of Boundary (gPb), in two case study sites in Rwanda. Experimental results show that FCNs outperformed MRS and gPb in both study areas and achieved an average accuracy of 0.79 in precision, 0.37 in recall and 0.50 in F-score. In conclusion, FCNs are able to effectively extract cadastral boundaries, especially when a large proportion of cadastral boundaries are visible. This automated method could minimize manual digitization and reduce field work, thus facilitating the current cadastral mapping and updating practices. Full article
(This article belongs to the Special Issue Remote Sensing for Land Administration)
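The precision, recall and F-score quoted in the abstract follow the usual definitions. A minimal per-pixel sketch for binary boundary masks is shown below; real cadastral evaluations usually allow a spatial tolerance buffer around the reference lines, which is omitted here.

```python
import numpy as np

def boundary_scores(pred, ref):
    """Per-pixel precision, recall and F-score for binary boundary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

pred = np.zeros((10, 10), int); pred[4, :] = 1               # detected boundary row
ref = np.zeros((10, 10), int); ref[4, :] = 1; ref[:, 7] = 1  # reference boundaries
print(boundary_scores(pred, ref))
```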
Show Figures
Figure 1. Study areas. Two case study sites in Rwanda were selected, namely Busogo and Muhoza, representing a sub-urban and urban setting, respectively.
Figure 2. The Unmanned Aerial Vehicle (UAV) images and boundary reference for selected tiles. TR1, TR2, TR3 and TS1 are selected tiles from Busogo; TR4, TR5, TR6 and TS2 are selected tiles from Muhoza. For each area, the former three were used for training and the last one was used for testing the algorithms. The boundary references in TR1~TR6 are the yellow lines. In TS1 and TS2, we separated the boundary references as visible (green lines) and invisible (red lines).
Figure 3. Architecture of the proposed FCN.
Figure 4. Learning curves of the FCNs in Busogo (left) and Muhoza (right).
Figure 5. Reference and classification maps obtained by the investigated techniques. The visible boundary references are the green lines; the invisible are the red lines; and the detected boundaries are the yellow lines.
Figure 6. The error map of the investigated techniques. Yellow lines are TP; red lines are FP; and green lines are FN.
25 pages, 8538 KiB  
Article
Monitoring of Nitrogen and Grain Protein Content in Winter Wheat Based on Sentinel-2A Data
by Haitao Zhao, Xiaoyu Song, Guijun Yang, Zhenhai Li, Dongyan Zhang and Haikuan Feng
Remote Sens. 2019, 11(14), 1724; https://doi.org/10.3390/rs11141724 - 20 Jul 2019
Cited by 43 | Viewed by 6361
Abstract
Grain protein content (GPC) is an important indicator of wheat quality. Early estimation of wheat GPC based on remote sensing provides effective decision support for adopting optimized grain harvest strategies, which is of great significance for agricultural production. The objectives of this field [...] Read more.
Grain protein content (GPC) is an important indicator of wheat quality. Early estimation of wheat GPC based on remote sensing provides effective decision support for adopting optimized grain harvest strategies, which is of great significance for agricultural production. The objectives of this field study are: (i) To assess the ability of spectral vegetation indices (VIs) of Sentinel-2 data to detect the wheat nitrogen (N) attributes related to the grain quality of winter wheat production, and (ii) to examine the accuracy of wheat N status and GPC estimation models based on different VIs and wheat nitrogen parameters across Analytical Spectral Devices (ASD) and Unmanned Aerial Vehicle (UAV) hyper-spectral data-simulated Sentinel data and the real Sentinel-2 data. In this study, four nitrogen parameters at the wheat anthesis stage, including plant nitrogen accumulation (PNA), plant nitrogen content (PNC), leaf nitrogen accumulation (LNA), and leaf nitrogen content (LNC), were evaluated for their relationship between spectral parameters and GPC. Then, a multivariate linear regression method was used to establish the wheat nitrogen and GPC estimation models through simulated Sentinel-2A VIs. The coefficients of determination (R²) of the four nitrogen parameter models were all greater than 0.7. The minimum R² of the prediction model of wheat GPC constructed by the four nitrogen parameters combined with VIs was 0.428 and the highest R² was 0.467. The normalized root mean square error (nRMSE) of the four nitrogen estimation models ranged from 26.333% to 29.530% when verified by the ground-measured data collected from the Beijing suburbs, and the corresponding nRMSE for the GPC-predicted models ranged from 17.457% to 52.518%. The accuracy of the estimated models was also verified by UAV hyper-spectral data, resized to different spatial resolutions, collected from the National Experimental Station for Precision Agriculture. There, the nRMSE of the four nitrogen estimation models ranged from 16.9% to 37.8%, and the corresponding nRMSE for the GPC-predicted models ranged from 12.3% to 13.2%. The relevant models were also verified by Sentinel-2A data collected in 2018: the minimum nRMSE for the GPC inversion model based on PNA was 7.89% and the maximum nRMSE of the GPC model based on LNC was 12.46% in the Renqiu district, Hebei province. The nRMSE for the wheat nitrogen estimation models ranged from 23.200% to 42.790% for LNC and PNC. These data demonstrate that freely available Sentinel-2 imagery can be used as an important data source for wheat nutrition and grain quality monitoring. Full article
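A minimal sketch of the kind of multivariate linear regression and nRMSE reporting described in the abstract; the vegetation index values, coefficients, and the choice to normalize the RMSE by the observation mean are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
vis = rng.uniform(0.2, 0.9, size=(120, 3))          # e.g., three Sentinel-2 vegetation indices
pnc = 1.5 * vis[:, 0] - 0.8 * vis[:, 1] + 0.3 * vis[:, 2] + rng.normal(0, 0.05, 120)

model = LinearRegression().fit(vis, pnc)             # multivariate linear model
pred = model.predict(vis)

r2 = model.score(vis, pnc)
rmse = np.sqrt(np.mean((pred - pnc) ** 2))
nrmse = 100 * rmse / pnc.mean()                      # nRMSE in percent (normalized by the mean)
print(f"R2={r2:.3f}  nRMSE={nrmse:.1f}%")
```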
Show Figures

Graphical abstract
Figure 1. Location of the study area.
Figure 2. UAV hyper-spectral image for Trail 3.
Figure 3. Winter wheat planting area in Renqiu area in 2018. The green part is the wheat growing area.
Figure 4. Workflow of the study.
Figure 5. The distribution of the measured data.
Figure 6. Relationship between nitrogen nutrition parameters and vegetation indices.
Figure 7. Relationship between predicted and measured values of nitrogen nutrition parameters. (a) Leaf nitrogen content (LNC) estimation results based on vegetation indices. (b) Leaf nitrogen accumulation (LNA) estimation results based on vegetation indices. (c) Plant nitrogen content (PNC) estimation results based on vegetation indices. (d) Plant nitrogen accumulation (PNA) estimation results based on vegetation indices.
Figure 8. Relationship between measured and predicted grain protein content. (a) GPC estimation results based on LNC_pred and vegetation indices. (b) GPC estimation results based on LNA_pred and vegetation indices. (c) GPC estimation results based on PNC_pred and vegetation indices. (d) GPC estimation results based on PNA_pred and vegetation indices.
Figure 9. UAV data inversion map of four nitrogen nutrition parameters. (a) LNC spatial distribution map. (b) LNA spatial distribution map. (c) PNC spatial distribution map. (d) PNA spatial distribution map.
Figure 10. UAV data inversion of the grain protein content. (a) GPC_LNC spatial distribution map. (b) GPC_LNA spatial distribution map. (c) GPC_PNC spatial distribution map. (d) GPC_PNA spatial distribution map.
Figure 11. Spatial distribution map of the four nitrogen nutrition indexes. (a) LNC spatial distribution map. (b) LNA spatial distribution map. (c) PNC spatial distribution map. (d) PNA spatial distribution map.
Figure 12. Spatial distribution of the grain protein content. (a) GPC_LNC spatial distribution map. (b) GPC_LNA spatial distribution map. (c) GPC_PNC spatial distribution map. (d) GPC_PNA spatial distribution map.
18 pages, 6034 KiB  
Article
Spatiotemporal Analysis of MODIS NDVI in the Semi-Arid Region of Kurdistan (Iran)
by Mehdi Gholamnia, Reza Khandan, Stefania Bonafoni and Ali Sadeghi
Remote Sens. 2019, 11(14), 1723; https://doi.org/10.3390/rs11141723 - 20 Jul 2019
Cited by 20 | Viewed by 5208
Abstract
In this study, the spatiotemporal behavior of vegetation cover in the Kurdistan province of Iran was analyzed for the first time by TIMESAT and Breaks for Additive Season and Trend (BFAST) algorithms. They were applied on Normalized Difference Vegetation Index (NDVI) time series from [...] Read more.
In this study, the spatiotemporal behavior of vegetation cover in the Kurdistan province of Iran was analyzed for the first time by TIMESAT and Breaks for Additive Season and Trend (BFAST) algorithms. They were applied on Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2016 derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations. The TIMESAT software package was used to estimate the seasonal parameters of NDVI and their relation to land covers. BFAST was applied for identifying abrupt changes (breakpoints) of NDVI and their magnitudes. The results from TIMESAT and BFAST were first reported separately, and then interpreted together. TIMESAT outcomes showed that the lowest and highest amplitudes of NDVI during the whole time period happened in 2008 and 2010. The spatial distribution of the number of breakpoints showed different behaviors in the west and east of the study area, and the breakpoint frequency confirmed the extreme NDVI amplitudes in 2008 and 2010 found by TIMESAT. For the first time in Iran, a correlation analysis between accumulated precipitations and maximum NDVIs (from one to seven months before the NDVI maximum) was conducted. The results showed that precipitation one month before had a higher correlation with the maximum NDVIs in the region. Overall, the results describe the NDVI behavior in terms of greenness, lifetime, and abrupt changes for the different land covers and across the years, suggesting how the northwest and west of the study area can be more susceptible to drought conditions. Full article
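A small sketch of how a Savitzky–Golay filter smooths an NDVI composite series and how a pre-peak precipitation sum can be correlated with the seasonal NDVI maximum; the window length, polynomial order, use of Pearson correlation, and all synthetic values are assumptions rather than the study's configuration.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
doy = np.arange(1, 366, 16)                           # 16-day NDVI composites for one year
ndvi = 0.25 + 0.35 * np.exp(-((doy - 175) / 60.0) ** 2) + rng.normal(0, 0.03, doy.size)

ndvi_smooth = savgol_filter(ndvi, window_length=7, polyorder=2)   # smooth the noisy series
peak_ndvi = ndvi_smooth.max()

# Correlate yearly NDVI maxima with precipitation accumulated one month before the peak.
yearly_peaks = 0.45 + 0.02 * rng.normal(size=17)       # placeholder maxima for 2000-2016
precip_1m = 40 + 300 * (yearly_peaks - 0.45) + rng.normal(0, 3, 17)
r, p = pearsonr(precip_1m, yearly_peaks)
print(round(peak_ndvi, 2), round(r, 2), round(p, 3))
```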
Show Figures

Graphical abstract
Figure 1. (A) Mean of annual precipitation from 2000 to 2016 in the Kurdistan province of Iran estimated by Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), (B) elevation map from ASTER data, (C) mean of annual maximum Normalized Difference Vegetation Index (NDVI) from 2000 to 2016.
Figure 2. Land-cover map in 2015 with eight classes and their coverage percentage [54] in the Kurdistan province. Shrubland and grassland refer to "natural" classes.
Figure 3. Normalized Difference Vegetation Index (NDVI) parameters extracted from TIMESAT: amplitude, start of season (SoS), large integral (Linteg), small integral (Sinteg), base value, end of season (EOS), maximum value.
Figure 4. (A) TIMESAT modeling of the NDVI observations for a sample point in the study area, including the start and end of the season. (B) Sorted amplitudes of NDVI from TIMESAT, for each year, for all the pixels in the study area.
Figure 5. (A) Difference between the NDVI maximum values in 2008 and the mean of maximum values (MMV) across 2000–2016, (B) difference between the NDVI maximum values in 2010 and MMV across 2000–2016, (C) difference between the NDVI base values in 2008 and the mean of base values (MBV) across 2000–2016, (D) difference between the NDVI base values in 2010 and MBV across 2000–2016.
Figure 6. Error bar plot of TIMESAT seasonal parameters in relation to land covers for the whole time series: (A) maximum value of NDVI, (B) base value of NDVI, (C) day of year (DOY) of middle of season, (D) DOY of start of season, (E) DOY of end of season, and (F) length of season (the plots are sorted based on the mean values).
Figure 7. NDVI annual parameters from TIMESAT in the study area. (A) Maximum value, (B) base value, (C) DOY middle of season, and (D) length of season (DOY).
Figure 8. (A) Map of the number of breakpoints for the whole NDVI time series (2000–2016), and (B) frequency of the breakpoints for each year.
Figure 9. (A) Histograms of magnitudes of breakpoints for the whole NDVI time series, and (B) mean number of breakpoints for each land cover.
Figure 10. (A) The error bar plot of positive magnitudes, (B) the error bar plot of negative magnitudes in relation to land covers for the whole time series (the plots are sorted based on the mean of magnitudes).
Figure 11. Spatial distribution of (A) maximum of breakpoint magnitudes with positive signs, (B) maximum of breakpoint magnitudes with negative signs for the whole time series 2000–2016.
Figure 12. (A) Correlations between the maximum NDVI values and the accumulated precipitations from CHIRPS in the first (M1) to the seventh month (M7) before DOY 175, for all the pixels in the study area. (B) Accumulated monthly precipitations in the first to the seventh month before DOY 175 in each year for the whole area.
Figure 13. Standardized Precipitation Index (SPI) maps from 2006 (A) to 2011 (F) showing the wet and dry conditions in the study area.
18 pages, 3838 KiB  
Article
Evaluating the Variability of Urban Land Surface Temperatures Using Drone Observations
by Joseph Naughton and Walter McDonald
Remote Sens. 2019, 11(14), 1722; https://doi.org/10.3390/rs11141722 - 20 Jul 2019
Cited by 56 | Viewed by 7369
Abstract
Urbanization and climate change are driving increases in urban land surface temperatures that pose a threat to human and environmental health. To address this challenge, we must be able to observe land surface temperatures within spatially complex urban environments. However, many existing remote [...] Read more.
Urbanization and climate change are driving increases in urban land surface temperatures that pose a threat to human and environmental health. To address this challenge, we must be able to observe land surface temperatures within spatially complex urban environments. However, many existing remote sensing studies are based upon satellite or aerial imagery that captures temperature at coarse resolutions that fail to capture the spatial complexities of urban land surfaces, which can change at a sub-meter resolution. This study seeks to fill this gap by evaluating the spatial variability of land surface temperatures through drone thermal imagery captured at high resolution (13 cm). In this study, flights were conducted using a quadcopter drone and thermal camera at two case study locations in Milwaukee, Wisconsin and El Paso, Texas. Results indicate that land use types exhibit significant variability in their surface temperatures (3.9–15.8 °C) and that this variability is influenced by surface material properties, traffic, weather and urban geometry. Air temperature and solar radiation were statistically significant predictors of land surface temperature (R² 0.37–0.84), but the predictive power of the models was lower for land use types that were heavily impacted by pedestrian or vehicular traffic. The findings from this study ultimately elucidate factors that contribute to land surface temperature variability in the urban environment, which can be applied to develop better temperature mitigation practices to protect human and environmental health. Full article
(This article belongs to the Special Issue Remote Sensing Monitoring of Land Surface Temperature (LST))
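The R² values quoted for air temperature and solar radiation as predictors come from linear models; below is a minimal two-predictor least-squares sketch with an explicit R² computation, using synthetic data rather than the study's observations.

```python
import numpy as np

rng = np.random.default_rng(3)
air_t = rng.uniform(18, 35, 80)                      # air temperature, deg C
solar = rng.uniform(200, 950, 80)                    # solar radiation, W m^-2
lst = 1.1 * air_t + 0.012 * solar + rng.normal(0, 2.0, 80)   # synthetic surface temperature

X = np.column_stack([np.ones_like(air_t), air_t, solar])     # intercept + two predictors
coef, *_ = np.linalg.lstsq(X, lst, rcond=None)
pred = X @ coef

ss_res = np.sum((lst - pred) ** 2)
ss_tot = np.sum((lst - lst.mean()) ** 2)
print("R2 =", round(1 - ss_res / ss_tot, 2))
```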
Show Figures
Figure 1. Visual imagery of the case study locations: Marquette University (a) and University of Texas El Paso (UTEP) (b). Visual imagery of Marquette was captured from a drone on 11 August 2018. Visual imagery of UTEP was pulled from Google Maps on 13 March 2019.
Figure 2. Flow chart of data collection and processing.
Figure 3. Boxplot distribution of surface temperature (a); and histogram of surface temperature (b). Data from flight recorded on 11 July 2018. Note GRS = grass; SM = shrub/mulch; CPY = canopy; PL = parking lot; SW = sidewalk; RTC = composite rooftop; RTR = rubber rooftop; RD = road; SLR = solar.
Figure 4. Boxplot distribution of average temperature, standard deviation and coefficient of variation from 14 recorded flights in Milwaukee, WI.
Figure 5. Standard deviation distributions for six land use types at hour 9, 12, 15 and 17 at both the MU and UTEP locations.
Figure 6. Spatial distribution of temperature from a flight recorded on 11 July 2018 (a) and zoomed-in spatial distribution of temperature for the concrete parking lot from a flight recorded on 11 July 2018 (b). The hotter surfaces (red) in the right image are parked cars and the cooler surfaces (blue) are heat shadows visible after parked cars leave.
Figure 7. Spatial distribution of temperature (a), albedo (b), NDVI (c) and ATI (d) for a flight recorded on 11 August 2018.
Figure 8. Surface temperature data plotted against albedo from a flight recorded on 11 August 2018.
Figure 9. Temperature prediction models of six surface types: grass (a), canopy cover (b), parking lot (c), sidewalk (d), composite rooftop (e) and road (f). The UTEP datapoint is fitted in green. Note the 95% confidence intervals are in blue.
Figure 10. Gaussian peak models of six surface types: grass (a), canopy cover (b), parking lot (c), sidewalk (d), composite rooftop (e) and road (f). Note that GRS = grass; CPY = canopy; PL = parking lot; SW = sidewalk; RTC = composite rooftop; AT = air temperature; SR = solar radiation; t = time.
13 pages, 5761 KiB  
Article
Canopy and Terrain Height Retrievals with ICESat-2: A First Look
by Amy L. Neuenschwander and Lori A. Magruder
Remote Sens. 2019, 11(14), 1721; https://doi.org/10.3390/rs11141721 - 20 Jul 2019
Cited by 174 | Viewed by 9142
Abstract
NASA’s Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) launched in fall 2018 and has since collected continuous elevation data over the Earth’s surface. The primary scientific objective is to measure the cryosphere for studies related to land ice and sea ice characteristics. The [...] Read more.
NASA’s Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) launched in fall 2018 and has since collected continuous elevation data over the Earth’s surface. The primary scientific objective is to measure the cryosphere for studies related to land ice and sea ice characteristics. The vantage point from space, however, provides the opportunity to measure global surfaces including oceans, land, and vegetation. The ICESat-2 mission has dedicated products for the represented surface types, including an along-track elevation profile of terrain and canopy heights (ATL08). This study presents the first look at the ATL08 product and the quantitative assessment of the canopy and terrain height retrievals as compared to airborne lidar data. The study also provides qualitative examples of ICESat-2 observations from selected ecosystems to highlight the broad capability of the satellite for vegetation applications. Analysis of the mission’s preliminary ATL08 data product accuracy using an ICESat-2 transect over a vegetated region of Finland indicates a 5 m offset in geolocation knowledge (horizontal accuracy), well within the 6.5 m mission requirement. The vertical RMSEs for the terrain and canopy height retrievals for one transect are 0.85 m and 3.2 m, respectively. Full article
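A hedged sketch of the 100 m segment comparison implied by the validation described above: airborne lidar ground heights are binned along track, the segment median is taken as the reference, and an RMSE is computed against (here synthetic) ATL08 terrain heights. All values and the 0.85 m noise level are placeholders.

```python
import numpy as np

def segment_median(along_track, height, seg_len=100.0):
    """Median height of points grouped into fixed-length along-track segments."""
    seg_id = np.floor(along_track / seg_len).astype(int)
    return np.array([np.median(height[seg_id == s]) for s in np.unique(seg_id)])

rng = np.random.default_rng(4)
x = rng.uniform(0, 2000, 50000)                      # airborne lidar along-track positions (m)
ground = 150 + 5 * np.sin(x / 300) + rng.normal(0, 0.3, x.size)

truth = segment_median(x, ground)                    # airborne reference per 100 m segment
atl08 = truth + rng.normal(0, 0.85, truth.size)      # synthetic ATL08 terrain heights
rmse = np.sqrt(np.mean((atl08 - truth) ** 2))
print(round(rmse, 2), "m")
```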
Show Figures

Graphical abstract
Figure 1. The distribution of gross primary production and total carbon storage as a function of latitude (copyright [3]).
Figure 2. Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) beam configuration on the Earth's surface.
Figure 3. Conceptual illustration depicting photon sampling over vegetation.
Figure 4. Top panel: profile of 25 km of ATL03 photons from the strong beam (GT1L) over boreal forest northeast of Edmonton, Alberta. These data were collected on 21 February 2019 during the night, as evidenced by the relatively low number of noise photons above the surface signal photons. The bottom panel is the result of the ATL08 algorithm surface classification labels for ground photons (red), intermediate canopy photons (blue), and top of canopy photons (green). The panel to the right depicts the location of the ICESat-2 track in Google Earth for context [14,16].
Figure 5. Deforestation in Brazil. The top panel shows approximately 50 km of ATL03 photons from the strong beam of track 2 (GT2L) over tropical forest in Brazil, acquired 19 March 2019. The bottom panel is the result of the ATL08 algorithm surface classification labels for ground photons (red), intermediate canopy photons (blue), and top of canopy photons (green). The panel to the right depicts the location of the ICESat-2 track in Google Earth for context [14,22].
Figure 6. Canopy height mapping of savanna/woodland vegetation. The top panel shows approximately 14 km of ATL03 photons from the strong beam of track 3 (GT3L) over Mopane woodlands in Botswana, acquired on 3 February 2019. The bottom panel is the result of the ATL08 algorithm surface classification labels for ground photons (red), intermediate canopy photons (blue), and top of canopy photons (green). The panel to the right shows the ICESat-2 tracks across the landscape for context [14,25].
Figure 7. Profile of ICESat-2 data over a mangrove forest stand in Mexico highlighting a unique characteristic of photons from specular returns [14,26]. In some instances, these specular returns can be used to identify standing water or potentially saturated soils.
Figure 8. Geolocation accuracy plot indicating the absolute accuracy of the ICESat-2 data compared to an airborne lidar survey from Finland.
Figure 9. A subset of the ATL08 terrain height validation results from track ATL08_20181124001516_08610105 over Finland. The airborne lidar point cloud (truth) is shown as grey dots where the canopy and terrain points have been isolated [14,27]. The median terrain height derived from the airborne lidar data is shown in yellow, and the ATL08 best fit terrain height is shown in green.
Figure 10. Scatterplot of ATL08 median terrain height compared to airborne lidar median terrain height within the 100 m segment for the full 200 km of analyzed data.
Figure 11. A subset of the ATL08 canopy height validation results from track ATL08_20181124001516_08610105 over Finland [14,27]. The airborne lidar point cloud (truth) is shown as grey dots where the canopy and terrain points have been isolated as light and dark shades of grey. The ATL08 canopy height (green dot) and the airborne lidar canopy height (blue dot) are plotted at the 100 m step size.
Figure 12. Scatterplot of ATL08 absolute canopy height compared to airborne lidar absolute top of canopy height within the 100 m segment for the full 200 km of analyzed data.
11 pages, 4546 KiB  
Technical Note
LEO to GEO-SAR Interferences: Modelling and Performance Evaluation
by Antonio Leanza, Marco Manzoni, Andrea Monti-Guarnieri and Marco di Clemente
Remote Sens. 2019, 11(14), 1720; https://doi.org/10.3390/rs11141720 - 20 Jul 2019
Cited by 12 | Viewed by 4551
Abstract
This paper proposes a statistical model to evaluate the impact of the signal backscattered by low Earth orbiting (LEO) synthetic aperture radar (SAR) and received by GEO-stationary orbiting SAR. The model properly accounts for the bistatic backscatter, the number of LEO-SAR satellites and [...] Read more.
This paper proposes a statistical model to evaluate the impact of the signal backscattered by low Earth orbiting (LEO) synthetic aperture radar (SAR) and received by geostationary orbiting SAR. The model properly accounts for the bistatic backscatter, the number of LEO-SAR satellites, and their duty cycles. The presence of many sun-synchronous, dawn-dusk satellites creates a 24 h periodic pattern in interference that should be considered in the acquisition plan of future geostationary SAR. The model, implemented in a numerical simulator, also allows the prediction of performance in future scenarios with many LEO-SARs. Examples and evaluations are made here for X band. Full article
(This article belongs to the Special Issue Radio Frequency Interference (RFI) in Microwave Remote Sensing)
Show Figures

Graphical abstract
Figure 1. Geometry of a geostationary synthetic aperture radar (SAR), "Geo", affected by interferences due to the signal backscattered from a set of low Earth orbiting (LEO) SARs, L1 and Li.
Figure 2. Directivity (in dB) for a 5 m diameter, X band antenna, placed on a GEO-SAR satellite and pointing at Central Italy.
Figure 3. Models for the bistatic backscatter from the i-th LEO-SAR to the GEO-SAR. (a) Geometry, as seen from the target position: notice the change of backscatter plane between LEO and GEO directions. (b) Cases of forward and back-scattering. S_iL and S_iG represent the incoming signal from the LEO-SAR and the outgoing signal to the GEO-SAR.
Figure 4. Bistatic backscatter evaluated as a function of the geographic position of the LEO-SAR and observed by a GEO-SAR located at 10° E longitude (red cross) according to [10,11], for X-band, 'soil'. Upper four panels: polar orbiting LEO-SAR. Lower panels: nearly equatorial orbiting LEO-SAR.
Figure 5. Estimation of the probability density of LEO-SAR visibility in GEO-SAR: Earth view (left) and mean number of satellites (right), as a function of time, derived from analysis of Table 1.
Figure 6. Estimation of the radio frequency interference (RFI) power, as a function of the hour of the day, for three different cases: a SS/DD satellite with near polar orbit (left), two satellites with low inclined orbit (center), and a set of five satellites with SS orbits in three different planes (right).
Figure 7. Hourly estimation of the mean LEO-to-GEO RFI power, due to the 30 LEO-SARs in Table 1.
Figure 8. (Left) Mean noise equivalent sigma zero (NESZ), computed by assuming as noise the bistatic backscatter from the present scenario of 30 LEO-SARs in Table 1, as a function of the hour of the day and the integration time. (Right) Variation of the NESZ predicted as a function of the number of X-band LEO-SARs.
Figure 9. Azimuth resolution achievable by an X band GEO-SAR with the maximum allowed eccentricity of 0.002 [14], as a function of the hour of the day since the perigee pass (t = 0), computed for different subapertures. The shaded areas mark times when bad performance is achieved: it is recommended that they match the periodical peaks in Figure 7.
22 pages, 6774 KiB  
Article
Tracking the Land Use/Land Cover Change in an Area with Underground Mining and Reforestation via Continuous Landsat Classification
by Jiaxin Mi, Yongjun Yang, Shaoliang Zhang, Shi An, Huping Hou, Yifei Hua and Fuyao Chen
Remote Sens. 2019, 11(14), 1719; https://doi.org/10.3390/rs11141719 - 20 Jul 2019
Cited by 45 | Viewed by 6638
Abstract
Understanding changes in land use/land cover (LULC) is important for environmental assessment and land management. However, tracking the dynamics of LULC has proved difficult, especially in large-scale underground mining areas with extensive LULC heterogeneity and a history of multiple disturbances. Additional [...] Read more.
Understanding changes in land use/land cover (LULC) is important for environmental assessment and land management. However, tracking the dynamics of LULC has proved difficult, especially in large-scale underground mining areas with extensive LULC heterogeneity and a history of multiple disturbances. Additional research related to the methods in this field is still needed. In this study, we tracked the LULC change between 1987 and 2017 in the Nanjiao mining area, Shanxi Province, China, where years of underground mining and reforestation projects have occurred, via a random forest classifier and continuous Landsat imagery. We applied a Savitzky–Golay filter and a normalized difference vegetation index (NDVI)-based approach to detect the temporal and spatial change, respectively. The accuracy assessment shows that the random forest classifier has a good performance in this heterogeneous area, with an accuracy ranging from 81.92% to 86.6%, which is also higher than that via support vector machine (SVM), neural network (NN), and maximum likelihood (ML) algorithms. LULC classification results reveal that cultivated forest in the mining area increased significantly after 2004, while the spatial extent of natural forest, buildings, and farmland decreased significantly after 2007. The areas where vegetation was significantly reduced were mainly due to the transformation of natural forest and shrubs into grasslands and bare lands, respectively, whereas the areas with an obvious increase in NDVI were mainly due to the conversion of grasslands and buildings into cultivated forest, especially when villages were abandoned after mining subsidence. A partial correlation analysis demonstrated that the extent of LULC change was significantly related to coal production and reforestation, which indicated the effects of underground mining and reforestation projects on LULC changes. This study suggests that continuous Landsat classification via a random forest classifier could be effective in monitoring the long-term dynamics of LULC changes, and provide crucial information and data for the understanding of the driving forces of LULC change, environmental impact assessment, and ecological protection planning in large-scale mining areas. Full article
(This article belongs to the Special Issue Remote Sensing of Human-Environment Interactions)
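A minimal sketch of training a random forest on per-pixel Landsat band values and reporting overall accuracy, in the spirit of the classifier comparison described above; the features, labels, and hyperparameters are placeholders, not the study's training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 1200
X = rng.uniform(0, 0.6, size=(n, 6))                 # six Landsat surface-reflectance bands
y = (X[:, 3] - X[:, 2] > 0.15).astype(int)           # toy rule: "vegetated" vs "other"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 3))
```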
Show Figures
Figure 1. Location of the Nanjiao mining area and the distribution of the subsidence area. The boxes with years in legend represent the durations of mining activities occurring in each area.
Figure 2. Coal production and reforestation from 1987 to 2017. EP1 (Ecological project 1): Three-North Shelter Forest Program. EP2 (Ecological project 2): Beijing-Tianjin Sandstorm Source Control Project.
Figure 3. A comparison of remote sensing images and field survey photos. Landsat image was composited with false-color (RGB: Band 432).
Figure 4. Training sample information: (a) Distribution of training samples; (b) spectral characteristics of the samples; (c) annual changes in the normalized difference vegetation index (NDVI) of samples. Note: The grey area in (c) represents the duration of images selected for classification.
Figure 5. Workflow of the LULC classification and change detection.
Figure 6. A comparison of the classification accuracy via four algorithms. RD: Random forest; SVM: Support vector machine; NN: Neural network; ML: Maximum likelihood.
Figure 7. A comparison of the overall accuracies using Random forest classification, the Fine Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) map, and the 30-m resolution Global Land Cover Dataset (GLOBELAND 30). (a) Landsat image with false-color composite; (b) Random forest classification; (c) FROM-GLC; and (d) GLOBELAND 30.
Figure 8. Continuous LULC classification between 1987 and 2017.
Figure 9. Land use changes in the mining area from 1987 to 2017. Black line: The trend line of the actual area of the land use. Red line: The trend line after Savitzky-Golay filter setting ten windows. (a) Cultivated forest; (b) natural forest; (c) mixed forest; (d) shrubs; (e) high-coverage grassland; (f) low-coverage grassland; (g) farmland; (h) buildings; (i) bare land.
Figure 10. The distribution of NDVI in 1987, 1997, 2007, and 2017.
Figure 11. The distribution of difference-NDVI (DNDVI) which was calculated by the NDVI in 1987, 1997, 2007, and 2017.
Figure 12. The fitting curve of land use area and coal production, as well as reforestation area. Red line and black line are linear fit curve and non-linear fit curve, respectively.
13 pages, 3454 KiB  
Article
Remote Sensing of Lake Ice Phenology across a Range of Lakes Sizes, ME, USA
by Shuai Zhang and Tamlin M. Pavelsky
Remote Sens. 2019, 11(14), 1718; https://doi.org/10.3390/rs11141718 - 20 Jul 2019
Cited by 32 | Viewed by 4655
Abstract
Remote sensing of ice phenology for small lakes is hindered by a lack of satellite observations with both high temporal and spatial resolutions. By merging multi-source satellite data over individual lakes, we present a new algorithm that successfully estimates ice freeze and thaw [...] Read more.
Remote sensing of ice phenology for small lakes is hindered by a lack of satellite observations with both high temporal and spatial resolutions. By merging multi-source satellite data over individual lakes, we present a new algorithm that successfully estimates ice freeze and thaw timing for lakes with surface areas as small as 0.13 km² and obtains consistent results across a range of lake sizes. We have developed an approach for classifying ice pixels based on the red reflectance band of Moderate Resolution Imaging Spectroradiometer (MODIS) imagery, with a threshold calibrated against ice fraction from Landsat Fmask over each lake. Using a filter derived from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) surface air temperature product, we removed outliers in the time series of lake ice fraction. The time series of lake ice fraction was then applied to identify lake ice breakup and freezeup dates. Validation results from over 296 lakes in Maine indicate that the satellite-based lake ice timing detection algorithm performs well, with a mean absolute error (MAE) of 5.54 days for breakup dates and 7.31 days for freezeup dates. This algorithm can be applied to lakes worldwide, including the nearly two million lakes with surface areas between 0.1 and 1 km². Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
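To make the pipeline concrete, the sketch below illustrates, in Python, the kind of logic the abstract describes: derive a lake ice fraction from a red-band threshold, drop temperature-implausible observations, and read breakup and freezeup dates off the filtered series. It is a minimal illustration with synthetic data; the reflectance threshold, the 0.5 ice-fraction cut, and the helper names are assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of threshold-based ice classification,
# temperature filtering, and phenology-date extraction. All values are synthetic.
import numpy as np

def ice_fraction(red_reflectance, threshold=0.1):
    """Fraction of lake pixels whose MODIS red reflectance exceeds a per-lake
    threshold (calibrated against Landsat Fmask ice fraction in the paper)."""
    return np.mean(red_reflectance > threshold, axis=1)

def temperature_filter(doy, frac, air_temp_c, warm=5.0):
    """Drop likely false ice detections (e.g., clouds) on days well above freezing."""
    keep = ~((frac > 0.5) & (air_temp_c > warm))
    return doy[keep], frac[keep]

def phenology_dates(doy, frac, cut=0.5):
    """Breakup: last icy day in the first half of the year; freezeup: first icy day in the second half."""
    icy = frac >= cut
    spring = icy & (doy <= 182)
    autumn = icy & (doy > 182)
    return doy[spring].max(), doy[autumn].min()

# Synthetic one-year series: ice until day ~110, ice again from day ~341.
doy = np.arange(1, 366)
frac = np.where(doy < 110, 0.9, np.where(doy > 340, 0.8, 0.05))
temp = -10 + 25 * np.sin((doy - 80) / 365 * 2 * np.pi)   # crude seasonal air temperature (deg C)
doy_f, frac_f = temperature_filter(doy, frac, temp)
print(phenology_dates(doy_f, frac_f))   # breakup DOY ~109, freezeup DOY ~341
```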
Figure 1: Locations of study lakes in Maine.
Figure 2: Flow chart of the algorithm.
Figure 3: Mean reflectance of the near infrared (NIR) and red bands over the surface of Silver Lake in Maine. Two periods of water and ice are labelled.
Figure 4: Time series of lake ice fraction and temperature over Silver Lake (a) before and (b) after filtering.
Figure 5: Calibration of MODIS thresholds against lake ice fraction from Landsat Fmask. (a) The mean absolute distance (MAD) between MODIS and Fmask lake ice fraction calculated using the optimal thresholds, with lakes divided into three groups by size; (b) the distribution of optimal thresholds selected for all lakes.
Figure 6: Time series of lake ice breakup and freezeup dates.
Figure 7: Validation results for breakup dates.
Figure 8: Validation results for freezeup dates. Freezeup dates later than day 365 (or 366 in a leap year) mean that freezeup was detected in the following year.
Figure 9: Semivariogram of breakup and freezeup dates in 2017.
22 pages, 1063 KiB  
Article
Reflectance Properties of Hemiboreal Mixed Forest Canopies with Focus on Red Edge and Near Infrared Spectral Regions
by Lea Hallik, Andres Kuusk, Mait Lang and Joel Kuusk
Remote Sens. 2019, 11(14), 1717; https://doi.org/10.3390/rs11141717 - 20 Jul 2019
Cited by 16 | Viewed by 4468
Abstract
This study presents the results of airborne top-of-canopy measurements of reflectance spectra in the spectral domain of 350–1050 nm over a hemiboreal mixed forest. We investigated spectral transformations that were originally designed for utilization at very different spectral resolutions. We found that the estimates of the red edge inflection point by two methods—the linear four-point interpolation approach (S2REP) and searching the maximum of the first derivative spectrum (Dmax) according to the mathematical definition of the red edge inflection point—were well related to each other, but S2REP produced a continuously shifting location of the red edge inflection point while Dmax resulted in a discrete variable with peak jumps between fixed locations around 717 nm and 727 nm for forest canopy (the third maximum at 700 nm appeared only in clearcut areas). We found that, with medium-high spectral resolution (bandwidth 10 nm, spectral step 3.3 nm), the in-filling of the O2-A Fraunhofer line (Farea) was very strongly related to the single band reflectance factor in the NIR spectral region (ρ = 0.91, p < 0.001) and not related to the Photochemical Reflectance Index (PRI). Stemwood volume, basal area and tree height of the dominant layer were negatively correlated with reflectance factors in both the visible and NIR spectral regions due to the increase in the roughness of the canopy surface and the amount of shade. Forest age was best related to single band reflectance in the NIR region (ρ = −0.48, p < 0.001), and the best predictor for allometric LAI was the single band reflectance in the red spectral region (ρ = −0.52, p < 0.001), outperforming all studied vegetation indices. This suggests that Sentinel-2 MSI bands with higher spatial resolution (10 m pixel size) could be more beneficial than increased spectral resolution for monitoring forest LAI and age. The new index R751/R736, originally developed for leaf chlorophyll content estimation, also performed well at the canopy level and was mainly influenced by the location of the red edge inflection point (ρ = 0.99, p < 0.001), providing similar information in a simpler mathematical form and using a narrow spectral region very close to the O2-A Fraunhofer line. Full article
(This article belongs to the Special Issue Remote Sensing of Boreal Forests)
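The two red edge inflection point estimators compared in the abstract can be sketched in a few lines. The snippet below assumes the common Sentinel-2 four-point formulation (bands near 665, 705, 740 and 783 nm) and a synthetic sigmoid-shaped canopy spectrum; it illustrates the two definitions rather than reproducing the authors' processing code.

```python
# Minimal sketch contrasting S2REP-style interpolation with the first-derivative
# maximum (Dmax). The spectrum and band positions are illustrative assumptions.
import numpy as np

def s2rep(r665, r705, r740, r783):
    """Four-point linear interpolation REP (Guyot & Baret form used for Sentinel-2)."""
    return 705.0 + 35.0 * ((r665 + r783) / 2.0 - r705) / (r740 - r705)

def dmax_rep(wavelengths, reflectance):
    """REP as the wavelength of the maximum of the first derivative spectrum."""
    deriv = np.gradient(reflectance, wavelengths)
    red_edge = (wavelengths >= 680) & (wavelengths <= 760)
    return wavelengths[red_edge][np.argmax(deriv[red_edge])]

# Synthetic canopy-like spectrum: low red reflectance rising sigmoidally to a NIR plateau.
wl = np.arange(400.0, 1001.0, 3.3)                        # ~3.3 nm sampling, as in the paper
refl = 0.03 + 0.42 / (1.0 + np.exp(-(wl - 718.0) / 9.0))  # inflection placed near 718 nm

def band(w):
    """Reflectance of the band nearest to wavelength w."""
    return refl[np.argmin(np.abs(wl - w))]

print("S2REP:", round(s2rep(band(665), band(705), band(740), band(783)), 1), "nm")
print("Dmax :", round(float(dmax_rep(wl, refl)), 1), "nm")
```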
Graphical abstract
Figure 1: Flight route of the helicopter measurements and forest stand borders. The background is the CHRIS scene DD40 band 12 image of 27 July 2011.
Figure 2: Data processing steps for calculating reflectance factors [32].
Figure 3: (A) Examples of reflectance spectra from two birch-dominated stands (50 and 90 years old) and a pine-dominated stand. The spectral resolution (FWHM) is 10 nm and the spectral sampling interval 3.3 nm. Vertical lines denote the spectral bands selected for further analysis as single band reflectance factors and for calculating vegetation indices. (B) The first derivative of a reflectance spectrum. Vertical lines denote the locations of peaks for red edge inflection points around 700, 717 and 727 nm. (C) An example of fitting a continuum for the estimation of the in-filling of the O2-A Fraunhofer line around 760 nm.
Figure 4: The new simple ratio vegetation index R751/R736 in relation to: (A) the red edge inflection point estimate S2REP (ρ = 0.99, p < 0.001); and (B) NDVI705 (ρ = 0.95, p < 0.001). (C) Relationship between the same vegetation index in SR and NDVI forms (using spectral bands at 736 nm and 751 nm), ρ = 1, p < 0.001.
Figure 5: Comparison of two methods for estimating the red-edge inflection point. The location estimated according to its mathematical definition as the location of the maximum of the first derivative is shown on the x-axis, and the result of the linear four-point interpolation approach (S2REP) is displayed on the y-axis.
Figure 6: The relationship between the Photochemical Reflectance Index (PRI) and the red-edge inflection point calculated by the linear four-point interpolation approach (S2REP) (ρ = 0.60, p < 0.001).
Figure 7: Single band reflectance factor at the red-edge spectral region, R705, in relation to: (A) stand height H (ρ = −0.47, p < 0.001); and (B) stemwood volume of the dominant layer, M1 (ρ = −0.57, p < 0.001).
14 pages, 4630 KiB  
Article
Dasymetric Mapping Using UAV High Resolution 3D Data within Urban Areas
by Carla Rebelo, António Manuel Rodrigues and José António Tenedório
Remote Sens. 2019, 11(14), 1716; https://doi.org/10.3390/rs11141716 - 19 Jul 2019
Cited by 8 | Viewed by 4176
Abstract
Multi-temporal analysis of census small-area microdata is hampered by the fact that census tract shapes do not often coincide between census exercises. Dasymetric mapping techniques provide a workaround that is nonetheless highly dependent on the quality of ancillary data. The objectives of this work are to: (1) Compare the use of three spatial techniques for the estimation of population according to census tracts: Areal interpolation and dasymetric mapping using control data—building block area (2D) and volume (3D); (2) demonstrate the potential of unmanned aerial vehicle (UAV) technology for the acquisition of control data; (3) perform a sensitivity analysis using Monte Carlo simulations showing the effect of changes in building block volume (3D information) in population estimates. The control data were extracted by a (semi)-automatic solution—3DEBP (3D extraction building parameters) developed using free open source software (FOSS) tools. The results highlight the relevance of 3D for the dasymetric mapping exercise, especially if the variations in height between building blocks are significant. Using low-cost UAV backed systems with a FOSS-only computing framework also proved to be a competent solution with a large scope of potential applications. Full article
(This article belongs to the Special Issue Remote Sensing for Urban Morphology)
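As a rough illustration of the volume-based dasymetric idea, the sketch below redistributes source-tract populations to target tracts in proportion to UAV-derived building volume in their intersections. The intersection volumes and populations are invented; areal interpolation would use intersection areas, and the 2D variant would use building footprint areas, in place of volume.

```python
# Minimal sketch (not the authors' 3DEBP tool) of volume-based dasymetric reallocation.
import numpy as np

def dasymetric_reallocate(source_pop, volume):
    """
    source_pop : (S,) population of source census tracts
    volume     : (S, T) building volume of the intersection between source tract s
                 and target tract t (e.g., from UAV-derived 3D data)
    returns    : (T,) estimated population of target tracts
    """
    weights = volume / volume.sum(axis=1, keepdims=True)   # per-source shares
    return weights.T @ source_pop

source_pop = np.array([1200.0, 800.0])
# Two source tracts intersecting three target tracts (intersection volumes in m^3):
volume = np.array([[50_000.0, 30_000.0,      0.0],
                   [     0.0, 20_000.0, 60_000.0]])
print(dasymetric_reallocate(source_pop, volume))  # roughly [750, 650, 600]
```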
Graphical abstract
Figure 1: Study area with census tracts and building footprints.
Figure 2: Census tracts: source (Geographic Base for Information Referencing—BGRI 2001) and target (BGRI 2011).
Figure 3: Trajectory flight lines performed by the Swinglet CAM.
Figure 4: Summary flowchart.
Figure 5: Methodological approach to building block volume extraction (3DEBP), based on a 3D point cloud.
Figure 6: Points selected (eave roof and ground) for each block in area A.
Figure 7: Sensitivity analysis performed using blocks (V1, V2, V3 and V4) with changing block height.
Figure 8: 2D/3D comparison between reference data and data estimated from UAV: (a) building block footprints (red: reference dataset; black: extracted from the UAV dataset); (b) 3D block model obtained from the reference dataset; (c) 3D block model obtained from UAV data; and (d) combination of the two 3D models represented in (b) and (c).
Figure 9: Results of the Monte Carlo simulation.
14 pages, 2055 KiB  
Article
Using Solar-Induced Chlorophyll Fluorescence Observed by OCO-2 to Predict Autumn Crop Production in China
by Jin Wei, Xuguang Tang, Qing Gu, Min Wang, Mingguo Ma and Xujun Han
Remote Sens. 2019, 11(14), 1715; https://doi.org/10.3390/rs11141715 - 19 Jul 2019
Cited by 22 | Viewed by 4641
Abstract
The remote sensing of solar-induced chlorophyll fluorescence (SIF) has attracted considerable attention as a new way to monitor vegetation photosynthesis. Previous studies have revealed the close correlation between SIF and terrestrial gross primary productivity (GPP), and have used SIF to estimate vegetation GPP. This study investigated the relationship between the Orbiting Carbon Observatory-2 (OCO-2) SIF products at two retrieval bands (SIF757, SIF771) and the autumn crop production in China during the summer of 2015 on different timescales. Subsequently, we evaluated the performance in estimating the autumn crop production of 2016 by using the optimal model developed for 2015. In addition, the OCO-2 SIF was compared with the moderate resolution imaging spectroradiometer (MODIS) vegetation indices (VIs) (normalized difference vegetation index, NDVI; enhanced vegetation index, EVI) for predicting crop production. All the remotely sensed products exhibited the strongest correlation with autumn crop production in July. The OCO-2 SIF757 estimated autumn crop production best (R2 = 0.678, p < 0.01; RMSE = 748.901 ten kilotons; MAE = 567.629 ten kilotons). SIF monitored the crop dynamics better than the VIs, although the performances of the VIs were similar to that of SIF. The estimation accuracy was limited by the spatial resolution and discreteness of the OCO-2 SIF products. Our findings demonstrate that SIF is a feasible approach for crop production estimation and is not inferior to the VIs, and suggest that accurate autumn crop production forecasts using the SIF-based model can be obtained one to two months before the harvest. Furthermore, the proposed method can be widely applied with the development of satellite-based SIF observation technology. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
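The estimation scheme is essentially a calibrated linear model, which can be sketched as follows; the province-level values are synthetic and the single-predictor ordinary least squares fit is only a stand-in for the authors' exact regression setup.

```python
# Minimal sketch: fit July SIF757 vs. production for one year, predict the next.
import numpy as np

rng = np.random.default_rng(0)
sif_2015  = rng.uniform(0.2, 1.2, 25)                        # July SIF757 per province
prod_2015 = 900 + 2400 * sif_2015 + rng.normal(0, 120, 25)   # production (ten kilotons)

slope, intercept = np.polyfit(sif_2015, prod_2015, 1)        # calibrate on 2015

sif_2016  = rng.uniform(0.2, 1.2, 25)
prod_2016 = 900 + 2400 * sif_2016 + rng.normal(0, 120, 25)
pred_2016 = slope * sif_2016 + intercept                     # forecast 1-2 months pre-harvest

rmse = np.sqrt(np.mean((pred_2016 - prod_2016) ** 2))
mae  = np.mean(np.abs(pred_2016 - prod_2016))
print(f"RMSE = {rmse:.1f}, MAE = {mae:.1f} (ten kilotons)")
```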
Graphical abstract
Figure 1: The trajectory of the Orbiting Carbon Observatory-2 (OCO-2) over cropland from June to August 2015 in China. The land cover map is from the moderate resolution imaging spectroradiometer (MODIS) land cover type product (MCD12Q1) based on the IGBP (International Geosphere-Biosphere Programme) classification scheme.
Figure 2: The correlations between SIF757', SIF771', EVI', NDVI' and the government's autumn crop production statistics (ten kilotons) in China during June to August 2015 at the monthly and seasonal scales. The primed parameters denote SIF757' (W m−2 sr−1 μm−1), SIF771' (W m−2 sr−1 μm−1), EVI', and NDVI', respectively.
Figure 3: The comparisons between the autumn crop production estimated by SIF757', SIF771', EVI', NDVI' and the government's autumn crop production statistics in China in 2016 using July data. Production_obs (ten kilotons) and Production_est (ten kilotons) represent the observed and estimated production, respectively.
Figure 4: The spatial distribution of the government's autumn crop production statistics, the percentage deviation of the autumn crop production estimated using SIF757 in 2016, and the percentage of cropland covered by solar-induced chlorophyll fluorescence (SIF) data in each province (a); and the topography of Chinese cropland (b).
3 pages, 172 KiB  
Editorial
Editorial for the Special Issue “Frontiers in Spectral Imaging and 3D Technologies for Geospatial Solutions”
by Eija Honkavaara, Konstantinos Karantzalos, Xinlian Liang, Erica Nocerino, Ilkka Pölönen and Petri Rönnholm
Remote Sens. 2019, 11(14), 1714; https://doi.org/10.3390/rs11141714 - 19 Jul 2019
Cited by 1 | Viewed by 2676
Abstract
This Special Issue hosts papers on the integrated use of spectral imaging and 3D technologies in remote sensing, including novel sensors, evolving machine learning technologies for data analysis, and the utilization of these technologies in a variety of geospatial applications. The presented studies showed improved results when multimodal data were used in object analysis. Full article
24 pages, 15856 KiB  
Article
Comparing Deep Neural Networks, Ensemble Classifiers, and Support Vector Machine Algorithms for Object-Based Urban Land Use/Land Cover Classification
by Shahab Eddin Jozdani, Brian Alan Johnson and Dongmei Chen
Remote Sens. 2019, 11(14), 1713; https://doi.org/10.3390/rs11141713 - 19 Jul 2019
Cited by 148 | Viewed by 10966
Abstract
With the advent of high-spatial resolution (HSR) satellite imagery, urban land use/land cover (LULC) mapping has become one of the most popular applications in remote sensing. Due to the importance of context information (e.g., size/shape/texture) for classifying urban LULC features, Geographic Object-Based Image Analysis (GEOBIA) techniques are commonly employed for mapping urban areas. Regardless of adopting a pixel- or object-based framework, the selection of a suitable classifier is of critical importance for urban mapping. The popularity of deep learning (DL) (or deep neural networks (DNNs)) for image classification has recently skyrocketed, but it is still arguable if, or to what extent, DL methods can outperform other state-of-the-art ensemble and/or Support Vector Machine (SVM) algorithms in the context of urban LULC classification using GEOBIA. In this study, we carried out an experimental comparison among different architectures of DNNs (i.e., regular deep multilayer perceptron (MLP), regular autoencoder (RAE), sparse autoencoder (SAE), variational autoencoder (VAE), and convolutional neural networks (CNN)), common ensemble algorithms (Random Forests (RF), Bagging Trees (BT), Gradient Boosting Trees (GB), and Extreme Gradient Boosting (XGB)), and SVM to investigate their potential for urban mapping using a GEOBIA approach. We tested the classifiers on two RS images (with spatial resolutions of 30 cm and 50 cm). Based on our experiments, we drew three main conclusions: First, we found that the MLP model was the most accurate classifier. Second, unsupervised pretraining with the use of autoencoders led to no improvement in the classification result. In addition, the small difference in the classification accuracies of MLP from those of other models like SVM, GB, and XGB demonstrated that other state-of-the-art machine learning classifiers are still versatile enough to handle the mapping of complex landscapes. Finally, the experiments showed that the integration of CNN and GEOBIA could not lead to more accurate results than the other classifiers applied. Full article
(This article belongs to the Section Urban Remote Sensing)
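A comparison of this kind can be reproduced in outline with scikit-learn, as sketched below on a synthetic object-level feature table; the hyperparameters and the feature set are placeholders, not the configurations used in the study.

```python
# Minimal sketch of comparing an MLP, ensemble classifiers, and an SVM on
# object-level (GEOBIA-style) features. The synthetic feature table stands in
# for per-segment spectral/shape/texture attributes.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                                       random_state=42)),
    "RF":  RandomForestClassifier(n_estimators=300, random_state=42),
    "GB":  GradientBoostingClassifier(random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")),
}
for name, model in models.items():
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))  # overall accuracy
```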
Figure 1: 30 cm aerial image covering an urban area in Calhoun, Illinois, USA.
Figure 2: 50 cm WorldView-2 image covering a subregion of the city of Kingston, Ontario, Canada.
Figure 3: Subsets of the classification maps generated by the regular deep multilayer perceptron (MLP), Gradient Boosting Trees (GB), convolutional neural network (CNN), and support vector machine (SVM) models for the 30 cm image.
Figure 4: Subsets of the classification maps generated by the MLP, XGB, CNN, and SVM models for the 50 cm image.
Figure A1: Graphical representation of an MLP model with two hidden layers.
Figure A2: Graphical representation of a regular autoencoder model with a hidden layer.
Figure A3: Graphical representation of an SAE model with three hidden layers. Note: gray-colored circles are neurons deactivated by the regularization term.
Figure A4: Confusion matrix derived from the MLP model for the 30 cm image.
Figure A5: Confusion matrix derived from the GB model for the 30 cm image.
Figure A6: Confusion matrix of the SVM model for the 30 cm image.
Figure A7: Confusion matrix of the CNN model for the 30 cm image.
Figure A8: Confusion matrix of the MLP model for the 50 cm image.
Figure A9: Confusion matrix of the XGB model for the 50 cm image.
Figure A10: Confusion matrix of the SVM model for the 50 cm image.
Figure A11: Confusion matrix of the CNN model for the 50 cm image.
Figure A12: Classification map generated by combining the classification maps of the MLP model for the 30 cm image.
Figure A13: Classification map generated by combining the classification maps of the MLP model for the 50 cm image.
16 pages, 2017 KiB  
Article
Assessing the Impact of the Built-Up Environment on Nighttime Lights in China
by Cheng Wang, Haiming Qin, Kaiguang Zhao, Pinliang Dong, Xuebo Yang, Guoqing Zhou and Xiaohuan Xi
Remote Sens. 2019, 11(14), 1712; https://doi.org/10.3390/rs11141712 - 19 Jul 2019
Cited by 9 | Viewed by 3899
Abstract
Figuring out the effect of the built-up environment on artificial light at night is essential for better understanding nighttime luminosity from both socioeconomic and ecological perspectives. However, there are few studies linking artificial surface properties to nighttime light (NTL). This study uses a statistical method to investigate the effects of built-up environments on nighttime brightness and their variation with building height and regional economic development level. First, we extracted footprint-level target heights from Geoscience Laser Altimeter System (GLAS) waveform light detection and ranging (LiDAR) data. Then, we proposed a set of built-up environment properties, including building coverage, vegetation fraction, building height, and surface-area index, and extracted these properties from GLAS-derived height, GlobeLand30 land-cover data, and DMSP/OLS radiance-calibrated NTL data. Next, the effects of non-building areas on NTL data were removed based on a supervised method. Finally, linear regression analyses were conducted to analyze the relationships between nighttime lights and built-up environment properties. Results showed that building coverage and vegetation fraction have weak correlations with nighttime lights (R2 < 0.2), building height has a moderate correlation with nighttime lights (R2 = 0.48), and surface-area index has a significant correlation with nighttime lights (R2 = 0.64). The results suggest that surface-area index is a more reasonable measure for estimating the number and intensity of nighttime lights because it takes into account both building coverage and height, i.e., building surface area. Meanwhile, building height contributed more to nighttime lights than building coverage did. Further analysis showed that the correlation between NTL and surface-area index becomes stronger with increasing building height, while it is weakest when the regional economic development level is highest. In conclusion, these results can help us better understand the determinants of nighttime lights. Full article
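The sketch below illustrates one plausible reading of the surface-area index analysis: combine footprint area and height into a building surface area per analysis cell and regress NTL against it. The roof-plus-walls definition of the index, the square-building perimeter proxy, and all numbers are assumptions for illustration only, not the paper's exact formulation.

```python
# Minimal sketch: surface-area index per cell, then a linear NTL regression.
import numpy as np

def surface_area_index(footprint_area, height, perimeter, cell_area):
    """Building surface area (roof + walls) per unit cell area (assumed definition)."""
    return (footprint_area + perimeter * height) / cell_area

rng = np.random.default_rng(1)
n = 200                                             # analysis cells (e.g., a 1 km grid)
footprint = rng.uniform(5e4, 3e5, n)                # building coverage per cell (m^2)
height    = rng.uniform(5, 60, n)                   # mean building height (m), e.g., GLAS-derived
perimeter = 4 * np.sqrt(footprint)                  # crude square-building proxy
sai = surface_area_index(footprint, height, perimeter, cell_area=1e6)

ntl = 20 * sai + rng.normal(0, 5, n)                # synthetic radiance-calibrated NTL

slope, intercept = np.polyfit(sai, ntl, 1)
r2 = np.corrcoef(sai, ntl)[0, 1] ** 2
print(f"NTL = {slope:.1f} * SAI + {intercept:.1f} (R^2 = {r2:.2f})")
```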
Figure 1: Spatial distribution of DMSP/OLS radiance-calibrated NTL data of mainland China in 2006.
Figure 2: Technical flow chart of the methods used in this research.
Figure 3: An example of a raw GLAS waveform (red dots), Gaussian fitting (blue line), and decomposed Gaussian components (black line).
Figure 4: Correlations between DMSP/OLS radiance-calibrated NTL data and built-up environment properties: (a) building height; (b) building coverage; (c) vegetation fraction; (d) surface-area index.
Figure 5: Correlations between DMSP/OLS radiance-calibrated NTL data and the surface-area index of (a) low-rise buildings, (b) middle-level buildings, and (c) high-rise buildings.
Figure 6: Correlations between DMSP/OLS radiance-calibrated NTL data and the surface-area index of the three economic zones of China: (a) eastern region, (b) central region, and (c) western region.
21 pages, 4086 KiB  
Article
An Unsupervised Method to Detect Rock Glacier Activity by Using Sentinel-1 SAR Interferometric Coherence: A Regional-Scale Study in the Eastern European Alps
by Aldo Bertone, Francesco Zucca, Carlo Marin, Claudia Notarnicola, Giovanni Cuozzo, Karl Krainer, Volkmar Mair, Paolo Riccardi, Mattia Callegari and Roberto Seppi
Remote Sens. 2019, 11(14), 1711; https://doi.org/10.3390/rs11141711 - 19 Jul 2019
Cited by 13 | Viewed by 4817
Abstract
Rock glaciers are widespread periglacial landforms in mountain regions like the European Alps. Depending on their ice content, they are characterized by slow downslope displacement due to permafrost creep. These landforms are usually mapped within inventories, but understanding their activity is a very difficult task, which is frequently accomplished using geomorphological field evidence, direct measurements, or remote sensing approaches. In this work, a powerful method to analyze rock glacier activity was developed by exploiting synthetic aperture radar (SAR) satellite data. In detail, the interferometric coherence estimated from Sentinel-1 data was used as a key indicator of displacement, developing an unsupervised classification method to distinguish moving (i.e., characterized by detectable displacement) from no-moving (i.e., without detectable displacement) rock glaciers. The original application of interferometric coherence, estimated here using the rock glacier outlines as boundaries instead of regular kernel windows, allows the activity of rock glaciers to be described at a regional scale. The method was developed and tested over a large mountainous area located in the Eastern European Alps (South Tyrol and the western part of Trentino, Italy) and takes into account all the factors that may limit the effectiveness of the coherence in describing rock glacier activity. The activity status of more than 1600 rock glaciers was classified by our method, identifying more than 290 rock glaciers as moving. The method was validated using an independent set of rock glaciers whose activity is well known, obtaining an accuracy of 88%. Our method is replicable over any large mountainous area where rock glaciers are already mapped and makes it possible to compensate for the drawbacks of time-consuming and subjective analysis based on geomorphological evidence or other SAR approaches. Full article
(This article belongs to the Special Issue Remote Sensing of Changing Mountain Environments)
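The unsupervised classification step can be sketched with a two-component Gaussian mixture fitted by the EM algorithm, labelling the low-coherence component as moving. The coherence values below are synthetic, and the scikit-learn GaussianMixture call is a stand-in for the authors' EM implementation.

```python
# Minimal sketch: EM-fitted Gaussian mixture on per-rock-glacier coherence values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
coh_moving    = np.clip(rng.normal(0.35, 0.08, 290), 0, 1)   # displacement decorrelates phase
coh_no_moving = np.clip(rng.normal(0.75, 0.08, 1300), 0, 1)
coherence = np.concatenate([coh_moving, coh_no_moving]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(coherence)
labels = gmm.predict(coherence)
moving_component = int(np.argmin(gmm.means_.ravel()))        # lower mean coherence = moving
is_moving = labels == moving_component
print("classified as moving:", int(is_moving.sum()), "of", len(coherence))
```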
Graphical abstract
Figure 1: Geographical setting of the study area. (a) Area covered by the Sentinel-1 tracks 117 and 168, overlaid on the permafrost probability distribution over the entire Alps according to the index provided by the GLOBpermafrost project [46]. The red and yellow lines represent the geographical outlines of Trentino and South Tyrol, respectively. (b) Elevation map of the study area and distribution of rock glaciers (red and black dots).
Figure 2: Summary data from the rock glacier inventory of South Tyrol [16]. (a) Percentage of active, inactive, and relict rock glaciers; (b) altitudinal distribution; and (c) aspect distribution. In (b), the average (AVG) and the standard deviation (STD) of the altitude for the three classes are shown.
Figure 3: List of Sentinel-1 images acquired during the snow-free period of 2017. The black circles identify the images acquired in dry conditions and used as reference SAR backscattering images (see Section 2.2).
Figure 4: General block scheme of the proposed rock glacier classification. Sentinel-1 datasets from different relative orbits were pre-processed to obtain backscattering images, phase differences, and a layover-shadow mask for each pair of images, using different temporal baselines. For each rock glacier, the most favorable relative orbit dataset was then used. During data selection, small or vegetated rock glaciers, or those with extended layover-shadow areas, were discarded. For each selected rock glacier and each temporal baseline, the pair of images with the sum of the absolute backscattering differences of the master and slave images closest to zero was selected. Coherence was then estimated for the selected pairs of images, and the rock glaciers were classified by the expectation maximization algorithm using the coherence values estimated with the different temporal baselines.
Figure 5: Gaussian distributions of the coherence values. The coherence values are shown on the x-axis and the frequency values on the y-axis. The Gaussian distributions of moving and no-moving rock glaciers obtained by applying the expectation maximization (EM) algorithm are shown by red and green lines, respectively. Each graph shows the coherence distributions obtained with the same temporal baseline.
Figure 6: The lengths of the horizontal bars represent the numbers of rock glaciers classified as moving and no-moving by our method, the vegetated rock glaciers, and the not-classified rock glaciers. Inside the horizontal bars, the comparison between the number of labeled rock glaciers and the classification of the South Tyrol Inventory (STI) is shown by different colors, and the level of agreement is shown by the percentages.
Figure 7: (a) Rock glaciers mapped in the STI. (b) Rock glaciers classified by our method, divided into moving (red dots) and no-moving (black dots); vegetated rock glaciers are not displayed. (c) Rock glaciers classified by our method and vegetated rock glaciers (green dots), which were added to the no-moving class. (d) Rose diagram of the aspect distribution of moving and no-moving forms, including the vegetated ones.
Figure 8: Altitudinal distribution of moving (red line) and no-moving rock glaciers (including the vegetated ones; green line). The average (AVG) and the standard deviation (STD) of the altitude for the two classes are shown.
Figure 9: Permafrost probability distribution of moving rock glaciers according to the GLOBpermafrost map. The average (AVG) permafrost probability is shown.
Figure 10: Results of the rock glacier classification in Trentino and South Tyrol using the restricted dataset of images acquired in August (a) and September (b) 2017.
Figure 11: Coherence trends (black lines) and sums of the absolute backscattering differences (SABD; blue lines) for three temporal baselines (6, 12, and 18 days) for two selected rock glaciers in Trentino. Red circles indicate the pair of images selected with the complete dataset (2 months), while red diamonds indicate the pair of images selected with the reduced dataset (1 month). Horizontal dashed lines indicate the time interval for each pair of images (master and slave). In most cases, the image selected for the complete dataset is the same as that selected for the reduced dataset. (a) A rock glacier classified as moving using both the complete dataset (2 months) and the reduced dataset (August or September 2017). (b) A rock glacier classified as no-moving in the August images and as moving in the September images. For the same temporal baseline, the ranges of coherence values in (a) and (b) are different.
18 pages, 4813 KiB  
Article
Developing Transformation Functions for VENμS and Sentinel-2 Surface Reflectance over Israel
by V.S. Manivasagam, Gregoriy Kaplan and Offer Rozenstein
Remote Sens. 2019, 11(14), 1710; https://doi.org/10.3390/rs11141710 - 19 Jul 2019
Cited by 21 | Viewed by 6574
Abstract
Vegetation and Environmental New micro Spacecraft (VENμS) and Sentinel-2 are both ongoing earth observation missions that provide high-resolution multispectral imagery at 10 m (VENμS) and 10–20 m (Sentinel-2), at relatively high revisit frequencies (two days for VENμS and five days for Sentinel-2). Sentinel-2 provides global coverage, whereas VENμS covers selected regions, including parts of Israel. To facilitate the combination of these sensors into a unified time series, a transformation model between them was developed using imagery from the region of interest. For this purpose, same-day acquisitions from both sensor types covering the surface reflectance over Israel, between April 2018 and November 2018, were used in this study. Transformation coefficients from VENμS to Sentinel-2 surface reflectance were produced for their overlapping spectral bands (i.e., visible, red-edge and near-infrared). The performance of these spectral transformation functions was assessed using several methods, including orthogonal distance regression (ODR), the mean absolute difference (MAD), and the spectral angle mapper (SAM). Post-transformation, the values of the ODR slopes were close to unity for the transformed VENμS reflectance against Sentinel-2 reflectance, which indicates near-identity of the two datasets following the removal of systematic bias. In addition, the transformation outputs showed better spectral similarity compared to the original images, as indicated by the decrease in SAM from 0.093 to 0.071. Similarly, the MAD was reduced post-transformation in all bands (e.g., the blue band MAD decreased from 0.0238 to 0.0186, and in the NIR it decreased from 0.0491 to 0.0386). Thus, the model helps to combine the images from Sentinel-2 and VENμS into one time series that facilitates continuous, temporally dense vegetation monitoring. Full article
(This article belongs to the Special Issue Cross-Calibration and Interoperability of Remote Sensing Instruments)
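In outline, the transformation and its evaluation look like the sketch below: fit a per-band linear model by orthogonal distance regression, apply it, and compare MAD and SAM before and after. The reflectance samples and the simulated bias are synthetic; only the general procedure follows the abstract.

```python
# Minimal sketch: per-band ODR fit of a VENμS -> Sentinel-2 transformation,
# evaluated with mean absolute difference (MAD) and spectral angle mapper (SAM).
import numpy as np
from scipy import odr

def fit_odr(x, y):
    """Orthogonal distance regression of y = a*x + b; returns (slope, intercept)."""
    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    return odr.ODR(odr.RealData(x, y), model, beta0=[1.0, 0.0]).run().beta

def sam(a, b):
    """Mean spectral angle (radians) between paired spectra; rows are pixels."""
    cosang = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.mean(np.arccos(np.clip(cosang, -1.0, 1.0)))

rng = np.random.default_rng(3)
n_pix, n_bands = 5000, 4                              # e.g., blue/green/red/NIR
venus = rng.uniform(0.02, 0.5, (n_pix, n_bands))
sentinel2 = 0.95 * venus + 0.01 + rng.normal(0, 0.01, (n_pix, n_bands))  # simulated bias + noise

coeffs = np.array([fit_odr(venus[:, b], sentinel2[:, b]) for b in range(n_bands)])
venus_adj = venus * coeffs[:, 0] + coeffs[:, 1]       # apply the per-band transformation

print("MAD before:", np.mean(np.abs(venus - sentinel2), axis=0).round(4))
print("MAD after :", np.mean(np.abs(venus_adj - sentinel2), axis=0).round(4))
print("SAM before/after:", round(sam(venus, sentinel2), 4), round(sam(venus_adj, sentinel2), 4))
```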
Graphical abstract
Figure 1: Sentinel-2 (left) and Vegetation and Environmental New micro Spacecraft (VENμS) (right) tiles covering Israel. Tile footprints are demarked in red and their names are inscribed in blue. The grey-shaded regions in the overlap between Sentinel-2 tiles were used in the NBAR correction assessment (shown in Figure 4).
Figure 2: Relative spectral response functions of VENμS and Sentinel-2 bands. Sources: Sentinel-2: (ref: COPE-GSEG-EOPG-TN-15-0007) issued by the European Space Agency, Version 3.0, accessed from: https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/document-library/-/asset_publisher/Wk0TKajiISaR/content/sentinel-2a-spectral-responses. VENμS: accessed from: http://www.cesbio.ups-tlse.fr/multitemp/wp-content/uploads/2018/09/rep6S.txt.
Figure 3: The processing chain for the development of transformation models between VENμS and Sentinel-2 reflectance imagery. * The VENμS red-edge bands (8–10) and near-infrared (NIR) band (11) were resampled to 20 m. ** The Sentinel-2 broad NIR and narrow NIR bands (i.e., bands 8 and 8A) were compared with the VENμS NIR band (11).
Figure 4: Mean absolute difference between Nadir BRDF Adjusted Reflectance (NBAR) in the overlapping regions of Sentinel-2 imagery (shown in Figure 1) during the months of April and August, which represent spring and summer (high vegetation coverage in spring vs. low vegetation coverage in summer). The values found in this study are compared against the values reported in Roy et al. [21,22] for the month of April. RE denotes red-edge.
Figure 5: Scatter plot of VENμS and Sentinel-2 green band reflectance before (A) and after (C) removing the outliers using Cook's distance method. Subplots (B,D) show the Cook's distance case order plots for the respective scatter plots.
Figure 6: Scatter plots of VENμS and Sentinel-2 surface reflectance for the validation set of pixels (A) prior to the transformation and (B) post-transformation. The red line is the orthogonal distance regression (ODR) slope line showing the bias relative to the identity line (black dashed line). High point density is marked in yellow tones, low point density in blue tones. * The margin of error represents the 99% confidence interval.
Figure 7: Scatter plots of VENμS and Sentinel-2 surface reflectance for the validation set of pixels after the removal of outliers, (A) prior to the transformation and (B) post-transformation. The red line is the orthogonal distance regression slope line showing the bias relative to the identity line (black dashed line). High point density is marked in yellow tones, low point density in blue tones. * The margin of error represents a 99% confidence interval.
Figure 8: Mean absolute difference (MAD) between VENμS and Sentinel-2 reflectance, prior to transformation and post-transformation.
Figure 9: Tukey box plots of the spectral angle mapper (SAM) between VENμS and Sentinel-2 reflectance: (A) full validation dataset and (B) after removal of outliers from the dataset.
26 pages, 54047 KiB  
Article
Rapid Assessment of a Typhoon Disaster Based on NPP-VIIRS DNB Daily Data: The Case of an Urban Agglomeration along Western Taiwan Straits, China
by Yuanmao Zheng, Guofan Shao, Lina Tang, Yuanrong He, Xiaorong Wang, Yening Wang and Haowei Wang
Remote Sens. 2019, 11(14), 1709; https://doi.org/10.3390/rs11141709 - 19 Jul 2019
Cited by 35 | Viewed by 5547
Abstract
Rapid assessment of natural disasters is essential for disaster analysis and spatially explicit strategic decisions on post-disaster reconstruction but requires timely available data. The recent daily data of the National Polar-Orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) day/night band (DNB) provide new opportunities to detect and evaluate natural disasters. Here, we introduce an application of NPP-VIIRS DNB daily data for rapidly assessing the damage of a severe typhoon that struck the urban agglomerations along the western Taiwan Straits in China. Our research explored methods for rapidly identifying and extracting affected areas based on changes in nighttime light (NTL) after the typhoon disaster, using a statistical radiation-normalization method. We analyzed the correlations of NTL image derivatives with human population, population density, and gross domestic product (GDP). Strong correlations were found between NTL image light density and population density (R2 = 0.83) and between total nighttime light intensity and GDP (R2 = 0.96) at the prefecture level. In addition, we examined the interrelationships between changes in NTL images and the areas affected by the typhoon and proposed a method to predict the affected population. Finally, the affected area and the affected population in the study area could be rapidly retrieved based on the proposed remote sensing method. The overall accuracy was 83.2% for the detection of the affected population after the disaster, and the recovery rate of the affected area was 86.9% in the third week after the typhoon. This research demonstrates that the NTL image-based change detection method is simple and effective, and further shows that NPP-VIIRS DNB daily data are useful for rapidly assessing affected areas and affected populations after typhoon disasters, and for timely quantification of the degree of recovery at a large spatial scale. Full article
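The change-detection core of the method can be sketched as a percent-of-normal-light ratio, as below; the 80% threshold, the synthetic radiance grids, and the gridded population layer are illustrative assumptions rather than the values used in the study.

```python
# Minimal sketch: percent-of-normal-light (PNL) ratio, affected-area mask, and
# affected-population estimate from a gridded population layer.
import numpy as np

rng = np.random.default_rng(5)
pre_ntl  = rng.uniform(5, 60, (100, 100))                    # stable pre-disaster radiance
post_ntl = pre_ntl.copy()
post_ntl[30:70, 30:70] *= rng.uniform(0.2, 0.6, (40, 40))    # outage along the storm track
population = rng.uniform(0, 500, (100, 100))                 # people per pixel (synthetic)

pnl = np.divide(post_ntl, pre_ntl, out=np.zeros_like(post_ntl), where=pre_ntl > 0)
affected = pnl < 0.8                                         # lights below 80% of normal

affected_area_pct = 100 * affected.mean()
affected_pop = population[affected].sum()
print(f"affected area: {affected_area_pct:.1f}% of pixels, "
      f"affected population: {affected_pop:,.0f}")
```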
Graphical abstract
Figure 1: (a) The geographical location of the research area; (b) the path of super typhoon Maria.
Figure 2: The outline of the research process.
Figure 3: The nighttime light images before and after strong typhoon Maria in Fuzhou: (a) stable nighttime light before typhoon Maria; (b) constant nighttime light intensity before typhoon Maria; (c) the nighttime light intensity in the middle areas was greatly reduced after typhoon Maria.
Figure 4: Regression models between nighttime light (NTL) intensity and population or gross domestic product (GDP): (a) light density versus population density (PD); (b) total light versus total population (TP); (c) total light versus PD; (d) light density versus TP; (e) total light versus GDP; (f) light density versus GDP.
Figure 5: Regression models between NTL area and population or GDP: (a) light area versus total population (TP); (b) light area versus population density (PD); (c) light area versus GDP.
Figure 6: Regression models between weighted NTL intensity and population or GDP: (a) weighted light density versus population density (PD); (b) total weighted light versus total population (TP); (c) total weighted light versus PD; (d) weighted light density versus TP; (e) total weighted light versus GDP; (f) weighted light density versus GDP.
Figure 7: The R2 values for correlations between NTL indicators and population or GDP at the prefecture and county levels.
Figure 8: Affected areas after the typhoon disaster in the research area: (a) distribution of affected areas; (b) superposition of affected areas and population density.
Figure 9: Assessment of the affected population and affected intensity: (a) distribution of the affected population; (b) superposition of the percent of normal light (PNL) and GDP.
Figure 10: Temporal–spatial changes in affected areas retrieved from PNL images for 20 cities at various times after the typhoon.
Figure 11: Ratios of post-disaster to pre-disaster nighttime light intensity at different times after the passage of Maria.
Figure 12: Affected status after the passage of typhoon Maria for various cities: (a) land cover 2010 in Wenzhou; the affected status on the first day (b), in the first week (c), and in the third week (d) after the disaster in Wenzhou city; (e) land cover 2010 in Fuzhou; the affected status on the first day (f), in the first week (g), and in the third week (h) after the disaster in Fuzhou city; (i) land cover 2010 in Yingtan; the affected status on the first day (j), in the first week (k), and in the third week (l) after the disaster in Yingtan city.
21 pages, 8698 KiB  
Article
Affine-Function Transformation-Based Object Matching for Vehicle Detection from Unmanned Aerial Vehicle Imagery
by Shuang Cao, Yongtao Yu, Haiyan Guan, Daifeng Peng and Wanqian Yan
Remote Sens. 2019, 11(14), 1708; https://doi.org/10.3390/rs11141708 - 19 Jul 2019
Cited by 10 | Viewed by 4128
Abstract
Vehicle detection from remote sensing images plays a significant role in transportation-related applications. However, the scale variations, orientation variations, illumination variations, and partial occlusions of vehicles, as well as image quality, bring great challenges for accurate vehicle detection. In this paper, we present an affine-function transformation-based object matching framework for vehicle detection from unmanned aerial vehicle (UAV) images. First, meaningful and non-redundant patches are generated through a superpixel segmentation strategy. Then, the affine-function transformation-based object matching framework is applied to a vehicle template and each of the patches for vehicle existence estimation. Finally, vehicles are detected and located after matching cost thresholding, vehicle location estimation, and multiple response elimination. Quantitative evaluations on two UAV image datasets show that the proposed method achieves an average completeness, correctness, quality, and F1-measure of 0.909, 0.969, 0.883, and 0.938, respectively. Comparative studies also demonstrate that the proposed method achieves comparable performance to the Faster R-CNN and outperforms the other eight existing methods in accurately detecting vehicles under various conditions. Full article
(This article belongs to the Special Issue Remote Sensing for Target Object Detection and Identification)
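The full framework relies on a convexified matching cost solved by successive convexification, which is beyond a short example; the heavily simplified sketch below only conveys the underlying idea of scoring a candidate patch by how well an affine transformation maps template feature points onto scene feature points, assuming the correspondences are already known.

```python
# Heavily simplified sketch, not the paper's solver: least-squares affine fit
# between template and scene feature points, with the residual as a matching cost.
import numpy as np

def fit_affine(template_pts, scene_pts):
    """Least-squares affine map (A, t) with scene ~= template @ A.T + t."""
    design = np.hstack([template_pts, np.ones((len(template_pts), 1))])   # (n, 3)
    params, *_ = np.linalg.lstsq(design, scene_pts, rcond=None)
    return params[:2].T, params[2]                                        # A (2x2), t (2,)

def matching_cost(template_pts, scene_pts):
    A, t = fit_affine(template_pts, scene_pts)
    residual = scene_pts - (template_pts @ A.T + t)
    return np.mean(np.linalg.norm(residual, axis=1))

# Template corners of a vehicle and a rotated/scaled/translated copy in the scene.
template = np.array([[0.0, 0.0], [4.5, 0.0], [4.5, 2.0], [0.0, 2.0], [2.25, 1.0]])
theta = np.deg2rad(30)
A_true = 1.2 * np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
scene = template @ A_true.T + np.array([10.0, 5.0]) \
        + np.random.default_rng(2).normal(0, 0.05, (5, 2))

print(f"matching cost: {matching_cost(template, scene):.3f}  "
      "(a patch without a vehicle would score much higher)")
```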
Graphical abstract
Figure 1: Illustration of the vehicle detection workflow using the proposed affine-function transformation-based object matching framework.
Figure 2: Illustration of object matching: (a) a group of template feature points representing a vehicle; (b) a group of scene feature points representing a scene containing a vehicle instance; and (c) the matched scene feature points.
Figure 3: Illustration of the affine transformation model. q1 to q4 are, respectively, the matched locations of p1 to p4 after applying the affine transformation function.
Figure 4: Illustration of the feature dissimilarity measures viewed as a 3D point set and the constructed convex dissimilarity measure function (facets).
Figure 5: Illustration of the successive convexification scheme: (a) in the first iteration, all the scene feature points (red dots) are used to construct the convex dissimilarity measure function; (b) in the second iteration, only the scene feature points in the trust region (red dots) are used to construct the convex dissimilarity measure function; and (c) similar operations are performed in the later iterations.
Figure 6: Illustrations of (a) the vehicle location estimated as the centroid of the matching locations (yellow dot), and (b) a vehicle existing in two patches generating two locations.
Figure 7: Illustrations of (a) the DJI Phantom 4 Pro unmanned aerial vehicle (UAV) system, (b) the Nanjing study area, and (c) the Changsha study area.
Figure 8: Illustrations of (a) the vehicle template, and (b) a subset of the scene dataset used for robustness evaluation.
Figure 9: Illustrations of (a) a scene sample transformed with different scales, (b) a scene sample rotated with different angles, (c) a scene sample occluded with different proportions, and (d) a scene contaminated with different levels of salt-and-pepper noise.
Figure 10: Illustration of a subset of vehicle detection results on an unmanned aerial vehicle (UAV) image.
Figure 11: Illustrations of vehicle detection results on UAV images under challenging scenarios: (a) high density of vehicles; (b) vehicles covered with shadows; (c) vehicles occluded by high-rise buildings; and (d) vehicles occluded by overhead trees.
Figure 12: Illustration of three patches having almost similar bag-of-words representations: (a) a complete vehicle; (b) and (c) transformed versions of the vehicle in (a).
18 pages, 15405 KiB  
Article
Spatio-Temporal Patterns of Coastal Aquaculture Derived from Sentinel-1 Time Series Data and the Full Landsat Archive
by Dorothee Stiller, Marco Ottinger and Patrick Leinenkugel
Remote Sens. 2019, 11(14), 1707; https://doi.org/10.3390/rs11141707 - 18 Jul 2019
Cited by 44 | Viewed by 6237
Abstract
Asia is the major contributor to global aquaculture production in quantity, accounting for almost 90%. These practices lead to extensive land-use and land-cover changes in coastal areas, and thus harm valuable and sensitive coastal ecosystems. Remote sensing and GIS technologies contribute to the mapping and monitoring of changes in aquaculture, providing essential information for coastal management applications. This study aims to investigate aquaculture expansion and spatio-temporal dynamics in two Chinese river deltas over three decades: the Yellow River Delta (YRD) and the Pearl River Delta (PRD). Long-term patterns of aquaculture change are extracted by combining a reference layer on existing aquaculture ponds for 2015 derived from Sentinel-1 data with annual information on water bodies extracted from the long-term Landsat archive. Furthermore, the suitability of the proposed approach for application on a global scale is tested by exploiting the Global Surface Water (GSW) dataset. We found enormous increases in aquaculture area for the investigated target deltas: an 18.6-fold increase for the YRD (1984–2016), and a 4.1-fold increase for the PRD (1990–2016). Furthermore, we detected hotspots of aquaculture expansion based on linear regression analyses for the deltas, indicating that hotspots are located in coastal regions for the YRD and along the Pearl River in the PRD. A comparison with high-resolution Google Earth data demonstrates that the proposed approach can detect spatio-temporal changes of aquaculture with an overall accuracy of 89%. The presented approach has the potential to be applied at larger spatial scales covering a time period of more than three decades. This is crucial to define appropriate management strategies to reduce the environmental impacts of aquaculture expansion, which are expected to increase in the future. Full article
(This article belongs to the Special Issue Remote Sensing for Fisheries and Aquaculture)
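The combination step can be sketched as a per-year intersection of the Sentinel-1 reference pond mask with annual water masks, followed by a linear trend on the resulting areas. The masks below are randomly generated and the 30 m pixel size is assumed for the area calculation; hotspot detection in the paper applies the same kind of regression per pixel.

```python
# Minimal sketch: annual aquaculture area from (reference pond mask AND yearly
# water mask), then a linear trend over the time series.
import numpy as np

rng = np.random.default_rng(11)
years = np.arange(1984, 2017)
reference_ponds = rng.random((200, 200)) < 0.25        # Sentinel-1 aquaculture layer, 2015

pixel_km2 = (30 * 30) / 1e6                            # assumed Landsat pixel area
area_km2 = []
for i, year in enumerate(years):
    # Synthetic annual water mask: water gradually fills the reference ponds over time.
    water = rng.random((200, 200)) < (0.05 + 0.85 * i / len(years))
    aquaculture = reference_ponds & water              # pond outline AND water that year
    area_km2.append(aquaculture.sum() * pixel_km2)

slope, intercept = np.polyfit(years, area_km2, 1)
print(f"aquaculture area trend: {slope:+.3f} km^2 per year "
      f"({area_km2[0]:.1f} km^2 in {years[0]} -> {area_km2[-1]:.1f} km^2 in {years[-1]})")
```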
Graphical abstract
Figure 1: Map of the two investigated deltas.
Figure 2: Available Level-2 Landsat SR data in the Yellow River Delta (YRD) and the Pearl River Delta (PRD), 1984–2016.
Figure 3: Workflow diagram on creating Landsat aquaculture layers and assessing aquaculture dynamics.
Figure 4: Detected permanent water bodies overlaid by the Sentinel-1 aquaculture layer for 2015.
Figure 5: Trend in aquaculture area for the YRD and the PRD based on Global Surface Water (GSW) data and Landsat Surface Reflectance (SR) data, with the adjusted R2 and p-value obtained from the regression analysis, including the 95% confidence region.
Figure 6: Aquaculture dynamics for the YRD from 1984 to 2016.
Figure 7: Aquaculture dynamics for the PRD from 1984 to 2016.
Figure 8: Temporal patterns and hotspots of aquaculture expansion for the YRD.
Figure 9: Temporal patterns and hotspots of aquaculture expansion for the PRD.
Previous Issue
Next Issue