Remote Sens., Volume 11, Issue 23 (December-1 2019) – 164 articles

Cover Story: The figure shows an elevation model of the Filchner–Ronne Ice Shelf in Antarctica derived from CryoSat-2 swath altimetry data acquired in 2018. The color coding indicates the surface elevation scaled from 0 to 200 m. The calving front location (CFL) marks the seaward limit of the shelf and is indicated by a series of lines illustrating the gradual advance of the ice shelf in the period 2011–2018 with a 6-month sampling. Mapping the time-variable CFL of Antarctic ice shelves is important for estimating the freshwater budget or as a precursor of dynamic instability. Wuite et al. have developed a novel approach for deriving regular and consistent CFLs based on edge detection and vectorization of the sharp ice edge in gridded elevation data, representing a valuable new application for the CryoSat-2 mission that has not been previously exploited.
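As a rough illustration of the edge-detection-plus-vectorization idea described above, the sketch below thresholds the elevation gradient of a synthetic gridded DEM and traces the resulting ice edge as a polyline; the synthetic data, thresholds, and the choice of Sobel gradients plus contour tracing are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

# Synthetic stand-in for a gridded swath-altimetry DEM (metres): a ~60 m high ice
# shelf on the left, open ocean (0 m) on the right, with a slightly irregular front
rows, cols = 200, 200
front_col = 120 + (10 * np.sin(np.linspace(0, 3 * np.pi, rows))).astype(int)
dem = np.zeros((rows, cols))
for r in range(rows):
    dem[r, :front_col[r]] = 60.0

# Edge detection: the calving front appears as a sharp elevation gradient
grad = np.hypot(ndimage.sobel(dem, axis=0), ndimage.sobel(dem, axis=1))
edge_mask = grad > 20.0                      # assumed gradient threshold

# Vectorization: trace the edge mask as polylines and keep the longest as the CFL
contours = measure.find_contours(edge_mask.astype(float), 0.5)
calving_front = max(contours, key=len)       # (row, col) vertices of the front line
print(calving_front.shape)
```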
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 3486 KiB  
Article
Comparison of Lake Optical Water Types Derived from Sentinel-2 and Sentinel-3
by Tuuli Soomets, Kristi Uudeberg, Dainis Jakovels, Matiss Zagars, Anu Reinart, Agris Brauns and Tiit Kutser
Remote Sens. 2019, 11(23), 2883; https://doi.org/10.3390/rs11232883 - 3 Dec 2019
Cited by 20 | Viewed by 5240
Abstract
Inland waters play a critical role in our drinking water supply. Additionally, they are important providers of food and recreation possibilities. Inland waters are known to be optically complex and more diverse than marine or ocean waters. The optical properties of natural waters are influenced by three different and independent sources: phytoplankton, suspended matter, and colored dissolved organic matter. Thus, the remote sensing of these waters is more challenging. Different types of waters need different approaches to obtain correct water quality products; therefore, the first step in remote sensing of lakes should be the classification of the water types. The classification of optical water types (OWTs) is based on the differences in the reflectance spectra of the lake water. This classification groups lake and coastal waters into five optical classes: Clear, Moderate, Turbid, Very Turbid, and Brown. We studied the OWTs in three different Latvian lakes: Burtnieks, Lubans, and Razna, and in a large Estonian lake, Lake Võrtsjärv. The primary goal of this study was a comparison of data from two different Copernicus optical instruments for the optical classification of lakes: the Ocean and Land Color Instrument (OLCI) on Sentinel-3 and the Multispectral Instrument (MSI) on Sentinel-2. We found that the OWT classifications in lakes from the two satellites were comparable (R² = 0.74). We were also able to study the spatial and temporal changes in the OWTs of the study lakes during 2017. The comparison between the two satellites was carried out to understand whether the classification of the OWTs with both satellites is compatible. Our results could give us not only a better overview of the changes in the lake water by studying the temporal and spatial variability of the OWTs, but also possibly a better retrieval of Level 2 satellite products when using an OWT-guided approach. Full article
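As an illustration of how a measured reflectance spectrum can be assigned to one of the five OWTs, the sketch below picks the reference spectrum with the highest Pearson correlation. The reference spectra here are invented placeholders (the actual classification follows the scheme of ref. [16] cited in the Figure 1 caption), and correlation is just one plausible similarity measure.

```python
import numpy as np

OWT_NAMES = ["Clear", "Moderate", "Turbid", "Very Turbid", "Brown"]

def classify_owt(spectrum, reference_spectra):
    """Assign a reflectance spectrum to the most similar reference OWT.

    spectrum: (n_bands,) reflectance of one pixel; reference_spectra: (5, n_bands).
    Similarity is measured here with the Pearson correlation of the spectra,
    which compares spectral shape rather than magnitude.
    """
    scores = [np.corrcoef(spectrum, ref)[0, 1] for ref in reference_spectra]
    return OWT_NAMES[int(np.argmax(scores))]

# Toy usage with made-up 4-band spectra (placeholders, not the spectra of ref. [16])
refs = np.array([
    [0.002, 0.004, 0.003, 0.001],   # Clear
    [0.004, 0.008, 0.007, 0.002],   # Moderate
    [0.006, 0.012, 0.014, 0.004],   # Turbid
    [0.008, 0.018, 0.022, 0.008],   # Very Turbid
    [0.001, 0.002, 0.004, 0.003],   # Brown
])
pixel = np.array([0.005, 0.011, 0.013, 0.004])
print(classify_owt(pixel, refs))    # -> "Turbid" for this toy pixel
```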
Graphical abstract
Figure 1. The reference reflectance spectra used for each optical water type (OWT) [16].
Figure 2. The proportion of the optical water types (OWTs) ("unclassified" pixels excluded, included in Table 2) in 2017 from all cloud-free images in different lakes.
Figure 3. The distribution of the optical water types (OWTs) for all Sentinel-2 and -3 matching cases.
Figure 4. Correlation between the prevalent optical water type (OWT) derived from Sentinel-3 and -2 images. Colors represent different lakes: blue—Razna; green—Lubans; yellow—Võrtsjärv; red—Burtnieks. The x- and y-axes represent the OWTs: 1—Clear; 2—Moderate; 3—Turbid; 4—Very Turbid; 5—Brown.
Figure 5. The comparison of the spatial variability of the optical water types (OWTs) between Sentinel-2 and -3 in four different lakes at selected dates. The upper row for each lake shows the enhanced red, green, blue (RGB) images, the second shows the spatial variability of the OWTs, and the third row shows the percentages of each OWT in the given scene.
Figure 6. Temporal variability and the distribution of the optical water types (OWTs) derived from Sentinel-2 and Sentinel-3 in different lakes during 2017. The lower panel of each sub-figure shows the frequency of the acquired data.
18 pages, 8103 KiB  
Article
Synthetic Aperture Radar Remote Sensing of Operational Platform Produced Water Releases
by Stine Skrunes, A. Malin Johansson and Camilla Brekke
Remote Sens. 2019, 11(23), 2882; https://doi.org/10.3390/rs11232882 - 3 Dec 2019
Cited by 17 | Viewed by 4225
Abstract
Oil spill detection services based on satellite synthetic aperture radar (SAR) frequently detect oil slicks close to platforms due to legal releases of produced water. Separating these slicks from larger releases, e.g., due to accidental leakage, is challenging. The aim of this work is to investigate the SAR characteristics of produced water, including the typical appearance in HH/VV data, possible variations with oil volume, and limitations on detectability. The study is based on dual-polarization TerraSAR-X data collected with constant imaging geometry over one platform in the North Sea. Despite the low oil content (volume percentage of 0.001%–0.002% in this data set), produced water is clearly detectable, with median damping ratios around 3–9 dB. Produced water is detected here in wind speeds of 2–12 m/s, with reduced detectability above ca. 9 m/s. Hourly average release volumes with an oil component as low as 0.003 m³ are detected. The damping ratio, polarization difference, and co-polarization power ratio are investigated and show no clear correlation with released oil volume. However, some indications of trends, such as increasing signal damping with oil volume, should be further investigated when data over larger release volumes are available. When comparing the properties of the entire slick with the most recently released part, similar or slightly higher damping ratios were found in the full slick case. Full article
(This article belongs to the Special Issue She Maps)
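The damping ratio, polarization difference, and co-polarization power ratio mentioned in the abstract can be computed per pixel from calibrated HH/VV intensities. The sketch below uses common textbook definitions (clean-sea-to-slick VV ratio in dB, VV minus HH, HH over VV); the exact conventions and normalizations used in the paper may differ.

```python
import numpy as np

def slick_features(vv, hh, clean_sea_vv):
    """Per-pixel slick features from calibrated dual-pol intensities (linear units).

    Definitions follow common usage in the oil-spill SAR literature; the sign and
    ratio conventions may differ from the paper's exact choices.
    """
    damping_ratio_db = 10.0 * np.log10(clean_sea_vv / vv)   # how strongly the slick damps VV backscatter
    pol_difference = vv - hh                                 # polarization difference (linear units)
    copol_ratio = hh / vv                                    # co-polarization power ratio
    return damping_ratio_db, pol_difference, copol_ratio

# Toy usage: a slick pixel whose VV intensity is 3 dB below the local clean-sea mean
vv, hh, cs_vv = 0.010, 0.004, 0.020
dr, pd, cr = slick_features(vv, hh, cs_vv)
print(f"damping ratio {dr:.1f} dB, pol. difference {pd:.4f}, co-pol ratio {cr:.2f}")
```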
Graphical abstract
Figure 1. (a) The oil platform Brage located in the North Sea. Image courtesy of Wintershall Dea/Screen Story. (b) Map showing the location of the platform (red dot) and the satellite data (blue rectangle).
Figure 2. Noise analysis with the signal-to-noise ratio (SNR) plotted as a function of incidence angle. Each vertical line represents the range of the region of interest (ROI) SNR values from the 5th to the 95th percentile (small dots), with the 50th percentile indicated by a star (diamond) for VV (HH). The full slick is used to generate the percentiles. Colors change from light to dark blue with increasing release volume. One clean sea (CS) region for each scene is included and presented in gray.
Figure 3. SAR scene #2 and the derived features: (a) VV intensity [dB], (b) VV damping ratio [dB], (c) polarization difference, and (d) co-polarization power ratio. Bright targets are masked out to improve visualization. TerraSAR-X ©2018 Distribution Airbus DS, Infoterra GmbH.
Figure 4. SAR scene #3 and the derived features: (a) VV intensity [dB], (b) VV damping ratio [dB], (c) polarization difference, and (d) co-polarization power ratio. Bright targets are masked out to improve visualization. TerraSAR-X ©2018 Distribution Airbus DS, Infoterra GmbH.
Figure 5. SAR scene #5 and the derived features: (a) VV intensity [dB], (b) VV damping ratio [dB], (c) polarization difference, and (d) co-polarization power ratio. Bright targets are masked out to improve visualization. TerraSAR-X ©2018 Distribution Airbus DS, Infoterra GmbH.
Figure 6. SAR scene #7 and the derived features: (a) VV intensity [dB], (b) VV damping ratio [dB], (c) polarization difference, and (d) co-polarization power ratio. Bright targets are masked out to improve visualization. TerraSAR-X ©2019 Distribution Airbus DS, Infoterra GmbH.
Figure 7. SAR-derived parameters for the full slick vs. the approximate oil release from 00:00–17:00 on the day of SAR acquisition (at 17:12). Each vertical line shows the range of the feature values within the slick ROI from the 5th to the 95th percentile (small dots), with the 50th percentile indicated by a star. Colors change from light to dark blue with increasing release volume. In the left column, clean sea values are indicated with gray lines and circles. (a) VV intensity, (b) VV damping ratio, (c) polarization difference, (d) normalized polarization difference, (e) co-polarization power ratio, and (f) normalized co-polarization power ratio.
Figure 8. SAR-derived parameters for the subslick vs. the approximate oil release from 13:00–17:00 on the day of SAR acquisition (at 17:12). Each vertical line shows the range of the feature values within the slick ROI from the 5th to the 95th percentile (small dots), with the 50th percentile indicated by a star. Colors change from light to dark blue with increasing release volume. In the left column, clean sea values are indicated with gray lines and circles. (a) VV intensity, (b) VV damping ratio, (c) polarization difference, (d) normalized polarization difference, (e) co-polarization power ratio, and (f) normalized co-polarization power ratio.
18 pages, 8924 KiB  
Article
Next Generation Mapping: Combining Deep Learning, Cloud Computing, and Big Remote Sensing Data
by Leandro Parente, Evandro Taquary, Ana Paula Silva, Carlos Souza and Laerte Ferreira
Remote Sens. 2019, 11(23), 2881; https://doi.org/10.3390/rs11232881 - 3 Dec 2019
Cited by 54 | Viewed by 11263
Abstract
The rapid growth of satellites orbiting the planet is generating massive amounts of data for Earth science applications. Concurrently, state-of-the-art deep-learning-based algorithms and cloud computing infrastructure have become available with great potential to revolutionize the image processing of satellite remote sensing. Within this context, this study evaluated, based on thousands of PlanetScope images obtained over a 12-month period, the performance of three machine learning approaches (random forest, long short-term memory-LSTM, and U-Net). We applied these approaches to map pasturelands in a region of Central Brazil. The deep learning algorithms were implemented using TensorFlow, while the random forest utilized the Google Earth Engine platform. The accuracy assessment presented F1 scores for U-Net, LSTM, and random forest of, respectively, 96.94%, 98.83%, and 95.53% in the validation data, and 94.06%, 87.97%, and 82.57% in the test data, indicating a better classification efficiency using the deep learning approaches. Although the use of deep learning algorithms depends on a high investment in calibration samples and the generalization of these methods requires further investigation, our results suggest that the neural network architectures developed in this study can be used to map large geographic regions considering a wide variety of satellite data (e.g., PlanetScope, Sentinel-2, Landsat-8). Full article
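The LSTM architecture itself is described in the Figure 4 caption below (an LSTM layer for the temporal dimension plus 3 × 3 convolutions for the spatial dimension). The following Keras sketch only illustrates that general idea, assuming a 12-step monthly sequence of 3 × 3 pixel windows with 4 bands; the layer widths, optimizer, and loss are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Input: a 12-month sequence of 3x3 pixel windows with 4 bands (blue, green, red, NIR)
inputs = tf.keras.Input(shape=(12, 3, 3, 4))

# Spatial dimension: a 3x3 convolution applied to each monthly mosaic independently
x = layers.TimeDistributed(layers.Conv2D(16, (3, 3), activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Flatten())(x)

# Temporal dimension: an LSTM over the 12 monthly steps
x = layers.LSTM(32)(x)

# Binary output: pasture vs. non-pasture for the central pixel
outputs = layers.Dense(1, activation="sigmoid")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```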
Graphical abstract
Figure 1. Data and methods used in the generation of pasture mappings for the study area. The random forest classification approach was performed using Google Earth Engine, while the deep learning approaches were implemented on a local server. LSTM: long short-term memory.
Figure 2. Study area, which covers 18,000 km² and is located in the state of Goiás, Brazil. (a) Spatial distribution of polygons collected via visual interpretation and (b) field samples in relation to the study area (the PlanetScope mosaic shown refers to November 2017).
Figure 3. Examples of homogeneous polygons (i.e., with pixels of only one LULC class) and reference samples used in this study. Note that point and segment samples have very different data structures.
Figure 4. Neural network architecture used to classify pasture areas in the study area. The LSTM layer was responsible for analyzing the temporal dimension by considering all PlanetScope mosaics, while the convolutional layers analyzed the spatial dimension through a 3 × 3 pixel window; the spectral dimension (i.e., blue, green, red, and near-infrared) was analyzed by both layers.
Figure 5. Semantic segmentation approach used to classify pasture areas in the study area. This architecture, derived from U-Net, analyzed two images via an early-fusion technique by simultaneously considering the spatial, temporal, and spectral dimensions.
Figure 6. F1 score of random forest pasture area classifications produced with different time windows and the same set of texture and spectral-temporal metrics. Although the best classification result used the 12 PlanetScope monthly mosaics, classifications using just three months had a high F1 score.
Figure 7. Training loss curves of the LSTM (a) and U-Net (b) classification approaches. All curves, referring to training and validation data, showed a downward trend along the 100 epochs.
Figure 8. The monthly PlanetScope mosaic for November 2017 (a) and the pasture maps produced with random forest (b), LSTM (c), and U-Net (d). In region of interest 1 (ROI 1), U-Net was better at separating agricultural areas, while in region 2 (ROI 2), LSTM and random forest distinguished urban area pixels best. In region of interest 3 (ROI 3), U-Net better filled the pasture areas but mis-mapped some gallery forest pixels.
Figure 9. Accuracy analysis results with validation data (a), collected via visual interpretation, and test data (b), obtained in the field. The highest F1 scores were obtained by LSTM in the validation data and by U-Net in the test data. Relatively close F1 scores in both assessment sets may indicate a better generalization of U-Net.
18 pages, 6208 KiB  
Letter
The Influence of Vegetation Characteristics on Individual Tree Segmentation Methods with Airborne LiDAR Data
by Qiuli Yang, Yanjun Su, Shichao Jin, Maggi Kelly, Tianyu Hu, Qin Ma, Yumei Li, Shilin Song, Jing Zhang, Guangcai Xu, Jianxin Wei and Qinghua Guo
Remote Sens. 2019, 11(23), 2880; https://doi.org/10.3390/rs11232880 - 3 Dec 2019
Cited by 46 | Viewed by 6872
Abstract
This study investigated the effects of forest type, leaf area index (LAI), canopy cover (CC), tree density (TD), and the coefficient of variation of tree height (CVTH) on the accuracy of different individual tree segmentation methods (i.e., canopy height model, pit-free canopy height model (PFCHM), point cloud, and layer stacking seed point) with LiDAR data. A total of 120 sites in the Sierra Nevada Forest (California) and Shavers Creek Watershed (Pennsylvania) of the United States, covering various vegetation types and characteristics, were used to analyze the performance of the four selected individual tree segmentation algorithms. The results showed that the PFCHM performed best in all forest types, especially in conifer forests. The main forest characteristics influencing the segmentation methods were LAI and CC in conifer forests, LAI and TD in broadleaf forests, and CVTH in mixed forests. Most of the vegetation characteristics (i.e., LAI, CC, and TD) correlated negatively with all segmentation methods, while the effect of CVTH varied with forest type. These results can help guide the selection of an individual tree segmentation method given the influence of vegetation characteristics. Full article
(This article belongs to the Special Issue Trends in UAV Remote Sensing Applications)
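One of the compared families of methods builds a canopy height model (CHM) and treats local maxima as tree tops. The sketch below illustrates that general idea on height-normalized LiDAR points; the cell size, window size, and height threshold are arbitrary assumptions, and the snippet is not any of the four algorithms evaluated in the paper.

```python
import numpy as np
from scipy import ndimage

def chm_tree_tops(x, y, z, cell=0.5, window=5, min_height=2.0):
    """Toy canopy-height-model tree detection from height-normalized LiDAR points.

    x, y: point coordinates (m); z: height above ground (m).
    Returns (row, col) indices of local CHM maxima treated as candidate tree tops.
    """
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    chm = np.zeros((rows.max() + 1, cols.max() + 1))
    np.maximum.at(chm, (rows, cols), z)              # highest return per grid cell
    local_max = ndimage.maximum_filter(chm, size=window)
    tops = (chm == local_max) & (chm > min_height)   # peaks above a minimum tree height
    return np.argwhere(tops)

# Toy usage with random points, purely to show the call signature
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000)
z = rng.uniform(0, 30, 5000)
print(len(chm_tree_tops(x, y, z)), "candidate tree tops")
```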
Graphical abstract
Figure 1. Location of the study areas and plots in (d) conifer forest in California, (e) broadleaf forest, and (f) mixed forest in Pennsylvania. Examples of National Agriculture Imagery Program (NAIP) imagery at 1 m resolution of (a) conifer forest, (b) broadleaf forest, and (c) mixed forest in the study area.
Figure 2. (a) The 3D view of the raw point cloud in a plot, (b) the top view of the raw point cloud, and (c) the manually delineated individual trees overlaid on the pit-free canopy height model.
Figure 3. The general workflow of the proposed study. Note that CHM-based/CHM, pit-free CHM-based/PFCHM, point cloud-based/PCS, and layer stacking seed point-based/LSS represent the four individual tree segmentation algorithms used in this study. LAI and CVTH represent the leaf area index and the coefficient of variation of tree height. OA represents overall accuracy.
Figure 4. The individual tree segmentation results of (a) CHM, (b) PFCHM, (c) PCS, and (d) LSS in a coniferous plot.
Figure 5. Accuracy comparison of the four segmentation methods in (a) conifer, (b) broadleaf, and (c) mixed forests.
Figure 6. The relative weights of all vegetation characteristics on segmentation in (a) conifer, (b) broadleaf, and (c) mixed forests. CC and TD represent canopy cover and tree density.
Figure 7. The accuracy of the four individual tree segmentation methods varied with the four vegetation characteristic variables in the three forest types. The blue, red, yellow, and green bars represent CHM, PFCHM, PCS, and LSS, respectively, and the three levels of vegetation characteristics (shown in Table 3) increase from left to right in each color bar.
Figure 8. Tree density of two typical plots: (a) 400 trees/ha and (b) 680 trees/ha.
22 pages, 12078 KiB  
Article
The July/August 2019 Lava Flows at the Sciara del Fuoco, Stromboli–Analysis from Multi-Sensor Infrared Satellite Imagery
by Simon Plank, Francesco Marchese, Carolina Filizzola, Nicola Pergola, Marco Neri, Michael Nolde and Sandro Martinis
Remote Sens. 2019, 11(23), 2879; https://doi.org/10.3390/rs11232879 - 3 Dec 2019
Cited by 33 | Viewed by 4706
Abstract
On 3 July 2019, a rapid sequence of paroxysmal explosions occurred at the summit craters of Stromboli (Aeolian Islands, Italy), followed by a period of intense Strombolian and effusive activity in July that continued until the end of August 2019. We present a joint analysis of multi-sensor infrared satellite imagery to investigate this eruption episode. Data from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) were used in combination with those from the Multispectral Instrument (MSI), the Operational Land Imager (OLI), the Advanced Very High Resolution Radiometer (AVHRR), and the Visible Infrared Imaging Radiometer Suite (VIIRS). The analysis of infrared SEVIRI data allowed us to detect the eruption onset and to investigate short-term variations of thermal volcanic activity, providing information in agreement with that inferred from nighttime AVHRR observations. By using Sentinel-2 MSI and Landsat-8 OLI imagery, we better localized the active lava flows. The latter were quantitatively characterized using infrared VIIRS data, estimating an erupted lava volume of 6.33 × 10⁶ ± 3.17 × 10⁶ m³ and a mean output rate of 1.26 ± 0.63 m³/s for the July/August 2019 eruption period. The estimated mean output rate was higher than those of the 2002–2003 and 2014 Stromboli effusive eruptions, but lower than that of the 2007 eruption. These results confirmed that a multi-sensor approach can provide a relevant contribution to investigating, monitoring, and characterizing thermal volcanic activity in high-risk areas. Full article
(This article belongs to the Special Issue Satellite Remote Sensing of High-Temperature Thermal Anomalies)
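The time-averaged discharge rate (TADR) and cumulative-volume figures listed below are derived from the VIIRS radiant output using a coefficient c_rad (see the Figure 5 caption). The sketch below shows that kind of conversion under the common proportionality TADR = radiative power / c_rad with trapezoidal time integration; the power time series is invented for illustration and the exact formulation used by the authors may include additional terms.

```python
import numpy as np

# Hypothetical time series of VIIRS-derived volcanic radiative power (W) and
# acquisition times (seconds); the values are illustrative only
t = np.array([0.0, 6.0, 12.0, 24.0, 48.0]) * 3600.0          # seconds
power = np.array([2.0e8, 4.0e8, 3.0e8, 2.5e8, 1.5e8])        # W (= J/s)

# The two c_rad values below are those quoted in the Figure 5 caption for
# X_SiO2 = 49.155 wt%; TADR = power / c_rad is the assumed radiant-density relation
for c_rad in (2.47e8, 8.25e7):
    tadr = power / c_rad                                      # m^3/s
    volume = np.trapz(tadr, t)                                # cumulative erupted volume, m^3
    print(f"c_rad = {c_rad:.2e} J/m^3 -> mean TADR {tadr.mean():.2f} m^3/s, "
          f"volume {volume:.2e} m^3")
```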
Graphical abstract
Figure 1. Temporal trend of Spinning-Enhanced-Visible-and-InfraRed-Imager (SEVIRI) mid-infrared (MIR) channel data BT_MIR(x,y) (blue curve) recorded during the period 1–8 July 2019 over Stromboli. In black, the curve of µ_MIR(x,y); in light and dark grey, the curves of µ_MIR(x,y) ± 3σ_MIR(x,y). SEVIRI MIR channel data of 3 July at 08:45 UTC with indication of the pixel (BT_MIR(x,y) = 309.65 K) appearing anomalous before the start of the paroxysm.
Figure 2. Thermal anomalies detected by the RST_VOLC algorithm over Stromboli volcano processing nighttime Advanced Very High Resolution Radiometer (AVHRR) data during June/August 2019. Note that fluctuations in the hotspot pixel number were also due to cloud effects.
Figure 3. False color imagery of Stromboli acquired by (a–g, i–l) Sentinel-2 with band combination 12/11/8A and (h) by Landsat-8 at night time with band combination 7/6, overlaid on the Sentinel-2 acquisition from 7 June 2019. Thermal activity at the central craters can be seen in all acquisitions. Active lava flows moving down the Sciara del Fuoco (NW slope of Stromboli) are visible from 7 July to 1 August 2019. Contour lines derived from the ALOS DEM.
Figure 4. False color imagery of Stromboli acquired by (a, c–f) Sentinel-2 with band combination 12/11/8A and (b) by Landsat-8 with band combination 7/6/5. The first three images (a–c) show lava flows under clear sky conditions. Although the other three acquisitions (d–f) are partly cloudy, thermal activity due to active lava flows moving down the Sciara del Fuoco (NW slope of Stromboli) can be seen. Contour lines derived from the ALOS DEM.
Figure 5. VIIRS data-based temporal evolution of the estimated TADR (m³/s) from 4 July to 30 August 2019. All VIIRS hotspots detected within the aforementioned time period over the central craters and the Sciara del Fuoco were considered (scenario I). The dotted line shows the results for c_rad = 2.47 × 10⁸ J/m³, the dashed line the results for c_rad = 8.25 × 10⁷ J/m³; these are the corresponding values for X_SiO2 = 49.155 wt%. The solid line represents the mean of the TADR estimation.
Figure 6. Same as Figure 5, but only VIIRS hotspots acquired at a scan angle ≤ 31.59° off-nadir were considered (scenario II).
Figure 7. Same as Figure 6, but for the daytime acquisitions only those with clear sky conditions were considered; all nighttime acquisitions were taken into account (scenario III).
Figure 8. Same as Figure 7, but in addition, for the nighttime acquisitions only those with clear sky conditions were considered (scenario IV).
Figure 9. Same as Figure 7, but only the clear sky daytime acquisitions were considered (no nighttime acquisitions) (scenario V).
Figure 10. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 5 (scenario I).
Figure 11. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 6 (scenario II).
Figure 12. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 7 (scenario III).
Figure 13. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 8 (scenario IV).
Figure 14. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 9 (scenario V).
Figure 15. VIIRS data-based temporal evolution of the estimated TADR (m³/s) from 4 July to 30 August 2019, applying scenario IV with X_SiO2 = 49.12 wt%. The dotted line shows the results for c_rad = 2.49 × 10⁸ J/m³, the dashed line the results for c_rad = 8.31 × 10⁷ J/m³. The solid line represents the mean of the TADR estimation.
Figure 16. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 15 (scenario IV) with X_SiO2 = 49.12 wt%.
Figure 17. VIIRS data-based temporal evolution of the estimated TADR (m³/s) from 4 July to 30 August 2019, applying scenario IV with X_SiO2 = 49.19 wt%. The dotted line shows the results for c_rad = 2.46 × 10⁸ J/m³, the dashed line the results for c_rad = 8.18 × 10⁷ J/m³. The solid line represents the mean of the TADR estimation.
Figure 18. VIIRS data-based temporal evolution of the cumulative volume (m³) of erupted lava based on the TADR estimation shown in Figure 17 (scenario IV) with X_SiO2 = 49.19 wt%.
25 pages, 8869 KiB  
Article
An Object-Based Markov Random Field Model with Anisotropic Penalty for Semantic Segmentation of High Spatial Resolution Remote Sensing Imagery
by Chen Zheng, Xinxin Pan, Xiaohui Chen, Xiaohui Yang, Xin Xin and Limin Su
Remote Sens. 2019, 11(23), 2878; https://doi.org/10.3390/rs11232878 - 3 Dec 2019
Cited by 5 | Viewed by 4343
Abstract
The Markov random field (MRF) model has attracted a lot of attention in the field of remote sensing semantic segmentation. However, most MRF-based methods fail to capture the various interactions between different land classes because they use an isotropic potential function. To solve this problem, this paper proposes a new generalized probability inference with an anisotropic penalty for the object-based MRF model (OMRF-AP) that can distinguish the differences in the interactions between any two land classes. Specifically, an anisotropic penalty matrix was first developed to describe the relationships between different classes. Then, an expected value of the penalty information (EVPI) was developed in this inference criterion to integrate the anisotropic class-interaction information and the posterior distribution information of the OMRF model. Finally, by iteratively updating the EVPI terms of different classes, segmentation results could be achieved when the iteration converged. Experiments on texture images and different remote sensing images demonstrated that our method shows a better performance than other state-of-the-art MRF-based methods, and a post-processing scheme of the OMRF-AP model is also discussed in the experiments. Full article
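To make the notion of an anisotropic penalty concrete, the sketch below evaluates a pairwise class-interaction energy over a toy label map: with a matrix whose off-diagonal entries are all equal this reduces to the usual isotropic Potts penalty, while an anisotropic penalty matrix (APM) gives each class pair its own cost. This is only an illustration of the idea, not the OMRF-AP inference or the EVPI update.

```python
import numpy as np

def pairwise_energy(labels, penalty):
    """Sum of class-interaction penalties over horizontally/vertically adjacent pixels.

    labels: 2-D integer label map; penalty: K x K matrix where penalty[i, j] is the
    cost of class i sitting next to class j (illustrative only).
    """
    e = penalty[labels[:, :-1], labels[:, 1:]].sum()   # horizontal neighbours
    e += penalty[labels[:-1, :], labels[1:, :]].sum()  # vertical neighbours
    return float(e)

# Toy example with 3 classes: make class 2 bordering class 0 twice as costly as other pairs
beta = 1.0
penalty = beta * (1.0 - np.eye(3))          # isotropic Potts-style baseline
penalty[2, 0] = penalty[0, 2] = 2.0 * beta  # anisotropic adjustment for one class pair
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
print(pairwise_energy(labels, penalty))
```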
Graphical abstract
Figure 1. (a) An illustration that one sub-object may belong to different objects, (b) the spatial relationship of a tree in an urban area, (c) the spatial relationship of a tree in a forest.
Figure 2. Flowchart of the OMRF (object-based Markov random field) model for image segmentation.
Figure 3. Flowchart of the proposed OMRF-AP (anisotropic penalty for the object-based MRF) model for image segmentation.
Figure 4. Example of setting the anisotropic penalty matrix (APM) value (A_{3,2}). (a) Original SPOT 5 image, (b) visual interpretation result, (c) over-segmented regions, (d) result with the default APM value, (e) result with A_{3,2} = 1.01, (f) result with A_{3,2} = 1.015, (g) result with A_{3,2} = 1.02, (h) result with A_{3,2} = 1.03, (i) result with A_{3,2} = 1.04.
Figure 5. Kappa and OA (overall accuracy) values for different APM values. (a) Kappa and OA values of the OMRF-AP with different A_{3,2} from 1 to 1.1 with step 0.001. (b) Kappa and OA values of the OMRF-AP with different A_{3,1} from 1 to 1.1 with step 0.001, under the condition A_{3,2} = 1.03.
Figure 6. Example of setting the APM value (A_{3,1}), under the condition A_{3,2} = 1.03. (a) Result with A_{3,1} = 1.01, (b) result with A_{3,1} = 1.015, (c) result with A_{3,1} = 1.02, (d) result with A_{3,1} = 1.03, (e) result with A_{3,1} = 1.04.
Figure 7. (a) Kappa and OA values of the OMRF-AP with different β values from 0 to 50, (b) Kappa and OA values of the OMRF-AP with different MRA (minimum region area) values from 1 to 400.
Figure 8. Segmentation results of six MRF-based methods for the first texture image. (a) Texture image (available as Supplementary Material), (b) ground truth with class numbers (each class number denotes one type of texture), (c) result of ICM, (d) result of MRMRF, (e) result of IRGS, (f) result of OMRF, (g) result of NED-MRF, (h) result of OMRF-AP. MRF: Markov random field, ICM: iterated conditional mode, MRMRF: multi-resolution MRF, IRGS: iterative region growing using semantics, NED-MRF: normalized Euclidean distance MRF model.
Figure 9. Segmentation results of six MRF-based methods for the second texture image. (a) Texture image (available as Supplementary Material), (b) ground truth with class numbers (each class number denotes one type of texture), (c) result of ICM, (d) result of MRMRF, (e) result of IRGS, (f) result of OMRF, (g) result of NED-MRF, (h) result of OMRF-AP.
Figure 10. Segmentation results of the six MRF-based methods for the SPOT 5 remote sensing image. (a) SPOT 5 image (available as Supplementary Material), (b) visual interpretation result with class numbers, (c) interpretation of each class label, (d) result of ICM, (e) result of MRMRF, (f) result of IRGS, (g) result of OMRF, (h) result of NED-MRF, (i) result of OMRF-AP.
Figure 11. Segmentation results of the six MRF-based methods for the VHSR (very high spatial resolution) aerial remote sensing image. (a) Observed aerial image (available as Supplementary Material), (b) visual interpretation result with class numbers, (c) interpretation of each class label, (d) result of ICM, (e) result of MRMRF, (f) result of IRGS, (g) result of OMRF, (h) result of NED-MRF, (i) result of OMRF-AP.
Figure 12. Segmentation results of the six MRF-based methods for the second VHSR aerial remote sensing image. (a) Observed aerial image (available as Supplementary Material), (b) visual interpretation result with class numbers, (c) interpretation of each class label, (d) result of ICM, (e) result of MRMRF, (f) result of IRGS, (g) result of OMRF, (h) result of NED-MRF, (i) result of OMRF-AP.
Figure 13. Segmentation results of the six MRF-based methods for the Gaofen-2 remote sensing image from GID (Gaofen image dataset). (a) Gaofen-2 image (available as Supplementary Material), (b) ground truth, (c) interpretation of each class label, (d) result of ICM, (e) result of MRMRF, (f) result of IRGS, (g) result of OMRF, (h) result of NED-MRF, (i) result of OMRF-AP.
Figure 14. Segmentation results of OMRF-AP and OMRF-APP (OMRF-AP with post-processing) for the previous experiments. (a) and (d) Results of OMRF-AP and OMRF-APP for the first texture image (for the explanation of each texture, refer to Figure 8). (b) and (e) Results for the second texture image (refer to Figure 9). (c) and (f) Results for the SPOT 5 image (refer to Figure 10). (g) and (j) Results for the first aerial image (refer to Figure 11). (h) and (k) Results for the second aerial image (refer to Figure 12). (i) and (l) Results for the Gaofen-2 image (refer to Figure 13).
Figure 15. Segmentation results of OMRF-AP for a QuickBird remote sensing image with the APM A_3(X_s, D_s). (a) QuickBird image (available as Supplementary Material), (b) result of OMRF-AP with the APM A_3(X_s, D_s).
Figure 16. Segmentation results of OMRF-AP for the SPOT 5 remote sensing image with different APMs. (a) Result of OMRF-AP with the original APM A_3(X_s, D_s), (b) result of OMRF-AP with a new APM A'_3(X_s, D_s).
19 pages, 4352 KiB  
Article
Use of A Neural Network-Based Ocean Body Radiative Transfer Model for Aerosol Retrievals from Multi-Angle Polarimetric Measurements
by Cheng Fan, Guangliang Fu, Antonio Di Noia, Martijn Smit, Jeroen H.H. Rietjens, Richard A. Ferrare, Sharon Burton, Zhengqiang Li and Otto P. Hasekamp
Remote Sens. 2019, 11(23), 2877; https://doi.org/10.3390/rs11232877 - 3 Dec 2019
Cited by 24 | Viewed by 4805
Abstract
For aerosol retrieval from multi-angle polarimetric (MAP) measurements over the ocean, it is important to accurately account for the contribution of the ocean body to the top-of-atmosphere signal, especially for wavelengths <500 nm. Performing online radiative transfer calculations in the coupled atmosphere-ocean system is too time consuming for operational retrieval algorithms. Therefore, look-up tables of the ocean body reflection matrix are mostly used to represent the lower boundary in an atmospheric radiative transfer model. For hyperspectral measurements such as those from the Spectro-Polarimeter for Planetary Exploration (SPEXone) on the NASA Plankton, Aerosol, Cloud and ocean Ecosystem (PACE) mission, even the use of look-up tables becomes unfeasible because they would be too large. In this paper, we propose a new method for aerosol retrieval over the ocean from MAP measurements that uses a neural network (NN) to model the ocean body reflection matrix. We apply the NN approach to synthetic SPEXone measurements and also to real data collected by SPEX airborne during the Aerosol Characterization from Polarimeter and Lidar (ACEPOL) campaign. We conclude that the NN approach is well suited for aerosol retrievals over the ocean, introducing no significant error in the retrieved aerosol properties. Full article
(This article belongs to the Special Issue Advances of Remote Sensing Inversion)
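Figure 1 of this paper reports the reconstruction mean relative error as a function of the number of principal components used to compress the ocean-body radiance spectra. The sketch below reproduces that type of diagnostic with scikit-learn on placeholder spectra; the data, component counts, and error metric are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder "spectra": in practice these would be simulated ocean-body radiance
# spectra from coupled atmosphere-ocean radiative transfer runs (rows = samples)
rng = np.random.default_rng(0)
spectra = np.abs(rng.normal(size=(500, 100))).cumsum(axis=1)

for n in (2, 5, 10, 20):
    pca = PCA(n_components=n).fit(spectra)
    recon = pca.inverse_transform(pca.transform(spectra))
    mre = np.mean(np.abs(recon - spectra) / np.abs(spectra))   # mean relative error
    print(f"{n:2d} components: mean relative reconstruction error {mre:.2e}")
```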
Graphical abstract
Figure 1. Reconstruction mean relative error versus the number of principal components for the radiance. Since the relative errors at different wavelengths are quite different, we chose the average of the relative errors instead of the root-mean-square error (RMSE). Different lines correspond to different chlorophyll-a concentrations.
Figure 2. Scatter plots of the relative difference for the element R_11 of R_ul at the SPEX-airborne viewing angles.
Figure 3. Retrievals of the aerosol optical thickness (AOT) at 550 nm for synthetic scenes generated using the analytical ocean forward model described in Section 2. Left: retrievals performed using the neural network (NN)-based ocean model. Right: retrievals performed using the analytical forward model.
Figure 4. Synthetic retrievals: root-mean-square error (RMSE) as a function of the τ_550 threshold for the single scattering albedo (SSA) at 550 nm (a) and the aerosol layer height (ALH) (b); RMSE as a function of the τ_550^f threshold for the effective radius (c) and the real part of the refractive index (d) of the fine mode at 550 nm; RMSE as a function of the τ_550^c threshold for the effective radius (e) and the real part of the refractive index (f) of the coarse mode at 550 nm. Shaded areas indicate the SPEXone requirements [24].
Figure 5. Synthetic retrievals: number of retrievals with χ² < 1.5 as a function of τ_550 (left) and of τ_550^f and τ_550^c (right) for the NN retrieval.
Figure 6. (Upper panel) Comparison of the SPEX-retrieved aerosol optical depth (AOD) vs. the AOD obtained from HSRL-2 measurements integrated over the atmospheric column at 355 nm and 532 nm on 23 October 2017. The line is the equality line; the red points represent 355 nm and the blue points 532 nm. The data within a circle with a diameter of 5 km from the center of the HSRL-2 pixel are averaged and then compared. (Lower panel) The difference between the AOD retrieved by SPEX airborne and HSRL-2 as a function of the average AOD of the two instruments.
Figure 7. Same as Figure 6 but using the look-up table with exactly computed ocean body reflection matrices.
25 pages, 11219 KiB  
Article
A Multi-Channel Algorithm for Mapping Volcanic Thermal Anomalies by Means of Sentinel-2 MSI and Landsat-8 OLI Data
by Francesco Marchese, Nicola Genzano, Marco Neri, Alfredo Falconieri, Giuseppe Mazzeo and Nicola Pergola
Remote Sens. 2019, 11(23), 2876; https://doi.org/10.3390/rs11232876 - 3 Dec 2019
Cited by 54 | Viewed by 8919
Abstract
The Multispectral Instrument (MSI) and the Operational Land Imager (OLI), onboard the Sentinel-2A/2B and Landsat 8 satellites respectively, represent two important instruments for investigating thermal volcanic activity from space, thanks to their features, especially in terms of spatial/spectral resolution. In this study, we used data from those sensors to test an original multichannel algorithm, which aims at mapping volcanic thermal anomalies at a global scale. The algorithm, named Normalized Hotspot Indices (NHI), combines two normalized indices, analyzing near infrared (NIR) and short wave infrared (SWIR) radiances, to identify hotspot pixels in daylight conditions. Results, achieved by studying a number of active volcanoes located in different geographic areas and characterized by different eruptive behavior, demonstrated the capacity of NHI to map both subtle and more intense volcanic thermal anomalies despite some limitations (e.g., missed detections because of clouds/volcanic plumes). In addition, the study shows that the performance of NHI might be further increased using some additional spectral/spatial tests, in view of a possible usage of this algorithm within a known multi-temporal scheme of satellite data analysis. The low processing times and the straightforward exportability to data from other sensors make NHI, which is sensitive even to other high temperature sources, suited for mapping hot volcanic targets, integrating the information provided by current and well-established satellite-based volcano monitoring systems. Full article
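A minimal sketch of the hotspot test implied by the NHI maps below (pixels flagged where either index exceeds zero), assuming the usual normalized-difference form of the two indices computed from TOA radiances in the NIR, SWIR1, and SWIR2 channels (Sentinel-2 bands 8A, 11, 12; Landsat 8 bands 5, 6, 7); the paper's operational scheme may apply additional spectral/spatial tests.

```python
import numpy as np

def nhi_hotspots(nir, swir1, swir2):
    """Flag candidate hot pixels from TOA radiances in the NIR (~0.86 um),
    SWIR1 (~1.6 um), and SWIR2 (~2.2 um) channels.

    Both indices are written here in normalized-difference form; pixels are
    flagged where either index is positive, as in the NHI maps below.
    """
    nhi_swir = (swir2 - swir1) / (swir2 + swir1)
    nhi_swnir = (swir1 - nir) / (swir1 + nir)
    return (nhi_swir > 0) | (nhi_swnir > 0)

# Toy radiances (W m^-2 sr^-1 um^-1): a hot pixel emits strongly in the SWIR
nir = np.array([20.0, 18.0])
swir1 = np.array([5.0, 25.0])
swir2 = np.array([2.0, 40.0])
print(nhi_hotspots(nir, swir1, swir2))   # -> [False  True]
```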
Graphical abstract
Figure 1. Geographic location of the active volcanoes investigated in this work.
Figure 2. (a) The Red, Green, Blue (RGB) product from Sentinel-2 MSI data of 28 December 2018 at 09:44 UTC, covering the Mt. Etna area. The product was generated after converting the top-of-atmosphere (TOA) reflectance, measured in band 12 (Red), band 11 (Green), and band 8A (Blue), to radiance by means of the Sentinel Application Platform (SNAP) tool from the European Space Agency (ESA); (b) radiance changes recorded along the A–B (left panels) and C–D (right panels) transect regions intersecting the crater area and the Valle del Bove, respectively.
Figure 3. Normalized Hotspot Indices (NHI) imagery from Sentinel-2A MSI data of 28 December 2018 (top) and changes of the two NHI indices retrieved along the same A–B and C–D transect regions as in Figure 2a (middle and bottom panels); (a) NHI_SWIR; (b) NHI_SWNIR.
Figure 4. NHI maps (in Lat/Long WGS84 projection) from Sentinel-2 MSI scenes of July 2018–February 2019 covering the Mt. Etna area (band 12 imagery in the background). In yellow, pixels with values of NHI_SWIR(x,y) > 0; in red, those with values of NHI_SWNIR(x,y) > 0.
Figure 5. NHI maps from Sentinel-2 MSI data of March and June 2019 covering Stromboli volcano. Yellow/red pixels indicate hotspot pixels flagged by NHI.
Figure 6. NHI maps from Sentinel-2 MSI data acquired in March 2017 covering the area of Erta Ale volcano. Yellow/red pixels indicate hotspot pixels flagged by NHI.
Figure 7. NHI maps from two Sentinel-2 MSI scenes of December 2017 covering the Erta Ale volcanic area; (a) 5 December 2017; (b) 15 December 2017.
Figure 8. NHI maps from Sentinel-2 MSI data covering part of the Hawaiian Islands, including Kilauea volcano; (a) 4 March 2018; (b) 29 March 2018.
Figure 9. NHI maps from Sentinel-2 MSI data of October 2018 covering the area of Sakurajima volcano. Yellow/red pixels indicate detected hotspot pixels.
Figure 10. NHI maps from Sentinel-2 MSI data of March 2019 covering Popocatepetl. Yellow/red pixels within the region marked in blue indicate intra-crater thermal activity. Hotspot pixels within the areas marked in green were possibly associated with fires occurring on the volcano's slopes (green box) and far from the volcanic edifice (green circles).
Figure 11. (a) Temporal trend of hotspot pixels identified by NHI at Shiveluch volcano from the analysis of Landsat 8 OLI data of March 2016–April 2019; (b) NHI map (band 7 imagery in the background) and RGB product (bottom-right panel) from the satellite scene of 2 March 2016; (c) NHI map and RGB product (top-right panel) from the satellite scene of 19 April 2019. The RGB product was generated using OLI bands B7 (Red), B6 (Green), and B5 (Blue).
Figure 12. NHI map of Piton de la Fournaise generated from Landsat 8 OLI data of (a) 12 May 2018 and (b) 24 February 2019; (c) NHI map from Sentinel-2 MSI data (band 12 imagery in the background) of 11 June 2019. At the top of each panel, the RGB product generated as in Figure 11.
Figure 13. Hotspot pixels flagged at Mt. Etna during July 2018–February 2019 by (a) NHI from Sentinel-2 MSI data and (b) RST_VOLC from National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational Satellite (Metop) Advanced Very High Resolution Radiometer (AVHRR) records, after filtering data for satellite zenith angles lower than 40°.
Figure 14. NHI map from Sentinel-2 MSI data of 11 June 2019, generated after filtering pixels with values of L_2.2 < 3.0 W m⁻² sr⁻¹ µm⁻¹, showing the lava flow at Piton de la Fournaise (yellow/red pixels). The elaboration was performed in Google Earth Engine using the preliminary NHI tool.
Figure 15. NHI map from Sentinel-2 MSI data of 12 July 2007 at 09:50 UTC showing thermal anomalies (yellow/red pixels) associated with several documented anthropogenic fires affecting the flanks and slopes of Vesuvius volcano (Italy).
20 pages, 3022 KiB  
Article
Long-Term Change of the Secchi Disk Depth in Lake Maninjau, Indonesia Shown by Landsat TM and ETM+ Data
by Fajar Setiawan, Bunkei Matsushita, Rossi Hamzah, Dalin Jiang and Takehiko Fukushima
Remote Sens. 2019, 11(23), 2875; https://doi.org/10.3390/rs11232875 - 3 Dec 2019
Cited by 25 | Viewed by 4564
Abstract
Most of the lakes in Indonesia are facing environmental problems such as eutrophication, sedimentation, and depletion of dissolved oxygen. The water quality data for supporting lake management in Indonesia are very limited due to financial constraints. To address this issue, satellite data are [...] Read more.
Most of the lakes in Indonesia are facing environmental problems such as eutrophication, sedimentation, and depletion of dissolved oxygen. The water quality data for supporting lake management in Indonesia are very limited due to financial constraints. To address this issue, satellite data are often used to retrieve water quality data. Here, we developed an empirical model for estimating the Secchi disk depth (SD) from Landsat TM/ETM+ data by using data collected from nine Indonesian lakes/reservoirs (SD values 0.5–18.6 m). We made two efforts to improve the robustness of the developed model. First, we carried out a series of image preprocessing steps (i.e., removing contaminated water pixels, filtering images, and mitigating atmospheric effects) before the Landsat data were used. Second, we selected two band ratios (blue/green and red/green) as SD predictors; these differ from previous studies' recommendations. The validation results demonstrated that the developed model can retrieve SD values with an R2 of 0.60 and a root mean square error of 1.01 m in Lake Maninjau, Indonesia (SD values ranged from 0.5 to 5.8 m, n = 74). We then applied the developed model to 230 scenes of preprocessed Landsat TM/ETM+ images to generate a long-term SD database for Lake Maninjau during 1987–2018. The visual comparison of the in situ-measured and satellite-estimated SD values, as well as several events (e.g., algal bloom, water gate opening, and fish culture), showed that the Landsat-based SD estimations captured the trends in water transparency in Lake Maninjau well, and these estimations will thus provide useful data for lake managers and policy-makers. Full article
(This article belongs to the Section Environmental Remote Sensing)
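The abstract above describes an empirical band-ratio model (blue/green and red/green) for Secchi disk depth. Below is a minimal sketch of that kind of regression; the reflectance arrays, in situ SD values, and the choice to regress log(SD) are illustrative assumptions, not the authors' calibration data or published coefficients.
```python
# Sketch of an empirical Secchi disk depth (SD) model driven by two band
# ratios (blue/green and red/green). The calibration arrays are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def band_ratio_features(blue, green, red):
    """Stack the two band-ratio predictors used as SD proxies."""
    blue, green, red = (np.asarray(b, dtype=float) for b in (blue, green, red))
    return np.column_stack([blue / green, red / green])

# Placeholder calibration set: reflectances and in situ SD measurements (m).
blue  = np.array([0.020, 0.035, 0.028, 0.015, 0.040])
green = np.array([0.030, 0.035, 0.032, 0.022, 0.038])
red   = np.array([0.025, 0.018, 0.020, 0.010, 0.030])
sd_m  = np.array([1.2, 4.5, 3.1, 6.0, 0.8])

# Regressing log(SD) keeps the retrieved depths positive (a modeling choice here).
model = LinearRegression().fit(band_ratio_features(blue, green, red), np.log(sd_m))
sd_pred = np.exp(model.predict(band_ratio_features(blue, green, red)))
print(np.round(sd_pred, 2))
```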
Show Figures

Graphical abstract
Figure 1: (a) Landsat image of Lake Maninjau on Sumatera Island, Indonesia (acquired on 6 July 2011; R:G:B = 4:5:1; green stars = the measurement sites). (b) Locations of the nine Indonesian lakes investigated in 2011–2014 and used for calibrating the Secchi disk depth (SD) estimation model (green circles). (c) Monthly averaged precipitation of Lake Maninjau [32]. (d) Number of fish cages and fish production in Lake Maninjau from 1992 to 2016 [33,34,35,36,37].
Figure 2: Comparison of the in situ SD measurements and the corresponding estimated SD values using the 17 selected models in the model calibration procedures.
Figure 3: Comparisons of the in situ-measured SD values (In Situ SD Dataset III) and the corresponding estimated SD values from the preprocessed Landsat images (Landsat Dataset III) using the 17 selected SD estimation models (n = 74).
Figure 4: Comparison of the 17 selected SD estimation models using a Taylor diagram in terms of their correlation coefficients, root-mean-square differences, and standard deviations.
Figure 5: Long-term changes in the water transparency of Lake Maninjau from 1987 to 2018. Red points: the averaged SD values estimated from the preprocessed Landsat Dataset II using the BF model. Blue points: the averaged in situ SD values for each field survey (In Situ SD Dataset II). Red line: obtained from the red points via a trend analysis in R. Blue line: obtained from the blue points via a trend analysis in R. Gray areas: 95% confidence intervals of the trend analysis.
Figure 6: Relationship between the number of fish cages and the Landsat-based SD values during the period 2004–2012 in Lake Maninjau (n = 9).
12 pages, 1477 KiB  
Letter
Quantification of Soil Organic Carbon in Biochar-Amended Soil Using Ground Penetrating Radar (GPR)
by Xiaoqing Shen, Tyler Foster, Heather Baldi, Iliyana Dobreva, Byron Burson, Dirk Hays, Rodante Tabien and Russell Jessup
Remote Sens. 2019, 11(23), 2874; https://doi.org/10.3390/rs11232874 - 3 Dec 2019
Cited by 12 | Viewed by 4972
Abstract
The application of biochar amendments to soil has been proposed as a strategy for mitigating global carbon (C) emissions and soil organic carbon (SOC) loss. Biochar can provide additional agronomic benefits to cropping systems, including improved crop yield, soil water holding capacity, seed [...] Read more.
The application of biochar amendments to soil has been proposed as a strategy for mitigating global carbon (C) emissions and soil organic carbon (SOC) loss. Biochar can provide additional agronomic benefits to cropping systems, including improved crop yield, soil water holding capacity, seed germination, cation exchange capacity (CEC), and soil pH. To maximize the beneficial effects of biochar amendments towards the inventory, increase, and management of SOC pools, nondestructive analytical methods such as ground penetrating radar (GPR) are needed to identify and quantify belowground C. The use of GPR has been well characterized across geological, archaeological, engineering, and military applications. While GPR has been predominantly utilized to detect relatively large objects such as rocks, tree roots, land mines, and peat soils, the objective of this study was to quantify comparatively smaller, particulate sources of SOC. This research used three materials as C sources: biochar, graphite, and activated C. The C sources were mixed with sand—12 treatments in total—and scanned under three moisture levels: 0%, 10%, and 20% to simulate different soil conditions. GPR attribute analyses and Naïve Bayes predictive models were utilized in lieu of visualization methods because of the minute size of the C particles. Significant correlations between GPR attributes and both C content and moisture levels were detected. The accuracy of two predictive models using a Naïve Bayes classifier for C content was trivial but the accuracy for C structure was 56%. The analyses confirmed the ability of GPR to identify differences in both C content and C structure. Beneficial future applications could focus on applying GPR across more diverse soil conditions. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
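As a rough illustration of the classification step described above (GPR trace attributes feeding a Naïve Bayes model), the sketch below uses synthetic attribute vectors; the attribute names, values, and class structure are assumptions for demonstration only, not the letter's radargram-derived features.
```python
# Illustrative Naive Bayes step: synthetic GPR attributes per scanned sample,
# classified into carbon-structure classes.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
y = rng.integers(0, 3, size=n)            # 0 = biochar, 1 = graphite, 2 = activated C
# Three hypothetical attributes: mean amplitude, envelope energy, frequency shift.
X = rng.normal(size=(n, 3)) + 0.8 * y[:, None]

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 2))
```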
Show Figures

Graphical abstract
Figure 1: Illustration of the experimental sand trough. All treatments (samples) were buried at a depth of 5.08 cm, spaced 25.4 cm apart. The dimensions of each sample are 19.05 cm × 17.78 cm × 2.54 cm.
Figure 2: The seven channels created by the transmitters and receivers used to collect ground penetrating radar (GPR) data. The orange triangles represent transmitters and the dark blue rectangles represent receivers. The antenna contains seven channels, each consisting of one transmitter and one receiver.
Figure 3: The scanning cart prototype equipped with ground penetrating radar (GPR).
20 pages, 5016 KiB  
Article
Monitoring Within-Field Variability of Corn Yield using Sentinel-2 and Machine Learning Techniques
by Ahmed Kayad, Marco Sozzi, Simone Gatto, Francesco Marinello and Francesco Pirotti
Remote Sens. 2019, 11(23), 2873; https://doi.org/10.3390/rs11232873 - 3 Dec 2019
Cited by 107 | Viewed by 12667
Abstract
Monitoring and prediction of within-field crop variability can support farmers to make the right decisions in different situations. The current advances in remote sensing and the availability of high resolution, high frequency, and free Sentinel-2 images improve the implementation of Precision Agriculture (PA) [...] Read more.
Monitoring and prediction of within-field crop variability can support farmers to make the right decisions in different situations. The current advances in remote sensing and the availability of high resolution, high frequency, and free Sentinel-2 images improve the implementation of Precision Agriculture (PA) for a wider range of farmers. This study investigated the possibility of using vegetation indices (VIs) derived from Sentinel-2 images and machine learning techniques to assess corn (Zea mays) grain yield spatial variability within the field scale. A 22-ha study field in North Italy was monitored between 2016 and 2018; corn yield was measured and recorded by a grain yield monitor mounted on the harvester machine recording more than 20,000 georeferenced yield observation points from the study field for each season. VIs from a total of 34 Sentinel-2 images at different crop ages were analyzed for correlation with the measured yield observations. Multiple regression and two different machine learning approaches were also tested to model corn grain yield. The three main results were the following: (i) the Green Normalized Difference Vegetation Index (GNDVI) provided the highest R2 value of 0.48 for monitoring within-field variability of corn grain yield; (ii) the most suitable period for corn yield monitoring was a crop age between 105 and 135 days from the planting date (R4–R6); (iii) Random Forests was the most accurate machine learning approach for predicting within-field variability of corn yield, with an R2 value of almost 0.6 over an independent validation set of half of the total observations. Based on the results, within-field variability of corn yield for previous seasons could be investigated from archived Sentinel-2 data with GNDVI at crop stage (R4–R6). Full article
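A minimal sketch of the GNDVI-plus-Random-Forests workflow summarized above; the Sentinel-2 band values and the synthetic yield relation are placeholders, not the study's yield-monitor data or fitted model.
```python
# GNDVI = (NIR - Green) / (NIR + Green), then a Random Forests regressor
# trained against (synthetic) yield observations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def gndvi(nir, green):
    nir, green = np.asarray(nir, float), np.asarray(green, float)
    return (nir - green) / (nir + green)

rng = np.random.default_rng(1)
nir = rng.uniform(0.2, 0.5, 500)          # placeholder for Sentinel-2 NIR band
green = rng.uniform(0.03, 0.10, 500)      # placeholder for Sentinel-2 green band
features = gndvi(nir, green).reshape(-1, 1)
yield_kg = 8000 + 12000 * features[:, 0] + rng.normal(0, 500, 500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(features, yield_kg, test_size=0.5, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R2 on held-out half:", round(rf.score(X_te, y_te), 2))
```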
Show Figures

Graphical abstract
Figure 1: Study field in Ferrara, North Italy.
Figure 2: Corn grain yield data for the 2016, 2017, and 2018 growing seasons.
Figure 3: Coefficient of determination (R2) between vegetation indices and actual yield at different crop ages for the three seasons. The three columns represent the different NIR bands used.
Figure 4: Measured yield maps (top), Green Normalized Difference Vegetation Index (GNDVI) maps for the date with the highest coefficient of determination in each year (middle), and the corresponding scatter plots, equations, and R2 values, where p values < 0.001 (bottom; (a) 2016, (b) 2017, and (c) 2018 season).
Figure 5: Improvement of fit versus the number of trees used in the Random Forests method.
Figure 6: Accuracy metrics (average and 1× standard deviation) of several machine learning methods: multiple regression (MR), random forests (RF), and support vector machines (SVM). Error metrics are in kg.
Figure 7: Mean absolute error (MAE) values from the Random Forests model trained with Sentinel-2 data on a given date (rows) and applied for prediction using Sentinel-2 images on all dates (columns). The table color scale runs from green for low values to red for high values. Error metrics are in kg.
26 pages, 4878 KiB  
Article
Deriving the Reservoir Conditions for Better Water Resource Management Using Satellite-Based Earth Observations in the Lower Mekong River Basin
by Syed A. Ali and Venkataramana Sridhar
Remote Sens. 2019, 11(23), 2872; https://doi.org/10.3390/rs11232872 - 3 Dec 2019
Cited by 17 | Viewed by 4725
Abstract
The Mekong River basin supported a large population and ecosystem with abundant water and nutrient supply. However, the impoundments in the river can substantially alter the flow downstream and its timing. Using limited observations, this study demonstrated an approach to derive dam characteristics, [...] Read more.
The Mekong River basin supports a large population and ecosystem with abundant water and nutrient supply. However, the impoundments in the river can substantially alter the flow downstream and its timing. Using limited observations, this study demonstrated an approach to derive dam characteristics, including storage and flow rate, from remote-sensing-based data. Global Reservoir and Lake Monitor (GRLM), River-Lake Hydrology (RLH), and ICESat-GLAS, which provide altimetry from the Jason series, together with inundation areas from Landsat 8, were used to estimate the reservoir surface area and change in storage over time. The inflow simulated by the variable infiltration capacity (VIC) model from 2008 to 2016 and the reservoir storage change were used in the mass balance equation to calculate outflows for three dams in the basin. Estimated reservoir total storage closely resembled the observed data, with a Nash-Sutcliffe efficiency and coefficient of determination above 0.90 and 0.95, respectively. An average decrease of 55% in outflows was estimated during the wet season and an increase of up to 94% in the dry season for the Lam Pao. The estimated decrease in outflows during the wet season was 70% and 60% for Sirindhorn and Ubol Ratana, respectively, along with a 36% increase in the dry season for Sirindhorn. Basin-wide demand for evapotranspiration, about 935 mm, implicitly matched the annual water diversion of 1000 to 2300 million m3. From the storage–discharge rating curves, minimum storage was also evident in the monsoon season (June–July), and storage reached its highest in November. This study demonstrated the utility of remote sensing products to assess the impacts of dams on flows in the Mekong River basin. Full article
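The outflow estimate described above follows a simple reservoir mass balance, outflow = inflow minus the rate of storage change. The sketch below shows that balance with illustrative monthly numbers; it ignores evaporation and withdrawals, which the study discusses separately as volume not accounted for in the outflow.
```python
# Reservoir mass balance per time step: O_t = I_t - dS_t/dt.
import numpy as np

seconds_per_month = 30 * 24 * 3600.0
inflow_m3s  = np.array([120.0, 300.0, 850.0, 600.0, 200.0])      # simulated inflow (m^3/s)
storage_mcm = np.array([900.0, 1100.0, 1700.0, 1850.0, 1500.0])  # storage (million m^3)

# Storage change between consecutive months, converted to an equivalent flow rate.
dS_m3s = np.diff(storage_mcm) * 1e6 / seconds_per_month
outflow_m3s = inflow_m3s[1:] - dS_m3s
print(np.round(outflow_m3s, 1))
```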
Show Figures

Graphical abstract
Figure 1: (a) Location map of the Mekong River basin along with the river's mainstream, showing the gauge stations used for streamflow validation (black squares) and the dams considered in this study (red circles). Natural color images of (b) Lam Pao, (c) Sirindhorn, and (d) Ubol Ratana downloaded from the official US Geological Survey (USGS) website (https://earthexplorer.usgs.gov/).
Figure 2: Graphical representation of the operations adopted in the methodology: (a) image processing of the Landsat 8 and Sentinel-2 data for reservoir surface area extraction using the normalized difference water index (NDWI); (b) inventory of the satellites providing the altimetry used for assembling the water level fluctuations of the reservoirs; (c) regression fits using the selected water level and surface area data to express the surface area, live storage, and total storage in terms of water level; (d) validation of the simulated total storage of the reservoirs through comparison with the observed data; (e) variable infiltration capacity (VIC) model used for the estimation of reservoir inflow; and (f) time series of the inflow and outflow from the reservoirs.
Figure 3: Time series of the surface area and water level variation, relationship between surface area and water level, comparison of simulated and observed total storage, and scatter plot of observed versus simulated total storage for the Lam Pao (a–f), Sirindhorn (g–l), and Ubol Ratana (m–o) reservoirs. The water level is relative to the satellite reference datum derived at a 10-day temporal resolution, and the surface area was extracted at a 16-day frequency.
Figure 4: Comparison between the monthly simulated and observed inflows and outflows for Lam Pao (a,b), Sirindhorn (d,e), and Ubol Ratana (g,h) from 2008 to 2016, based on the variable infiltration capacity (VIC) simulated flow and water balance. Comparison of the seasonal cycle of inflows and outflows derived from the monthly ensemble mean between 2008 and 2016 (c,f,i).
Figure 5: Percentage change in the outflow relative to inflow for the reservoirs (a) Lam Pao, (c) Sirindhorn, and (e) Ubol Ratana during wet (June–November) and dry (December–May) periods from 2008 to 2016. Annual precipitation for the catchments of Lam Pao (b), Sirindhorn (d), and Ubol Ratana (f) is also shown for 2008 to 2016.
Figure 6: Mean monthly variation of the total storage, along with upper and lower limits defined as the rule curve, for (a) Lam Pao, (b) Sirindhorn, and (c) Ubol Ratana, derived from the surface area and water level variation of the reservoirs during 2008 to 2018. The relationships between total storage and water level were developed for (d) Lam Pao and (e) Sirindhorn.
Figure 7: Volume of water extracted from the reservoirs, not accounted for in the outflow, annually during 2008–2016, and monthly variation in the MODIS (moderate resolution imaging spectroradiometer) estimated evapotranspiration for (a) Lam Pao, (c) Sirindhorn, and (e) Ubol Ratana. The seasonal series of the extracted volume from the ensemble means for each month resemble the evapotranspiration variation over the year (b,d,f).
Figure A1: Water masks extracted from the Landsat 8 and Sentinel-2 satellite imagery, constituting the surface areas of the Lam Pao, Sirindhorn, and Ubol Ratana reservoirs. The surface area variations of the reservoirs during January, March, May, July, September, and November demonstrate the seasonal pattern of swelling and shrinking. Because continuous satellite data were not available for all months, images from 2013 to 2015 were used to form the series shown. Comprehensive information on the seasonal and annual pattern of reservoir surface area variation is evident from the time series of the water masks from 2013 to 2018.
Figure A2: Evaluation of the variable infiltration capacity (VIC) model, comparing the simulated (dotted line) and observed (continuous line) streamflow for the calibration (1986–1992) and evaluation (1993–1998) periods at seven gauge station locations. The accuracy of the VIC model in simulating streamflow was quantified with the Nash-Sutcliffe model efficiency coefficient (NSE) and coefficient of determination (R2), listed for each reservoir, for the calibration and evaluation periods.
14 pages, 5386 KiB  
Article
Thickness Measurement of Water Film/Rivulets Based on Grayscale Index
by Haiquan Jing, Yi Cheng, Xuhui He, Xu Zhou and Jia He
Remote Sens. 2019, 11(23), 2871; https://doi.org/10.3390/rs11232871 - 3 Dec 2019
Cited by 1 | Viewed by 5362
Abstract
This study proposed a nonintrusive and cost-efficient technique to measure the thickness of a thin water film/rivulet based on the grayscale index. This technique uses millions of probes and only needs a digital camera, fill lights, and pigment. For water colored with diluted [...] Read more.
This study proposed a nonintrusive and cost-efficient technique to measure the thickness of a thin water film/rivulet based on the grayscale index. The technique effectively uses millions of probes and only needs a digital camera, fill lights, and pigment. For water colored with diluted pigment, the grayscale index of the water captured by a digital camera depends on the water thickness. This relationship can be utilized to measure the water thickness through digital image processing. In the present study, the relationship between the grayscale index and water thickness was theoretically and experimentally investigated. Theoretical derivation revealed that when the product of water thickness and color density approaches 0, the grayscale index is inversely proportional to the thickness. The experimental results show that at a color density of 0.05%, the grayscale index is inversely proportional to the thickness of the water film when the thickness is less than 6 mm. This linear relationship was utilized to measure the distribution and profile of a water rivulet flowing on the lower surface of a cable model. Full article
(This article belongs to the Special Issue Vision-Based Sensing in Engineering Structures)
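To illustrate the calibration-and-inversion idea behind the grayscale-index technique, here is a small sketch assuming a linear grayscale-thickness relation over the calibrated range; the thickness and grayscale values are invented placeholders, not the paper's calibration.
```python
# Fit grayscale index vs. known water thickness, then invert the relation
# to retrieve thickness from new grayscale measurements.
import numpy as np

# Placeholder calibration: known thickness (mm) vs. measured grayscale index.
thickness_mm = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
grayscale    = np.array([210, 195, 168, 140, 113,  86,  60])

slope, intercept = np.polyfit(thickness_mm, grayscale, 1)   # grayscale ~ a*d + b

def thickness_from_grayscale(g):
    """Invert the fitted linear relation; valid only inside the calibrated range."""
    return (np.asarray(g, float) - intercept) / slope

print(np.round(thickness_from_grayscale([200, 150, 100]), 2))
```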
Show Figures

Graphical abstract
Figure 1: Sectional view of water with pigment captured by a digital camera.
Figure 2: Visible pigment in the top view varies with water thickness for different q: (a) overall trend, (b) thin water film (curves for q = 0.5%, 1.0%, 3.0%, and 5.0%).
Figure 3: Experimental setup.
Figure 4: Experiment instruments: (a) camera, (b) illuminometer, (c) fill lights, (d) pigment.
Figure 5: Schematic of the water tank: (a) top view, (b) section plan.
Figure 6: Raw image.
Figure 7: Adjusted image.
Figure 8: Identification of x|_(Ta = 0).
Figure 9: The grayscale index varies with water thickness.
Figure 10: Setup for water rivulet measurement.
Figure 11: The raw image of the water rivulet.
Figure 12: Grayscale index of the recorded water rivulet: (a) with background, (b) without background.
Figure 13: Profile of the water rivulet.
17 pages, 13446 KiB  
Article
Remote Sensing and Texture Image Classification Network Based on Deep Learning Integrated with Binary Coding and Sinkhorn Distance
by Chu He, Qingyi Zhang, Tao Qu, Dingwen Wang and Mingsheng Liao
Remote Sens. 2019, 11(23), 2870; https://doi.org/10.3390/rs11232870 - 3 Dec 2019
Cited by 7 | Viewed by 4039
Abstract
In the past two decades, traditional hand-crafted feature based methods and deep feature based methods have successively played the most important role in image classification. In some cases, hand-crafted features still provide better performance than deep features. This paper proposes an innovative network [...] Read more.
In the past two decades, traditional hand-crafted feature based methods and deep feature based methods have successively played the most important role in image classification. In some cases, hand-crafted features still provide better performance than deep features. This paper proposes an innovative network based on deep learning integrated with binary coding and Sinkhorn distance (DBSNet) for remote sensing and texture image classification. The statistical texture features of the image extracted by uniform local binary pattern (ULBP) are introduced as a supplement for deep features extracted by ResNet-50 to enhance the discriminability of features. After the feature fusion, both diversity and redundancy of the features have increased, thus we propose the Sinkhorn loss where an entropy regularization term plays a key role in removing redundant information and training the model quickly and efficiently. Image classification experiments are performed on two texture datasets and five remote sensing datasets. The results show that the statistical texture features of the image extracted by ULBP complement the deep features, and the new Sinkhorn loss performs better than the commonly used softmax loss. The performance of the proposed algorithm DBSNet ranks in the top three on the remote sensing datasets compared with other state-of-the-art algorithms. Full article
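The Sinkhorn loss mentioned above builds on the entropy-regularized optimal transport distance. Below is a generic numpy sketch of the Sinkhorn iteration between two discrete distributions; it shows only the core computation and is not the authors' full training loss or network.
```python
# Sinkhorn iteration: alternate row/column scalings of exp(-C/eps) until the
# transport plan matches the two marginals, then evaluate <P, C>.
import numpy as np

def sinkhorn_distance(a, b, C, eps=0.1, n_iter=200):
    """a, b: probability vectors; C: cost matrix; returns the regularized transport cost."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # entropic transport plan
    return float(np.sum(P * C))

# Toy example: two 4-bin distributions with a squared-distance ground cost.
a = np.array([0.4, 0.3, 0.2, 0.1])
b = np.array([0.1, 0.2, 0.3, 0.4])
grid = np.arange(4, dtype=float)
C = (grid[:, None] - grid[None, :]) ** 2
print(round(sinkhorn_distance(a, b, C), 3))
```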
Show Figures

Graphical abstract
Figure 1: The general framework of the proposed algorithm, deep learning integrated with binary coding and Sinkhorn distance (DBSNet).
Figure 2: The relationship between receptive field and network depth.
Figure 3: Residual module in ResNet-50.
Figure 4: The framework of ResNet-50.
Figure 5: Calculation of the local binary pattern (LBP).
Figure 6: The framework of the uniform local binary pattern (ULBP).
Figure 7: The detailed framework of the proposed algorithm, DBSNet.
Figure 8: Comparison of feature maps of the RSNet, ULBP, and DBSNet algorithms on the KTH-TIPS2-b dataset.
Figure 9: Example images of the two texture datasets, from top to bottom: KTH-TIPS2-a and KTH-TIPS2-b.
Figure 10: Example images of the five remote sensing scene classification datasets, from top to bottom: AID, RSSCN7, UC-Merced, WHU-RS19, and OPTIMAL-31.
Figure 11: The framework of the shallow convolutional neural network (CNN).
19 pages, 3276 KiB  
Article
Assessing the Feasibility of Using Sentinel-2 Imagery to Quantify the Impact of Heatwaves on Irrigated Vineyards
by Alessia Cogato, Vinay Pagay, Francesco Marinello, Franco Meggio, Peter Grace and Massimiliano De Antoni Migliorati
Remote Sens. 2019, 11(23), 2869; https://doi.org/10.3390/rs11232869 - 2 Dec 2019
Cited by 36 | Viewed by 6281
Abstract
Heatwaves are common in many viticultural regions of Australia. We evaluated the potential of satellite-based remote sensing to detect the effects of high temperatures on grapevines in a South Australian vineyard over the 2016–2017 and 2017–2018 seasons. The study involved: (i) comparing the [...] Read more.
Heatwaves are common in many viticultural regions of Australia. We evaluated the potential of satellite-based remote sensing to detect the effects of high temperatures on grapevines in a South Australian vineyard over the 2016–2017 and 2017–2018 seasons. The study involved: (i) comparing the normalized difference vegetation index (NDVI) from medium- and high-resolution satellite images; (ii) determining correlations between environmental conditions and vegetation indices (Vis); and (iii) identifying VIs that best indicate heatwave effects. Pearson’s correlation and Bland–Altman testing showed a significant agreement between the NDVI of high- and medium-resolution imagery (R = 0.74, estimated difference −0.093). The band and the VI most sensitive to changes in environmental conditions were 705 nm and enhanced vegetation index (EVI), both of which correlated with relative humidity (R = 0.65 and R = 0.62, respectively). Conversely, SWIR (short wave infrared, 1610 nm) exhibited a negative correlation with growing degree days (R = −0.64). The analysis of heat stress showed that green and red edge bands—the chlorophyll absorption ratio index (CARI) and transformed chlorophyll absorption ratio index (TCARI)—were negatively correlated with thermal environmental parameters such as air and soil temperature and growing degree days (GDDs). The red and red edge bands—the soil-adjusted vegetation index (SAVI) and CARI2—were correlated with relative humidity. To the best of our knowledge, this is the first study demonstrating the effectiveness of using medium-resolution imagery for the detection of heat stress on grapevines in irrigated vineyards. Full article
(This article belongs to the Special Issue Remote Sensing in Viticulture)
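For readers wanting to reproduce the agreement analysis, the sketch below computes Pearson's R and a conventional Bland-Altman bias with mean ± 1.96 SD limits on synthetic paired NDVI values; note that the study reports a quartile-based variant of the limits, so this is a simplified stand-in, not the authors' exact procedure.
```python
# Pearson correlation plus Bland-Altman bias and limits of agreement
# for paired NDVI samples from two sensors (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
ndvi_wv2 = rng.uniform(0.3, 0.8, 200)                   # placeholder WorldView-2 NDVI
ndvi_s2  = ndvi_wv2 - 0.09 + rng.normal(0, 0.05, 200)   # placeholder Sentinel-2 NDVI

r = np.corrcoef(ndvi_wv2, ndvi_s2)[0, 1]
diff = ndvi_s2 - ndvi_wv2
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"Pearson R = {r:.2f}, bias = {bias:.3f}, "
      f"95% limits of agreement = ({loa[0]:.3f}, {loa[1]:.3f})")
```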
Show Figures

Graphical abstract
Figure 1: Study area and experimental design. The blue line represents the high-vigor area and the red line identifies the total vineyard block (213 ha). Low-vigor areas were identified as non-productive or headlands and excluded from the analysis.
Figure 2: Comparison between the Sentinel-2 and WorldView-2 images: (a) NDVI values calculated from the WorldView-2 image after interrow removal (detail in the red circle); (b) NDVI values in the oversampled WorldView-2 image (10 m) after interrow removal; (c) NDVI values calculated from the Sentinel-2 image; (d) the two images partially matched, with the comparison performed only within the matching areas (sample area marked in red, matching area marked in green).
Figure 3: Results of the Pearson's correlation for the NDVI values computed from the WorldView-2 and Sentinel-2 images.
Figure 4: Results of the Bland-Altman concordance test for the NDVI values computed from the WorldView-2 and Sentinel-2 images. The plot describes the agreement between two quantitative measurements (NDVI). The systematic bias and limits of agreement were calculated by computing the median and the first and third quartiles of the differences between the two measurements, respectively. The difference of the two paired measurements is plotted against their mean.
Figure 5: Correlogram of the input variables for the medium-vigor areas during the 2016–2017 growing season (a), the 2017–2018 growing season (b), and both seasons combined (c). Only recurrent correlations between spectral features and environmental conditions are shown. Positive correlations are displayed in blue and negative correlations in red. Color intensity is proportional to R, while the size of the circles is proportional to p.
22 pages, 9627 KiB  
Article
Scattering Transform Framework for Unmixing of Hyperspectral Data
by Yiliang Zeng, Christian Ritz, Jiahong Zhao and Jinhui Lan
Remote Sens. 2019, 11(23), 2868; https://doi.org/10.3390/rs11232868 - 2 Dec 2019
Cited by 5 | Viewed by 4339
Abstract
The scattering transform, which applies multiple convolutions using known filters targeting different scales of time or frequency, has a strong similarity to the structure of convolution neural networks (CNNs), without requiring training to learn the convolution filters, and has been used for hyperspectral [...] Read more.
The scattering transform, which applies multiple convolutions using known filters targeting different scales of time or frequency, has a strong similarity to the structure of convolution neural networks (CNNs), without requiring training to learn the convolution filters, and has been used for hyperspectral image classification in recent research. This paper investigates the application of the scattering transform framework to hyperspectral unmixing (STFHU). While state-of-the-art research on unmixing hyperspectral data utilizing scattering transforms is limited, the proposed end-to-end method applies pixel-based scattering transforms and preliminary three-dimensional (3D) scattering transforms to hyperspectral images in the remote sensing scenario to extract feature vectors, which are then trained by employing the regression model based on the k-nearest neighbor (k-NN) to estimate the abundance of maps of endmembers. Experiments compare performances of the proposed algorithm with a series of existing methods in quantitative terms based on both synthetic data and real-world hyperspectral datasets. Results indicate that the proposed approach is more robust to additive noise, which is suppressed by utilizing the rich information in both high-frequency and low-frequency components represented by the scattering transform. Furthermore, the proposed method achieves higher accuracy for unmixing using the same amount of training data with all comparative approaches, while achieving equivalent performance to the best performing CNN method but using much less training data. Full article
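As a loose illustration of the pipeline shape (scattering-style features per pixel spectrum, then k-NN regression to abundances), the sketch below uses a toy two-level modulus/low-pass cascade with Morlet-like kernels; the filters, spectra, and abundances are all placeholders rather than the paper's scattering network or datasets.
```python
# Toy "scattering-like" features: modulus of band-pass convolutions followed
# by averaging, then a k-NN regressor mapping features to abundances.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def scatter_features(spectrum, widths=(4, 8, 16)):
    feats = [spectrum.mean()]                       # level-0 low-pass
    for w in widths:
        t = np.arange(-3 * w, 3 * w + 1)
        kernel = np.exp(-t**2 / (2.0 * w**2)) * np.cos(2 * np.pi * t / w)
        level1 = np.abs(np.convolve(spectrum, kernel, mode="same"))
        feats.append(level1.mean())                 # averaged modulus (level 1)
    return np.array(feats)

rng = np.random.default_rng(3)
spectra = rng.random((100, 200))                    # placeholder pixel spectra
abund = rng.dirichlet(np.ones(3), size=100)         # placeholder abundances (sum to 1)
X = np.array([scatter_features(s) for s in spectra])

knn = KNeighborsRegressor(n_neighbors=5).fit(X[:80], abund[:80])
print(knn.predict(X[80:82]).round(3))
```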
Show Figures

Graphical abstract
Figure 1: Structure of the hyperspectral unmixing with scattering transform features. The green arrows represent the scattering coefficients, and the black arrows delineate the scattering propagator, which is computed iteratively.
Figure 2: Structure of the three-dimensional (3D) filter for hyperspectral unmixing.
Figure 3: Example spectra of the scattering transform network. (a) The original spectral vector input; (b) scattering transform level 0, which is a low-pass version of the input; (c) scattering transform level 1, each part representing 1/3 of the range of the low-pass version; (d) scattering transform level 2; and (e) the scattering transform coefficients, which consist of the scattering transform levels 0, 1, and 2.
Figure 4: Operating principle of the k-nearest neighbor (k-NN) regressor.
Figure 5: Eight endmembers selected from the United States (U.S.) Geological Survey (USGS) library for building the synthetic data.
Figure 6: Original and noisy synthetic data at the 100th band. (a) The original synthetic data at the 100th band; (b) the Noise1 data at the 100th band; and (c) the Noise2 data at the 100th band.
Figure 7: Spectra of the original and noisy synthetic data located at the (100, 100) pixel. (a) The spectrum of the original synthetic data; (b) the spectrum of Noise1; and (c) the spectrum of Noise2.
Figure 8: Spectral plots of six endmembers of one hyperspectral image in the Urban dataset.
Figure 9: Urban hyperspectral data and its ground truth. (a) The hyperspectral image and (b) the ground truth abundance maps.
Figure 10: Jasper Ridge hyperspectral data and its ground truth. (a) The Jasper Ridge hyperspectral image and (b) the ground truth abundance maps.
Figure 11: Samson hyperspectral data and its ground truth. (a) The Samson hyperspectral image and (b) the ground truth abundance maps.
Figure 12: Ground truth and estimated abundance maps of eight endmembers using four algorithms on the Noise2 dataset at a training ratio of 50%. (a) The ground truths of all endmembers; (b) the estimated abundance maps of the proposed STFHU approach; (c) the estimated abundance maps of the ANN method; (d) the estimated abundance maps of the LSU algorithm; and (e) the estimated abundance maps of the CNN approach.
Figure 13: RMSE values calculated by comparing the abundance maps of eight endmembers estimated by the proposed STFHU algorithm, ANN, LSU, and CNN separately with the ground truth.
Figure 14: Example endmember abundance map estimates for the original and noisy datasets using the same non-noisy training model at a training ratio of 50%, and their comparison with the ground truth. (a) The ground truth of the eighth endmember; (b) the estimated abundance map of the original synthetic data; (c) the estimated abundance map of Noise1; and (d) the estimated abundance map of Noise2.
Figure 15: Average RMSE results from comparing abundance map estimates by the STFHU algorithm on the original and noisy datasets with the ground truth at different training ratios.
Figure 16: Comparison of the ground truth and the estimated abundance maps of the second endmember using the proposed approach and CNN separately, based on non-noisy data at a training ratio of 5%. (a) The ground truth; (b) the abundance map of the proposed method; and (c) the abundance map of CNN.
Figure 17: Partial abundance map estimation results at a training ratio of 50%. (a) The ground truths of all endmembers; (b) the estimated abundance maps of STFHU; (c) the estimated abundance maps of the ANN method; (d) the estimated abundance maps of the LSU algorithm; and (e) the estimated abundance maps of the CNN approach.
Figure 18: Abundance map estimation results at a 5% training ratio using the proposed algorithm and CNN based on pixels. (a) The ground truths of all endmembers; (b) the estimated abundance maps of the proposed STFHU; and (c) the estimated abundance maps of the CNN method.
Figure 19: Comparisons of the original spectrum and noisy spectral data with their scattering transform features. (a) The original spectrum; (b) the scattering transform features of the original spectrum; (c) the Noise2 spectrum; and (d) the scattering transform features of the Noise2 spectrum.
15 pages, 2705 KiB  
Article
Mitigation of Ionospheric Scintillation Effects on GNSS Signals with VMD-MFDFA
by Wasiu Akande Ahmed, Falin Wu, Dessi Marlia, Ednofri and Yan Zhao
Remote Sens. 2019, 11(23), 2867; https://doi.org/10.3390/rs11232867 - 2 Dec 2019
Cited by 9 | Viewed by 4192
Abstract
Severe scintillations degrade the satellite signal intensity below the fade margin of satellite receivers thereby resulting in failure of communication, positioning, and navigational services. The performance of satellite receivers is obviously restricted by ionospheric scintillation effects, which may lead to signal degradation primarily [...] Read more.
Severe scintillations degrade the satellite signal intensity below the fade margin of satellite receivers, resulting in failure of communication, positioning, and navigational services. The performance of satellite receivers is restricted by ionospheric scintillation effects, which may lead to signal degradation primarily due to the refraction, reflection, and scattering of radio signals. There is therefore a need to develop an ionospheric scintillation detection and mitigation technique for robust satellite signal receivers. Hence, variational mode decomposition (VMD) is proposed. VMD addresses the problem of ionospheric scintillation effects on global navigation satellite system (GNSS) signals by extracting the noise from the radio signals in combination with multifractal detrended fluctuation analysis (MFDFA). MFDFA serves as a criterion for separating the intrinsic mode functions (IMFs) into noisy (scintillated) and noise-free (non-scintillated) IMF signal components using the MFDFA threshold. The results of the proposed method are promising and reliable, and show its potential to mitigate ionospheric scintillation effects on both synthetic (simulated) and real GNSS data obtained from the Manado station (latitude 1.34° S and longitude 124.82° E), Indonesia. The results also indicate that VMD-MFDFA outperforms complementary ensemble empirical mode decomposition with MFDFA (CEEMD-MFDFA). Full article
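The MFDFA criterion builds on detrended fluctuation analysis. The sketch below computes a basic monofractal DFA fluctuation function and scaling exponent for one mode, the kind of statistic that could be thresholded to flag noisy IMFs; the actual threshold value and the VMD decomposition step are not reproduced here.
```python
# DFA: integrate the demeaned series, detrend within windows, and measure
# the residual fluctuation per window size; the log-log slope is the exponent.
import numpy as np

def dfa_fluctuation(x, scales=(16, 32, 64, 128), order=1):
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    F = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    return np.array(scales, float), np.array(F)

rng = np.random.default_rng(4)
imf = rng.normal(size=2048)                          # stand-in for one decomposed mode
scales, F = dfa_fluctuation(imf)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent
print(round(alpha, 2))                               # ~0.5 for white noise
```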
Show Figures

Graphical abstract
Figure 1: Flowchart of variational mode decomposition (VMD) denoising.
Figure 2: The decomposed intrinsic mode functions (IMFs) of the synthetic and real scintillation data using the VMD method: (a) synthetic scintillation data and (b) Manado station scintillation data.
Figure 3: Ionospheric amplitude scintillation of the synthetic and real scintillation data: (a) synthetic scintillation data and (b) Manado station scintillation data. PRN, pseudo-random number; UTC, coordinated universal time.
Figure 4: Decomposition of the synthetic data into IMFs using the VMD and CEEMD methods: (a) VMD method and (b) CEEMD method.
Figure 5: Decomposition of the Manado real data into IMFs using the VMD and CEEMD methods: (a) VMD method and (b) CEEMD method.
Figure 6: Multifractal detrended fluctuation analysis (MFDFA) threshold for the decomposed IMFs of the synthetic and real scintillation data using the CEEMD and VMD methods: (a) synthetic scintillation data and (b) Manado station scintillation data.
Figure 7: Performance comparison of the VMD-DFA, CEEMD-MFDFA, and VMD-MFDFA methods on amplitude scintillation: (a) synthetic scintillation data and (b) Manado station scintillation data.
Figure 8: GNSS signal amplitude scintillation denoising with VMD-DFA, CEEMD-MFDFA, and VMD-MFDFA.
Figure 9: Performance evaluation comparison of the CEEMD-MFDFA and VMD-MFDFA methods for the synthetic and real data: (a) synthetic scintillation data and (b) Manado station scintillation data.
28 pages, 6892 KiB  
Article
A Novel Ensemble Approach for Landslide Susceptibility Mapping (LSM) in Darjeeling and Kalimpong Districts, West Bengal, India
by Jagabandhu Roy, Sunil Saha, Alireza Arabameri, Thomas Blaschke and Dieu Tien Bui
Remote Sens. 2019, 11(23), 2866; https://doi.org/10.3390/rs11232866 - 2 Dec 2019
Cited by 135 | Viewed by 9273
Abstract
Landslides are among the most harmful natural hazards for human beings. This study aims to delineate landslide hazard zones in the Darjeeling and Kalimpong districts of West Bengal, India using a novel ensemble approach combining the weight-of-evidence (WofE) and support vector machine (SVM) [...] Read more.
Landslides are among the most harmful natural hazards for human beings. This study aims to delineate landslide hazard zones in the Darjeeling and Kalimpong districts of West Bengal, India, using a novel ensemble approach combining the weight-of-evidence (WofE) and support vector machine (SVM) techniques with remote sensing datasets and geographic information systems (GIS). The study area currently faces severe landslide problems, causing fatalities and losses of property. In the present study, the landslide inventory database was prepared using Google Earth imagery, and a field investigation was carried out with a global positioning system (GPS). Of the 326 landslides in the inventory, 98 landslides (30%) were used for validation, and 228 landslides (70%) were used for modeling purposes. The landslide conditioning factors of elevation, rainfall, slope, aspect, geomorphology, geology, soil texture, land use/land cover (LULC), normalized differential vegetation index (NDVI), topographic wetness index (TWI), sediment transportation index (STI), stream power index (SPI), and seismic zone maps were used as independent variables in the modeling process. The weight-of-evidence and SVM techniques were ensembled and used to prepare landslide susceptibility maps (LSMs) with the help of remote sensing (RS) data and geographical information systems (GIS). The landslide susceptibility maps (LSMs) were then classified into four classes, namely low, medium, high, and very high susceptibility to landslide occurrence, using the natural breaks classification method in the GIS environment. The very high susceptibility zones produced by these ensemble models cover an area of 630 km2 (WofE & RBF-SVM), 474 km2 (WofE & Linear-SVM), 501 km2 (WofE & Polynomial-SVM), and 498 km2 (WofE & Sigmoid-SVM), respectively, of a total area of 3914 km2. The results of our study were validated using the receiver operating characteristic (ROC) curve and quality sum (Qs) methods. The area under the curve (AUC) values of the ensemble WofE & RBF-SVM, WofE & Linear-SVM, WofE & Polynomial-SVM, and WofE & Sigmoid-SVM models are 87%, 90%, 88%, and 85%, respectively, which indicates they are very good models for identifying landslide hazard zones. As per the results of both validation methods, the WofE & Linear-SVM model is more accurate than the other ensemble models. The results obtained from this study using our new ensemble methods can provide proper and significant information to decision-makers and policy planners in the landslide-prone areas of these districts. Full article
(This article belongs to the Special Issue Remote Sensing of Natural Hazards)
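For the weight-of-evidence half of the ensemble, the sketch below computes the standard positive and negative weights for a single binary conditioning factor from made-up pixel counts; in the paper these per-class weights become inputs to the SVM models.
```python
# Weight of evidence for one binary factor B against landslide presence L:
# W+ = ln[P(B|L)/P(B|~L)],  W- = ln[P(~B|L)/P(~B|~L)].
import numpy as np

def wofe_weights(n_b_l, n_b_nl, n_nb_l, n_nb_nl):
    """Pixel counts: (factor & slide), (factor & no slide),
    (no factor & slide), (no factor & no slide)."""
    p_b_l  = n_b_l  / (n_b_l + n_nb_l)
    p_b_nl = n_b_nl / (n_b_nl + n_nb_nl)
    w_plus  = np.log(p_b_l / p_b_nl)
    w_minus = np.log((1 - p_b_l) / (1 - p_b_nl))
    return w_plus, w_minus

w_plus, w_minus = wofe_weights(n_b_l=150, n_b_nl=5000, n_nb_l=78, n_nb_nl=45000)
print(round(w_plus, 2), round(w_minus, 2))   # contrast C = W+ - W-
```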
Show Figures

Graphical abstract
Figure 1: Study area and landslide location map.
Figure 2: Methodological flowchart of the present work.
Figure 3: Field photographs of some landslides in the study area. (a) Sikkim-Kalimpong road (27°03′20″N, 88°26′03″E); (b) Sevokekalimandir (26°54′01″N, 88°28′18″E); (c) Lish catchment (26°57′N, 88°30′17″E); (d) Darjeeling road (26°54′33″N, 88°17′10″E); (e) Pagla Jhora (26°52′47.70″N, 88°18′11.24″E); (f) Sevoke Road (26°54′33″N, 88°28′04″E).
Figure 4: Landslide conditioning factors: (a) elevation, (b) slope, (c) aspect, (d) rainfall, (e) geology, (f) soil texture, (g) distance from river, (h) distance from lineament, (i) land use/land cover (LULC), (j) normalized differential vegetation index (NDVI), (k) distance from road, (l) topographic wetness index (TWI), (m) stream power index (SPI), (n) sediment transportation index (STI), (o) geomorphology, (p) seismic map.
Figure 5: Landslide susceptibility maps (LSMs) produced by the different ensemble models: (a) WofE & RBF-SVM, (b) WofE & Linear-SVM, (c) WofE & Polynomial-SVM, (d) WofE & Sigmoid-SVM.
Figure 6: Areal distributions of the LSMs: (a) area distribution of the LSMs, (b) percentage of area distribution of the LSMs.
Figure 7: Validation of the LSMs using the ROC curve showing the area under the curve (AUC): (a) WofE & RBF-SVM, (b) WofE & Linear-SVM, (c) WofE & Polynomial-SVM, (d) WofE & Sigmoid-SVM models.
26 pages, 5947 KiB  
Article
Detection of New Zealand Kauri Trees with AISA Aerial Hyperspectral Data for Use in Multispectral Monitoring
by Jane J. Meiforth, Henning Buddenbaum, Joachim Hill, James Shepherd and David A. Norton
Remote Sens. 2019, 11(23), 2865; https://doi.org/10.3390/rs11232865 - 2 Dec 2019
Cited by 4 | Viewed by 5642
Abstract
The endemic New Zealand kauri trees (Agathis australis) are of major importance for the forests in the northern part of New Zealand. The mapping of kauri locations is required for the monitoring of the deadly kauri dieback disease (Phytophthora agathidicida [...] Read more.
The endemic New Zealand kauri trees (Agathis australis) are of major importance for the forests in the northern part of New Zealand. The mapping of kauri locations is required for the monitoring of the deadly kauri dieback disease (Phytophthora agathidicida (PTA)). In this study, we developed a method to identify kauri trees by optical remote sensing that can be applied in an area-wide campaign. Dead and dying trees were separated into one class, and the remaining trees with no to medium stress symptoms were assigned to the two classes "kauri" and "other". The reference dataset covers a representative selection of 3165 precisely located crowns of kauri and 21 other canopy species in the Waitakere Ranges west of Auckland. The analysis is based on an airborne hyperspectral AISA Fenix image (437–2337 nm, 1 m2 pixel resolution). The kauri spectra show characteristically steep reflectance and absorption features in the near-infrared (NIR) region with a distinct long descent at 1215 nm, which can be parameterised with a modified Normalised Water Index (mNDWI-Hyp). With a Jeffries–Matusita separability over 1.9, the kauri spectra can be well separated from the 21 other canopy vegetation spectra. The Random Forest classifier performed slightly better than Support Vector Machine. A combination of the mNDWI-Hyp index with four additional spectral indices with three red to NIR bands resulted in an overall pixel-based accuracy (OA) of 91.7% for crowns larger than 3 m in diameter. While the user's and producer's accuracies for the class "kauri", at 94.6% and 94.8%, are suitable for management purposes, the separation of "dead/dying trees" from "other" canopy vegetation poses the main challenge. The OA can be improved to 93.8% by combining "kauri" and "dead/dying" trees into one class, using separate classifications for low and high forest stands, and binning to 10 nm bandwidths. Additional wavelengths and their respective indices improved the OA by only up to 0.6%. The method developed in this study allows accurate location of kauri trees in area-wide mapping with a five-band multispectral sensor across a representative selection of forest ecosystems. Full article
(This article belongs to the Section Forest Remote Sensing)
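A rough sketch of the two ingredients named in the abstract: a normalized-difference index built from a green band and the 1215 nm NIR shoulder (a stand-in for the exact mNDWI-Hyp band pair) plus a Random Forest classifier over a few such indices. Band centres, reflectances, and class labels here are illustrative placeholders, not the study's spectra or index definition.
```python
# Normalized-difference indices as features for a Random Forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def norm_diff(b1, b2):
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    return (b1 - b2) / (b1 + b2)

rng = np.random.default_rng(5)
n = 300
green_550 = rng.uniform(0.03, 0.10, n)   # placeholder green-band reflectance
nir_1215  = rng.uniform(0.15, 0.45, n)   # placeholder 1215 nm reflectance
red_670   = rng.uniform(0.02, 0.08, n)
nir_800   = rng.uniform(0.20, 0.60, n)

X = np.column_stack([
    norm_diff(green_550, nir_1215),      # mNDWI-like index
    norm_diff(nir_800, red_670),         # NDVI-like index
])
y = rng.integers(0, 3, n)                # placeholder labels: kauri / other / dead-dying

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```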
Show Figures

Graphical abstract
Figure 1: Kauri growth classes used in this study, depending on the mean crown diameter (cdm). (Photos: [39].)
Figure 2: (a) Location of the Waitakere Ranges on the North Island of New Zealand, west of Auckland City. The general area with naturally occurring kauri in New Zealand [2] is marked as hatched. (b) Study sites in the Waitakere Ranges with the reference crowns marked in red (background map: [42]).
Figure 3: Reference crowns (3165 in total) used in the analysis, per class and diameter.
Figure 4: Mean spectra of the target classes "kauri", "dead/dying", and "other" with standard deviations (stdev).
Figure 5: Jeffries-Matusita separability [61] of the three target classes for different spectral ranges. A value larger than 1.9 indicates a high separability. The analysis was based on MNF transformations of all bands in the different spectral ranges.
Figure 6: Mean spectra of kauri (thick black line) and six selected other canopy species (grey) that are most easily confused with kauri. The number of pixels (pix) used to generate the mean spectra is given in parentheses. The spectra of these species show the lowest separability from the kauri spectrum in this study (see Table A2).
Figure 7: Mean spectra of kauri (black) and five other canopy species (grey) with the highest separabilities from the kauri spectrum in this study (see Table A2). The number of pixels (pix) used to generate the mean spectra is given in parentheses.
Figure 8: Mean spectra of the target classes "kauri", "dead/dying", and "other" with standard deviations (stdev). Below: band positions of 13 selected spectral indices.
Figure 9: Performance of selected indices and index combinations in identifying the class "dead/dying" (light grey) and in distinguishing between "kauri" and "other vegetation" (dark grey) with an RF classification (five-fold random split, 20 repetitions). Note that the x-axis starts at 55%.
Figure 10: Performance of the final 4-8-band index combinations in distinguishing the three target classes "kauri", "dead/dying", and "other" canopy vegetation (RF, five-fold random split, 20 repetitions). Note that the y-axis starts at 89%.
Figure 11: RGB images of the first three bands of MNF transformations [49] from: (a) the VIS to NIR1 spectral range (431–970 nm); (b) VIS to NIR2 (431–1327 nm); and (c) the full spectral range from VIS to SWIR (431–2337 nm). The importance of the NIR2 and SWIR spectrum is visible in the higher colour contrast of kauri crowns compared to the VNIR image. The numbers in the kauri polygons indicate the stress symptom class for the crown, with 1 = non-symptomatic and 5 = dead.
Figure 12: Histograms of selected indices on sunlit pixels for all crown diameters, with the class "kauri" marked in light blue, the class "dead/dying" in red, and the class "other" in dark blue. (a) Histogram for the mNDWI-Hyp index, which performed best in separating the class "kauri" from other vegetation by capturing distinctive features in the NIR2 region. For the separation of the class "dead/dying", indices in the RED/NIR1 region are better suited, such as (b) the SR800 and (c) the NDNI index (see Table A3 for descriptions of these indices).
Figure 13: Overall accuracies for two selected sets of six and eight bands in the visible to NIR1 range. The accuracies are calculated for two and three target classes, both with and without an additional CHM layer. The results are based on an RF classification with a three-fold split in 10 repetitions on 94,971 pixel values, including small crowns (<3 m diameter). The standard deviations vary from 0.12 to 0.2.
Figure 14: Combined results of 10 RF classifications with a 5-fold stratified random split and different seed values. Overview (left) and detailed maps (right) for the Cascades (a,b), Maungaroa (c,d), and Kauri Grove areas (e,f). The numbers indicate the symptom classes of kauri crowns (1 = non-symptomatic, 5 = dead).
21 pages, 8360 KiB  
Article
Retrieval of Snow Depth over Arctic Sea Ice Using a Deep Neural Network
by Jiping Liu, Yuanyuan Zhang, Xiao Cheng and Yongyun Hu
Remote Sens. 2019, 11(23), 2864; https://doi.org/10.3390/rs11232864 - 2 Dec 2019
Cited by 21 | Viewed by 5448
Abstract
The accurate knowledge of spatial and temporal variations of snow depth over sea ice in the Arctic basin is important for understanding the Arctic energy budget and retrieving sea ice thickness from satellite altimetry. In this study, we develop and validate a new method for retrieving snow depth over Arctic sea ice from brightness temperatures at different frequencies measured by passive microwave radiometers. We construct an ensemble-based deep neural network and use snow depth measured by sea ice mass balance buoys to train the network. First, the accuracy of the retrieved snow depth is validated with observations. The results show the derived snow depth is in good agreement with the observations, in terms of correlation, bias, root mean square error, and probability distribution. Our ensemble-based deep neural network can be used to extend the snow depth retrieval from first-year sea ice (FYI) to multi-year sea ice (MYI), as well as during the melting period. Second, the consistency and discrepancy of snow depth in the Arctic basin between our retrieval using the ensemble-based deep neural network and two other available retrievals using the empirical regression are examined. The results suggest that our snow depth retrieval outperforms these data sets. Full article
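The abstract describes an ensemble of deep neural networks trained on buoy-measured snow depth; the following is only a minimal sketch of the ensemble idea (several identically structured regressors that differ in their random initialisation, with predictions averaged), using synthetic stand-ins for the brightness-temperature inputs rather than the authors' network design or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# toy stand-ins for brightness temperatures at several channels (K) and snow depth (cm)
X = rng.uniform(150, 270, (500, 6))
y = 0.2 * (X[:, 0] - X[:, 3]) + 0.1 * X[:, 5] + rng.normal(0, 2, 500)

# ensemble of small multilayer perceptrons differing only in their random seed
ensemble = [
    MLPRegressor(hidden_layer_sizes=(16, 16, 16), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(10)
]
# ensemble prediction = mean over members
snow_depth = np.mean([m.predict(X[:5]) for m in ensemble], axis=0)
print(snow_depth)
```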
Figures 1–12: trajectories of the 38 sea ice mass balance buoys; histograms of daily averaged IMB snow depth and of IMB samples by month, sea ice thickness and sea ice concentration; topology of the three-hidden-layer backpropagation network; correlation and RMSE of the 100 randomly initialised ensemble members on training and validation data; comparisons of SD-EDNN with SD-IMB (scatter plot, difference histograms, spatial distribution, boxplots); snow depth time series for individual buoys; spatial distributions of the monthly mean SD-NASA, SD-UB and SD-EDNN snow depth averaged from June 2002 to June 2018; and comparisons of SD-EDNN, SD-UB and SD-NASA against SD-IMB for first-year ice and of SD-UB against SD-IMB overall.
23 pages, 9713 KiB  
Article
Near-Real Time Automatic Snow Avalanche Activity Monitoring System Using Sentinel-1 SAR Data in Norway
by Markus Eckerstorfer, Hannah Vickers, Eirik Malnes and Jakob Grahn
Remote Sens. 2019, 11(23), 2863; https://doi.org/10.3390/rs11232863 - 2 Dec 2019
Cited by 36 | Viewed by 8489
Abstract
Knowledge of the spatio-temporal occurrence of avalanche activity is critical for avalanche forecasting. We present a near-real time automatic avalanche monitoring system that outputs detected avalanche polygons within roughly 10 min after Sentinel-1 SAR data are downloaded. Our avalanche detection algorithm has an average probability of detection (POD) of 67.2% and a false alarm rate (FAR) averaging 45.9%, with a maximum POD of over 85% and a minimum FAR of 24.9%, compared to manual detection of avalanches. The high variability in performance stems from the dynamic nature of snow in the Sentinel-1 data. After tuning the parameters of the detection algorithm, we processed five years of Sentinel-1 images acquired over a 150 × 100 km area in Northern Norway with the best setup. Compared to a dataset of field-observed avalanches, 77.3% were manually detectable. Using these manual detections as a benchmark, the avalanche detection algorithm achieved an accuracy of 79%, with high POD in cases of medium to large wet snow avalanches. For the first time, we present a dataset of spatio-temporal avalanche activity over several winters from a large region. Currently, the Norwegian Avalanche Warning Service is using our processing system pre-operationally in three regions in Norway. Full article
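For reference, the detection scores quoted above (POD and FAR, plus the overall accuracy and the True Skill Score used later for parameter tuning) can be computed from a contingency table of automatic versus manual detections; the counts below are illustrative only.

```python
def detection_scores(tp, fp, fn, tn):
    """Verification scores for avalanche detections, given counts of true/false
    positives/negatives against a manual reference."""
    pod = tp / (tp + fn)              # probability of detection
    far = fp / (tp + fp)              # false alarm ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    tss = pod - fp / (fp + tn)        # true skill score (POD minus prob. of false detection)
    return pod, far, accuracy, tss

# illustrative counts, not the paper's data
print(detection_scores(tp=67, fp=31, fn=33, tn=869))
```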
Figures 1–11: workflow of the Sentinel-1 processing chain; avalanche runout, layover/shadow and coverage masks for the area of interest; logic and steps of the automatic avalanche detection; flow chart of the avalanche age tracking algorithm; variation of the True Skill Score for each tuned parameter; time series of daily detections for the five winters 2014–2019; an avalanche activity map with histograms of debris size, maximum runout elevation and runout slope angle; a heat map of detected avalanche coverage per 500 m2 square; manual versus automatic detections by avalanche size, type and snow type; outline accuracy of the automatic detections; and the Sentinel-1 observation scenario for snow-covered mountain regions.
27 pages, 23440 KiB  
Article
Ship Detection Using Deep Convolutional Neural Networks for PolSAR Images
by Weiwei Fan, Feng Zhou, Xueru Bai, Mingliang Tao and Tian Tian
Remote Sens. 2019, 11(23), 2862; https://doi.org/10.3390/rs11232862 - 2 Dec 2019
Cited by 40 | Viewed by 4900
Abstract
Ship detection plays an important role in many remote sensing applications. However, the performance of PolSAR ship detection may be degraded by the complicated scattering mechanism, the multi-scale size of targets, and random speckle noise, among other factors. In this paper, we propose a ship detection method for PolSAR images based on a modified faster region-based convolutional neural network (Faster R-CNN). The main improvements include proposal generation that adopts multi-level features produced by the convolution layers, which accommodates ships of different sizes, and the addition of a Deep Convolutional Neural Network (DCNN)-based classifier for training sample generation and coast mitigation. The proposed method has been validated on four measured datasets from the NASA/JPL airborne synthetic aperture radar (AIRSAR) and the uninhabited aerial vehicle synthetic aperture radar (UAVSAR). Performance comparison with the modified constant false alarm rate (CFAR) detector and the Faster R-CNN has demonstrated that the proposed method can improve the detection probability while reducing the false alarm rate and missed detections. Full article
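The PolSAR scenes in the figures are displayed as Pauli RGB composites; one common convention for forming such a composite from the complex scattering-matrix channels is sketched below (the channel scaling is a simplification for display, not the authors' exact preprocessing).

```python
import numpy as np

def pauli_rgb(s_hh, s_hv, s_vv):
    """One common Pauli RGB convention for PolSAR display:
    R = |HH - VV| (double-bounce), G = 2|HV| (volume), B = |HH + VV| (surface)."""
    r = np.abs(s_hh - s_vv)
    g = 2.0 * np.abs(s_hv)
    b = np.abs(s_hh + s_vv)
    rgb = np.stack([r, g, b], axis=-1)
    return rgb / (rgb.max() + 1e-12)   # naive scaling for display

# toy complex channels
rng = np.random.default_rng(2)
shape = (64, 64)
hh, hv, vv = (rng.normal(size=shape) + 1j * rng.normal(size=shape) for _ in range(3))
print(pauli_rgb(hh, hv, vv).shape)
```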
Figures 1–21: workflow of the DCNN-based ship detector; Pauli images of coast, sea and ships; 64 × 64 training samples of the three classes; architectures of the proposed classifier, the Faster R-CNN, the ROI pooling layer and the modified Faster R-CNN (region proposal network plus detection network with shared convolutional features); the stepping-window segmentation of the PolSAR image; Pauli RGB images of the AIRSAR Japan, Gulfco area A, UAVSAR and AIRSAR Taiwan datasets; and ship detection results of the shallow and deep Faster R-CNN, the proposed detector, the modified CFAR detector, the SPWH detector and a fully convolutional network-based detector, including results with training data from Cloude and Huynen decompositions and after multi-look processing with 9, 25 and 49 looks.
23 pages, 34308 KiB  
Article
Remotely-Sensed Identification of a Transition for the Two Ecosystem States Along the Elevation Gradient: A Case Study of Xinjiang Tianshan Bogda World Heritage Site
by Hong Wan, Xinyuan Wang, Lei Luo, Peng Guo, Yanchuang Zhao, Kai Wu and Hongge Ren
Remote Sens. 2019, 11(23), 2861; https://doi.org/10.3390/rs11232861 - 2 Dec 2019
Cited by 3 | Viewed by 3660
Abstract
The alpine treeline, as an ecological transition zone between montane coniferous forests and alpine meadows (two ecosystem states), is a research hotspot of global ecology and climate change. Quantitative identification of its elevation range can efficiently capture the results of the interaction between climate change and vegetation. Digital extraction and extensive analysis in such a critical elevation range crucially depend on the ability to monitor ecosystem variables and on the suitability of the experimental model, which are often restricted by the weak intersection of disciplines and the spatial-temporal continuity of the data. In this study, the existence of two states was confirmed by frequency analysis and by the Akaike information criterion (AIC) and Bayesian information criterion (BIC) indices. The elevation range of a transition between the two ecosystem states on the northern slope of the Bogda was identified by potential analysis. The results showed that the elevation range of co-occurrence of the two ecosystem states was 2690–2744 m. At an elevation of 2714 m, the high land surface temperature (LST) state started to exhibit more attraction than the low LST state. This elevation was considered a demarcation where abrupt shifts between the two states occur with increasing elevation. The identification results were validated by a field survey and unmanned aerial vehicle data. Progress has been made in identifying transitions between ecosystem states along the elevation gradient in mountainous areas by combining a remotely sensed index with potential analysis. This study also provides a reference for obtaining the elevation of the alpine treeline quickly and accurately. Full article
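A minimal sketch of the AIC/BIC step described above, assuming the one- to five-mode fits are Gaussian mixtures (as scikit-learn implements them) and using synthetic bimodal values in place of the arcsine-transformed LST sample:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# toy bimodal sample standing in for the transformed LST values
lst = np.concatenate([rng.normal(18, 1.5, 800), rng.normal(25, 1.5, 800)]).reshape(-1, 1)

# fit mixtures with 1..5 modes and compare information criteria
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(lst)
    print(k, round(gm.aic(lst), 1), round(gm.bic(lst), 1))
# a minimum of AIC/BIC at k = 2 would support the two-state interpretation
```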
Figures 1–14: location and elevation of the study area; field photos of land cover types, montane coniferous forests, the co-occurrence region and alpine meadows; field routes and survey points of July 2018; aspect and LST distributions with LST statistics for different land cover types; omnidirectional LST statistics; the NDVI spatial distribution on 28 July 2016; a field photo near Tianchi Lake; the LST probability density with AIC/BIC for one to five normal distribution modes; variation of the LST states from potential analysis along the elevation gradient; validation of the results with Sentinel-2 data, in-situ sites, unmanned aerial vehicle data and Google Earth; the surface albedo (SBA) distribution and its potential analysis along the elevation gradient; a partial distribution map of the LST-based results; and field photos of shrubs.
35 pages, 7039 KiB  
Article
An Under-Ice Hyperspectral and RGB Imaging System to Capture Fine-Scale Biophysical Properties of Sea Ice
by Emiliano Cimoli, Klaus M. Meiners, Arko Lucieer and Vanessa Lucieer
Remote Sens. 2019, 11(23), 2860; https://doi.org/10.3390/rs11232860 - 2 Dec 2019
Cited by 16 | Viewed by 7002
Abstract
Sea-ice biophysical properties are characterized by high spatio-temporal variability ranging from the meso- to the millimeter scale. Ice coring is a common yet coarse point sampling technique that struggles to capture such variability in a non-invasive manner. This hinders quantification and understanding of ice algae biomass patchiness and its complex interaction with some of its sea ice physical drivers. In response to these limitations, a novel under-ice sled system was designed to capture proxies of biomass together with 3D models of bottom topography of land-fast sea-ice. This system couples a pushbroom hyperspectral imaging (HI) sensor with a standard digital RGB camera and was trialed at Cape Evans, Antarctica. HI aims to quantify per-pixel chlorophyll-a content and other ice algae biological properties at the ice-water interface based on light transmitted through the ice. RGB imagery processed with digital photogrammetry aims to capture under-ice structure and topography. Results from a 20 m transect capturing a 0.61 m wide swath at sub-mm spatial resolution are presented. We outline the technical and logistical approach taken and provide recommendations for future deployments and developments of similar systems. A preliminary transect subsample was processed using both established and novel under-ice bio-optical indices (e.g., normalized difference indexes and the area normalized by the maximal band depth) and explorative analyses (e.g., principal component analyses) to establish proxies of algal biomass. This first deployment of HI and digital photogrammetry under-ice provides a proof-of-concept of a novel methodology capable of delivering non-invasive and highly resolved estimates of ice algal biomass in-situ, together with some of its environmental drivers. Nonetheless, various challenges and limitations remain before our method can be adopted across a range of sea-ice conditions. Our work concludes with suggested solutions to these challenges and proposes further method and system developments for future research. Full article
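Two of the bio-optical indices mentioned above, a normalised difference index (NDI) and an area-under-curve index normalised to the maximal band depth (ANMB), can be sketched as follows; the straight-line continuum, wavelength limits and toy spectrum are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def ndi(spectrum, wl, w1, w2):
    """Normalised difference index between the two bands nearest w1 and w2 (nm)."""
    a = spectrum[np.argmin(np.abs(wl - w1))]
    b = spectrum[np.argmin(np.abs(wl - w2))]
    return (a - b) / (a + b + 1e-12)

def anmb(spectrum, wl, lo, hi):
    """Area under the continuum-removed curve normalised to the maximal band depth
    over [lo, hi] nm -- one common formulation of such an index."""
    sel = (wl >= lo) & (wl <= hi)
    w, s = wl[sel], spectrum[sel]
    continuum = np.interp(w, [w[0], w[-1]], [s[0], s[-1]])  # straight-line continuum
    depth = 1.0 - s / continuum                              # band depth
    return np.trapz(depth, w) / (depth.max() + 1e-12)

wl = np.arange(400, 800, 5.0)
toy = 1.0 - 0.4 * np.exp(-((wl - 675.0) ** 2) / 300.0)  # flat spectrum with a chl-a-like dip at 675 nm
print(ndi(toy, wl, 648, 567), anmb(toy, wl, 650, 700))
```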
Figures 1–10 and A1: concept design of the under-ice hyperspectral and RGB imaging sled system with its attitude reference; the field deployment and operation concept with two worm gear winches and a support ROV; the payload's main internal components; field pictures of the first deployment at Cape Evans; overview of the surveyed western transect produced with SfM photogrammetry; the main data products (under-ice orthomosaic, DEM hillshade, hyperspectral data cube and radiance spectra); upward-looking RGB samples showing under-ice feeders, brine channels and oxygen bubbles; under-ice irradiance and radiance spectra statistics; principal component analysis of the block B hyperspectral subsample; the NDI and ANMB indices applied as chl-a proxies; and schematics of the payload power and communication streams.
21 pages, 2082 KiB  
Article
Hyperspectral Image Super-Resolution with 1D–2D Attentional Convolutional Neural Network
by Jiaojiao Li, Ruxing Cui, Bo Li, Rui Song, Yunsong Li and Qian Du
Remote Sens. 2019, 11(23), 2859; https://doi.org/10.3390/rs11232859 - 1 Dec 2019
Cited by 26 | Viewed by 5241
Abstract
Hyperspectral image (HSI) super-resolution (SR) is of great application value and has attracted broad attention. The hyperspectral single image super-resolution (HSISR) task is particularly difficult because no auxiliary high-resolution images are available. To tackle this challenging task, and in contrast to existing learning-based HSISR algorithms, in this paper we propose a novel framework, a 1D–2D attentional convolutional neural network, which employs a separation strategy to extract spatial and spectral information and then fuses them gradually. More specifically, our network consists of two streams: a spatial one and a spectral one. The spectral stream is mainly composed of 1D convolutions to encode small changes in the spectrum, while 2D convolutions, cooperating with an attention mechanism, are used in the spatial pathway to encode spatial information. Furthermore, a novel hierarchical side connection strategy is proposed for effectively fusing spectral and spatial information. Compared with a typical 3D convolutional neural network (CNN), the 1D–2D CNN is easier to train and has fewer parameters. More importantly, our proposed framework not only provides an effective solution to the HSISR problem, but also explores its potential in hyperspectral pansharpening. Experiments over widely used benchmarks on SISR and hyperspectral pansharpening demonstrate that the proposed method outperforms other state-of-the-art methods in both visual quality and quantitative measurements. Full article
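A toy PyTorch sketch of the separation idea (a 1D convolution along each pixel's spectrum and a 2D convolution in space, here fused by simple addition) is given below; it is not the paper's architecture, which fuses the two streams gradually through hierarchical side connections and attention.

```python
import torch
import torch.nn as nn

class SpectralSpatialBlock(nn.Module):
    """Toy two-stream block: 1D conv along the spectrum, 2D conv in space, naive additive fusion."""
    def __init__(self, bands, feats=16):
        super().__init__()
        self.spec = nn.Conv1d(1, feats, kernel_size=3, padding=1)       # per-pixel spectral encoding
        self.spat = nn.Conv2d(bands, feats, kernel_size=3, padding=1)   # spatial encoding

    def forward(self, cube):                     # cube: (B, bands, H, W)
        b, c, h, w = cube.shape
        # spectral stream: treat every pixel's spectrum as a 1D signal
        spec_in = cube.permute(0, 2, 3, 1).reshape(b * h * w, 1, c)
        spec_out = self.spec(spec_in)            # (B*H*W, feats, bands)
        spec_out = spec_out.mean(dim=2).reshape(b, h, w, -1).permute(0, 3, 1, 2)
        # spatial stream
        spat_out = self.spat(cube)               # (B, feats, H, W)
        return torch.relu(spec_out + spat_out)

x = torch.rand(1, 31, 16, 16)                    # toy HSI patch with 31 bands
print(SpectralSpatialBlock(bands=31)(x).shape)   # torch.Size([1, 16, 16, 16])
```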
Figures 1–13: the 1D network architecture with encoding and up-sampling sub-networks; the 1D and 2D convolution paths of the spectral–spatial residual block; the spatial attention block; the spatial inputs for hyperspectral pansharpening; performance with increasing a; visual SR results and example spectra for Pavia University and Pavia Center; and visual pansharpening results and example spectra for the CAVE and Harvard datasets.
19 pages, 7052 KiB  
Article
Assessment of the Degree of Building Damage Caused by Disaster Using Convolutional Neural Networks in Combination with Ordinal Regression
by Tianyu Ci, Zhen Liu and Ying Wang
Remote Sens. 2019, 11(23), 2858; https://doi.org/10.3390/rs11232858 - 1 Dec 2019
Cited by 45 | Viewed by 6151
Abstract
We propose a new convolutional neural network method combined with ordinal regression for assessing the degree of building damage caused by earthquakes from aerial imagery. The ordinal regression model and a deep learning algorithm are incorporated to make full use of the available information and improve the accuracy of the assessment. A new loss function is introduced in this paper to combine convolutional neural networks and ordinal regression. Assessing the level of damage to buildings can be regarded as predicting ordered labels for the buildings to be assessed. In existing research, the problem has usually been simplified to a pure classification problem, which ignores the ordinal relationship between different levels of damage and thereby wastes information. Data accumulated from past events are used to build network models for assessing the level of damage, and the deep-learning-based models are described in detail, including model construction, implementation methods, and the selection of hyperparameters; verification is conducted by experiments. When categorizing building damage into four types, we apply the method proposed in this paper to aerial images acquired after the 2014 Ludian earthquake and achieve an overall accuracy of 77.39%; when categorizing building damage into two types, the overall accuracy of the model is 93.95%, exceeding the values reported for similar methods. Full article
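One common way to couple a CNN with ordinal regression, given here only as a hedged sketch rather than the authors' exact loss, is to encode the K ordered damage grades as K-1 cumulative binary targets and sum binary cross-entropies over the corresponding threshold outputs.

```python
import torch
import torch.nn.functional as F

def ordinal_targets(labels, num_classes):
    """Encode ordered grades 0..K-1 as K-1 cumulative binary targets:
    grade 2 of 4 -> [1, 1, 0] (exceeds thresholds 0 and 1, not 2)."""
    thresholds = torch.arange(num_classes - 1)
    return (labels.unsqueeze(1) > thresholds.unsqueeze(0)).float()

def ordinal_loss(logits, labels, num_classes=4):
    """Sum of binary cross-entropies over the K-1 threshold classifiers."""
    return F.binary_cross_entropy_with_logits(logits, ordinal_targets(labels, num_classes))

def decode(logits):
    """Predicted grade = number of thresholds whose probability exceeds 0.5."""
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)

logits = torch.randn(8, 3)                 # e.g. from a CNN head with K-1 outputs
labels = torch.randint(0, 4, (8,))
print(ordinal_loss(logits, labels), decode(logits))
```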
(This article belongs to the Special Issue Remote Sensing of Natural Hazards)
Figures 1–6: post-event aerial images of the Yushu (Qinghai Province) and Ludian (Yunnan Province) earthquakes; examples of building damage in the datasets; the proposed network, consisting of a CNN feature extractor (VGG-16, ResNet-50 or a baseline network) with a Softmax or ordinal regression classifier; the impact of the number of training samples on overall accuracy; and an example of building damage underestimated by visual interpretation of an aerial image, where the collapse visible in the ground photo is not visible from above.
21 pages, 11553 KiB  
Article
Transferred Multi-Perception Attention Networks for Remote Sensing Image Super-Resolution
by Xiaoyu Dong, Zhihong Xi, Xu Sun and Lianru Gao
Remote Sens. 2019, 11(23), 2857; https://doi.org/10.3390/rs11232857 - 1 Dec 2019
Cited by 32 | Viewed by 4640
Abstract
Image super-resolution (SR) reconstruction plays a key role in coping with the increasing demand for remote sensing imaging applications with high spatial resolution requirements. Although many SR methods have been proposed over the last few years, further research is needed to improve SR processes with regard to the complex spatial distribution of remote sensing images and the diverse spatial scales of ground objects. In this paper, a novel multi-perception attention network (MPSR) is developed, with performance exceeding that of many existing state-of-the-art models. By incorporating the proposed enhanced residual block (ERB) and residual channel attention group (RCAG), MPSR can super-resolve low-resolution remote sensing images via multi-perception learning and multi-level adaptively weighted information fusion. Moreover, a pre-training and transfer learning strategy is introduced, which improves SR performance and stabilizes the training procedure. Experimental comparisons are conducted against 13 state-of-the-art methods over a remote sensing dataset and benchmark natural image sets. The proposed model proves its excellence in both objective criteria and subjective visual quality. Full article
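The channel attention used inside the residual channel attention group follows the familiar squeeze-and-excitation pattern (global pooling, a small bottleneck, sigmoid gating, element-wise rescaling); the PyTorch sketch below shows that generic pattern, not the paper's exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: global average pooling, bottleneck MLP,
    sigmoid gating, element-wise rescaling of the feature channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(x)                               # element-wise product

x = torch.rand(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```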
(This article belongs to the Special Issue Image Super-Resolution in Remote Sensing)
Figures 1–12: the multi-perception attention network (MPSR); comparison of residual block designs (the SRResNet block, a common residual block, and the proposed ERB); the residual channel attention group (RCAG); the channel attention module with element-wise product; representative images from the DIV2K, UC MERCED, Set5, Set14, BSD100 and Urban100 datasets; the effect of the pre-training strategy; visual comparisons of ×2, ×3 and ×4 SR results on UCtest and Urban100; and ×2, ×3 and ×4 SR results on GaoFen-1 and GaoFen-2 data.
18 pages, 9643 KiB  
Article
Effects of Distinguishing Vegetation Types on the Estimates of Remotely Sensed Evapotranspiration in Arid Regions
by Tao Du, Li Wang, Guofu Yuan, Xiaomin Sun and Shusen Wang
Remote Sens. 2019, 11(23), 2856; https://doi.org/10.3390/rs11232856 - 1 Dec 2019
Cited by 6 | Viewed by 3812
Abstract
Accurate estimates of evapotranspiration (ET) in arid ecosystems are important for sustainable water resource management due to competing water demands between human and ecological environments. Several empirical remotely sensed ET models have been constructed, and their potential for regional scale ET estimation in arid ecosystems has been demonstrated. Generally, these models were built by combining measured ET with corresponding remotely sensed and meteorological data from diverse sites. However, these sites usually contain different or mixed vegetation types, and little information is available on the estimation uncertainty these models incur by combining different vegetation types from diverse sites. In this study, we employed the most widely used of these models and recalibrated it using datasets from two typical vegetation types (the shrub Tamarix ramosissima and the arbor Populus euphratica) in arid ecosystems of northwestern China. The recalibration was performed in two ways: using combined datasets from the two vegetation types, and using a single dataset from a specific vegetation type. By comparing the performance of the two methods in ET estimation for Tamarix ramosissima and Populus euphratica, we investigated the accuracy of ET estimation at the site scale and the difference in annual ET estimation at the regional scale. The results showed that the estimation accuracy of daily, monthly, and yearly ET was improved by distinguishing the vegetation types. The method based on the combined vegetation types had a large influence on the estimation accuracy of annual ET, overestimating annual ET by about 9.19% for Tamarix ramosissima and underestimating it by about 11.50% for Populus euphratica. Furthermore, a substantial difference in annual ET estimation at the regional scale was found between the two methods; the higher the vegetation coverage, the greater the difference in annual ET. Our results provide valuable information for evaluating the estimation accuracy of regional scale ET using empirical remotely sensed ET models in arid ecosystems. Full article
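The contrast between the combined and vegetation-specific calibrations can be illustrated with a toy regression: fit one model on pooled data from both vegetation types and another on a single type, then compare errors on that type. The predictor, slopes and noise below are invented placeholders, not the paper's model form or measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

def toy_site(slope, n=200):
    """Toy daily samples: single predictor = NDVI * reference ET (illustrative only)."""
    x = (rng.uniform(0.1, 0.6, n) * rng.uniform(2, 8, n)).reshape(-1, 1)
    return x, slope * x[:, 0] + rng.normal(0, 0.3, n)

x_tam, y_tam = toy_site(slope=1.4)   # stand-in for the Tamarix site
x_pop, y_pop = toy_site(slope=1.0)   # stand-in for the Populus site

# combined-vegetation-type model vs single-vegetation-type model
cvtm = LinearRegression().fit(np.vstack([x_tam, x_pop]), np.concatenate([y_tam, y_pop]))
svtm_t = LinearRegression().fit(x_tam, y_tam)
for name, model in [("combined", cvtm), ("Tamarix-only", svtm_t)]:
    rmse = float(np.sqrt(mean_squared_error(y_tam, model.predict(x_tam))))
    print(name, round(rmse, 3))
```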
Figures 1–6: the study area with the Tamarix and Populus transects, the flux tower sites and the Tikanlik Weather Station; comparison of filtered and interpolated daily NDVI; time series and scatter plots of observed versus simulated daily ET using CVTM, SVTM-T and SVTM-P; stage-wise comparisons of observed and simulated daily ET throughout the year; and annual ET maps with their differences (CVTM minus SVTM) for the Tamarix and Populus transects.
20 pages, 6946 KiB  
Article
Biomass Estimation for Semiarid Vegetation and Mine Rehabilitation Using Worldview-3 and Sentinel-1 SAR Imagery
by Nisha Bao, Wenwen Li, Xiaowei Gu and Yanhui Liu
Remote Sens. 2019, 11(23), 2855; https://doi.org/10.3390/rs11232855 - 1 Dec 2019
Cited by 25 | Viewed by 5062
Abstract
Surface mining activities in grassland and rangeland zones directly affect livestock production, forage quality, and regional grassland resources. Mine rehabilitation is necessary for accelerating the recovery of the grassland ecosystem. In this work, we investigate the integration of data obtained by synthetic aperture radar (Sentinel-1 SAR) with data obtained by optical remote sensing (Worldview-3, WV-3) in order to monitor the conditions of a vegetation area rehabilitated after coal mining in North China. The above-ground biomass (AGB) is used as an indicator of the rehabilitated vegetation conditions and the success of mine rehabilitation. Wavelet principal component analysis is used for the fusion of the WV-3 and Sentinel-1 SAR images. Furthermore, a multiple linear regression model is applied based on the relationship between the remote sensing features and the AGB field measurements. Our results show that the WV-3 enhanced vegetation index (EVI), the mean texture from band 8 (near-infrared band 2, NIR2), the SAR vertical transmit–horizontal receive (VH) polarization, and band 8 (NIR2) from the fused image have higher correlation coefficients with the field-measured AGB. The proposed AGB estimation model combining WV-3 and Sentinel-1A SAR imagery yields higher model accuracy (R2 = 0.79 and RMSE = 22.82 g/m2) than that obtained with either of the two datasets alone. Besides improving AGB estimation, the proposed model can also reduce the uncertainty range by 7 g m−2 on average. These results demonstrate the potential of new multispectral high-resolution datasets, such as Sentinel-1 SAR and Worldview-3, in providing timely and accurate AGB estimation for mine rehabilitation planning and management. Full article
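A minimal sketch of the multiple linear regression step, with invented placeholder values for the three kinds of predictors named above (EVI, NIR2 texture, VH backscatter) and reporting R2 and RMSE in the same units as the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
n = 60
# illustrative predictors: WV-3 EVI, NIR2 mean texture, Sentinel-1 VH backscatter (dB)
X = np.column_stack([
    rng.uniform(0.1, 0.5, n),       # EVI
    rng.uniform(10, 60, n),         # NIR2 texture (arbitrary units)
    rng.uniform(-22, -12, n),       # VH backscatter
])
agb = 250 * X[:, 0] + 0.8 * X[:, 1] + 2.5 * (X[:, 2] + 22) + rng.normal(0, 15, n)  # g/m^2

model = LinearRegression().fit(X, agb)
pred = model.predict(X)
print("R2 =", round(r2_score(agb, pred), 2),
      "RMSE =", round(float(np.sqrt(mean_squared_error(agb, pred))), 1), "g/m2")
```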
Figure 1. Location of the research area (left). Photograph of the mine area showing sampling sites in the rehabilitated dump (middle). Digital elevation model (DEM) of the studied dump (right).
Figure 2. Preprocessing procedures for synthetic aperture radar (SAR) images.
Figure 3. SAR vertical-vertical (VV) polarization image before and after image reconstruction.
Figure 4. SAR vertical-horizontal (VH) polarization image before and after image reconstruction.
Figure 5. The fused image integrating the Sentinel-1 SAR image with the WV-3 image.
Figure 6. The entropy value for all spectral bands of WV-3, VV polarization of the SAR image, and fused bands.
Figure 7. The average gradient value for all available bands of WV-3, VV polarization of the SAR image, and fused bands.
Figure 8. The similarity coefficient and the spectral distortion between the WV-3 image and the fused image for all spectral bands.
Figure 9. The relationship between measured and retrieved above-ground biomass (AGB) from the best biomass prediction models of Table 6.
Figure 10. The relationship between measured AGB of 2018 and 2019 and retrieved AGB from the biomass prediction model based on the fusion band.
Figure 11. Residual distributions of the best biomass prediction models of Table 6.
Figure 12. Biomass mapping for the best four biomass prediction models of Table 6.
Figure 13. Biomass uncertainty for the best four biomass prediction models of Table 6.
17 pages, 10605 KiB  
Article
Sequential InSAR Time Series Deformation Monitoring of Land Subsidence and Rebound in Xi’an, China
by Baohang Wang, Chaoying Zhao, Qin Zhang and Mimi Peng
Remote Sens. 2019, 11(23), 2854; https://doi.org/10.3390/rs11232854 - 1 Dec 2019
Cited by 25 | Viewed by 5191
Abstract
Interferometric synthetic aperture radar (InSAR) time series deformation monitoring plays an important role in revealing historical displacement of the Earth’s surface. Xi’an, China, has suffered from severe land subsidence along with ground fissure development since the 1960s, which has threatened and will continue [...] Read more.
Interferometric synthetic aperture radar (InSAR) time series deformation monitoring plays an important role in revealing the historical displacement of the Earth's surface. Xi'an, China, has suffered from severe land subsidence along with ground fissure development since the 1960s, which has threatened, and will continue to threaten, the stability of urban structures. In addition, some local areas in Xi'an have experienced uplift during specific periods. Time series deformation derived from multi-temporal InSAR techniques makes it possible to obtain the temporal evolution of land subsidence and rebound in Xi'an. In this paper, we used the sequential InSAR time series estimation method to map ground subsidence and rebound in Xi'an with Sentinel-1A data acquired from 2015 to 2019, allowing surface deformation to be estimated dynamically and quickly. From 20 June 2015 to 17 July 2019, two areas subsided continuously (Sanyaocun-Fengqiyuan and Qujiang New District), while the Xi'an City Wall area uplifted at a maximum rate of 12 mm/year. Furthermore, Yuhuazhai subsided from 20 June 2015 to 14 October 2018 and rebounded from 14 October 2018 to 17 July 2019, which can be explained as a response to artificial water injection. During artificial water injection, the rebound can be further divided into immediate elastic recovery deformation and time-dependent visco-elastic recovery deformation. Full article
(This article belongs to the Special Issue Remote Sensing of Natural Hazards)
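The sequential estimation described in the abstract builds on a small-baseline (SBAS)-style inversion, in which each unwrapped interferogram constrains the difference in displacement between its two acquisition dates and the per-date displacements are recovered by least squares. The sketch below illustrates only that core inversion for a single pixel; the interferogram network, noise level, and numbers are synthetic assumptions, and the paper's incremental update of the solution as new acquisitions arrive is not reproduced here.

```python
# Minimal sketch of an SBAS-style time-series inversion for one pixel: each
# unwrapped interferogram phase (converted to displacement) equals the
# difference of cumulative displacements between its two acquisition dates,
# so a design matrix maps the unknown per-date displacements to the observed
# interferograms and is solved by least squares. Values are synthetic.
import numpy as np

n_dates = 6                       # SAR acquisitions (date 0 is the reference)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (3, 5)]

# Design matrix: one row per interferogram, one column per non-reference date.
A = np.zeros((len(pairs), n_dates - 1))
for row, (i, j) in enumerate(pairs):
    if i > 0:
        A[row, i - 1] = -1.0      # earlier acquisition of the pair
    A[row, j - 1] = 1.0           # later acquisition of the pair

# Synthetic "true" cumulative displacements (mm) and the interferograms they imply.
true_disp = np.array([-3.0, -7.0, -12.0, -18.0, -25.0])
d = A @ true_disp + np.random.default_rng(0).normal(0, 0.5, len(pairs))

# Least-squares estimate of the displacement time series at this pixel.
est, *_ = np.linalg.lstsq(A, d, rcond=None)
print("estimated displacement time series (mm):", np.round(est, 1))
```

In the sequential scheme described in the abstract and in Figure 3 below, newly received acquisitions add interferograms (rows) connecting them to the archived images, so the time series can be extended without re-processing the whole network from scratch.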
Figure 1. Flow chart of sequential InSAR time series estimation.
Figure 2. Quaternary geology map of Xi'an, where Chang'an-Lintong fault (CAF) and 14 ground fissures are superimposed, and loess ridge areas are labeled with white blocks.
Figure 3. Illustration of the interferogram configuration between the first group of SAR data (i.e., archived SAR data) and the newly received SAR images (i.e., new observation data from SAR satellites). (A) Single-link interferogram configuration; (B) network-link interferogram configuration. The blue lines indicate interferograms generated between archived SAR images in the first group by SBAS technology, and the green lines show the new interferograms generated between newly received SAR images and older archived SAR images by SBAS technology.
Figure 4. Annual deformation rate map in the vertical direction over the study area from 20 June 2015 to 17 July 2019. The deformation time series for six points indicated by A–F are shown in Figure 5. Rectangular boxes L1 and L2 are enlarged and shown in Figures 6 and 8, respectively. Red dotted line indicates ground fissures, and the red line indicates CAF faults. The black pentagram indicates the location of the reference point.
Figure 5. Deformation time series at six typical points (A–F), the locations of which are shown in Figure 4. The six points show different deformation magnitudes.
Figure 6. The deformation and optical image of Xi'an City Wall; (A) deformation rate map from 20 June 2015 to 17 July 2019, which is an enlargement of L1 in Figure 4; (B) an optical image of Xi'an City Wall; (C) a photo of Xi'an City Wall.
Figure 7. Deformation time series at points (A–D); their locations are indicated in Figure 6A.
Figure 8. Cumulative deformation time series of Yuhuazhai from 20 June 2015 to 17 July 2019.
Figure 9. Cumulative rebound deformation time series of Yuhuazhai from 5 April 2018 to 17 July 2019. The black rectangular box is enlarged in Figure 10.
Figure 10. Enlarged deformation map of the area in the rectangle in Figure 9, with indication of the ground fissure F4. The deformation time series of the four points A–D are shown in Figure 11. The Yuhuazhai area indicated in the rectangle is shown in Figure 12.
Figure 11. Deformation time series at four points A–D in Figure 10. Red lines divide time series deformation into three stages.
Figure 12. Optical image of Yuhuazhai (rectangular box in Figure 10). Seven pumping wells are identified. This area experienced large rebound deformation after artificial water injection.
Figure 13. Recognition of pumping wells 1, 2, and 3 in Figure 12 from the optical image (A–C) and from photos of the scene (D–F), respectively.