Remote Sens., Volume 11, Issue 17 (September-1 2019) – 114 articles

Cover Story (view full-size image): Observation of the spatial distribution of cloud optical thickness (COT) is useful for the prediction and diagnosis of photovoltaic power generation. Using deep learning techniques, a convolutional neural network (CNN) has been trained on synthetic spectral radiance data generated with a 3D atmospheric radiative transfer model, enabling quick estimation of the COT distribution from images taken by a ground-mounted, radiometrically calibrated digital camera. The CNN retrieves the COT spatial distribution using spectral features and spatial context. An evaluation of the method on synthetic data and a comparison with an existing method show promising results. Shown on the cover page are (left) synthetic camera images, (middle) ground-truth COT, and (right) COT estimated by the CNN. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 2459 KiB  
Article
Performance of Selected Ionospheric Models in Multi-Global Navigation Satellite System Single-Frequency Positioning over China
by Ahao Wang, Junping Chen, Yize Zhang, Lingdong Meng and Jiexian Wang
Remote Sens. 2019, 11(17), 2070; https://doi.org/10.3390/rs11172070 - 3 Sep 2019
Cited by 20 | Viewed by 3878
Abstract
Ionospheric delay, as the major error source, must be properly handled in multi-GNSS (Global Navigation Satellite System) single-frequency positioning, and different ionospheric models exhibit marked performance differences. In this study, two single-frequency positioning solutions with different ionospheric corrections are used to comprehensively analyze the effect of ionospheric delay on multi-frequency, multi-constellation positioning performance, including standard point positioning (SPP) and ionosphere-constrained precise point positioning (PPP). The four ionospheric models studied are the GPS broadcast ionospheric model (GPS-Klo), the BDS (BeiDou Navigation Satellite System) broadcast ionospheric model (BDS-Klo), the BDS ionospheric grid model (BDS-Grid) and the Global Ionosphere Maps (GIM) model. Datasets were collected from 10 stations over one month in 2019; solar activity remained calm and the ionosphere was stable during the test period. The experimental results show that, for single-frequency SPP, the GIM model achieves the best accuracy, and the positioning accuracy of the BDS-Klo and BDS-Grid models is much better than that of the GPS-Klo solution in the N and U components. For single-frequency PPP, the average convergence time of the ionosphere-constrained PPP is much shorter than that of the traditional PPP approach, with improvements of 11.2%, 11.9%, 21.3% and 39.6% for the GPS-Klo-, BDS-Klo-, BDS-Grid- and GIM-constrained GPS + GLONASS + BDS single-frequency PPP solutions, respectively. Furthermore, the positioning accuracy of the BDS-Grid- and GIM-constrained PPP is generally the same as that of the ionosphere-free combined single-frequency PPP. Through the combination of GPS, GLONASS and BDS, the positioning accuracy and convergence performance of all single-system single-frequency SPP/PPP solutions can be effectively improved. Full article
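As background for the "ionosphere-free combined single-frequency PPP" baseline mentioned in the abstract, here is a minimal sketch of the GRAPHIC combination, the standard single-frequency ionosphere-free observable. All numeric values are invented for illustration, and the carrier-phase ambiguity term is omitted for brevity.

```python
# Sketch of the GRAPHIC (GRoup And PHase Ionospheric Correction) combination.
# The first-order ionospheric delay enters the code observable with +I and the
# carrier-phase observable with -I, so their average cancels it.

def graphic(code_obs: float, phase_obs: float) -> float:
    """Return the GRAPHIC combination (P + L) / 2 in metres."""
    return 0.5 * (code_obs + phase_obs)

rho = 21_500_000.0   # hypothetical geometric range + clock terms (m)
iono = 4.2           # hypothetical slant ionospheric delay (m)

code_obs = rho + iono    # pseudorange: ionosphere delays the code
phase_obs = rho - iono   # carrier phase: ionosphere advances the phase

print(abs(graphic(code_obs, phase_obs) - rho) < 1e-6)   # True: delay cancelled
```

The remaining error sources (noise, multipath, the ambiguity) are of course still present in practice; only the first-order ionospheric term cancels.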
Show Figures

Figure 1
<p>Geographical distribution of the selected MGEX (blue) and CMONOC (red) stations.</p>
Figure 2
<p>Geomagnetic Kp index and F10.7 values on 1–28 February 2019.</p>
Figure 3
<p>Positioning errors of b1 SF-SPP with different ionospheric models (GPS-Klo, BDS-Klo, BDS-Grid and GIM) in (<b>a</b>) single-system or (<b>b</b>) multi-system mode at SNMX station (DoY 36 in 2019, sum of Kp = 0.88, F10.7 = 71 sfu). Here, e.g., GPS-Klo-GRC represents the SF-SPP results using data of GPS + GLONASS + BDS with the ionospheric delay corrected using the GPS-Klo model, and the abbreviations G, R and C represent GPS, GLONASS and BDS, respectively.</p>
Figure 4
<p>Mean number of satellites and mean PDOP for different SF-SPP processing scenarios at 10 selected stations on 5 February 2019: (<b>a</b>) b1 SPP; (<b>b</b>) b2 SPP. The abbreviations G, R and C represent GPS, GLONASS and BDS, respectively.</p>
Figure 5
<p>Comparison of positioning error of b1 SF-PPP with different schemes for GPS, GLONASS and GPS + GLONASS + BDS solutions at QHGC station (DoY 35 in 2019, sum of Kp = 1.25, F10.7 = 71 sfu). The corresponding satellite numbers are represented by the yellow curve. The abbreviation “Cons” represents “Constrained”.</p>
Figure 6
<p>Average convergence time of SF-PPP with different schemes for GPS-only, GLONASS-only, BDS-only, GPS + GLONASS, GPS + BDS and GPS + GLONASS + BDS solutions using datasets at 10 stations over one month.</p>
18 pages, 2682 KiB  
Article
Integrating SEBAL with in-Field Crop Water Status Measurement for Precision Irrigation Applications—A Case Study
by Stefano Gobbo, Stefano Lo Presti, Marco Martello, Lorenza Panunzi, Antonio Berti and Francesco Morari
Remote Sens. 2019, 11(17), 2069; https://doi.org/10.3390/rs11172069 - 3 Sep 2019
Cited by 24 | Viewed by 5286
Abstract
The surface energy balance algorithm for land (SEBAL) has been demonstrated to provide accurate estimates of crop evapotranspiration (ET) and yield at different spatial scales, even under highly heterogeneous conditions. However, validating SEBAL against direct and indirect in-field measurements of plant water status is a necessary step before deploying the algorithm as an irrigation-scheduling tool. To this end, a study was conducted in a maize field located near the Venice Lagoon area in Italy. The experimental area was irrigated using a 274 m long variable rate irrigation (VRI) system with 25-m sections. Three irrigation management zones (IMZs; high, medium and low irrigation requirement zones) were defined by combining soil texture and normalized difference vegetation index (NDVI) data. Soil moisture sensors were installed in the different IMZs and used to schedule irrigation. In addition, SEBAL-based actual evapotranspiration (ETr) and biomass estimates were calculated throughout the season. VRI management allowed crop water demand to be matched, saving up to 42 mm (−16%) of water compared to uniform irrigation rates. The high irrigation amounts applied during the growing season to avoid water stress resulted in no significant differences among the IMZs. SEBAL-based biomass estimates agreed with in-season measurements at 72, 105 and 112 days after planting (DAP; r2 = 0.87). Seasonal ET matched the spatial variability observed in the measured yield map at harvest. Moreover, the SEBAL-derived yield map largely agreed with the measured yield map, with relative errors of 0.3% among the IMZs and of 1% (0.21 t ha−1) for the whole field. While the FAO method-based stress coefficient (Ks) never dropped below the optimum condition (Ks = 1) for any of the IMZs or the uniform zone, SEBAL Ks was sensitive to changes in water status and remained below 1 during most of the growing season. Using SEBAL to capture the daily spatial variation in crop water needs and growth would enable the definition of transient, dynamic IMZs, allowing farmers to apply proper irrigation amounts and increase water-use efficiency. Full article
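The FAO-method stress coefficient Ks compared against SEBAL above follows the standard FAO-56 definition; a minimal sketch, with hypothetical TAW and depletion values (the study derives these from soil-moisture sensors, and uses the 50% TAW threshold noted in the abstract):

```python
# Illustrative FAO-56 water stress coefficient (Ks). The numeric inputs below
# are hypothetical; only the formula itself follows FAO-56.

def fao56_ks(taw: float, depletion: float, p: float = 0.5) -> float:
    """Return the stress coefficient Ks in [0, 1].

    taw       -- total available soil water in the root zone (mm)
    depletion -- current root-zone depletion Dr (mm)
    p         -- fraction of TAW depletable without stress (0.5 here,
                 matching the 50% TAW threshold in the abstract)
    """
    raw = p * taw                      # readily available water
    if depletion <= raw:               # no stress until RAW is exhausted
        return 1.0
    ks = (taw - depletion) / (taw - raw)
    return max(0.0, ks)

print(fao56_ks(taw=120, depletion=40))              # below 50% TAW -> 1.0
print(round(fao56_ks(taw=120, depletion=100), 3))   # stressed -> 0.333
```

A Ks pinned at 1, as reported for the FAO method here, simply means the sensed depletion never exceeded the readily available water.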
Show Figures

Figure 1
<p>Aerial image of the study area with the field boundaries (in red).</p>
Figure 2
<p>Soil sampling scheme utilized for the determination of soil texture with the 103 sampling points (<b>a</b>), clay content (expressed as percentage, %; <b>b</b>) and sand (expressed as percentage, %; <b>c</b>) content maps resulting from ordinary kriging. Modified from [<a href="#B28-remotesensing-11-02069" class="html-bibr">28</a>].</p>
Figure 3
<p>Fuzziness performance index (FPI) and normalized classification entropy (NCE) values as calculated by the Management Zone Analyst during the cluster analysis.</p>
Figure 4
<p>Management zones resulting from the Management Zone Analyst cluster analysis (<b>a</b>), final application map used for variable rate irrigation with the integration of the three IMZs (zone 1—orange; zone 2—red, zone 3—light blue) and the uniform irrigated area (green zone; <b>b</b>).</p>
Figure 5
<p>Daily maximum (black line), minimum (red line) air temperature and rainfall (vertical bars) during the 2015 growing season from the Agenzia Regionale per la Prevenzione e Protezione Ambientale del Veneto (ARPAV; <a href="http://www.arpa.veneto.it/" target="_blank">http://www.arpa.veneto.it/</a>) weather station.</p>
Figure 6
<p>Volumetric water content (VWC) data for the three IMZs (<b>a–c</b>) and the uniform area (<b>d</b>). Horizontal black dotted lines represent the irrigation threshold used for scheduling irrigation over the 2015 growing season (only for the three IMZs), red dotted lines represent the 50% total available soil water in the root zone (TAW) threshold calculated using the FAO method [<a href="#B35-remotesensing-11-02069" class="html-bibr">35</a>].</p>
Figure 7
<p>Biomass samples collected at 72 (24/6), 102 (24/7) and 115 (6/8) days after planting (DAP) during the growing season (blue bars) and SEBAL biomass estimates on the same sampling dates for all the zones.</p>
Figure 8
<p>Temporal variability in SEBAL actual evapotranspiration (ETr) maps on June 2nd (<b>a</b>), July 2nd (<b>b</b>) and August 6th, 2015 (<b>c</b>).</p>
Figure 9
<p>Map of seasonal ET (ET<sub>seasonal</sub>) for the study area in 2015 during the growing season (April 13th to September 7th).</p>
Figure 10
<p>Measured (<b>a</b>) and SEBAL (<b>b</b>) yield maps, relative error map (<b>c</b>) between measured and SEBAL yield values.</p>
Figure 11
<p>Volumetric water content (VWC), FAO-56 and SEBAL-based water stress coefficients (K<sub>s</sub>) calculated over the 2015 growing season for zone 1 (<b>a</b>), zone 2 (<b>b</b>), zone 3 (<b>c</b>) and the uniform irrigated area (<b>d</b>).</p>
19 pages, 4025 KiB  
Article
Remote Sensing Estimation of Lake Total Phosphorus Concentration Based on MODIS: A Case Study of Lake Hongze
by Junfeng Xiong, Chen Lin, Ronghua Ma and Zhigang Cao
Remote Sens. 2019, 11(17), 2068; https://doi.org/10.3390/rs11172068 - 3 Sep 2019
Cited by 48 | Viewed by 6017
Abstract
Phosphorus (P) is an important substance for the growth of phytoplankton and an efficient index for assessing water quality. However, remote sensing estimation of the TP concentration in waters must be associated with optically active substances such as chlorophyll-a (Chla) and suspended particulate matter (SPM). Based on the good correlation between suspended particulate inorganic matter (SPIM) and P in Lake Hongze, we used direct and indirect derivation methods to develop algorithms for total phosphorus (TP) estimation with MODIS/Aqua data. Results demonstrate that the direct derivation algorithm based on the 645 nm and 1240 nm bands of MODIS/Aqua achieves satisfactory accuracy (R2 = 0.75, RMSE = 0.029 mg/L, MRE = 39% for the training dataset; R2 = 0.68, RMSE = 0.033 mg/L, MRE = 47% for the validation dataset), better than that of the indirect derivation algorithm. The 645 nm and 1240 nm bands of MODIS are the main characteristic bands of SPM, so the algorithm can effectively reflect P variations in Lake Hongze. Additionally, the ratio of TP to SPM is positively correlated with the accuracy of the algorithm. The proportion of SPIM in the SPM has a complex effect on the accuracy of the algorithm; when SPIM accounts for 78%, the algorithm achieves its highest accuracy. Furthermore, the performance of this direct derivation algorithm was examined in two other inland lakes in China (Lake Nanyi and Lake Chaohu): it derived the expected P distribution in Lake Nanyi, whereas it failed in Lake Chaohu. Differences in water properties significantly influence the accuracy of this direct derivation algorithm; the TP, Chla, and suspended particulate organic matter (SPOM) of Lake Chaohu are much higher than those of the other two lakes, so it is difficult to estimate the TP concentration with a simple band combination in Lake Chaohu. Although the algorithm depends on the dataset used in its development, it generally gives good estimates for SPIM-dominated waters, especially when SPIM accounts for 60% to 80% of the SPM. This research proposes a direct derivation algorithm for TP estimation in turbid lakes and provides a theoretical and practical reference for extending optical remote sensing applications; the TP empirical algorithm for Lake Hongze will help the local government manage water quality. Full article
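The direct-derivation idea, a linear regression of TP on a MODIS band index such as the (B2−B5)/(B2+B5) ratio shown in the figures, can be sketched as follows. The reflectances, noise level, and fitted coefficients below are synthetic placeholders, not the paper's published values.

```python
import numpy as np

# Direct-derivation sketch: regress TP on a normalized-difference band index.
# All data here are synthetic stand-ins for the in situ TP measurements.

def band_index(b2: np.ndarray, b5: np.ndarray) -> np.ndarray:
    """Normalized difference of two MODIS reflectance bands."""
    return (b2 - b5) / (b2 + b5)

rng = np.random.default_rng(0)
b2 = rng.uniform(0.05, 0.25, 50)                      # synthetic band reflectance
b5 = rng.uniform(0.01, 0.10, 50)                      # synthetic band reflectance
x = band_index(b2, b5)
tp_true = 0.08 * x + 0.05 + rng.normal(0, 0.005, 50)  # synthetic TP (mg/L)

slope, intercept = np.polyfit(x, tp_true, 1)          # fit the empirical model
tp_est = slope * x + intercept
rmse = float(np.sqrt(np.mean((tp_est - tp_true) ** 2)))
print(f"slope={slope:.3f} intercept={intercept:.3f} RMSE={rmse:.4f} mg/L")
```

The indirect route described in the abstract adds one step: first regress SPM on the band combination, then regress TP on the estimated SPM.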
(This article belongs to the Special Issue Satellite Monitoring of Water Quality and Water Environment)
Show Figures

Graphical abstract
Figure 1
<p>Location of the study site and sampling plots.</p>
Figure 2
<p>The optimal algorithms obtained by the direct derivation method: (<b>a</b>) The relationship between the in situ measured TP and (B2–B5)/(B2+B5); (<b>b</b>) the validation of the MODIS-estimated TP with the in situ measured TP based on an independent dataset.</p>
Figure 3
<p>The optimal algorithms obtained by the indirect derivation method: (<b>a</b>) The relationship between the in situ measured SPM and (B2−B5); (<b>b</b>) the validation of the MODIS-estimated SPM with the in situ measured SPM based on an independent dataset; (<b>c</b>) the relationship between the in situ measured TP and MODIS-estimated SPM; (<b>d</b>) the validation of the indirect derivation method-estimated TP with the in situ measured TP based on an independent dataset.</p>
Figure 4
<p>The seasonal average of the TP concentration from 2016 to 2018 in Lake Hongze in each season: (<b>a</b>) Spring; (<b>b</b>) summer; (<b>c</b>) autumn; (<b>d</b>) winter.</p>
Figure 5
<p>The correlation coefficients between the SPM, SPIM, SPOM, TP, and R<sub>rs</sub> measured in the sites.</p>
Figure 6
<p>Algorithm accuracy change with different substance changes. (<b>a</b>) Chlorophyll-a (Chla); (<b>b</b>) SPM; (<b>c</b>) TP/SPM; (<b>d</b>) SPIM/SPM.</p>
Figure 7
<p>Box-plot of the water quality attributes of the three lakes: (<b>a</b>) Chla; (<b>b</b>) TP; (<b>c</b>) SPM; (<b>d</b>) SPIM; (<b>e</b>) SPOM.</p>
Figure 8
<p>(<b>a</b>) Scatter plot of the (B2−B5)/(B2+B5) and TP concentration in Lake Chaohu; (<b>b</b>) scatter plot of the (B2−B5)/(B2+B5) and TP concentration in Lake Nanyi; (<b>c</b>) scatter plot of the B1*B5 and TP concentration in Lake Chaohu; (<b>d</b>) scatter plot of the (B1−B2)/(B1+B2) and TP concentration in Lake Nanyi.</p>
Figure 9
<p>(<b>a</b>) Scatter plot of the (B1–B2)/(B1+B2) and Chla concentration in Lake Nanyi; (<b>b</b>) scatter plot of the (B1–B2)/(B1+B2) and SPM concentration in Lake Nanyi; (<b>c</b>) scatter plot of the (B1–B2)/(B1+B2) and SPIM concentration in Lake Nanyi; (<b>d</b>) scatter plot of the (B1–B2)/(B1+B2) and SPOM concentration in Lake Nanyi.</p>
19 pages, 14147 KiB  
Article
SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks—Optimization, Opportunities and Limits
by Mario Fuentes Reyes, Stefan Auer, Nina Merkle, Corentin Henry and Michael Schmitt
Remote Sens. 2019, 11(17), 2067; https://doi.org/10.3390/rs11172067 - 3 Sep 2019
Cited by 126 | Viewed by 12608
Abstract
Due to its all-weather, day-and-night imaging capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. The ability to interpret the data is limited, even for experts, as the human eye is not accustomed to the effects of distance-dependent imaging, signal intensities detected in the radar spectrum, and image characteristics related to speckle or post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation in order to support the interpretation and analysis of the original data. A conditional adversarial network is adopted and optimized to generate alternative SAR image representations, trained on pairs of SAR images (starting point) and optical images (reference). Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of results on follow-up applications, and the discussion of opportunities and drawbacks of this application of deep learning. Case study results are shown for high-resolution (SAR: TerraSAR-X, optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representation are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide a basis for explaining the fundamental limitations of the SAR-to-optical image translation idea, but also indicate benefits of alternative SAR image representations. Full article
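The cycle-consistency constraint at the heart of CycleGAN (the architecture shown in Figure 2) can be illustrated with a toy example: a SAR-to-optical generator G and an optical-to-SAR generator F should invert each other, so training drives the L1 cycle loss toward zero. G and F below are placeholder linear maps, not trained networks.

```python
import numpy as np

# Toy cycle-consistency check: the loss ||F(G(x)) - x||_1 is ~0 when the
# two "generators" invert each other. Real CycleGAN generators are CNNs.

def l1(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference (L1 loss)."""
    return float(np.mean(np.abs(a - b)))

G = lambda x: 2.0 * x + 1.0      # stand-in SAR -> optical "generator"
F = lambda y: (y - 1.0) / 2.0    # stand-in optical -> SAR "generator", G's inverse

x_sar = np.random.default_rng(1).uniform(0.0, 1.0, (8, 8))  # fake SAR patch
cycle_loss = l1(F(G(x_sar)), x_sar)
print(f"cycle-consistency loss: {cycle_loss:.6f}")  # ~0 since F inverts G
```

In the unpaired training setting this loss is what keeps G from inventing arbitrary optical content unrelated to the input SAR scene.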
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
Show Figures

Graphical abstract
Figure 1
<p>Comparison between acquired SAR (<b>left</b>) and optical (<b>right</b>) images.</p>
Figure 2
<p>Architecture of CycleGAN.</p>
Figure 3
<p>Results obtained from the applied network, organized in two blocks. <b>Left</b>: original SAR file, center: images generated with adapted CycleGAN, <b>right</b>: optical image.</p>
Figure 4
<p>Results from the CycleGAN network compared to other approaches. First column: original SAR image, second: CycleGAN result, third: DRIT result, fourth: NL-SAR result [<a href="#B34-remotesensing-11-02067" class="html-bibr">34</a>], last: optical image.</p>
Figure 5
<p>Results from the generator applied to the SEN1-2 data set. Left are the original SAR patches, the middle column shows the generated images by the network and the right are the optical references.</p>
Figure 6
<p>Binary road segmentation results of the DeepLabv3+ models trained on the SAR, NL-SAR and CycleGAN image data sets separately. Rows 1 and 3: image samples. Rows 2 and 4: segmentation confusion, colored as follows: green pixels are true positives, blue are false negatives, and red are false positives.</p>
Figure 7
<p>Overlay of the road candidates detected on SAR (marked yellow) over the SAR image (<b>left</b>) and CycleGAN image (<b>right</b>).</p>
23 pages, 3447 KiB  
Article
Estimating Nitrogen from Structural Crop Traits at Field Scale—A Novel Approach Versus Spectral Vegetation Indices
by Nora Tilly and Georg Bareth
Remote Sens. 2019, 11(17), 2066; https://doi.org/10.3390/rs11172066 - 3 Sep 2019
Cited by 12 | Viewed by 4865
Abstract
A sufficient nitrogen (N) supply is mandatory for healthy crop growth, but N losses into the environment have well-known negative consequences. Hence, a deep understanding and close monitoring of crop growth are advisable for optimized N management. In this context, remote sensing facilitates the capture of crop traits. While several studies estimate biomass from spectral and structural data, N has so far been estimated only from spectral features. It is well known that N concentration is negatively related to dry biomass, which, in turn, can be estimated from crop height. Based on this indirect link, the present study aims to estimate N concentration at field scale with a two-step model: first, crop height is used to estimate biomass; second, the modeled biomass is used to estimate N concentration. For comparison, N concentration was also estimated from spectral data. The data were captured in a spring barley field experiment over two growing seasons. Crop surface height was measured with a terrestrial laser scanner, seven vegetation indices were calculated from field spectrometer measurements, and dry biomass and N concentration were destructively sampled. In the validation, better results were obtained with the models based on structural data (R2 < 0.85) than with those based on spectral data (R2 < 0.70). A brief look at the N concentration of different plant organs showed stronger dependencies on structural data (R2: 0.40–0.81) than on spectral data (R2: 0.18–0.68). Overall, this first study shows the potential of crop-specific, across-season, two-step models based on structural data for estimating crop N concentration at field scale. The validity of the models for in-season estimation requires further research. Full article
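The two-step model can be sketched as two chained regressions, crop surface height to dry biomass, then modeled biomass to N concentration. All coefficients below are invented, since the paper fits them per crop and season; only the model structure follows the abstract.

```python
# Hedged sketch of the two-step model (cf. Figure 3): crop surface height
# (CSH) -> dry biomass via a biomass regression model (BRM), then modeled
# biomass -> %N via a nitrogen regression model (NRM).

def biomass_from_csh(csh_m: float, a: float = 1.8, b: float = 0.2) -> float:
    """BRM: dry biomass (hypothetical units) from crop surface height (m)."""
    return a * csh_m + b

def n_from_biomass(dbio: float, c: float = -1.5, d: float = 4.0) -> float:
    """NRM: %N falls as dry biomass rises (the negative N-biomass relation
    the study builds on); floored at a nominal 0.5%."""
    return max(0.5, c * dbio + d)

csh = 0.6                              # metres, hypothetical plot average
dbio_mod = biomass_from_csh(csh)       # step 1: modeled dry biomass
n_mod = n_from_biomass(dbio_mod)       # step 2: modeled N concentration
print(f"CSH={csh} m -> DBio_mod={dbio_mod:.2f} -> %N_mod={n_mod:.2f}")
```

The one-step comparison model in the paper replaces both regressions with a single fit of %N against a vegetation index.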
(This article belongs to the Special Issue Remote Sensing for Precision Nitrogen Management)
Show Figures

Graphical abstract
Figure 1
<p>Steps from point cloud to plot-wise averaged crop surface height (CSH): [<b>A</b>] Filtered point cloud representing the crop surface; [<b>B</b>] Raster data set of CSH; [<b>C</b>] Plot-wise averaged CSH values.</p>
Figure 2
<p>Campaign-wise averaged codes for the developmental steps (BBCH).</p>
Figure 3
<p>Concept of the two-step (<b>A</b>) and one-step (<b>B</b>) models (BRM: biomass regression model; NRM: nitrogen regression model; CSH: crop surface height; VI: vegetation index; DBio: dry biomass; DBio<sub>mod</sub>: modeled dry biomass; %N: nitrogen concentration; %N<sub>mod</sub>: modeled nitrogen concentration).</p>
Figure 4
<p>Blue triangles and green dots represent the values of 2013 and 2014, respectively, with linear (light grey) and the best-fitting (black) regression lines. [<b>A</b>]–[<b>C</b>]: Scatterplots of the crop traits vs. day after seeding (DAS). [<b>D</b>]–[<b>F</b>]: Crop traits plotted against each other.</p>
Figure 5
<p>Scatterplots of dry biomass and N concentration per plant vs. crop surface height ([<b>A</b>] and [<b>B</b>]) or NRI ([<b>C</b>] and [<b>D</b>]) with best-fitting regression lines. Blue dots, green triangles, and orange rhombs represent the values of leaf, stem, and ear, respectively.</p>
Figure 6
<p>Scatterplots of the ear dry biomass vs. N concentration of stem [<b>Α</b>] and leaf [<b>B</b>] with best-fitting regression lines.</p>
28 pages, 7729 KiB  
Article
CNN-Based Land Cover Classification Combining Stratified Segmentation and Fusion of Point Cloud and Very High-Spatial Resolution Remote Sensing Image Data
by Keqi Zhou, Dongping Ming, Xianwei Lv, Ju Fang and Min Wang
Remote Sens. 2019, 11(17), 2065; https://doi.org/10.3390/rs11172065 - 2 Sep 2019
Cited by 43 | Viewed by 6481
Abstract
Traditional and convolutional neural network (CNN)-based geographic object-based image analysis (GeOBIA) land-cover classification methods have prospered in remote sensing and produced numerous notable achievements. However, a bottleneck hinders further improvement of classification results: the insufficiency of the information provided by very high-spatial resolution images (VHSRIs). Specifically, different objects sharing a similar spectrum and the lack of topographic information (heights) are natural drawbacks of VHSRIs. Multisource data has therefore come into focus and shows a promising future. Firstly, for data fusion, this paper proposes a standard normalized digital surface model (StdnDSM) method, essentially a digital elevation model derived from a digital terrain model (DTM) and a digital surface model (DSM), to break through the bottleneck by fusing VHSRIs and point clouds. It smooths and improves the fusion of point clouds and VHSRIs and thus performs well in the subsequent classification. The fused data were then used to perform multiresolution segmentation (MRS) and served as training data for the CNN. Moreover, the grey-level co-occurrence matrix (GLCM) was introduced for a stratified MRS. Secondly, for data processing, the stratified MRS is more efficient than unstratified MRS, and its result is theoretically more rational and explainable than that of traditional global segmentation. Finally, the class of each segmented polygon is determined by majority voting. Compared to pixel-based and traditional object-based classification methods, the majority-voting strategy is more robust and avoids misclassifications caused by a few misclassified centre points. Experimental results suggest that the proposed method is promising for object-based classification. Full article
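The majority-voting step described above can be sketched as follows; the class labels and polygon contents are invented for illustration.

```python
from collections import Counter

# Minimal sketch of majority voting: each segmented polygon takes the most
# frequent CNN label among the pixels (or centre points) it covers.

def majority_vote(labels: list[str]) -> str:
    """Assign a polygon the most common per-pixel class label."""
    return Counter(labels).most_common(1)[0][0]

# A single misclassified centre point ("road") no longer flips the polygon.
polygon_pixels = ["building", "building", "road", "building", "tree"]
print(majority_vote(polygon_pixels))   # building
```

This is what gives the strategy its robustness: the polygon's label is decided by the whole population of predictions rather than one centre point.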
Show Figures

Figure 1
<p>The workflow for the convolutional neural network (CNN)-based land cover classification method combining stratified segmentation and point cloud data fusion.</p>
Figure 2
<p>The comparison of point cloud (<b>a</b>) and standard normalized digital surface model (StdnDSM) (<b>b</b>). The site was located in Helsinki, Finland.</p>
Figure 3
<p>The comparison of (<b>a</b>) digital terrain model (DTM), (<b>b</b>) digital surface model (DSM), (<b>c</b>) normalised DSM (nDSM) and (<b>d</b>) StdnDSM. The site was located in Helsinki, Finland.</p>
Figure 4
<p>The comparison of fusion data. (<b>a</b>) DTM, (<b>b</b>) DSM, (<b>c</b>) nDSM and (<b>d</b>) StdnDSM. The site was located in Helsinki, Finland.</p>
Figure 5
<p>A demonstration of gray-level co-occurrence matrix (GLCM). (<b>a</b>) The eight processing directions of GLCM, (<b>b</b>) the processing window moves and (<b>c</b>) the counted co-occurrence matrix.</p>
Figure 6
<p>The architecture of Alexnet. Conv represents the convolutional layer; Pool denotes pooling layer; FulC is the fully-connected layer. Numbers underneath each image indicate corresponding size and dimension.</p>
Figure 7
<p>Demonstration of majority voting.</p>
Figure 8
<p>Experiment area: a scene in Helsinki.</p>
Figure 9
<p>The comparison of nDSM (<b>a</b>) and StdnDSM (<b>b</b>).</p>
Figure 10
<p>Information entropy demonstration of nDSM, StdnDSM, nDSM fusion and StdnDSM fusion.</p>
Figure 11
<p>The training image of Alexnet.</p>
Figure 12
<p>Demonstration of (<b>a</b>) training samples and (<b>b</b>) validating samples.</p>
Figure 13
<p>Large homogenous regions.</p>
Figure 14
<p>The comparison of stratified and unstratified multiresolution segmentation (MRS) of fusion data.</p>
Figure 15
<p>The demonstration of four single scale classification results. From (<b>a</b>–<b>d</b>) the scale is 15, 25, 35 and 45 respectively.</p>
Figure 16
<p>The demonstration of experiment area (<b>a</b>) and final classification result (<b>b</b>).</p>
Figure 17
<p>The demonstration of confusion matrix. (<b>a</b>) Confusion matrix of multiscale 15–45, (<b>b</b>) 15–25–45, (<b>c</b>) 15–35–45 and (<b>d</b>) 25–45.</p>
Figure 18
<p>Demonstration of classification results.</p>
Figure 19
<p>The demonstration of train and loss curve. (<b>a</b>) The train and loss curve of scale 15, (<b>b</b>) scale 35 and (<b>c</b>) multiscale 15–35–45.</p>
Figure 20
<p>The comparison of point cloud (PC) added and non-added data in segmentation. Red line (<b>a</b>) represents RGB and black line (<b>b</b>) is PC added.</p>
21 pages, 21384 KiB  
Article
Measuring and Predicting Urban Expansion in the Angkor Region of Cambodia
by Jie Liu, Hongge Ren, Xinyuan Wang, Zeeshan Shirazi and Bin Quan
Remote Sens. 2019, 11(17), 2064; https://doi.org/10.3390/rs11172064 - 2 Sep 2019
Cited by 7 | Viewed by 6858
Abstract
Recent increases in urbanization and tourism threaten the viability of UNESCO world heritage sites across the globe. The Angkor world heritage site in northwestern Cambodia now faces such a challenge. Over the past two decades, Angkor has seen over 300,000% growth in international tourist arrivals, which has led to uncontrolled development of the nearby city of Siem Reap. This study uses remote sensing and GIS to characterize the process of urban expansion over the past 14 years, and applies the CA-Markov model to predict future urban expansion. This paper analyzes the urban pressure on the Angkor site at different scales. The results reveal that the urban area of Siem Reap city increased from 28.23 km2 in 2004 to 73.56 km2 in 2017, an increase of 160%. Urban growth mainly followed a transit-oriented pattern of expansion, and land surfaces such as arable land, forests, and grasslands were transformed into urban residential land. The total constructed land area in the core and buffer zones increased by 12.99 km2 from 2004 to 2017, and 72% of the total increase was in the buffer zone. The built-up area in Siem Reap is predicted to cover 135.09 km2 by 2025 and 159.14 km2 by 2030. The number of monuments most likely to be affected by urban expansion is expected to increase from 9 in 2017 to 14 in 2025 and 17 in 2030. The urban area in Siem Reap has increased dramatically over the past decade, and monuments continue to be decimated by urban expansion. This paper urges closer attention and urgent action to minimize the urban pressure on the Angkor site in the future. Full article
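The Markov half of the CA-Markov model projects land-cover areas with a transition-probability matrix (the CA half then allocates the projected quantities spatially using the suitability map). A sketch with an invented matrix; only the 2017 urban area comes from the text, the other classes and all probabilities are hypothetical.

```python
import numpy as np

# One Markov projection step: multiply the current land-cover state vector
# by a row-stochastic transition matrix.

states = ["urban", "arable", "forest/grass"]
area_2017 = np.array([73.56, 150.0, 120.0])   # km^2 (last two hypothetical)

# rows: from-class, columns: to-class; each row sums to 1
T = np.array([
    [1.00, 0.00, 0.00],   # urban is assumed not to revert
    [0.10, 0.85, 0.05],
    [0.05, 0.05, 0.90],
])

area_next = area_2017 @ T                     # projected areas one step ahead
print({s: round(float(a), 2) for s, a in zip(states, area_next)})
```

Because each row of the matrix sums to one, the total area is conserved; only its allocation among classes changes.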
Figure 1
<p>ZEMP zones in the Angkor world heritage site.</p>
Figure 2
<p>Map of the study area.</p>
Figure 3
<p>The framework of urban expansion simulation.</p>
Figure 4
<p>The driving factors evaluated by the membership function of the fuzzy set.</p>
Figure 5
<p>Suitability map.</p>
Figure 6
<p>Urban expansion of Siem Reap from 2004 to 2017.</p>
Figure 7
<p>An eight-direction diagram of urban expansion intensity in Siem Reap from 2004 to 2017.</p>
Figure 8
<p>The comparison of the growth of urban expansion and the tourist population: (<b>a</b>) the comparison of the urban area and tourist population; (<b>b</b>) the comparison of the growth rates of urban areas and the tourist population.</p>
Figure 9
<p>Statistics of urban expansion from other land cover types from 2004 to 2017.</p>
Figure 10
<p>Statistics of urban expansion in Siem Reap from 2004 to 2017.</p>
Figure 11
<p>Analysis of the impact of urban expansion on Angkor monuments in 2017.</p>
Figure 12
<p>The area of urban expansion in buffer zones from 2004 to 2017.</p>
Figure 13
<p>Urban expansions in Zone 1 and Zone 2 from 2004 to 2017.</p>
Figure 14
<p>Predicted built-up area in the protected area in 2025.</p>
Figure 15
<p>Predicted built-up area in the protected area in 2030.</p>
23 pages, 14858 KiB  
Article
Spatiotemporal Characterization of Mangrove Phenology and Disturbance Response: The Bangladesh Sundarban
by Christopher Small and Daniel Sousa
Remote Sens. 2019, 11(17), 2063; https://doi.org/10.3390/rs11172063 - 2 Sep 2019
Cited by 19 | Viewed by 4422
Abstract
This work presents a spatiotemporal analysis of the phenology and disturbance response in the Sundarban mangrove forest on the Ganges-Brahmaputra Delta in Bangladesh. The methodological approach is based on an Empirical Orthogonal Function (EOF) analysis of the new Harmonized Landsat Sentinel-2 (HLS) BRDF and atmospherically corrected reflectance time series, preceded by a Robust Principal Component Analysis (RPCA) separation of Low Rank and Sparse components of the image time series. Low Rank components are spatially and temporally pervasive while Sparse components are transient and localized. The RPCA clearly separates subtle spatial variations in the annual cycle of monsoon-modulated greening and senescence of the mangrove forest from the spatiotemporally complex agricultural phenology surrounding the Sundarban. A 3 endmember temporal mixture model maps spatially coherent differences in the 2018 greening-senescence cycle of the mangrove which are both concordant and discordant with existing species composition maps. The discordant patterns suggest a phenological response to environmental factors like surface hydrology. On decadal time scales, a standard EOF analysis of vegetation fraction maps from annual post-monsoon Landsat imagery is sufficient to isolate locations of shoreline advance and retreat related to changes in sedimentation and erosion, as well as cyclone-induced defoliation and recovery. Full article
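The Low Rank/Sparse separation used above is the classic Robust PCA decomposition. A minimal sketch via the inexact augmented Lagrange multiplier (IALM) method, run on illustrative synthetic data rather than the HLS time series; parameter choices follow common defaults and are not the paper's:

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (sparse proximal step)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular-value soft-thresholding (nuclear-norm proximal step)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into a Low Rank part L and a Sparse part S (IALM sketch)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)   # update Low Rank part
        S = shrink(M - L + Y / mu, lam / mu)       # update Sparse part
        R = M - L - S                              # residual
        Y += mu * R                                # dual ascent
        mu *= 1.05                                 # gradually tighten penalty
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Synthetic demo: a rank-1 "pervasive" signal plus sparse "transient" spikes,
# mimicking the pervasive-phenology vs. transient-cloud separation.
rng = np.random.default_rng(1)
L0 = np.outer(rng.standard_normal(60), rng.standard_normal(50))
S0 = np.zeros((60, 50))
S0[rng.random((60, 50)) < 0.05] = 10.0
L, S = rpca(L0 + S0)
```

In the paper's setting each column of M would be one date of the vegetation-fraction image stack, flattened to a vector.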
(This article belongs to the Section Biogeosciences Remote Sensing)
Graphical abstract
Figure 1
<p>Index maps of the Ganges-Brahmaputra Delta (<b>top</b>) and Sundarban mangrove forest (<b>bottom</b>). Elevation map (inset) superimposed on MODIS composite shows height gradient of the forest canopy and subsidence of the polders north of the Sundarban. Sentinel 2 false color composite (<b>bottom</b>) shows current river channel network and forest canopy cover variations. Note contrast between post-monsoon rice (<b>top</b>) and winter fallow (<b>bottom</b>) in agricultural areas surrounding the mangrove forest. Red box shows extent of HLS tiles used for this study. GPS tracks show locations of field surveys.</p>
Figure 2
<p>The spectral mixing space for the Sentinel 2 HLS product T45QYE on Julian day 322 of 2018. Projected onto the global S2 mixing space (top, color), the range of reflectances is compressed, but the local space (bottom) clearly resolves multiple mixing trends (arrows) between distinct endmembers and shadow. The color of the vegetation spectra corresponds to the color of the mixing trend arrow on the PC1/PC3 projection.</p>
Figure 3
<p>Reflectance/Fraction composites of post-monsoon Sentinel 2 HLS for Julian day 322 of 2018. Spectral unmixing of the HLS reflectance by inversion of a 3 endmember linear spectral mixture model using standardized spectral endmembers projects the 11 reflectance bands onto a ternary land cover space to yield per-pixel fractional abundance estimates of sediment and soil Substrate, illuminated Vegetation and Dark fractions of water or shadow. The diagonal split illustrates the effectiveness of the ternary mixture model by greater separation of primary colors as physically distinct land cover components.</p>
Figure 4
<p>Tri-temporal vegetation maps. HLS Sentinel 2 vegetation fractions from Julian days 32 (r), 107 (b) and 322 (g) of 2018 with common linear stretch (top) show spatially coherent seasonal greenness variations of ±20% in mangroves and much larger changes in agricultural fields. A 10 × 10 km substack from the western edge (inset &amp; bottom) shown as a spatiotemporal cube has more subdued, but still spatially coherent, temporal greenness variations on the tri-temporal composite (front of cube), while the seasonal progression of 69 HLS vegetation fraction maps (top &amp; right edges) clearly shows variations in post-monsoon greening. The Gaussian stretch of the tri-temporal substack reveals even finer scale coherent variations in seasonal greenness.</p>
Figure 5
<p>Low order PC composite of HLS S30 vegetation fraction time series. The inset temporal feature space shows clusters distinguishing mangrove forest from rice fields from water bodies. The wide variety of rice phenologies produces a 3D continuum encompassing more of the feature space than the forest-water continuum and compressing the forest into two closely packed clusters in one apex of the feature space. As a result, the forest appears in only shades of green while the rice paddies appear in a variety of colors corresponding to varying combinations of PCs 1, 2 &amp; 3.</p>
Figure 6
<p>Low order PC composites of RPCA Low Rank and Sparse component HLS S30 vegetation fraction time series. The Low Rank temporal feature space (inset left) clearly separates water from vegetation but also multiple rice phenologies from levee trees from mangrove forest. The Sparse Component contains more transient features like clouds, river sediments and finer scale rice phenology on polders to the north and east of the forest.</p>
Figure 7
<p>Temporal mixture model fraction map for HLS Low Rank image time series. Inset temporal EMs correspond to apexes of the mangrove cluster in the PC1/2 temporal feature space shown in <a href="#remotesensing-11-02063-f006" class="html-fig">Figure 6</a>. Water and agricultural areas appear blue and magenta because they lie outside the model but closest to the blue EM. Note qualitative spatial similarity to the tri-temporal fraction map in <a href="#remotesensing-11-02063-f004" class="html-fig">Figure 4</a>. The red EM shows a decrease in greenness preceding the monsoon, but little increase after.</p>
Figure 8
<p>Temporal feature space and PC map of the 10 × 10 km substack from <a href="#remotesensing-11-02063-f004" class="html-fig">Figure 4</a>. The PC map shows spatially coherent structure resulting from distinct combinations of PCs 1–3. The feature space (bottom) clearly separates water from forest, spanning 3 distinct senescent phenologies at apexes of PC 3/2 space.</p>
Figure 9
<p>S2 HLS phenology map for the 10 × 10 km substack with WorldView-2 comparison. The phenology map (top) is derived from the temporal mixture model based on the 3 tEMs spanning the temporal feature space in <a href="#remotesensing-11-02063-f008" class="html-fig">Figure 8</a>. Water lies off the model plane but projects onto the slower senescent tEM, and so can be distinguished by higher model misfit. Comparison with the 2010 WorldView-2 image (bottom) shows clear correspondence between phenology fractions and canopy density. The more open canopy areas show more shadow and exposed mud, consistent with lower vegetation fractions in the corresponding tEMs in <a href="#remotesensing-11-02063-f008" class="html-fig">Figure 8</a>. © Worldview-2, Digital Globe 2019.</p>
Figure 10
<p>Interannual temporal feature space and endmembers for the 1989–2017 annual post-monsoon Landsat vegetation fraction time series. The most profound changes in vegetation cover are related to advance and retreat of shorelines, riverbanks and channel networks. Bank erosion can occur quickly, but most changes are progressive over multiple years, with vegetation thinning gradually or mudflat colonization occurring by vegetation species succession. Multi-year loss and recovery are less common, but do occur in some areas.</p>
Figure 11
<p>Interannual change map for the Southeastern Bangladesh Sundarban 1989–2017. PC1 shows mean vegetation abundance while warmer and cooler colors show decadal increase and decrease of vegetation, respectively. Shoreline and riverbank retreat from erosion are pervasive, particularly along the wider channels. Vegetation colonization of mudflats is more conspicuous but accounts for less area than is lost to erosion.</p>
Figure 12
<p>Impact and recovery from Cyclone Sidr on 11.11.2007. Mean vegetation fraction from forest south and west of 22.2°N 89.9°E (cross) drops by 0.08 immediately after Sidr but recovers almost fully to its long-term mean of 0.20 (including shadow &amp; water) by the following year. The area of mangrove defoliation (brown on 30.11.2007) is localized to a NNE-SSW swath immediately west of the cyclone track. Defoliation and erosion were most pronounced along south-facing shorelines (top), but measurable shoreline retreat was more focused.</p>
11 pages, 3613 KiB  
Letter
Global Ionospheric Model Accuracy Analysis Using Shipborne Kinematic GPS Data in the Arctic Circle
by Di Wang, Xiaowen Luo, Jinling Wang, Jinyao Gao, Tao Zhang, Ziyin Wu, Chunguo Yang and Zhaocai Wu
Remote Sens. 2019, 11(17), 2062; https://doi.org/10.3390/rs11172062 - 2 Sep 2019
Cited by 5 | Viewed by 3732
Abstract
The global ionospheric model built by the International Global Navigation Satellite System (GNSS) Service (IGS) using GNSS reference stations all over the world is currently the most widely used ionospheric product on a global scale. Analysis and evaluation of this product’s accuracy and reliability are therefore essential for its practical use. In contrast to the traditional way of assessing global ionospheric models with ground-based static measurements, our study used shipborne kinematic global positioning system (GPS) measurements collected over 18 days within the Arctic Circle to perform a preliminary analysis and evaluation of the accuracy of the global ionospheric models. The data from the International GNSS Service stations near the Arctic Circle were used to verify the ionospheric total electron contents derived from the kinematic data. The results suggested that the global ionospheric model had an approximate regional accuracy of 12 total electron content units (TECu) within the Arctic Circle and deviated from the actual ionospheric total electron content value by about 4 TECu. Full article
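The quoted accuracy figures (about 12 TECu regional accuracy, about 4 TECu deviation) reduce to simple difference statistics between the model TEC and a reference TEC series. A minimal sketch with invented numbers, not the cruise data:

```python
import numpy as np

def tec_error_stats(model_tec, reference_tec):
    """Bias and RMSE (in TECu) of a model TEC series against a reference."""
    d = np.asarray(model_tec, dtype=float) - np.asarray(reference_tec, dtype=float)
    bias = d.mean()                  # mean deviation from the reference
    rmse = np.sqrt(np.mean(d ** 2))  # root-mean-square error
    return bias, rmse

# Illustrative values only (not the shipborne measurements):
model_tec = [18.0, 20.5, 22.0, 19.5]   # global ionospheric model, TECu
ship_tec  = [15.0, 17.0, 18.5, 16.0]   # kinematic GPS-derived TEC, TECu
bias, rmse = tec_error_stats(model_tec, ship_tec)
```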
(This article belongs to the Section Atmospheric Remote Sensing)
Graphical abstract
Figure 1
<p>The route of the Chinese research vessel within the Arctic Circle and the location of the ground-based Global Navigation Satellite System (GNSS) reference stations. The part of the route sailed near the International GNSS Service (IGS) stations (September 1 to 3, 2017) is shown as a blue line.</p>
Figure 2
<p>The direct ionospheric total electron content (TEC) variation in the shipborne data.</p>
Figure 3
<p>Comparison between Vertical TEC (VTEC) changes in the shipborne data and in the IGS reference stations data.</p>
Figure 4
<p>The probability distribution of the differences between the global ionosphere model TEC and the ionospheric TEC obtained from the kinematic data ((<b>a</b>): September 1, 2, 3, and 4; (<b>b</b>): September 5, 7, 8, and 9; (<b>c</b>): September 11, 12, 13, and 14; (<b>d</b>): September 15, 17, and 18; (<b>e</b>): September 19, 21, and 22; (<b>f</b>): total data). TECu = total electron content units.</p>
Figure 5
<p>The RMSE of the difference between the global ionospheric model TEC and the ionospheric TEC obtained from the kinematic data.</p>
Figure 6
<p>The probability distribution of the differences between the global ionosphere model TEC and the ionospheric TEC obtained from the IGS GNSS reference stations data ((<b>a</b>): BJNM; (<b>b</b>): KHAR).</p>
Figure 7
<p>The RMSE of the difference between the global ionospheric model TEC and the ionospheric TEC obtained from the IGS GNSS reference stations data.</p>
24 pages, 6103 KiB  
Article
Accelerated MCMC for Satellite-Based Measurements of Atmospheric CO2
by Otto Lamminpää, Jonathan Hobbs, Jenný Brynjarsdóttir, Marko Laine, Amy Braverman, Hannakaisa Lindqvist and Johanna Tamminen
Remote Sens. 2019, 11(17), 2061; https://doi.org/10.3390/rs11172061 - 2 Sep 2019
Cited by 6 | Viewed by 4854
Abstract
Markov Chain Monte Carlo (MCMC) is a powerful and promising tool for assessing the uncertainties in the Orbiting Carbon Observatory 2 (OCO-2) satellite’s carbon dioxide measurements. Previous research comparing MCMC and Optimal Estimation (OE) for the OCO-2 retrieval has highlighted the slow convergence of MCMC and, furthermore, that OE and MCMC do not necessarily agree with the simulated ground truth. In this work, we exploit the inherent low information content of the OCO-2 measurement and use the Likelihood-Informed Subspace (LIS) dimension reduction to significantly speed up the convergence of MCMC. We demonstrate the strength of this analysis method by assessing the non-Gaussian shape of the retrieval’s posterior distribution and the effect of the operational OCO-2 prior covariance’s aerosol parameters on the retrieval. We further show that in our test cases this analysis can be used to improve the retrieval so that it recovers the simulated true state significantly more accurately, and to characterize the non-Gaussian form of the posterior distribution of the retrieval problem. Full article
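The LIS construction keeps only the directions in which the data dominate the prior, i.e. the eigenvectors of the prior-preconditioned Gauss–Newton Hessian with eigenvalue above one (the unity cut referred to in Figure 1); MCMC is then run only in that low-dimensional subspace. A toy linear-Gaussian sketch, with invented dimensions far smaller than the OCO-2 state vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = G x + noise; dimensions are illustrative.
n_state, n_obs = 20, 15
G = rng.standard_normal((n_obs, n_state))
noise_sd = 5.0   # weak data relative to the prior => low information content
prior_sd = 1.0

# Prior-preconditioned Gauss-Newton Hessian H = L_pr^T G^T Gamma_obs^-1 G L_pr
# (here both covariances are scalar multiples of the identity).
H = (prior_sd ** 2 / noise_sd ** 2) * (G.T @ G)
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep directions where the likelihood dominates the prior (eigenvalue > 1);
# the complement subspace is left at its prior distribution.
r = int(np.sum(eigvals > 1.0))
lis_basis = eigvecs[:, :r]   # columns span the likelihood-informed subspace
```

With only r informed directions, a random-walk Metropolis sampler explores an r-dimensional space instead of the full n_state-dimensional one, which is where the convergence speed-up comes from.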
(This article belongs to the Special Issue Remote Sensing of Carbon Dioxide and Methane in Earth’s Atmosphere)
Graphical abstract
Figure 1
<p>Eigenvalues of the Hessian matrix with different numbers of samples used in the calculation of the Monte Carlo average (2.6). The eigenvalues are separated into those less than and those greater than unity by a black line.</p>
Figure 2
<p>Upper panel: <math display="inline"><semantics> <msub> <mi>X</mi> <mrow> <mi>C</mi> <msub> <mi>O</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math> posterior histogram from full dimensional MCMC (blue) compared to LIS MCMC (red). Also shown are the starting value for MCMC and the simulated ground truth (green). Lower panel: every 100th sample of the <math display="inline"><semantics> <msub> <mi>X</mi> <mrow> <mi>C</mi> <msub> <mi>O</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math> chain from full dimensional MCMC (blue) and LIS MCMC (red).</p>
Figure 3
<p>Left panel: CO<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math> profiles from the MCMC retrieval compared with the prior mean, ground truth and OE retrieval result. Right panel: CO<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math> parts of samples from LIS and CS projected back to full space.</p>
Figure 4
<p>Aerosol parameter marginal histograms with operational prior (black dashed line), ground truth (green) and OE retrieval result (red dotted line). SO = sulphate, DU = dust, Ice = cloud ice and Water = cloud liquid water.</p>
Figure 5
<p>2D posterior distributions of selected aerosol and surface parameters from LIS MCMC (blue with 95% posterior confidence interval on blue contour) compared with ground truth (green), prior (black with 95% confidence interval on black ellipse) and OE (red with 95% posterior confidence interval on red ellipse). Parameter numbers agree with the numbering in the surrogate model description (see <a href="#remotesensing-11-02061-t0A3" class="html-table">Table A3</a>).</p>
Figure 6
<p>2D posterior distributions of X<math display="inline"><semantics> <msub> <mrow/> <mrow> <mi>C</mi> <msub> <mi>O</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math> against aerosol parameters from LIS MCMC (blue with 95% posterior confidence interval on blue contour) compared with ground truth (green) and OE (red with 95% posterior confidence interval on red ellipse). Parameter numbers agree with the numbering in the surrogate model description and correspond to those in <a href="#remotesensing-11-02061-f004" class="html-fig">Figure 4</a>.</p>
Figure 7
<p>Posterior correlation matrix of <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo stretchy="false">^</mo> </mover> </semantics></math>. The elements of the matrix corresponding to CO<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math> and aerosol parameters of the state vector are separated with black lines. It should be noted that the first aerosol parameters have a high correlation with the CO<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math> part of the state vector.</p>
Figure 8
<p>Three separate test cases showing <math display="inline"><semantics> <msub> <mi>X</mi> <mrow> <mi>C</mi> <msub> <mi>O</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math> posterior histograms (using the operational prior (blue) and a widened prior) from LIS MCMC compared to the OE retrieval (blue and red dashed lines) and the simulated ground truth (green). Also shown is the true value of <math display="inline"><semantics> <msub> <mi>X</mi> <mrow> <mi>C</mi> <msub> <mi>O</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math> with the corresponding MCMC mean using a widened prior. It should be noted that example case 2 on the middle panel most likely did not converge in either MCMC or OE when using the operational prior, which is not an unusual scenario even in the operational retrieval.</p>
Figure 9
<p><b>Test case 1</b>: aerosol parameter marginal histograms (corresponding to retrievals using operational prior (blue) and widened prior (orange)) with operational prior (dashed black), ground truth (green) and OE retrieval result (blue and red vertical lines). SO = sulphate, DU = dust, Ice = cloud ice and Water = cloud liquid water.</p>
Figure 10
<p><b>Test case 2</b>: aerosol parameter marginal histograms (corresponding to retrievals using operational prior (blue) and widened prior (orange)) with operational prior (dashed black), ground truth (green) and OE retrieval result (blue and red vertical lines). SO = sulphate, DU = dust, Ice = cloud ice and Water = cloud liquid water.</p>
Figure 11
<p><b>Test case 3</b>: aerosol parameter marginal histograms (corresponding to retrievals using operational prior (blue) and widened prior (orange)) with operational prior (dashed black), ground truth (green) and OE retrieval result (blue and red vertical lines). SO = sulphate, DU = dust, Ice = cloud ice and Water = cloud liquid water.</p>
Figure A1
<p>CO<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math> profiles from OE retrievals, starting the optimization from different first guesses. <b>Left panel</b>: starting <math display="inline"><semantics> <mi>γ</mi> </semantics></math> = 10, normalized step size tolerance = 40. <b>Right panel</b>: starting <math display="inline"><semantics> <mi>γ</mi> </semantics></math> = 30, normalized step size tolerance = 0.0001.</p>
17 pages, 8268 KiB  
Article
Automated Cloud and Cloud-Shadow Masking for Landsat 8 Using Multitemporal Images in a Variety of Environments
by Danang Surya Candra, Stuart Phinn and Peter Scarth
Remote Sens. 2019, 11(17), 2060; https://doi.org/10.3390/rs11172060 - 2 Sep 2019
Cited by 20 | Viewed by 6272
Abstract
Landsat 8 images have been widely used for many applications, but cloud and cloud-shadow cover issues remain. In this study, multitemporal cloud masking (MCM), designed to detect cloud and cloud-shadow for Landsat 8 in tropical environments, was improved for application in sub-tropical environments, with the greatest improvement in cloud masking. We added a haze optimized transformation (HOT) test and a thermal band to the previous MCM algorithm to improve its detection of haze, thin-cirrus cloud, and thick cloud. We also improved the previous MCM’s detection of cloud-shadow by adding a blue band. In the visual assessment, the algorithm detects thick cloud, haze, thin-cirrus cloud, and cloud-shadow accurately. In the statistical assessment, the average user’s accuracy and producer’s accuracy of the cloud masking results across the different land covers in the selected area were 98.03% and 98.98%, respectively, while the average user’s accuracy and producer’s accuracy of the cloud-shadow masking results were 97.97% and 96.66%, respectively. Compared to the Landsat 8 cloud cover assessment (L8 CCA) algorithm, MCM has better accuracies, especially in cloud-shadow masking. Our preliminary tests showed that the new MCM algorithm can detect cloud and cloud-shadow for Landsat 8 in a variety of environments. Full article
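The multitemporal core of MCM, comparing a target image against a clear-sky reference image, can be sketched as a pair of per-pixel difference tests: cloud pixels brighten in the visible bands and cool in the thermal band relative to the reference. The band choice and thresholds below are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def multitemporal_cloud_test(target_blue, ref_blue, target_bt11, ref_bt11,
                             refl_thresh=0.10, bt_thresh=-5.0):
    """Simplified multitemporal cloud test.

    A pixel is flagged as cloud when it is markedly brighter in the blue
    band AND markedly colder in the thermal band than in the clear-sky
    reference image. Thresholds are hypothetical placeholders.
    """
    brighter = (target_blue - ref_blue) > refl_thresh   # TOA reflectance diff
    colder = (target_bt11 - ref_bt11) < bt_thresh       # brightness temp diff, K
    return brighter & colder

# 1-D toy "images": pixel 1 is cloudy (bright AND cold); pixel 2 is bright
# but warm (e.g. fresh bare soil), so only pixel 1 is flagged.
t_blue = np.array([0.05, 0.35, 0.30])
r_blue = np.array([0.05, 0.06, 0.07])
t_bt   = np.array([295.0, 270.0, 296.0])
r_bt   = np.array([295.0, 294.0, 295.0])
mask = multitemporal_cloud_test(t_blue, r_blue, t_bt, r_bt)
```

Requiring both tests is what lets the multitemporal approach reject bright-but-warm surface changes that a single-date brightness test would misclassify as cloud.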
(This article belongs to the Special Issue Remote Sensing: 10th Anniversary)
Graphical abstract
Figure 1
<p>Landsat 8 images were selected in a global area with a variety of cloud types and different environments in sub-tropical south, tropical, and sub-tropical north. The scenes of the Landsat 8 images are shown in red. (adapted from [<a href="#B28-remotesensing-11-02060" class="html-bibr">28</a>]).</p>
Figure 2
<p>The annual availability of clear images of Landsat-8 in a variety of environments (sub-tropical south, tropical, and sub-tropical north). The x-axis is path/row and the y-axis is the total number of clear images.</p>
Figure 3
<p>The flowchart of the new Multitemporal Cloud Masking (MCM) algorithm for Landsat 8. TI(Bi) is band i in the target image and RI(Bj) is band j in the reference image. Abs(D(X)) is the absolute value of D(X). B2, B3, B4, B5, B6, B9, and B11 represent the blue, green, red, near infrared, shortwave infrared, cirrus, and thermal bands in Landsat 8, respectively.</p>
Figure 4
<p>The cloud image sample on path/row 090/079. (<b>a</b>) The sample of thick cloud, (<b>b</b>) The brightness temperature (BT) of the image (<b>a</b>) in band 10 ranges from 16 to 30, and (<b>c</b>) Band 11 ranges from 16 to 28 (bottom).</p>
Figure 5
<p>Band selection and obtaining the threshold for thick-cloud masking from the Top of Atmospheric (TOA) reflectance of the selected pixel on: (<b>a</b>) the center of the cloud region and (<b>b</b>) the edge of the cloud region.</p>
Figure 6
<p>Band selection and obtaining the threshold for cloud-shadow masking from the TOA reflectance of the selected pixel on: (<b>a</b>) the center of the cloud region and (<b>b</b>) the edge of the cloud region.</p>
Figure 7
<p>Band selection and obtaining the threshold for cloud-shadow masking from the TOA reflectance of the selected pixel on the cloud-shadow in the sea area.</p>
Figure 8
<p>Cloud and cloud-shadow masking in one of the sub-tropical south images (Toowoomba, Queensland, Australia). (<b>a</b>) A target image, (<b>b</b>) The new MCM resultant image, (<b>c</b>) The previous MCM resultant image, (<b>d</b>) A part of the target image, (<b>e</b>) The new MCM resultant image of (<b>d</b>), and (<b>f</b>) The previous MCM resultant image of (<b>d</b>), respectively. The red and blue colors indicate the cloud and cloud-shadow regions, respectively.</p>
Figure 9
<p>Cloud masking in sub-tropical north (Nevada, USA). (<b>a</b>) A target image, (<b>b</b>) The new MCM resultant image, (<b>c</b>) The previous MCM resultant image, (<b>d</b>) A part of the target image, (<b>e</b>) The new MCM resultant image of (<b>d</b>), and (<b>f</b>) The previous MCM resultant image of (<b>d</b>), respectively. The red and blue colors indicate the cloud and cloud-shadow regions, respectively.</p>
Figure 10
<p>Cloud-shadow masking in the tropical area (Bulawayo, Zimbabwe). (<b>a</b>) A target image, (<b>b</b>) The new MCM resultant image, (<b>c</b>) The previous MCM resultant image, (<b>d</b>) A part of the target image, (<b>e</b>) The new MCM resultant image of (<b>d</b>), and (<b>f</b>) The previous MCM resultant image of (<b>d</b>), respectively. The red and blue colors indicate the cloud and cloud-shadow regions, respectively.</p>
Figure 11
<p>Comparison of the new MCM algorithm results and the L8 CCA algorithm results in one of the sub-tropical south images (Toowoomba, Queensland, Australia). The red and blue colors indicate the cloud and cloud-shadow regions, respectively.</p>
Figure 12
<p>The results of the new MCM and L8 CCA in the detection of cloud and cloud-shadow in (<b>a</b>) settlement, (<b>b</b>) cropland, (<b>c</b>) forest, and (<b>d</b>) desert for statistical assessments.</p>
Figure 13
<p>Comparison between the MCM and L8 CCA algorithm in the accuracy of the cloud-masking results.</p>
Figure 14
<p>Comparison between the MCM and L8 CCA algorithm in the accuracy of the cloud-shadow masking results.</p>
27 pages, 12037 KiB  
Article
Remote Sensing of Ice Phenology and Dynamics of Europe’s Largest Coastal Lagoon (The Curonian Lagoon)
by Rasa Idzelytė, Igor E. Kozlov and Georg Umgiesser
Remote Sens. 2019, 11(17), 2059; https://doi.org/10.3390/rs11172059 - 2 Sep 2019
Cited by 11 | Viewed by 3857
Abstract
A first-ever spatially detailed record of ice cover conditions in the Curonian Lagoon (CL), Europe’s largest coastal lagoon located in the southeastern Baltic Sea, is presented. The multi-mission synthetic aperture radar (SAR) measurements acquired in 2002–2017 by Envisat ASAR, RADARSAT-2, Sentinel-1 A/B, and supplemented by the cloud-free moderate imaging spectroradiometer (MODIS) data, are used to document the ice cover properties in the CL. As shown, satellite observations reveal a better performance over in situ records in defining the key stages of ice formation and decay in the CL. Using advantages of both data sources, an updated ice season duration (ISD) record is obtained to adequately describe the ice cover season in the CL. High-resolution ISD maps provide important spatial details of ice growth and decay in the CL. As found, ice cover resides longest in the south-eastern CL and along the eastern coast, including the Nemunas Delta, while the shortest ice season is observed in the northern CL. During the melting season, the ice melt pattern is clearly shaped by the direction of prevailing winds, and ice drift velocities obtained from a limited number of observations range within 0.03–0.14 m/s. The pronounced shortening of the ice season duration in the CL is observed at a rate of 1.6–2.3 days per year during 2002–2017, which is much higher than reported for the nearby Baltic Sea regions. While the timing of the freeze onset and full freezing has not changed much, the dates of the final melt onset and last observation of ice have a clear decreasing pattern toward an earlier ice break-up and complete melt-off due to an increase of air temperature strongly linked to the North Atlantic Oscillation (NAO). Notably, the correlation between the ISD, air temperature, and winter NAO index is substantially higher when considering the lagoon-averaged ISD values derived from satellite observations compared to those derived from coastal records.
The latter clearly demonstrates the richness of the satellite observations, which should be exploited in regional ice monitoring programs. Full article
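The quoted shortening rate of 1.6–2.3 days per year is a least-squares linear trend of the yearly ISD record. A minimal sketch with a synthetic series (the slope of -2 days per year is built into the invented data, not taken from the paper):

```python
import numpy as np

def isd_trend(years, isd_days):
    """Least-squares linear trend of ice season duration, in days per year."""
    y = np.asarray(years, dtype=float)
    y = y - y.mean()   # center the years for conditioning; slope is unchanged
    slope, _intercept = np.polyfit(y, np.asarray(isd_days, dtype=float), deg=1)
    return slope

# Synthetic yearly ISD series with a built-in -2 days/year trend:
years = np.arange(2002, 2018)
isd = 100.0 - 2.0 * (years - 2002)
trend = isd_trend(years, isd)   # ≈ -2.0 days per year by construction
```

The same slope estimate can be computed per pixel of the ISD maps to show where in the lagoon the shortening is strongest.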
Figure 1
<p>Location of the study area, the Curonian Lagoon, with respect to the Baltic Sea. The red frame in the smaller map shows the location of the lagoon in the southeastern part of the Baltic Sea, while the red points indicate the locations of the coastal stations. The contour lines inside the lagoon indicate the bathymetry.</p>
Figure 2
<p>Intercomparison of the (<b>a</b>) freeze onset (FO), (<b>b</b>) full freezing (FF), (<b>c</b>) melt onset (MO), (<b>d</b>) last observation of ice cover (LOI) dates, and (<b>e</b>) ice season duration (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mrow> <mi>S</mi> <mi>A</mi> <mi>R</mi> </mrow> </msub> </mrow> </semantics></math> ) derived from the satellite observations (solid lines) and in situ records (dashed lines), and the time difference between them (black solid line).</p>
Figure 3
<p>Sentinel-1B image of ice outflow from the Curonian Lagoon to the Baltic Sea (taken on January 6, 2017). The light blue arrow indicates the magnitude of the ice outflow from the port gates.</p>
Figure 4
<p>MODIS images of the ice cover retreat in the northern Curonian Lagoon during the winter of 2002–2003. Labels and arrows indicate the wind speed and direction.</p>
Figure 5
<p>Examples of the ice drift in the Curonian Lagoon observed in the SAR images. Envisat ASAR images of the two ice floes in the Lithuanian part of the Curonian Lagoon taken on 4 April 2005 at 8:48:44 UTC (<b>a</b>) and at 20:13:50 UTC (<b>b</b>). Sentinel-1A images of the ice floes in the Russian part of the lagoon acquired on 28 February 2015 at 16:19:21 UTC (<b>c</b>) and 2 March 2015 at 04:51:06 UTC (<b>d</b>).</p>
Figure 6
<p>SAR-derived parameters of the ice drift in the Curonian Lagoon. (<b>a</b>) Drift of two ice floes observed on 4 April 2005; (<b>b</b>) drift of an ice floe on 28 March 2009; (<b>c</b>) drift of the first ice floe on 28 February and 2 March 2015; (<b>d</b>) drift of the second ice floe on 28 February and 2 March 2015. Labeled points indicate the polygon centroids, lines represent the linear distance between them, and the arrows indicate an average wind speed and direction.</p>
Figure 7
<p>Variations of the average air temperature (blue line) and percentage of the ice cover (gray bars) in the Curonian Lagoon during the shortest winter season of 2007–2008.</p>
Figure 8
<p>The ice cover retreat in the Curonian Lagoon in the winter of 2011–2012 under the prevalence of northwesterly winds. The red color indicates the ice cover boundary, the blue color shows water, the labels and arrows show the wind speed and direction.</p>
Figure 9
<p>The ice cover retreat in the Curonian Lagoon in the winter of 2013–2014 under the prevalence of easterly and northerly winds. The red color indicates the ice cover boundary, the blue color shows water, the labels and arrows show the wind speed and direction.</p>
Figure 10
<p>Yearly-mean ice season duration in the Curonian Lagoon as derived from the satellite data in 2002–2017.</p>
Figure 11
<p>Yearly maps of the ice season duration in the Curonian Lagoon obtained from satellite data in 2002–2017.</p>
Figure 12
<p>The 10-day averaged percentage of the ice cover extent in the Curonian Lagoon for the three winter categories.</p>
Full article ">Figure 13
<p>The spatial variations of the ice cover extent during (<b>a</b>) ice formation and (<b>b</b>) ice melting periods. The colors represent the percentage of ice observations during these periods, i.e., blue color in (<b>a</b>) shows the areas where the ice starts to form first, while the red color in (<b>b</b>) shows the areas where the ice melts first.</p>
Full article ">Figure 14
<p>The interannual variability of the ice freeze onset, full freezing, final melt onset, and last observation of ice dates determined from the satellite data (solid lines) and their trends (dashed lines).</p>
Full article ">Figure 15
<p>The interannual variability of the various ice season duration types (<b>a</b>), NAO winter index, and cumulative negative air temperature during the 15 winter seasons (<b>b</b>) in 2002–2017.</p>
Full article ">Figure 16
<p>Scatterplots of the ice season duration values derived from the coastal records (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>, green points), joint use of the coastal and satellite data (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mrow> <mi>c</mi> <mi>o</mi> <mi>r</mi> <mi>r</mi> </mrow> </msub> </mrow> </semantics></math>, blue points), and spatially-averaged satellite data (<math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mrow> <mi>S</mi> <mi>A</mi> <mi>R</mi> </mrow> </msub> </mrow> </semantics></math>, red points) compard to the NAO winter index.</p>
Full article ">Figure 17
<p>Scatterplots of the cumulative negative air temperature compared to the ice season duration (ISD) obtained from the coastal observations (<b>a</b>), maximum (<b>b</b>) and spatial-mean (<b>c</b>) ISD from the satellite data.</p>
Full article ">
25 pages, 12616 KiB  
Article
Detection of Small Target Using Schatten 1/2 Quasi-Norm Regularization with Reweighted Sparse Enhancement in Complex Infrared Scenes
by Fei Zhou, Yiquan Wu, Yimian Dai and Peng Wang
Remote Sens. 2019, 11(17), 2058; https://doi.org/10.3390/rs11172058 - 2 Sep 2019
Cited by 28 | Viewed by 3240
Abstract
In uniform infrared scenes containing a single sparse, high-contrast small target, most existing small target detection algorithms perform well. However, when encountering multiple and/or structurally sparse targets in complex backgrounds, these methods can suffer high missed detection and false alarm rates. In this paper, a novel and robust single-frame infrared small target detection method is proposed via an effective integration of Schatten 1/2 quasi-norm regularization and reweighted sparse enhancement (RS1/2NIPI). First, to achieve a tighter approximation of the original low-rank assumption, a nonconvex low-rank regularizer termed the Schatten 1/2 quasi-norm (S1/2N) is used in place of the traditional convex relaxation, the nuclear norm. Then, a reweighted l1 norm with an adaptive penalty, serving as a sparse enhancement strategy, is employed in our model to suppress non-target residuals. Finally, the small target detection task is reformulated as a problem of nonconvex low-rank matrix recovery with sparse reweighting. The resulting model falls within the scope of the inexact augmented Lagrangian algorithm, in which the S1/2N minimization subproblem can be efficiently solved by the designed softening half-thresholding operator. Extensive experimental results on several real infrared scene datasets validate the superiority of the proposed method over state-of-the-art methods with respect to background interference suppression and target extraction. Full article
(This article belongs to the Section Remote Sensing Image Processing)
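The reweighted l1 sparse enhancement described in the abstract assigns larger penalties to smaller entries, so weak non-target residuals are shrunk away while strong targets survive. A minimal generic sketch of the weight update and a weighted soft-thresholding shrinkage follows; it is not the paper's exact softening half-thresholding operator for the S1/2N subproblem, and all values are illustrative:

```python
import numpy as np

def reweight(x, eps=1e-3):
    """Reweighted-l1 weights: smaller entries receive larger penalties."""
    return 1.0 / (np.abs(x) + eps)

def soft_threshold(x, tau):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Illustrative residuals: a zero entry, a weak residual, a strong target response.
x = np.array([0.0, 0.5, -2.0])
w = reweight(x)                                # large weight where |x| is small
shrunk = soft_threshold(x, 0.4 * w / w.max())  # per-entry, weight-scaled threshold
```

In an IALM-style solver these two steps alternate with a low-rank background update; the weights are recomputed each iteration from the current sparse estimate.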
20 pages, 4536 KiB  
Article
A Robust Rule-Based Ensemble Framework Using Mean-Shift Segmentation for Hyperspectral Image Classification
by Majid Shadman Roodposhti, Arko Lucieer, Asim Anees and Brett A. Bryan
Remote Sens. 2019, 11(17), 2057; https://doi.org/10.3390/rs11172057 - 1 Sep 2019
Cited by 7 | Viewed by 4362
Abstract
This paper assesses the performance of DoTRules—a dictionary of trusted rules—as a supervised rule-based ensemble framework based on mean-shift segmentation for hyperspectral image classification. The proposed ensemble framework consists of multiple rule sets, with rules constructed from different class frequencies and sequences of occurrence. Shannon entropy is derived to assess the uncertainty of every rule and to subsequently filter out unreliable rules. DoTRules is not only a transparent approach to image classification but also a tool to map rule uncertainty, where rule uncertainty assessment can serve as an estimate of classification accuracy prior to image classification. In this research, the proposed image classification framework is implemented on three world-reference hyperspectral image datasets. We found that the overall classification accuracy of the proposed ensemble framework was superior to that of state-of-the-art ensemble algorithms, as well as two non-ensemble algorithms, at multiple training sample sizes. We believe DoTRules can be applied more generally to the classification of discrete data such as hyperspectral satellite imagery products. Full article
(This article belongs to the Special Issue Image Segmentation for Environmental Monitoring)
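The rule-filtering step hinges on the Shannon entropy of the class labels a rule covers: a rule whose matched training pixels all share one class has zero entropy and is trusted, while a rule split across classes has high entropy and is filtered out. A minimal sketch (function and label names are illustrative, not from the paper):

```python
import numpy as np
from collections import Counter

def rule_entropy(labels):
    """Shannon entropy (bits) of the class labels covered by one rule."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

pure = rule_entropy(["corn"] * 10)                # single class: 0 bits, trusted
mixed = rule_entropy(["corn"] * 5 + ["soy"] * 5)  # 50/50 split: 1 bit, unreliable
```

Filtering then amounts to keeping only rules whose entropy falls below a chosen threshold before the ensemble votes are combined.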
17 pages, 5970 KiB  
Letter
Snow Depth Estimation with GNSS-R Dual Receiver Observation
by Kegen Yu, Shuyao Wang, Yunwei Li, Xin Chang and Jiancheng Li
Remote Sens. 2019, 11(17), 2056; https://doi.org/10.3390/rs11172056 - 1 Sep 2019
Cited by 12 | Viewed by 4216
Abstract
Two snow depth estimation methods using a dual GNSS (Global Navigation Satellite System) receiver system are proposed. The dual-frequency combination method combines the carrier phase observations of dual-frequency signals, whereas the single-frequency combination method combines the pseudorange and carrier phase observations of a single-frequency signal; both combinations are strictly geometry-free and free of the effect of ionospheric delay. Theoretical models are established in the offline phase to describe the relationship between the spectral peak frequency of the combined sequence and the antenna height. A field experiment was conducted, and the data processing results show that the root mean squared error (RMSE) of the dual-frequency combination method is 5.04 cm with GPS signals and 6.26 cm with BDS signals, slightly greater than the RMSE of 4.16 cm produced by the single-frequency combination method on the L1 band with GPS signals. The results also demonstrate that the two proposed combination methods and the SNR method achieve similar performance. A dual receiver system enables better use of GNSS carrier phase observations for snow depth estimation, achieving increased data utilization. Full article
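The geometric idea behind both combination methods is that a multipath-affected sequence, sampled against the sine of the satellite elevation angle, oscillates with a spectral peak frequency proportional to the reflector (antenna) height, roughly h = λf/2 for carrier wavelength λ. A minimal sketch with a synthetic sequence (the paper's actual combined observables and spectral estimator are not reproduced here):

```python
import numpy as np

GPS_L1_WAVELENGTH = 0.1903  # metres

def height_from_peak(seq, sin_e, n_fft=1 << 15):
    """Reflector height from the spectral peak of a sequence sampled
    uniformly in sin(elevation): h = lambda * f_peak / 2."""
    d = sin_e[1] - sin_e[0]                           # uniform sampling assumed
    spec = np.abs(np.fft.rfft(seq - seq.mean(), n=n_fft))  # zero-padded spectrum
    f_peak = np.fft.rfftfreq(n_fft, d)[np.argmax(spec)]
    return GPS_L1_WAVELENGTH * f_peak / 2.0

# Synthetic multipath oscillation for a hypothetical 1.5 m antenna height.
sin_e = np.linspace(0.1, 0.5, 2000)
seq = np.cos(2 * np.pi * (2 * 1.5 / GPS_L1_WAVELENGTH) * sin_e)
h_est = height_from_peak(seq, sin_e)
```

With the antenna height recovered this way, snow depth follows as the snow-free reference height minus the current estimate.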
36 pages, 23464 KiB  
Article
Large-Scale Remote Sensing Image Retrieval Based on Semi-Supervised Adversarial Hashing
by Xu Tang, Chao Liu, Jingjing Ma, Xiangrong Zhang, Fang Liu and Licheng Jiao
Remote Sens. 2019, 11(17), 2055; https://doi.org/10.3390/rs11172055 - 1 Sep 2019
Cited by 35 | Viewed by 4382
Abstract
Remote sensing image retrieval (RSIR), a superior content organization technique, plays an important role in the remote sensing (RS) community. As the number of RS images increases explosively, not only retrieval precision but also retrieval efficiency matters in the large-scale RSIR scenario. Approximate nearest neighbor (ANN) search has therefore attracted increasing attention from researchers. In this paper, we propose a new hash learning method, named semi-supervised deep adversarial hashing (SDAH), to accomplish ANN search for the large-scale RSIR task. Our model assumes that the RS images have already been represented by proper visual features. First, a residual auto-encoder (RAE) is developed to generate the class variable and hash code. Second, two multi-layer networks are constructed to regularize the obtained latent vectors using a prior distribution. These two modules are integrated under a generative adversarial framework: through minimax learning, the class variable becomes a one-hot-like vector while the hash code becomes a binary-like vector. Finally, a specific hashing function is formulated to enhance the quality of the generated hash codes. The effectiveness of the hash codes learned by our SDAH model is demonstrated by positive experimental results on three public RS image archives. Compared with existing hash learning methods, the proposed method achieves improved performance. Full article
(This article belongs to the Special Issue Content-Based Remote Sensing Image Retrieval)
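Once binary-like codes are produced, large-scale ANN retrieval reduces to thresholding the network outputs into bits and ranking the archive by Hamming distance to the query code. A minimal sketch of that retrieval step (the codes below are illustrative, not SDAH outputs):

```python
import numpy as np

def binarize(z):
    """Threshold real-valued network outputs into {0,1} hash codes."""
    return (z > 0).astype(np.uint8)

def hamming_rank(query, database):
    """Rank database items by Hamming distance to the query code."""
    dist = (database != query).sum(axis=1)        # bitwise mismatch count
    return np.argsort(dist, kind="stable"), dist

codes = binarize(np.array([[ 0.9, -0.2,  0.4,  0.7],
                           [ 0.8, -0.1,  0.3,  0.6],
                           [-0.5,  0.2, -0.7, -0.9]]))
order, dist = hamming_rank(codes[0], codes)       # item 0 is its own best match
```

Because Hamming distances are computed with bitwise operations, ranking millions of codes is far cheaper than comparing floating-point feature vectors, which is the efficiency argument behind hashing-based RSIR.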
27 pages, 5965 KiB  
Article
Development and Intercomparison Study of an Atmospheric Motion Vector Retrieval Algorithm for GEO-KOMPSAT-2A
by Soo Min Oh, Régis Borde, Manuel Carranza and In-Chul Shin
Remote Sens. 2019, 11(17), 2054; https://doi.org/10.3390/rs11172054 - 1 Sep 2019
Cited by 16 | Viewed by 3837
Abstract
We derived an atmospheric motion vector (AMV) algorithm for the Geostationary Korea Multipurpose Satellite (GEO-KOMPSAT-2A; GK-2A), launched on 4 December 2018, using the Advanced Himawari Imager (AHI) onboard Himawari-8, which is very similar to the Advanced Meteorological Imager onboard GK-2A. This study clearly describes the main steps of our algorithm and optimizes its target box size and height assignment methods by comparing AMVs with numerical weather prediction (NWP) and rawinsonde profiles for July 2016 and January 2017. Target box size sensitivity tests were performed from 8 × 8 to 48 × 48 pixels for three infrared channels and from 16 × 16 to 96 × 96 pixels for one visible channel. The results show that, without quality control, a smaller box increases the retrieved speed, whereas a larger one decreases it. The best target box sizes were found to be 16 × 16 pixels for CH07, CH08, and CH13, and 48 × 48 pixels for CH03. Height assignment sensitivity tests were performed for several methods, such as the cross-correlation coefficient (CCC), equivalent blackbody temperature (EBBT), infrared/water vapor (IR/WV) intercept, and CO2 slicing methods for cloudy targets, as well as normalized total contribution (NTC) and normalized total cumulative contribution (NTCC) for clear-air targets. For a cloudy target, the CCC method is influenced by the quality of the cloud-top pressure. Better results were found when the EBBT and IR/WV intercept methods were used together rather than individually, and CO2 slicing gave the best statistics. For a clear-air target, the combined use of NTC and NTCC gave the best statistics. Additionally, the mean vector difference, root-mean-square (RMS) vector difference, bias, and RMS error (RMSE) between GK-2A AMVs and NWP or rawinsonde data were smaller by approximately 18.2% on average than those of the Communication, Ocean and Meteorology Satellite (COMS) AMVs. In addition, we verified the similarity between GK-2A and Meteosat Third Generation (MTG) AMVs using AHI data from Himawari-8 for 21 July 2016. This similarity provides evidence that the GK-2A algorithm works properly, because the GK-2A AMV algorithm borrows many methods from the MTG AMV algorithm for geostationary data and inversion layer corrections. The Pearson correlation coefficients of the speed, direction, and height of the prescribed GK-2A and MTG AMVs were larger than 0.97, and the corresponding bias/RMSE were 0.07/2.19 m/s, 0.21/14.8°, and 2.61/62.9 hPa, respectively, considering a common quality indicator with forecast (CQIF) > 80. Full article
(This article belongs to the Special Issue Satellite-Derived Wind Observations)
Figure 1
<p>Station information of rawinsonde captured at 12:00 UTC on 21 July 2016. The background plot is the CH13 (10.2 μm) brightness temperature of AHI over our study area. In this full disk area of AHI, the total number of stations is about 160. The nominal time of the rawinsonde launch was 00:00 or 12:00 UTC, with occasional launches occurring at 06:00 and 18:00 UTC.</p>
Figure 2
<p>Flowchart of the GEO-KOMPSAT-2A atmospheric motion vector (GK-2A AMV) retrieval scheme. The GK-2A AMV algorithm consists of four main steps: target selection, height assignment, tracking, and quality control. (<b>a</b>) For visible (VIS) (CH03), shortwave infrared (SWIR) (CH07), and two infrared (IR) channels (CH13 and 14), there are only cloudy AMVs, but (<b>b</b>) for three water vapor (WV) channels (CH08, 09, and 10), there are both cloudy and clear-air AMVs. Once the target type is determined after target selection, different height assignment methods are adopted depending on whether the target is cloudy or clear. For a cloudy target, an inversion layer correction is additionally carried out. After that, speed and direction are calculated in the tracking process using the cross-correlation (CC) method; only in the cross-correlation coefficient (CCC) case are the height assignment and tracking processes conducted at the same time. Finally, quality indicator (QI) and expected error (EE) values of each vector are calculated to estimate its quality.</p>
Figure 3
<p>Example of a change in target center in the target selection process for a given target box size. The original center of 25 × 25 pixels of the blue target box moves to the new center of 3 × 3 pixels of the red nested box with the largest standard deviation of nine pixels in brightness temperature for SWIR, IR, and WV or albedo for VIS.</p>
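The re-centering step illustrated above can be sketched as an exhaustive search for the nested window with the largest local standard deviation; function and argument names here are our own, not taken from the GK-2A code:

```python
import numpy as np

def recenter_target(bt, r0, c0, box=25, nest=3):
    """Shift a target-box centre to the pixel whose surrounding
    nest x nest window has the largest standard deviation of
    brightness temperature (or albedo for VIS). bt is a 2D array;
    (r0, c0) is the original target-box centre."""
    h, k = box // 2, nest // 2
    best, best_rc = -1.0, (r0, c0)
    for r in range(r0 - h + k, r0 + h - k + 1):
        for c in range(c0 - h + k, c0 + h - k + 1):
            s = bt[r - k:r + k + 1, c - k:c + k + 1].std()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc
```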
Figure 4
<p>Plots of the mean speed of the collocated GK-2A AMVs against Global Data Assimilation and Prediction System (GDAPS) versus (<b>a</b>) target box size and (<b>b</b>) real-scale target box size for July 2016 and those against rawinsonde versus (<b>c</b>) target box size and (<b>d</b>) real-scale target box size for the same period. Each solid and dotted curve in (<b>b</b>) and (<b>d</b>) represents the mean speed of the collocated AMVs and the corresponding reference data, respectively. The results obtained for January 2017 were similar.</p>
Figure 5
<p>Plots of validation scores of normalized MVD (NMVD), normalized RMSVD (NRMSVD), normalized Bias (NBias), and normalized RMSE (NRMSE) versus the target box size for cloudy AMVs for CH03, 07, 08, and 13 and clear-air AMVs for CH08 for July 2016. All scores were normalized by the speed of the corresponding reference vectors. GK-2A AMVs for CH03 compared with GDAPS are shown for (<b>a</b>) nonzero QIF and (<b>b</b>) QIF &gt; 80; those compared with rawinsonde are shown in (<b>c</b>) and (<b>d</b>), respectively. The same types of plots were used for cloudy AMVs for (<b>e</b>–<b>h</b>) CH07, (<b>i</b>–<b>l</b>) CH08, and (<b>m</b>–<b>p</b>) CH13 and for clear-air AMVs for (<b>q</b>–<b>t</b>) CH08. All validation scores were best at 48 × 48 for CH03 and at 16 × 16 for the other channels. Their NRMSE, NMVD, and NRMSVD became smaller than 0.5 after quality control with AMVs of QIF &gt; 80, compared with GDAPS and rawinsonde. The results obtained for January 2017 were similar.</p>
Figure 6
<p>Plots of (<b>a</b>) GK-2A and (<b>b</b>) COMS AMVs in Typhoon Nepartak at 00:00 UTC on 7 July 2016. Colors represent the pressure altitude of the AMVs. The background plot is the CH13 (10.2 μm) brightness temperature of AHI. The orange square symbols represent the rawinsonde stations captured in the same period. Approximately 64.7% more GK-2A AMVs than COMS AMVs survived quality control, considering QIF &gt; 80. Moreover, GK-2A AMVs were more consistent with GDAPS and rawinsonde than COMS AMVs. The specific figures are presented in <a href="#remotesensing-11-02054-t004" class="html-table">Table 4</a>.</p>
Figure 7
<p>Distribution comparison of (<b>a</b>) speed, (<b>b</b>) direction, and (<b>c</b>) height of GK-2A and MTG AMVs for 12:10 UTC on 21 July 2016. (<b>a</b>), (<b>b</b>) GK-2A AMVs with heights from CCC in the specific configuration (<b>black</b>) and in the prescribed configuration (<b>red</b>) as well as MTG AMVs with heights from CCC in the specific configuration (<b>blue</b>) and in the prescribed configuration (<b>violet</b>). The GK-2A AMVs with heights obtained from the CO<sub>2</sub> slicing (<b>orange</b>) and the combined use of EBBT and IR/WV intercept methods (<b>green</b>) are additionally shown in (<b>c</b>). (<b>d</b>–<b>f</b>) the corresponding scatter plots are shown for the collocated AMVs for others versus MTG AMVs with heights from CCC in the prescribed configuration. The colors are the same as those in (<b>a</b>–<b>c</b>). The results for 05:40 UTC on 21 July 2016 were very similar.</p>
Figure 8
<p>Distribution comparisons of (<b>a</b>) QI, (<b>b</b>) QIF, (<b>c</b>) CQI, and (<b>d</b>) CQIF of GK-2A and MTG AMVs for 12:10 UTC on 21 July 2016. The colors represent GK-2A AMVs with heights from CCC in the specific configuration (<b>black</b>) and CCC in prescribed configuration (<b>red</b>) as well as MTG AMVs with heights from the CCC in specific configuration (<b>blue</b>) and CCC in the prescribed configuration (<b>violet</b>). The results obtained for 05:40 UTC on 21 July 2016 were very similar.</p>
Figure 9
<p>Comparison of AMV heights and Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) level 1 and 2 lidar products for 05:40 UTC on 21 July 2016. (<b>a</b>) CALIOP 532 total attenuated backscatter (TAB) and (<b>c</b>) level 2 cloud profiles for the red segment of (<b>e</b>). (<b>b</b>) CALIOP 532 attenuated backscatter and (<b>d</b>) level 2 cloud profiles for the blue segment of (<b>e</b>). Symbols represent prescribed CCC heights (<span style="color:#FF66FF">●</span>) of MTG AMVs and the prescribed CCC heights (<span style="color:red">■</span>), specific CO<sub>2</sub> slicing heights (<span style="color:#FFCC00">◆</span>), and EBBT and IR/WV intercept heights (<span style="color:lime">▲</span>) of GK-2A AMVs, respectively. (<b>e</b>) The corresponding track of CALIOP is also plotted. No collocated AMV data were available for 12:10 UTC on 21 July 2016.</p>
19 pages, 6592 KiB  
Article
Extracting Raft Aquaculture Areas from Remote Sensing Images via an Improved U-Net with a PSE Structure
by Binge Cui, Dong Fei, Guanghui Shao, Yan Lu and Jialan Chu
Remote Sens. 2019, 11(17), 2053; https://doi.org/10.3390/rs11172053 - 1 Sep 2019
Cited by 63 | Viewed by 5526
Abstract
Remote sensing has become a primary technology for monitoring raft aquaculture products. However, due to the complexity of the marine aquaculture environment, the boundaries of the raft aquaculture areas in remote sensing images are often blurred, which results in an ‘adhesion’ phenomenon in raft aquaculture area extraction. Fully convolutional network (FCN) based methods have made great progress in the field of remote sensing in recent years. In this paper, we propose an FCN-based end-to-end raft aquaculture area extraction model (called UPS-Net) to overcome the ‘adhesion’ phenomenon. The UPS-Net contains an improved U-Net and a PSE structure. The improved U-Net can simultaneously capture boundary and contextual information of raft aquaculture areas from remote sensing images. The PSE structure can adaptively fuse the boundary and contextual information to reduce the ‘adhesion’ phenomenon. We selected laver raft aquaculture areas in eastern Lianyungang in China as the research region to verify the effectiveness of our model. The experimental results show that, compared with several state-of-the-art models, the proposed UPS-Net performs better at extracting raft aquaculture areas and can significantly reduce the ‘adhesion’ phenomenon. Full article
(This article belongs to the Section Ocean Remote Sensing)
Figure 1
<p>Raft aquaculture areas in natural environments: (<b>a</b>) Raft aquaculture species being cultivated in seawater; (<b>b</b>) Raft aquaculture species raised above the seawater surface to kill the green algae and germs on the surface of the thallus.</p>
Figure 2
<p>Raft aquaculture areas images: Remote sensing images of laver raft aquaculture areas with 1.8-m spatial resolution. The right image shows a close-up view of the raft aquaculture areas denoted by red dashed boxes in the left figure.</p>
Figure 3
<p>Squeeze-excitation module.</p>
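For reference, the squeeze-and-excitation recalibration (Hu et al.) that such a module implements can be sketched in a few lines of NumPy; the weight matrices `w1` and `w2` stand in for the learned fully connected layers and are assumptions of this sketch, not the trained UPS-Net parameters:

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Squeeze-and-excitation channel recalibration.
    x: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r).
    Squeeze: global average pooling per channel; excitation: two FC
    layers (ReLU then sigmoid); finally rescale each channel."""
    z = x.mean(axis=(1, 2))              # squeeze: channel descriptor (C,)
    s = np.maximum(w1 @ z, 0.0)          # excitation FC1 + ReLU
    e = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # excitation FC2 + sigmoid, in (0, 1)
    return x * e[:, None, None], e       # channel-wise rescaled map and weights
```

The returned weight vector `e` corresponds to the vector visualized in Figure 11 of this article.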
Figure 4
<p>The general process of our raft aquaculture areas extraction. The data preprocessing stage, the training stage and the testing stage are illustrated in the <b>upper</b>, <b>middle</b>, and <b>lower</b> parts of the figure, respectively. The UPS-Net model is a simplified version of the network.</p>
Figure 5
<p>The improved U-Net. Each gray rectangle represents a multichannel feature map with the channel number at the top of the gray rectangle and the size on the left side.</p>
Figure 6
<p>The PSE structure, including the pyramid upsampling module and the squeeze-excitation module.</p>
Figure 7
<p>(<b>a</b>) Experimental area; (<b>b</b>) Experimental image; (<b>c</b>) Ground truth map.</p>
Figure 8
<p>Different learning rates of the Adam algorithm on the validation set.</p>
Figure 9
<p>Extraction of raft aquaculture areas on the test image: (<b>a</b>) Test image; (<b>b</b>) Ground truth; (<b>c</b>) SVM; (<b>d</b>) Mask R-CNN; (<b>e</b>) FCN; (<b>f</b>) U-Net; (<b>g</b>) PSPNet; (<b>h</b>) DeepLabv3+; (<b>i</b>) DS-HCN; (<b>j</b>) UPS-Net. The red rectangles indicate raft aquaculture areas here were misrecognized as seawater, and the green rectangles indicate seawater areas here were misrecognized as raft aquaculture areas.</p>
Figure 10
<p>The ‘adhesion’ phenomenon in the result maps of raft aquaculture areas extraction: (<b>a</b>) Test image; (<b>b</b>) Ground truth; (<b>c</b>) U-Net; (<b>d</b>) DeepLabv3+; (<b>e</b>) DS-HCN; (<b>f</b>) UPS-Net. The yellow rectangles indicate that raft aquaculture areas appeared ‘adhesion’ phenomenon.</p>
Figure 11
<p>Visualization: the weight information of vector <math display="inline"><semantics> <mi mathvariant="bold">E</mi> </semantics></math> from the squeeze-excitation module. The horizontal axis shows channels of the vector <math display="inline"><semantics> <mi mathvariant="bold">E</mi> </semantics></math>. The vertical axis shows weights of the vector <math display="inline"><semantics> <mi mathvariant="bold">E</mi> </semantics></math>.</p>
52 pages, 13183 KiB  
Article
Mapping with Pléiades—End-to-End Workflow
by Roland Perko, Hannes Raggam and Peter M. Roth
Remote Sens. 2019, 11(17), 2052; https://doi.org/10.3390/rs11172052 - 1 Sep 2019
Cited by 22 | Viewed by 8490
Abstract
In this work, we introduce an end-to-end workflow for very high-resolution satellite-based mapping, building the basis for important 3D mapping products: (1) digital surface model, (2) digital terrain model, (3) normalized digital surface model and (4) ortho-rectified image mosaic. In particular, we describe all underlying principles for satellite-based 3D mapping and propose methods that extract these products from multi-view stereo satellite imagery. Our workflow is demonstrated for the Pléiades satellite constellation; however, the applied building blocks are more general and thus also applicable to different setups. Besides introducing the overall end-to-end workflow, we also tackle the individual building blocks: optimization of sensor models represented by rational polynomials, epipolar rectification, image matching, spatial point intersection, data fusion, digital terrain model derivation, ortho rectification and ortho mosaicing. For each of these steps, extensions to the state-of-the-art are proposed and discussed in detail. In addition, a novel approach for terrain model generation is introduced. The second aim of the study is a detailed assessment of the resulting output products. Thus, a variety of data sets showing different acquisition scenarios are gathered, altogether comprising 24 Pléiades images. First, the accuracies of the 2D and 3D geo-location are analyzed. Second, surface and terrain models are evaluated, including a critical look at the underlying error metrics and a discussion of the differences between single stereo, tri-stereo and multi-view data sets. Overall, 3D accuracies in the range of 0.2 to 0.3 m in planimetry and 0.2 to 0.4 m in height are achieved w.r.t. ground control points. Retrieved surface models show normalized median absolute deviations around 0.9 m in comparison to reference LiDAR data. Multi-view stereo outperforms single stereo in terms of accuracy and completeness of the resulting surface models. Full article
(This article belongs to the Special Issue 3D Reconstruction Based on Aerial and Satellite Imagery)
Figure 1
<p>Proposed end-to-end 3D mapping workflow for multi-view stereo Pléiades images. The four core tasks are subdivided into according sub tasks.</p>
Figure 2
<p>Proposed end-to-end 3D mapping flow graph for multi-view stereo Pléiades images. The four core tasks introduced in <a href="#remotesensing-11-02052-f001" class="html-fig">Figure 1</a> are depicted in detail.</p>
Figure 3
<p>The test site Innsbruck (Austria): (<b>a</b>) topographic map (opentopomap.org (CC-BY-SA)), (<b>b</b>) RGB ortho-image and (<b>c</b>) relief shaded DSM. (<b>b</b>,<b>c</b>) were produced employing the proposed and implemented mapping workflow.</p>
Figure 4
<p>The test site Trento (Italy): (<b>a</b>) topographic map (opentopomap.org (CC-BY-SA)), (<b>b</b>) CIR ortho-image and (<b>c</b>) relief shaded DSM. (<b>b</b>,<b>c</b>) were produced employing the proposed and implemented mapping workflow.</p>
Figure 5
<p>The test site Ljubljana (Slovenia): (<b>a</b>) topographic map (opentopomap.org (CC-BY-SA)), (<b>b</b>) RGB ortho-image and (<b>c</b>) relief shaded DSM. (<b>b</b>,<b>c</b>) were produced employing the proposed and implemented mapping workflow.</p>
Figure 6
<p>Multiple view geometry Ljubljana data set, adapted from Reference [<a href="#B5-remotesensing-11-02052" class="html-bibr">5</a>]. The Google Earth visualization depicts the preview of the footprints and the satellite’s position.</p>
Figure 7
<p>The test site Tian Shui (China): (<b>a</b>) topographic map (opentopomap.org (CC-BY-SA)), (<b>b</b>) CIR ortho-image and (<b>c</b>) relief shaded DSM. (<b>b</b>,<b>c</b>) were produced employing the proposed and implemented mapping workflow.</p>
Figure 8
<p>Relation of stereo input images, their epipolar rectified versions, forward and backward addresses and both disparity maps. Boxes represent images and arrows the geometric transformation between them.</p>
Figure 9
<p>Image matching principle with epipolar rectified input images. The main steps are calculation of the disparity space image, cost aggregation and disparity selection. The final disparity map <span class="html-italic">D</span> holds the shifts in column directions for each pixel such that they point from the reference image <math display="inline"><semantics> <msub> <mi>X</mi> <mi>r</mi> </msub> </semantics></math> to the search image <math display="inline"><semantics> <msub> <mi>X</mi> <mi>s</mi> </msub> </semantics></math>.</p>
Figure 10
<p>Cost volume <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi mathvariant="bold-italic">p</mi> </msub> <mrow> <mo>(</mo> <mi>d</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> of classical SGM versus our implementation is shown on a toy example. While SGM evaluates the whole cost volume, we only check the disparities visualized in light blue color around the disparity predictions from the previous pyramid level depicted in dark blue color. In this toy example the search space is reduced to <math display="inline"><semantics> <mrow> <mn>44</mn> <mo>%</mo> </mrow> </semantics></math>, however, the reduction is larger in real satellite images.</p>
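The reduced search described above amounts to winner-take-all disparity selection restricted to a band around the prediction from the coarser pyramid level; a hypothetical sketch (SGM path aggregation is omitted, and names are ours):

```python
import numpy as np

def restricted_disparity_search(cost_volume, d_pred, radius=2):
    """Winner-take-all disparity selection restricted to +/- radius
    around the disparity predicted from the previous pyramid level.
    cost_volume: (H, W, D) matching costs; d_pred: (H, W) integer
    predictions, e.g. upsampled from the coarser level."""
    H, W, D = cost_volume.shape
    disp = np.empty((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            lo = max(0, d_pred[y, x] - radius)
            hi = min(D, d_pred[y, x] + radius + 1)
            # evaluate only the band of candidate disparities
            disp[y, x] = lo + int(np.argmin(cost_volume[y, x, lo:hi]))
    return disp
```

Only `2 * radius + 1` of the `D` candidate disparities are evaluated per pixel, which is the source of the search-space reduction quoted for Figure 10.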
Figure 11
<p>Subpixel distribution (<b>a</b>) based on the parabola interpolation [<a href="#B61-remotesensing-11-02052" class="html-bibr">61</a>] and (<b>b</b>) after our proposed normalization.</p>
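The parabola interpolation of panel (a) has a closed form; below is a sketch of the standard three-point fit around the integer cost minimum (the proposed normalization of panel (b) is not reproduced here):

```python
def parabola_subpixel(c_m, c_0, c_p):
    """Subpixel offset from a parabola fitted through the matching
    costs at disparities d-1 (c_m), d (c_0) and d+1 (c_p).
    Returns the vertex offset in [-0.5, 0.5] to add to d."""
    denom = c_m - 2.0 * c_0 + c_p  # curvature of the fitted parabola
    if denom == 0.0:
        return 0.0                 # degenerate (flat) cost profile
    return 0.5 * (c_m - c_p) / denom
```

This interpolator is known to produce the non-uniform ("pixel-locking") subpixel distribution that motivates the normalization step.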
Figure 12
<p>DSM resampling: (<b>a</b>) representation of 3D point cloud as a multi-band image and (<b>b</b>) regridding options.</p>
Figure 13
<p>Example of fitting a normal distribution into non-normal distributed input data (blue). Default fit based on mean and standard deviation estimates (red) and robust fit based on median and nmAD estimates (green).</p>
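The robust fit of Figure 13 replaces the mean/standard deviation estimates by median/nmAD; a minimal sketch using the usual 1.4826 normalization factor, which makes the nmAD equal the standard deviation for normally distributed data:

```python
import numpy as np

def robust_normal_fit(h):
    """Robust location/scale estimate for (possibly outlier-laden)
    height-difference data: median and normalized median absolute
    deviation, nmAD = 1.4826 * median(|h - median(h)|)."""
    h = np.asarray(h, dtype=float)
    med = np.median(h)
    nmad = 1.4826 * np.median(np.abs(h - med))
    return med, nmad
```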
Figure 14
<p>DSM profiles for an artificial building on a tilted surface with added noise: (<b>a</b>) Original DSM profile and the minimal height value (black) within the window (blue). (<b>b</b>) Robust fit of the surface (green). (<b>c</b>) Slope corrected DSM profile and the according minimal height value.</p>
Figure 15
<p>Comparison of robust terrain slope fit to the proposed simple fit. (<b>a</b>) shows a subset of a DSM and (<b>b</b>) its Gaussian filtered variant. (<b>c</b>) visualizes the profile of the DSM along the green arrow together with the terrain slope fits and the corrected profiles.</p>
Figure 16
<p>Concept of splitting 8 scanlines spanning the 2D image space (<b>a</b>) into 2 passes (<b>b</b>) down-right and (<b>c</b>) up-left.</p>
Figure 17
<p>The 3D error is plotted versus the convergence angle, which is in the range from <math display="inline"><semantics> <msup> <mn>4</mn> <mo>∘</mo> </msup> </semantics></math> to <math display="inline"><semantics> <msup> <mn>35</mn> <mo>∘</mo> </msup> </semantics></math>, assuming a 2D location measurement error of <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>8</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> m. On top of that, the RMS values of the adjusted lengths (cf. <a href="#remotesensing-11-02052-t006" class="html-table">Table 6</a>) are shown in blue.</p>
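A common first-order model behind such plots divides the 2D measurement error by the base-to-height ratio, with B/H approximated from the convergence angle; this is a textbook sketch and not necessarily the exact error model used for Figure 17:

```python
import math

def height_error(sigma_xy, convergence_deg):
    """First-order stereo height error: the 2D location measurement
    error sigma_xy (metres) divided by the base-to-height ratio,
    with B/H approximated as 2*tan(theta/2) for a symmetric
    convergence angle theta (degrees)."""
    bh = 2.0 * math.tan(math.radians(convergence_deg) / 2.0)
    return sigma_xy / bh
```

Under this approximation the height error grows rapidly for small convergence angles, matching the qualitative shape of the curves.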
Figure 18
<p>The 3D length discrepancies for the Ljubljana set sorted from lowest to highest ICP error. ICP values are given in red and corresponding GCP values in blue.</p>
Figure 19
<p>Detail view of a hospital at the Trento test site.</p>
Figure 20
<p>Detail view of an urban area at the Innsbruck stereo test site.</p>
Figure 21
<p>Detail view of a partly forested area at the Innsbruck stereo test site.</p>
Figure 22
<p>Distribution of differences from LiDAR and Pléiades DSMs for all 15 stereo pairs of the Ljubljana test site. The error range of −6 m to <math display="inline"><semantics> <mrow> <mo>+</mo> <mn>6</mn> </mrow> </semantics></math> m is shown.</p>
Figure 23
<p>Plots showing (<b>a</b>,<b>b</b>) STD, (<b>c</b>,<b>d</b>) nmAD and (<b>e</b>,<b>f</b>) percentage of nodata values plotted versus the convergence angle for the Ljubljana test site. (<b>a</b>,<b>c</b>,<b>e</b>) are based on the complete error distribution, while (<b>b</b>,<b>d</b>,<b>f</b>) is based on a clipped distribution, where all gross outliers with an absolute value larger than 6 m are removed.</p>
Figure 24
<p>Selected DSM subsets with <math display="inline"><semantics> <mrow> <mn>800</mn> <mo>×</mo> <mn>800</mn> </mrow> </semantics></math> m<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math> and 1 m GSD of the Ljubljana test site. Height ranges from 410 m to 500 m. Shown are (<b>a</b>) LiDAR reference, (<b>b</b>) Pléiades DSM based on all stereo pairs with convergence angles in the range of <math display="inline"><semantics> <msup> <mn>20</mn> <mo>∘</mo> </msup> </semantics></math> to <math display="inline"><semantics> <msup> <mn>30</mn> <mo>∘</mo> </msup> </semantics></math>, (<b>c</b>) Pléiades DSM based on stereo pair 1–2 and (<b>d</b>) Pléiades DSM based on stereo pair 1–3. Dark blue pixels indicate non-reconstructed regions.</p>
Figure 25
<p>Ortho CIR image (<b>a</b>) and relief shaded DSM (<b>b</b>) are visualized for a subset of the Tian Shui test site covering <math display="inline"><semantics> <mrow> <mn>3.4</mn> </mrow> </semantics></math> km<math display="inline"><semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics></math>.</p>
Figure 26
<p>Subset of (<b>a</b>) true-ortho image, (<b>c</b>) input DSM and (<b>b</b>,<b>d</b>,<b>e</b>) together with ground masks using different methods.</p>
Figure 27
<p>3D view of (<b>a</b>) the DSM as shown in <a href="#remotesensing-11-02052-f026" class="html-fig">Figure 26</a> and (<b>b</b>) the resulting DTM after applying the proposed filtering method.</p>
Figure 28
<p>Urban subset (<b>left</b>) and suburban dataset (<b>right</b>). Shown are (<b>a</b>,<b>b</b>) ortho image, (<b>c</b>,<b>d</b>) extracted DSM, (<b>e</b>,<b>f</b>) derived DTM and (<b>g</b>,<b>h</b>) according nDSM in relief view.</p>
Figure 29
<p>Profile comparisons of DSMs and DTMs from LiDAR and Pléiades for two small scenes, representing (<b>a</b>) a skyscraper and (<b>b</b>) a round building.</p>
30 pages, 5817 KiB  
Article
Synergy of Satellite, In Situ and Modelled Data for Addressing the Scarcity of Water Quality Information for Eutrophication Assessment and Monitoring of Swedish Coastal Waters
by Susanne Kratzer, Dmytro Kyryliuk, Moa Edman, Petra Philipson and Steve W. Lyon
Remote Sens. 2019, 11(17), 2051; https://doi.org/10.3390/rs11172051 - 31 Aug 2019
Cited by 15 | Viewed by 3681
Abstract
Monthly CHL-a and Secchi Depth (SD) data derived from the full mission data of the Medium Resolution Imaging Spectrometer (MERIS; 2002–2012) were analysed along a horizontal transect from the inner Bråviken bay and out into the open sea. The CHL-a values were calibrated using an algorithm derived from Swedish lakes. Then, calibrated Chl-a and Secchi Depth (SD) estimates were extracted from MERIS data along the transect and compared to conventional monitoring data as well as to data from the Swedish Coastal zone Model (SCM), providing physico-biogeochemical parameters such as temperature, nutrients, Chlorophyll-a (CHL-a) and Secchi depth (SD). A high negative correlation was observed between satellite-derived CHL-a and SD (ρ = −0.91), similar to the in situ relationship established for several coastal gradients in the Baltic proper. We also demonstrate that the validated MERIS-based estimates and data from the SCM showed strong correlations for the variables CHL-a, SD and total nitrogen (TOTN), which improved significantly when analysed on a monthly basis across basins. The relationship between satellite-derived CHL-a and modelled TOTN was also evaluated on a monthly basis using least-square linear regression models. The predictive power of the models was strong for the period May-November (R2: 0.58–0.87), and the regression algorithm for summer was almost identical to the algorithm generated from in situ data in Himmerfjärden bay. The strong correlation between SD and modelled TOTN confirms that SD is a robust and reliable indicator to evaluate changes in eutrophication in the Baltic proper which can be assessed using remote sensing data. Amongst all three assessed methods, only MERIS CHL-a was able to correctly depict the pattern of phytoplankton phenology that is typical for the Baltic proper. 
The approach of combining satellite data and physico-biogeochemical models could serve as a powerful tool and value-adding complement to the scarcely available in situ data from national monitoring programs. In particular, satellite data will help to reduce uncertainties in long-term monitoring data thanks to their improved measurement frequency. Full article
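The Spearman ρ values reported above can be reproduced, in the tie-free case, as the Pearson correlation of rank-transformed series; a minimal sketch (no tie correction):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation of two series, e.g. satellite-derived
    CHL-a vs. Secchi depth: Pearson correlation of the ranks.
    This sketch assumes no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

A strongly inverse monotone relation, such as the CHL-a/SD gradient, yields ρ close to -1.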
Figure 1
<p>(<b>a</b>) Bråviken, north-western Baltic proper with the largest catchment (green) discharging into Bråviken and its waterbodies (pink); (<b>b</b>) Close-up of the 5 Bråviken sub-basins (pink) and their 3 adjacent catchment areas (green); (<b>c</b>) Bråviken sub-basins (numbered and marked in yellow) with the horizontal transects through the bay and out to the open sea. The monitoring stations (GB11, GB20, GB22 and GB16) are marked in red. The map was generated with QGIS (3.0.3-Girona) using several pre-defined shapefiles (overview map: Europe coastline shapefile; shapefile for sub-basin division (<span class="html-italic">in Swedish</span>: havsområden): havsområden_SVAR_2016; shapefile for the main Bråviken catchment areas (<span class="html-italic">in Swedish:</span> huvudavrinningsområden): SVAR2012_2 [<a href="#B29-remotesensing-11-02051" class="html-bibr">29</a>]. Copernicus EU-DEM v1.1 [<a href="#B30-remotesensing-11-02051" class="html-bibr">30</a>]. The arrow on the left in each map indicates the geographic North.</p>
Figure 2
<p>Monthly mean concentrations of (<b>a</b>) Chlorophyll-a (CHL-a) Medium Resolution Imaging Spectrometer (MERIS) before calibration and (<b>b</b>) CHL-a MERIS after calibration, plotted against CHL-a SHARKweb from all available match-ups between satellite and monitoring stations.</p>
Figure 3
<p>Horizontal transects of surface CHL-a concentrations derived from the MERIS archive and extracted from the head of Bråviken (Pampusfjärden) out to the open sea before (black dots) and after applying the calibration algorithm (green dots). The red dots represent available values measured in situ at the monitoring stations. Rows: years of MERIS operation (2002–2012); columns month of each year: month 1–12 (i.e., Jan–Dec).</p>
Figure 4
<p>A subset of horizontal transects of monthly mean CHL-a concentration along a transect from inner Pampusfjärden out to the open sea derived from MERIS summer data (June-Aug) 2009–2011. The red dots represent values sampled within the Swedish National Monitoring Program. Black dots: MERIS data before; green dots: MERIS data after applying the calibration algorithm. Red dots represent values measured in situ at the monitoring stations.</p>
Figure 5
<p>Secchi depth (MERIS) plotted against Secchi depth (from SHARKweb) from all available match-ups between satellite and monitoring stations.</p>
Figure 6
<p>Horizontal transect of Secchi depth (SD) along a transect from Pampusfjärden out to the open sea derived from the MERIS archive (2002–2012). Black dots: MERIS data, red dots: values measured in situ at the monitoring stations.</p>
Figure 7
<p>A subset of horizontal transect of Secchi depth along a transect from Pampusfjärden out to the open sea derived from MERIS summer data (June–Aug) 2009–2011. Black dots: MERIS data, red dots: values measured in situ at the monitoring stations.</p>
Figure 8
<p>Correlation matrix between water quality variables (CHL-a and SD) derived from MERIS (first 2 rows; blue area) and modelled parameters (rows 3–9: water temperature, total phosphorus (TOTP), inorganic nitrate (NO<sub>3</sub>), total nitrogen (TOTN), inorganic phosphate (PO<sub>4</sub>), CHL-a and Secchi depth computed by the Swedish Coastal Model; red area). Below the diagonal, the bivariate scatterplots are displayed, each with a linear trendline (in red). The value of the Spearman’s rank correlation coefficient (ρ) and the significance level (*** <span class="html-italic">p</span> &lt; 0.001, ** <span class="html-italic">p</span> &lt; 0.01, * <span class="html-italic">p</span> &lt; 0.05, ˙ <span class="html-italic">p</span> &lt; 0.1) for each correlation are shown above the diagonal.</p>
Figure 9
<p>Correlations between calibrated CHL-a MERIS vs. CHL-a derived from the Swedish Coastal zone Model (SCM), divided by month, during 2002–2012. The Spearman’s rank correlation coefficient, ρ (here shown as R), is given per month; the Mean Normalized Bias, the Root-Mean-Square Error and the average Absolute Percentage Difference are given on each plot.</p>
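The match-up scores shown in the panels can be sketched as follows; these definitions follow common aquatic remote-sensing usage and are our assumption, not necessarily the study's exact formulas:

```python
import numpy as np

def matchup_metrics(est, ref):
    """Match-up statistics between estimated (e.g. MERIS-derived) and
    reference (e.g. modelled) values: Mean Normalized Bias (MNB),
    Root-Mean-Square Error (RMSE) and average Absolute Percentage
    Difference (APD, in percent)."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return {
        "MNB": np.mean((est - ref) / ref),
        "RMSE": np.sqrt(np.mean((est - ref) ** 2)),
        "APD": 100.0 * np.mean(np.abs(est - ref) / ref),
    }
```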
Figure 10
<p>Correlation between Secchi depth (SD) MERIS versus Secchi depth from SCM, divided by month, over the period 2002–2012. The Spearman’s rank correlation coefficient, ρ (here shown as R), is given per month; the Mean Normalized Bias, the Root-Mean-Square Error and the average Absolute Percentage Difference are noted on each plot.</p>
Figure 11
<p>Calibrated CHL-a MERIS versus total nitrogen (TOTN) from SCM, divided by month, during 2002–2012. Linear regression model with coefficient of determination <math display="inline"> <semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> </mrow> </semantics> </math>.</p>
Figure 12
<p>Phytoplankton phenology during the years (2009–2011) in inner Bråviken as depicted by (<b>a</b>) MERIS (monthly averages in green), (<b>b</b>) in situ point sampling in dark red and (<b>c</b>) daily values from the SCM (as presented at <a href="http://vattenwebb.smhi.se" target="_blank">vattenwebb.smhi.se</a>), all modelled data in light red. Figure modified from [<a href="#B51-remotesensing-11-02051" class="html-bibr">51</a>].</p>
18 pages, 1980 KiB  
Article
The Value of Sentinel-2 Spectral Bands for the Assessment of Winter Wheat Growth and Development
by Andrew Revill, Anna Florence, Alasdair MacArthur, Stephen P. Hoad, Robert M. Rees and Mathew Williams
Remote Sens. 2019, 11(17), 2050; https://doi.org/10.3390/rs11172050 - 31 Aug 2019
Cited by 34 | Viewed by 7369
Abstract
Leaf Area Index (LAI) and chlorophyll content are strongly related to plant development and productivity. Spatial and temporal estimates of these variables are essential for efficient and precise crop management. The availability of open-access data from the European Space Agency’s (ESA) Sentinel-2 satellite—delivering global coverage with an average 5-day revisit frequency at a spatial resolution of up to 10 metres—could provide estimates of these variables at unprecedented (i.e., sub-field) resolution. Using synthetic data, past research has demonstrated the potential of Sentinel-2 for estimating crop variables. Nonetheless, research involving a robust analysis of the Sentinel-2 bands for supporting agricultural applications is limited. We evaluated the potential of Sentinel-2 data for retrieving winter wheat LAI, leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC). In coordination with destructive and non-destructive ground measurements, we acquired multispectral data from an Unmanned Aerial Vehicle (UAV)-mounted sensor measuring key Sentinel-2 spectral bands (443 to 865 nm). We applied Gaussian processes regression (GPR) machine learning to determine the most informative Sentinel-2 bands for retrieving each of the variables. We further evaluated the GPR model performance when propagating observation uncertainty. When applying the best-performing GPR models without propagating uncertainty, the retrievals had a high agreement with ground measurements—the mean R2 and normalised root-mean-square error (NRMSE) were 0.89 and 8.8%, respectively. When propagating uncertainty, the mean R2 and NRMSE were 0.82 and 11.9%, respectively. When accounting for measurement uncertainty in the estimation of LAI and CCC, the number of most informative Sentinel-2 bands was reduced from four to only two—the red-edge (705 nm) and near-infrared (865 nm) bands. 
This research demonstrates the value of the Sentinel-2 spectral characteristics for retrieving critical variables that can support more sustainable crop management practices. Full article
Figure 1">
Figure 1: Winter wheat trial plot layout including (a) an aerial image (acquired on 15 May 2018) and (b) the Latin square experimental design with varying levels of nitrogen application. Destructive sample analysis was carried out at the five plots highlighted within the dashed line.
Figure 2: Comparison of in situ non-destructive and mean destructive sample measurements acquired on three dates (25 May, 13 June and 4 July 2018) for five winter wheat trial plots, including (a) in situ measured Leaf Area Index (LAI) (SunScan) compared to LAI measured from destructive samples and (b) chlorophyll meter (soil plant analyses development, SPAD) measured leaf chlorophyll content (LCC) compared to leaf nitrogen content measurements. Note: the fit line (black) is defined using reduced major axis; y-axis error bars are derived from the standard deviations of the non-destructive measurements and x-axis error bars represent the mean range between the two destructive samples analysed for each plot.
Figure 3: Sentinel-2 band analysis using Gaussian processes regression (GPR) modelling trained on multi-date wheat observations, both without (left) and with (right) accounting for uncertainty in observations, for deriving Leaf Area Index (LAI), leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC). Statistics include the means and standard deviations of the coefficient of determination (R²) and normalised root-mean-square error (NRMSE) from a three-fold cross-validation of the corresponding GPR models. The best-performing GPR model, selected based on the lowest Akaike information criterion (AIC) value, is shown in boldface.
Figure 4: Comparison of the normalised spectral responses and regression analysis between the most sensitive MAIA/Sentinel-2 bands and non-destructive ground measurements of (a) bias-corrected LAI and (b) LCC.
Figure 5: Independent GPR model evaluation for estimating (a) LAI, (b) LCC and (c) CCC. Note: the fit line (black) is defined using reduced major axis; y-axis error bars are derived from the standard deviations of the GPR model estimates and x-axis error bars represent the standard deviations of the corresponding non-destructive ground measurements (available for LAI and LCC only).
Figure 6: Independent evaluation of a multivariate linear regression model for (a) LAI, (b) LCC and (c) CCC.
31 pages, 27866 KiB  
Article
Supervised Distance-Based Feature Selection for Hyperspectral Target Detection
by Amir Moeini Rad, Ali Akbar Abkar and Barat Mojaradi
Remote Sens. 2019, 11(17), 2049; https://doi.org/10.3390/rs11172049 - 30 Aug 2019
Cited by 5 | Viewed by 3126
Abstract
Feature/band selection (FS/BS) for target detection (TD) attempts to select features/bands that increase the discrimination between the target and the image background. Moreover, TD usually suffers from background interference. Therefore, bands that help detectors to effectively suppress the background and magnify the target signal are considered to be more useful. In this regard, three supervised distance-based filter FS methods are proposed in this paper. The first method is based on the TD concept. It uses the image autocorrelation matrix and the target signature in the detection space (DS) for FS. Features that increase the first-norm distance between the target energy and the mean energy of the background in DS are selected as optimal. The other two methods use background modeling via image clustering. The cluster mean spectra, along with the target spectrum, are then transferred into DS. Orthogonal subspace projection distance (OSPD) and first-norm distance (FND) are used as two FS criteria to select optimal features. Two datasets, HyMap RIT and SIM.GA, are used for the experiments. Several measures, i.e., true positives (TPs), false alarms (FAs), target detection accuracy (TDA), total negative score (TNS), and the receiver operating characteristics (ROC) area under the curve (AUC), are employed to evaluate the proposed methods and to investigate the impact of FS on the TD performance. The experimental results show that, compared with five existing FS methods, the proposed FS methods improve the performance of common target detectors and help them yield better results. Full article
(This article belongs to the Section Remote Sensing Image Processing)
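The FND criterion described in the abstract can be illustrated with a short NumPy sketch that ranks bands by the summed first-norm (L1) distance between a target spectrum and the background cluster means. This is a simplified illustration under stated assumptions: the five-band spectra are invented, and the paper's transformation into the detection space is omitted here.

```python
import numpy as np

def fnd_band_ranking(target, cluster_means):
    # Per-band first-norm (L1) distance between the target spectrum and
    # each background cluster mean, summed over clusters; bands with the
    # largest summed distance are considered the most discriminative.
    scores = np.abs(cluster_means - target[None, :]).sum(axis=0)
    return np.argsort(scores)[::-1], scores

# Invented 5-band spectra: the target stands out from both background
# clusters almost only in band 3.
target = np.array([0.20, 0.30, 0.10, 0.90, 0.40])
clusters = np.array([
    [0.20, 0.30, 0.10, 0.10, 0.40],
    [0.25, 0.30, 0.15, 0.20, 0.45],
])
order, scores = fnd_band_ranking(target, clusters)
# order[0] == 3: band 3 is selected as the first optimal band
```

Selecting bands greedily in descending score order mirrors the sorted feature subsets evaluated in the paper's Figure 7-style experiments.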
Figure 1">
Figure 1: The HyMap dataset color image of Cook City, Montana, USA.
Figure 2: Zoomed views of the fabric panels (left) and vehicles (right) used as targets in the HyMap dataset.
Figure 3: The reflectance spectra of targets in the HyMap dataset.
Figure 4: The SIM.GA dataset color image of Viareggio City, Tuscany, Italy.
Figure 5: The radiance spectra of targets in the SIM.GA dataset.
Figure 6: (a) The target t and background e spectra in the detection space (DS). (b) The absolute values of the difference between t and e in all features obtained by the autocorrelation-based feature selection (AFS) method in the full-dimensional DS. (f_6) is the first optimal feature.
Figure 7: The distance between the target and the background in different feature subsets selected by AFS. The first feature is the first optimal feature and the last feature subset contains all of the selected and sorted features. (a) The distance obtained for the first target and (b) all targets in the HyMap dataset. (c) The distance obtained for the first target and (d) all targets in the SIM.GA dataset.
Figure 8: (a) Features represented in a 2-dimensional transformed detection space (TDS) built by the target and cluster 1 using all the original feature vectors in the orthogonal subspace projection distance (OSPD) method. osp_6 is the orthogonal distance vector from feature (f_6) to the space diagonal. (b) The values of the orthogonal distances for all features in the (p+1)-dimensional TDS. (f_6) is the first optimal feature.
Figure 9: (a) The target t and background C̃ spectra in the detection space (DS). (b) The sum of first-norm distances between the target and the cluster means obtained by the first-norm distance (FND) method for each feature in the full-dimensional DS. Feature (f_74) is selected as the first optimal feature.
Figure 10: The number of false alarm (FA) pixels produced by the constrained energy minimization (CEM) and adaptive matched filter (AMF) detectors using feature subsets selected by different feature selection (FS) methods. (a,b) show the results for the HyMap dataset and (c,d) the results for the SIM.GA dataset. The curves in solid color lines are generated by the proposed FS methods. The black horizontal line in each figure indicates the full-band (FB) result for comparison with FS-based results.
Figure 11: The best target detection accuracy (TDA) obtained by the CEM and AMF detectors using different feature selection (FS) methods. (a,b) show the results for the HyMap dataset; (c,d) the results for the SIM.GA dataset. The curves in solid color lines are generated by the proposed FS methods. The black horizontal line in each figure indicates the full-band (FB) result for comparison with FS-based results.
Figure 12: The receiver operating characteristics area under the curve (ROC AUC) versus the number of optimal features in each feature subset using the CEM and AMF detectors. (a,b) show the results of the HyMap dataset; (c,d) the results of the SIM.GA dataset. The curves in solid color lines are generated by the proposed FS methods. The black horizontal lines indicate the full-band result.
Figure 13: The ratio of target detection accuracy (TDA) to the number of features (#F) for the AFS, OSPD and FND methods. Detection was conducted by CEM using the HyMap dataset.
Figure 14: The first target and the HyMap dataset: (a) the optimal bands selected by AFS, represented by red vertical lines; (b) the last ten and least discriminative bands selected by AFS, represented by black vertical lines; (c) the target-background first-norm distances in the detection space with the last ten bands displayed as black spots. For the first target and the SIM.GA dataset, (d), (e), and (f) are similar to (a), (b), and (c), respectively. In (a,b) and (d,e), the wider green curve represents the target signature and the blue curves are the cluster mean spectra.
17 pages, 4778 KiB  
Letter
PLANHEAT’s Satellite-Derived Heating and Cooling Degrees Dataset for Energy Demand Mapping and Planning
by Panagiotis Sismanidis, Iphigenia Keramitsoglou, Stefano Barberis, Hrvoje Dorotić, Benjamin Bechtel and Chris T. Kiranoudis
Remote Sens. 2019, 11(17), 2048; https://doi.org/10.3390/rs11172048 - 30 Aug 2019
Cited by 2 | Viewed by 3826
Abstract
The urban heat island (UHI) effect influences the heating and cooling (H&C) energy demand of buildings and should be taken into account in H&C energy demand simulations. To provide information about this effect, the PLANHEAT integrated tool—which is a GIS-based, open-source software tool for selecting, simulating and comparing alternative low-carbon and economically sustainable H&C scenarios—includes a dataset of 1 × 1 km hourly heating and cooling degrees (HD and CD, respectively). HD and CD are energy demand proxies that are defined as the deviation of the outdoor surface air temperature from a base temperature, above or below which a building is assumed to need heating or cooling, respectively. PLANHEAT’s HD and CD are calculated from a dataset of gridded surface air temperatures that have been derived using satellite thermal data from Meteosat-10 Spinning Enhanced Visible and Near-Infrared Imager (SEVIRI). This article describes the method for producing this dataset and presents the results for Antwerp (Belgium), which is one of the three validation cities of PLANHEAT. The results demonstrate the spatial and temporal information of PLANHEAT’s HD and CD dataset, while the accuracy assessment reveals that they agree well with reference values retrieved from in situ surface air temperatures. This dataset is an example of application-oriented research that provides location-specific results with practical utility. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Urban Climatology)
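Given the definition of HD and CD in the abstract (deviation of the outdoor surface air temperature from a base temperature, clipped at zero), the hourly calculation can be sketched as follows. The base temperatures of 15.5 °C (heating) and 22.0 °C (cooling) are the values used in the paper's figures; the temperature series below is invented for the example.

```python
import numpy as np

HD_BASE = 15.5  # °C, heating base temperature used in the paper
CD_BASE = 22.0  # °C, cooling base temperature used in the paper

def hourly_degrees(t_air):
    # Hourly heating/cooling degrees from surface air temperature (°C):
    # the deviation below (HD) or above (CD) the base temperature,
    # clipped at zero. Annual HD/CD sums are then simple totals.
    t = np.asarray(t_air, dtype=float)
    hd = np.clip(HD_BASE - t, 0.0, None)
    cd = np.clip(t - CD_BASE, 0.0, None)
    return hd, cd

# Four invented hourly temperatures (°C)
temps = np.array([5.0, 15.5, 20.0, 25.0])
hd, cd = hourly_degrees(temps)
# hd = [10.5, 0, 0, 0]; cd = [0, 0, 0, 3.0]
```

Summing `hd` and `cd` over a year per 1 × 1 km grid cell gives the aggregated annual maps shown in the paper's Figures 3 and 4.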
Figure 1">
Figure 1: The workflow for retrieving the heating degree (HD) and cooling degree (CD) data from a multiyear time-series of hourly surface air temperatures.
Figure 2: The annual cycle parameter (ACP) retrieval principle.
Figure 3: (a) The 2012 Urban Atlas land cover and land use (LCLU) for Antwerp's functional urban area (FUA) and (b) the PLANHEAT aggregated (annual sum) HD calculated using a base temperature of 15.5 °C.
Figure 4: (a) The 2012 Urban Atlas LCLU for Antwerp's FUA and (b) the PLANHEAT aggregated (annual sum) CD calculated using a base temperature of 22.0 °C.
Figure 5: The mean (a) HD_Annual (base temperature: 15.5 °C) and (b) CD_Annual values (with the corresponding 95% confidence intervals) for the dominant local climate zone (LCZ) of Antwerp's FUA. LCZ 2: compact midrise; LCZ 3: compact low-rise; LCZ 6: open low-rise; LCZ 8: large low-rise; LCZ 9: sparsely built; LCZ A: dense trees; LCZ B: scattered trees; and LCZ D: low plants.
Figure 6: The hourly mean (a) HD (base temperature: 15.5 °C) and (b) CD for Antwerp's LCZ 2.
Figure 7: The hourly sum of HD (base temperature: 15.5 °C) for Antwerp's FUA for the time period between 1 October and 31 March.
Figure 8: Density plots presenting the IAASARS/NOA vs. in situ surface air temperatures for (a) the Sint Katelijne-Waver weather station and (b) the Deurne weather station.
Figure 9: Scatterplots presenting the PLANHEAT vs. reference HD (base temperature: 15.5 °C) and CD for (a,c) the Sint Katelijne-Waver and (b,d) the Deurne weather stations, respectively.
34 pages, 12627 KiB  
Article
Refugee Camp Monitoring and Environmental Change Assessment of Kutupalong, Bangladesh, Based on Radar Imagery of Sentinel-1 and ALOS-2
by Andreas Braun, Falah Fakhri and Volker Hochschild
Remote Sens. 2019, 11(17), 2047; https://doi.org/10.3390/rs11172047 - 30 Aug 2019
Cited by 39 | Viewed by 8656
Abstract
Approximately one million refugees of the Rohingya minority population in Myanmar crossed the border to Bangladesh on 25 August 2017, seeking shelter from systematic oppression and persecution. This led to a dramatic expansion of the Kutupalong refugee camp within a couple of months and a decrease of vegetation in the surrounding forests. As many humanitarian organizations demand frameworks for camp monitoring and environmental impact analysis, this study suggests a workflow based on spaceborne radar imagery to measure the expansion of settlements and the decrease of forests. Eleven image pairs of Sentinel-1 and ALOS-2, as well as a digital elevation model, were used for a supervised land cover classification. These were trained on automatically-derived reference areas retrieved from multispectral images to reduce required user input and increase transferability. Results show an overall decrease of vegetation of 1500 hectares, of which 20% were used to expand the camp and 80% were deforested, which matches findings from other studies of this case. The time-series analysis reduced the impact of seasonal variations on the results, and accuracies between 88% and 95% were achieved. The most important input variables for the classification were vegetation indices based on synthetic aperture radar (SAR) backscatter intensity, but topographic parameters also played a role. Full article
Figure 1">
Figure 1: Map of the study area and footprints of the satellite images. ESA = European Space Agency.
Figure 2: Workflow of the land cover classification for the area around Kutupalong. NDVI = Normalized Difference Vegetation Index, EVI = Enhanced Vegetation Index, MNDWI = Modified Normalized Difference Water Index, BUI = Built-up Index, IBI = Index-based Built-up Index, NDBI = Normalized Difference Built-up Index, HH = horizontal-horizontal, HV = horizontal-vertical, VH = vertical-horizontal, VV = vertical-vertical, S1 = Sentinel-1, A2 = ALOS-2, SRTM = Shuttle Radar Topography Mission, CART = Classification and Regression Tree, LK = average like-polarized magnitude, CS = average cross-polarized magnitude, CSI = Canopy Structure Index, VSI = Volume Scattering Index.
Figure 3: Selected SAR features in the study area (outlined by the red polygon in Figure 1) based on SAR data from March 2019. (A,B) composite HH (red), HV (green), and VV (blue) for filter images of 5 × 5 and 25 × 25 pixels; (C) combined indices CI1 (red, HH+VV), CI2 (green, HV+VH), and CI3 (blue, HH-VV); (D) Canopy Structure Index (red to yellow) for a median filter image of 9 × 9 pixels. Black line: camp outline in March 2019.
Figure 4: Land cover classification of the study area between March 2016 and March 2019.
Figure 5: Land cover in hectares of the land cover classes vegetation, open, water, and camp. (A) full study area (including precipitation); (B) 2 km buffer around the camp (as illustrated by the dashed gray line in Figure 3D).
Figure 6: Absolute change in land cover between March 2016 and March 2019, as detected in this study. Note: dark tones indicate constant land cover, while bright tones indicate transitions between two classes. For the sake of readability, gray colors were assigned to transitions related to water.
Figure 7: Land cover change between July 2016 and January 2019. Left: area around Kutupalong. Each square consists of nine sub-squares representing the dominant land cover for each date, as coded in the legend on the top right. Right: selected trends in the entire study area based on the first and last three dates.
Figure 8: Correspondence and differences between the results of this study and the land cover maps produced by Hassan et al. in 2018 [23] for the pre-influx situation (A) and the post-influx situation (B). Pixels in green, beige, and red have the same class in both studies (V = vegetation, O = open, C = camp). The other hues of the color matrix indicate the differences regarding these classes assigned in this study (columns) and the study of Hassan et al. (rows).
Figure 9: Frequency of window sizes of the 500 most important features used for the classification. Note: a window size of one pixel represents the unfiltered raster products.
Figure A1: CART for March 2016.
Figure A2: CART for July 2016.
Figure A3: CART for October 2016.
Figure A4: CART for February 2017.
Figure A5: CART for June 2017.
Figure A6: CART for July 2017.
Figure A7: CART for February 2018.
Figure A8: CART for June 2018.
Figure A9: CART for July 2018.
Figure A10: CART for January 2019.
Figure A11: CART for March 2019.
24 pages, 11147 KiB  
Article
UAV-Based Slope Failure Detection Using Deep-Learning Convolutional Neural Networks
by Omid Ghorbanzadeh, Sansar Raj Meena, Thomas Blaschke and Jagannath Aryal
Remote Sens. 2019, 11(17), 2046; https://doi.org/10.3390/rs11172046 - 30 Aug 2019
Cited by 101 | Viewed by 8444
Abstract
Slope failures occur when parts of a slope collapse abruptly under the influence of gravity, often triggered by a rainfall event or earthquake. The resulting slope failures often cause problems in mountainous or hilly regions, and the detection of slope failure is therefore an important topic for research. Most of the methods currently used for mapping and modelling slope failures rely on classification algorithms or feature extraction, but the spatial complexity of slope failures, the uncertainties inherent in expert knowledge, and problems in transferability all combine to inhibit slope failure detection. In an attempt to overcome some of these problems we have analyzed the potential of deep learning convolutional neural networks (CNNs) for slope failure detection, in an area along a road section in the northern Himalayas, India. We used optical data from unmanned aerial vehicles (UAVs) over two separate study areas. Different CNN designs were used to produce eight different slope failure distribution maps, which were then compared with manually extracted slope failure polygons using different accuracy assessment metrics such as precision, F-score, and mean intersection-over-union (mIOU). A slope failure inventory data set was produced for each of the study areas using a frequency-area distribution (FAD). The best-performing CNN approach (almost 90% precision, 85% F-score, 74% mIOU) used a window size of 64 × 64 pixels for the sample patches and included slope data as an additional input layer. The additional information from the slope data helped to discriminate between slope failure areas and roads, which had similar spectral characteristics in the optical imagery. 
We concluded that the effectiveness of CNNs for slope failure detection was strongly dependent on their design (i.e., the window size selected for the sample patch, the data used, and the training strategies), but that CNNs are currently only designed by trial and error. While CNNs can be powerful tools, such trial and error strategies make it difficult to explain why a particular pooling or layer numbering works better than any other. Full article
(This article belongs to the Special Issue Mass Movement and Soil Erosion Monitoring Using Remote Sensing)
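The accuracy metrics named in the abstract (precision, F-score, mIOU) can be computed from binary detection masks as sketched below. The averaging of IOU over the failure and background classes is one common definition of mIOU, assumed here; the tiny masks are invented for the example.

```python
import numpy as np

def detection_metrics(pred, truth):
    # Pixel-wise precision, F-score and mean intersection-over-union
    # (mIOU, averaged over the failure and background classes) for two
    # binary masks of the same shape. No zero-division guards, for brevity.
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # detected failure pixels
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed failure pixels
    tn = np.sum(~pred & ~truth)  # correct background
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou_pos = tp / (tp + fp + fn)
    iou_neg = tn / (tn + fp + fn)
    return precision, f_score, (iou_pos + iou_neg) / 2

# Invented 2 x 3 prediction and ground-truth masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
p, f, m = detection_metrics(pred, truth)
```

In the paper these quantities are derived from the spatial overlap of detected and manually mapped slope failure polygons rather than from raw pixel masks, but the arithmetic is the same once overlaps are rasterized.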
Figure 1">
Figure 1: (A) The geographic location of the study areas, (B) the area used for training and (C) the area used for testing.
Figure 2: (A) The unmanned aerial vehicle (UAV) used to obtain remote sensing imagery for this study (a DJI Mavic 2 pro), and (B) field photographs of slope failures observed during a field visit to the Himalayas in April 2019.
Figure 3: Camera locations and image overlap density: (A) training area, and (B) testing area. Blue areas indicate the highest overlap densities and red areas correspond to low overlap densities.
Figure 4: Digital elevation model (DEM) and slope map produced for both training and testing areas.
Figure 5: Derived inventories for (A) three zones of the training area, and (B) the testing area.
Figure 6: Schematic representation of the main components of a non-cumulative FAD for a landslide inventory [33].
Figure 7: Optimizing sample patch selection from the three training zones through the use of a fishnet tool. Red points represent slope failure areas; yellow points correspond to areas with no slope failures.
Figure 8: CNN architectures with (a) a nine-layer-depth CNN and (b) a six-layer-depth CNN, trained separately, either with just three RGB spectral layers or combining slope data with the RGB data. Input window sizes of 32 × 32 and 48 × 48 pixels were used for the six-layer-depth CNN, and of 64 × 64 and 80 × 80 pixels for the nine-layer-depth CNN.
Figure 9: Dependence of the probability densities on the areas of three slope failure inventories for (a) the training area, and (b) the testing area.
Figure 10: An illustration of convolution input sample patches with different window sizes for (a) two areas with no slope failures and (b) two areas containing slope failures.
Figure 11: Landslide detection results using different CNN approaches, training datasets, and parameters. The CNN approaches have window sizes of (A) 32 × 32, (B) 48 × 48, (C) 64 × 64 and (D) 80 × 80, trained with RGB optical data alone and together with slope data.
Figure 12: (A) Inventory of slope failure areas. (B) Slope failure areas detected by CNNs. (C) True positive (TP), false positive (FP) and false negative (FN) areas identified by comparing spatial overlaps between the polygons of (A,B).
Figure 13: Illustration of the area of overlap and that of the union.
Figure 14: Enlarged maps of CNN_32_RGB and CNN_32_RGB,S, illustrating the impact of adding slope data to the optical data to differentiate the (A,B) road body, (C) agricultural land, (D) river bed, and (E) bare soil and dirt track from slope failures.
Figure 15: The influence of different training data sets (RGB, and RGB plus slope data) on the positive predictive value (PPV) (left) and the true positive rate (TPR) (right) of a CNN approach, using multiple sample patch window sizes.
18 pages, 7110 KiB  
Article
Riverine Plastic Litter Monitoring Using Unmanned Aerial Vehicles (UAVs)
by Marlein Geraeds, Tim van Emmerik, Robin de Vries and Mohd Shahrizal bin Ab Razak
Remote Sens. 2019, 11(17), 2045; https://doi.org/10.3390/rs11172045 - 30 Aug 2019
Cited by 105 | Viewed by 14212
Abstract
Plastic debris has become an abundant pollutant in marine, coastal and riverine environments, posing a large threat to aquatic life. Effective measures to mitigate and prevent marine plastic pollution require a thorough understanding of its origin and eventual fate. Several models have estimated that land-based sources are the main source of marine plastic pollution, although field data to substantiate these estimates remain limited. Current methodologies to measure riverine plastic transport require the availability of infrastructure and accessible riverbanks, but, to obtain measurements on a higher spatial and temporal scale, new monitoring methods are required. This paper presents a new methodology for quantifying riverine plastic debris using Unmanned Aerial Vehicles (UAVs), including a first application on Klang River, Malaysia. Additional plastic measurements were done in parallel with the UAV-based approach to make comparisons between the two methods. The spatiotemporal distribution of the plastics obtained with both methods show similar patterns and variations. With this, we show that UAV-based monitoring methods are a promising alternative for currently available approaches for monitoring riverine plastic transport, especially in remote and inaccessible areas. Full article
(This article belongs to the Section Environmental Remote Sensing)
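The core quantification step of such a UAV survey can be sketched as follows: plastic items counted per cross-river segment over a timed hover yield a per-segment flux, which is summed across the river width. This is a simplified sketch of the general approach, not the paper's exact procedure; the segment counts, hover duration, and segment width are invented for the example.

```python
def transport_profile(counts, duration_s, segment_width_m):
    # counts: plastic items observed per cross-river segment during one
    # timed observation of duration_s seconds; each segment spans
    # segment_width_m metres of river width.
    flux = [c / duration_s for c in counts]        # items/s per segment
    density = [f / segment_width_m for f in flux]  # items/s per metre
    return sum(flux), density                      # total items/s, profile

# Hypothetical survey: 5 segments of 10 m each, 120 s observation per segment
counts = [12, 30, 45, 20, 8]
total_flux, density = transport_profile(counts, 120.0, 10.0)
# total_flux = 115 items / 120 s ~ 0.958 items/s across the river
```

Summing the per-segment fluxes from bank to bank gives the cumulative cross-sectional transport, which is the quantity compared against the bridge-based visual counts.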
Figure 1">
Figure 1: Overview of the drone flight path used for the aerial survey. Under normal conditions, H1 = 5 m, for quantitative plastic transport estimation and plastic classification, and H2 = 15 m, for qualitatively indicating plastic transport "hot spots" along the river transect. H3 is calculated based on the total river width, B.
Figure 2: (A) Measurement locations along a transect at the case study site for the UAV-based measuring procedure on the Klang River, Malaysia. Measurement location numbering starts at the opposing riverbank relative to the take-off location. (B) Overview of the measuring locations along the Jalan Tengku Kalana bridge in the city of Klang. (C) Overview of the Klang River and the locations of the observation sites relative to each other. "A" indicates the visual counting observation site at the Jalan Tengku Kalana bridge, and "B" indicates the drone observation site.
Figure 3: Examples of aerial images obtained during the aerial survey. (A) An aerial image mainly showing floating plastic debris and some organic debris. (B) An aerial image showing (partially) embedded riverbank plastics.
Figure 4: Deployment of the sampling net at the additional plastic sampling location (credit: Florent Beauverd).
Figure 5: Cross-sectional profiles of observed plastic density and fictional plastic density for the case H_actual = H_recorded + ΔH over the river width for 30 April, 1 May and 4 May 2019. The maximum of the error band indicates the larger of the plastic density at height H_recorded + ΔH and the plastic density at height H_recorded plus the measurement error σ/√N; the minimum of the error band indicates the smaller of the plastic density at height H_recorded and the plastic density at height H_recorded + ΔH minus the measurement error of the sample. (A) The cross-sectional profile of the mean plastic density on 30 April; (B) on 1 May 2019; and (C) on 4 May 2019.
Figure 6: (A) Plot of the cumulative plastic transport over the river width, measured with both the aerial survey and the bridge-based visual survey. The cumulative plastic transport over the river width for resampled visual survey data is also indicated. (B) Overview of the distribution of plastic transport over the river width, measured using the aerial survey and the bridge-based visual counts. The distribution for resampled visual survey data is also indicated.
Full article ">Figure 7
<p>Overview of measured plastic statistics. (<b>A</b>) Spatiotemporal variation of the plastic density on 4 May 2019. The mean plastic density without taking into account the possible error introduced by the altitude difference <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>H</mi> </mrow> </semantics></math>, and the plastic density calculated for the case that the maximum recorded altitude difference <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>H</mi> </mrow> </semantics></math> is introduced everywhere during flight are indicated. The shown densities are absolute, i.e., not taking into account the flow direction of the river. (<b>B</b>) The recorded flow velocity obtained by visual measurements for all individual segments at the Jalan Tengku Kalana bridge on 4 May 2019. Negative values indicate an upstream current. (<b>C</b>) Overview of the mean plastic density in <math display="inline"><semantics> <msup> <mrow> <mrow> <mo>#</mo> <mo>/</mo> <mi mathvariant="normal">m</mi> </mrow> </mrow> <mn>2</mn> </msup> </semantics></math> from drone measurements averaged over the width of the bridge, set out against the total plastic concentration in <math display="inline"><semantics> <msup> <mrow> <mrow> <mo>#</mo> <mo>/</mo> <mi mathvariant="normal">m</mi> </mrow> </mrow> <mn>3</mn> </msup> </semantics></math> as measured by plastic sampling.</p>
Full article ">
26 pages, 22361 KiB  
Article
Time Series of Landsat Imagery Shows Vegetation Recovery in Two Fragile Karst Watersheds in Southwest China from 1988 to 2016
by Jie Pei, Li Wang, Xiaoyue Wang, Zheng Niu, Maggi Kelly, Xiao-Peng Song, Ni Huang, Jing Geng, Haifeng Tian, Yang Yu, Shiguang Xu, Lei Wang, Qing Ying and Jianhua Cao
Remote Sens. 2019, 11(17), 2044; https://doi.org/10.3390/rs11172044 - 30 Aug 2019
Cited by 34 | Viewed by 4923
Abstract
Since the implementation of China’s afforestation and conservation projects during recent decades, an increasing number of studies have reported greening trends in the karst regions of southwest China using coarse-resolution satellite imagery, but small-scale changes in these heterogeneous landscapes remain largely unknown. Focusing on two typical karst regions in the Nandong and Xiaojiang watersheds in Yunnan province, we processed 2,497 Landsat scenes from 1988 to 2016 using the Google Earth Engine cloud platform and analyzed vegetation trends and associated drivers. We found that both watersheds experienced significant increasing trends in annual fractional vegetation cover, at rates of 0.0027 year−1 and 0.0020 year−1, respectively. Notably, the greening trends intensified during the conservation period (2001–2016) even under unfavorable climate conditions. Human-induced ecological engineering was the primary factor behind the increased greenness. Moreover, vegetation change responded differently to variations in topographic gradients and lithological types: relatively more vegetation recovery was found in regions with moderate slopes and elevations, and on pure limestone, interbedded limestone and dolomite, and impure carbonate rocks than on non-karst rocks. Partial correlation analysis of vegetation trends with temperature and precipitation trends suggested that climate change played a minor role in vegetation recovery. Our findings contribute to an improved understanding of the mechanisms behind vegetation changes in karst areas and may provide scientific support for local afforestation and conservation policies. Full article
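The watershed-level rates quoted above (0.0027 year−1 and 0.0020 year−1) are slopes of linear fits to the annual FVC time series. A minimal sketch of that trend computation; the study itself ran its processing in Google Earth Engine and also tested significance levels, which is omitted here:

```python
import numpy as np

def annual_trend(years, annfvc):
    """Least-squares slope (per year) of an annual FVC time series."""
    years = np.asarray(years, dtype=float)
    annfvc = np.asarray(annfvc, dtype=float)
    slope, _intercept = np.polyfit(years, annfvc, 1)
    return slope

# Synthetic series with a known greening rate of 0.0027 per year,
# spanning the paper's full 1988-2016 study period.
years = np.arange(1988, 2017)
fvc = 0.45 + 0.0027 * (years - 1988)
```

Applied per pixel rather than per watershed, the same slope computation yields the spatial trend maps of Figures 6 and 7, and fitting the 1988–2000 and 2001–2016 subsets separately gives the reference-period and conservation-period trends.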
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study area: (<b>a</b>) An example of karst rocky desertification on hillsides near Xibeile Township, Mengzi city, Yunnan province (photo taken during a field survey in 2018); (<b>b</b>) the location of Yunnan province and Kunming (the provincial capital city), as well as of Nandong and Xiaojiang watersheds; (<b>c</b>) land cover types of the Nandong watershed in 2015; and (<b>d</b>) land cover types of the Xiaojiang watershed in 2015; and (<b>e</b>) lithological types and township/city seats in Nandong watershed; and (<b>f</b>) lithological types and township/county seats in Xiaojiang watershed. Main lithological attributes include PL (pure limestone), LD (limestone and dolomite interbedded layer), IC (impure carbonate rocks), and NK (non-karst rocks).</p>
Full article ">Figure 2
<p>Workflow of detection and attribution of vegetation greening trends in two typical karst watersheds of Nandong and Xiaojiang in southwest China from 1988 to 2016.</p>
Full article ">Figure 3
<p>Result of accuracy validation of the sub-pixel model in: (<b>a</b>) Nandong, and (<b>b</b>) Xiaojiang watershed.</p>
Full article ">Figure 4
<p>Spatial distribution of average annual fractional vegetation cover (annFVC) for different time periods. Top row is the Nandong watershed: (<b>a</b>) Full time series (1988–2016); (<b>b</b>) reference period (1988–2000); and (<b>c</b>) conservation period (2001–2016); and bottom row is Xiaojiang watershed; (<b>d</b>) full time series (1988–2016); (<b>e</b>) reference period (1988–2000); and (<b>f</b>) conservation period (2001–2016).</p>
Full article ">Figure 5
<p>The temporal trends in annual fractional vegetation cover (annFVC) at the watershed level from 1988 to 2016 (full time series), during 1988–2000 (reference period), and during 2001–2016 (conservation period) in: (<b>a</b>) Nandong, and (<b>b</b>) Xiaojiang watershed. Slopes of the trends during these three periods and their corresponding significance levels are shown. The black solid line represents the regression fit to the annFVC data during 1988–2016, while the red and green lines denote the regression fit during 1988–2000 and 2001–2016, respectively.</p>
Full article ">Figure 6
<p>The spatial distribution of vegetation trends for different periods in the Nandong watershed: (<b>a</b>) Full time series (1988–2016); (<b>b</b>) reference period (1988–2000); (<b>c</b>) conservation period (2001–2016); and (<b>d</b>) vegetation trend slope difference between the conservation period (2001–2016) and reference period (1988–2000).</p>
Full article ">Figure 7
<p>The spatial distribution of vegetation trends for different periods in the Xiaojiang watershed: (<b>a</b>) Full time series (1988–2016); (<b>b</b>) reference period (1988–2000); (<b>c</b>) conservation period (2001–2016); and (<b>d</b>) vegetation trend slope difference between the conservation period (2001–2016) and reference period (1988–2000).</p>
Full article ">Figure 8
<p>The spatial distribution of the terrain niche index (TNI) values in: (<b>a</b>) Nandong; and (<b>b</b>) Xiaojiang watersheds; and the pixel proportion of vegetation cover trends for different terrain niche index (TNI) intervals in; (<b>c</b>) Nandong; and (<b>d</b>) Xiaojiang watersheds.</p>
Full article ">Figure 9
<p>(<b>a</b>) Comparison of average annFVC under different lithological settings in the karst watersheds of Nandong and Xiaojiang during the study period of 1988–2016; and the pixel proportion of vegetation trends under different lithological attributes in: (<b>b</b>) Nandong; and (<b>c</b>) Xiaojiang watersheds. Main lithological attributes include PL (pure limestone), LD (limestone and dolomite interbedded layer), IC (impure carbonate rocks), and NK (non-karst rocks).</p>
Full article ">Figure 10
<p>The change trend of human-induced residual of vegetation changes from 1988 to 2016 in: (<b>a</b>) Nandong; and (<b>b</b>) Xiaojiang watersheds.</p>
Full article ">Figure 11
<p>Annual (right y-axis) and cumulative (left y-axis) areas of government-funded afforestation and conservation efforts from 2002 to 2016 in: (<b>a</b>) Nandong; and (<b>b</b>) Xiaojiang watersheds.</p>
Full article ">
17 pages, 7198 KiB  
Article
A New Vegetation Index to Detect Periodically Submerged Mangrove Forest Using Single-Tide Sentinel-2 Imagery
by Mingming Jia, Zongming Wang, Chao Wang, Dehua Mao and Yuanzhi Zhang
Remote Sens. 2019, 11(17), 2043; https://doi.org/10.3390/rs11172043 - 29 Aug 2019
Cited by 123 | Viewed by 12175
Abstract
Mangrove forests are tropical trees and shrubs that grow in sheltered intertidal zones. Accurately mapping mangrove forests is a great challenge for remote sensing because mangroves are periodically submerged by tidal floods. Traditionally, multi-tide images were needed to remove the influence of water; however, such images are often unavailable due to rainy climates and uncertain local tidal conditions. Extracting mangrove forests from single-tide imagery is therefore of great importance. In this study, the reflectances of the red-edge bands in Sentinel-2 imagery were utilized to establish a new vegetation index that is sensitive to submerged mangrove forests. Specifically, the red and short-wave near-infrared bands were used to build a linear baseline; the average reflectance value of the four red-edge bands above the baseline is defined as the Mangrove Forest Index (MFI). To evaluate the MFI, its capability to detect mangrove forests was quantitatively compared against four widely used vegetation indices (VIs). Additionally, the practical utility of the MFI was validated by applying it to three mangrove forest sites worldwide. Results showed that: (1) theoretically, the Jensen–Shannon divergence demonstrated that submerged mangrove forest and water pixels are farthest apart in the MFI compared with the other VIs. In addition, a boxplot analysis showed that all submerged mangrove forests could be separated from the water background in the MFI image. Furthermore, in the MFI image the threshold separating mangrove forests from water is a constant equal to zero. (2) Practically, after applying the MFI to three global sites, 99–102% of submerged mangrove forests were successfully extracted. Although some uncertainties and limitations remain, the MFI offers great benefits for accurately mapping mangrove forests as well as other coastal and aquatic vegetation worldwide. Full article
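From the description above, the MFI of a pixel is the mean height of the four Sentinel-2 red-edge reflectances (B5, B6, B7, B8A) above a straight baseline drawn between a red band and a short-wave near-infrared band. The choice of B4 (665 nm) and B11 (1610 nm) as baseline endpoints in the sketch below is an assumption inferred from that wording, not a confirmed detail of the paper:

```python
import numpy as np

# Sentinel-2 MSI band centre wavelengths (nm); B5-B8A are the red-edge bands.
WL = {"B4": 665.0, "B5": 705.0, "B6": 740.0, "B7": 783.0,
      "B8A": 865.0, "B11": 1610.0}
RED_EDGE = ("B5", "B6", "B7", "B8A")

def mfi(refl, lo="B4", hi="B11"):
    """Mean height of the four red-edge reflectances above a linear baseline
    between the `lo` and `hi` bands (endpoint choice is an assumption)."""
    slope = (refl[hi] - refl[lo]) / (WL[hi] - WL[lo])
    heights = [refl[b] - (refl[lo] + slope * (WL[b] - WL[lo]))
               for b in RED_EDGE]
    return float(np.mean(heights))

# Illustrative spectra: a red-edge bump for submerged vegetation, a
# monotonically decreasing curve for open water.
submerged = {"B4": 0.03, "B5": 0.06, "B6": 0.09, "B7": 0.10,
             "B8A": 0.08, "B11": 0.02}
water = {"B4": 0.05, "B5": 0.04, "B6": 0.03, "B7": 0.02,
         "B8A": 0.015, "B11": 0.005}
```

Under this convention the zero threshold mentioned in the abstract falls out naturally: red-edge reflectances above the baseline give MFI &gt; 0 (vegetation, even when submerged), while water spectra, which typically decrease toward the SWIR, give MFI &lt; 0.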
(This article belongs to the Special Issue Remote Sensing of Mangroves)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Snapshots of low- and high-tide Sentinel MSI images of the study area ((<b>A</b>) during local low tide, all mangrove forests were exposed; (<b>B</b>) during local high tide, some of the mangrove forests were submerged).</p>
Full article ">Figure 2
<p>Workflow for identifying submerged mangrove forests.</p>
Full article ">Figure 3
<p>Distribution of emerged and submerged mangrove forests in high-tide MSI image.</p>
Full article ">Figure 4
<p>The spectral curves of water, emerged, and submerged vegetation, as well as the absorption coefficients of water (cm<sup>−1</sup>). ((<b>A</b>) field measurements [<a href="#B15-remotesensing-11-02043" class="html-bibr">15</a>]; (<b>B</b>) field-measured reflectance of submerged vegetation at 1.5, 16, and 40 cm below the water surface [<a href="#B25-remotesensing-11-02043" class="html-bibr">25</a>]).</p>
Full article ">Figure 5
<p>(<b>A</b>) Typical spectral curves of emerged mangrove forests (EMF), submerged mangrove forests (SMF) and water (WB) in the Sentinel-2A MSI image. (<b>B</b>) EMF and SMF in the Sentinel MSI image and a field photo. (<b>a</b>) represents shallow submerged mangrove forests (0–30 cm); (<b>b</b>) represents deep submerged mangrove forests (30–60 cm).</p>
Full article ">Figure 6
<p>Baseline theory of establishing Mangrove Forest Index (MFI), including reflectance of submerged mangrove forest and water.</p>
Full article ">Figure 7
<p>Boxplot of different index values over submerged mangrove forest pixels and water pixels (MFI and Floating Algae Index (FAI) are 10 times their original value. SMF means submerged mangrove forest, WB means Water Body. The horizontal axis represents different indices).</p>
Full article ">Figure 8
<p>Global study sites of mangrove forests. (Displayed imagery: R:G:B = Sentinel MSI Band 8A: 4:3. (<b>a</b>) Zhenzhu Harbor, Guangxi, China; (<b>b</b>) Dalhousie Island, Sundarbans, India; (<b>c</b>) Baia do Arraial, Amazon Coast, Brazil).</p>
Full article ">Figure 9
<p>Apply the MFI to extract mangrove forests in Zhenzhu Harbor, Guangxi, China. (<b>A</b>) MFI image, (<b>B</b>) mangrove forests extracted from MFI image, and (<b>C</b>) reference map.</p>
Full article ">Figure 10
<p>Apply the MFI to extract mangrove forests in Dalhousie Island, Sundarbans, India. (<b>A</b>) Sentinel MSI image (Band combination: R:G:B = 8A: 4: 3); (<b>B</b>) Sentinel MSI-based MFI image; (<b>a</b>–<b>c</b>): Google Earth snapshot.</p>
Full article ">Figure 11
<p>Apply the MFI to extract mangrove forests in Baia do Arraial, Amazon Coast, Brazil. (<b>A</b>) Sentinel MSI image (band combination: R:G:B = 8A: 4: 3), (<b>B</b>) Sentinel MSI based MFI image, (<b>a</b>,<b>b</b>): Google Earth snapshot.</p>
Full article ">Figure 12
<p>Intermittently flooded mangrove forests during the local low-tide period.</p>
Full article ">
43 pages, 9784 KiB  
Review
Modeling 3D Free-geometry Volumetric Sources Associated to Geological and Anthropogenic Hazards from Space and Terrestrial Geodetic Data
by Antonio G. Camacho and José Fernández
Remote Sens. 2019, 11(17), 2042; https://doi.org/10.3390/rs11172042 - 29 Aug 2019
Cited by 6 | Viewed by 4029
Abstract
Recent decades have shown an explosion in the quantity and quality of geodetic data, mainly space-based geodetic data, that are being applied to geological and anthropogenic hazards. This has produced the need for new approaches for analyzing, modeling and interpreting these geodetic data. Typically, modeling of deformation and gravity changes follows an inverse approach using analytical or numerical solutions, in which regular geometries (point sources, disks, prolate or oblate spheroids, etc.) are assumed at the initial stage and the inversion is carried out in a linear context. Here we review an original methodology for the simultaneous, nonlinear inversion of gravity changes and/or surface deformation (measured with different techniques) to determine 3D (three-dimensional) bodies, without any a priori assumption about their geometries, embedded into an elastic or poroelastic medium. Such a fully nonlinear inversion has led to interesting results in volcanic environments and in the study of water-table variations due to groundwater exploitation. This methodology can be used to invert geodetic remote sensing data or terrestrial data alone, or in combination. Full article
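The free-geometry sources reviewed here are built by aggregating cells that act as point pressure sources (see the Figure 19 caption), so the elementary forward model underlying the growth inversion is of the Mogi type: the surface displacement of an elastic half-space produced by a point volume change at depth. A minimal sketch of that building block using the standard Mogi (1958) formulas; this is an illustration, not the authors' code:

```python
import math

def mogi_displacement(r, d, dV, nu=0.25):
    """Surface displacement (radial, vertical), in meters, at radial
    distance r from the epicentre of a point source at depth d with
    volume change dV, in a homogeneous elastic half-space.

    u_r = (1 - nu)/pi * dV * r / R^3,  u_z = (1 - nu)/pi * dV * d / R^3,
    with R = sqrt(r^2 + d^2).
    """
    R3 = (r * r + d * d) ** 1.5
    c = (1.0 - nu) / math.pi * dV
    return c * r / R3, c * d / R3

# Illustrative inflation source: +1e6 m^3 at 1 km depth.
u_r0, u_z0 = mogi_displacement(0.0, 1000.0, 1.0e6)   # above the source
_, u_z_far = mogi_displacement(2000.0, 1000.0, 1.0e6)
```

In the growth approach described above, the contributions of many such elementary cells are summed, and cells are added step by step wherever they best reduce the misfit to the observed deformation and/or gravity changes.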
Show Figures

Figure 1
<p>(<b>a</b>) Location of survey benchmarks for repeated gravity and leveling in Campi Flegrei. (<b>b</b>) Temporal changes for gravity (red) and elevation (blue) at Serapeo from 1980 to 2000. The vertical dimension of the symbols is representative of the errors. A high correlation is observed between both data types, but the elevation values show a more continuous pattern. Modified from [<a href="#B1-remotesensing-11-02042" class="html-bibr">1</a>].</p>
Full article ">Figure 2
<p>(<b>a</b>) <span class="html-italic">LOS</span> deformation velocity computed from ascending passes for the period 1993–2000 and (<b>b</b>) <span class="html-italic">LOS</span> deformation velocity computed from descending passes for the period 1992–2000. Modified from [<a href="#B1-remotesensing-11-02042" class="html-bibr">1</a>].</p>
Full article ">Figure 3
<p>Three cross sections of the 3-D model for depressurization: Horizontal (depth 1500 m) and NNE–SSW and WNW–ESE vertical sections of the under-pressure model across a central position resulting from the simultaneous inversion of the gravity changes, leveling changes, and DInSAR data in Campi Flegrei for 1992–2000 assuming an elastic half-space [<a href="#B1-remotesensing-11-02042" class="html-bibr">1</a>].</p>
Full article ">Figure 4
<p>Some additional views of the LOS ascending data fit: (<b>a</b>) residual map and (<b>b</b>) WE central profile for observed-modeled comparison [<a href="#B1-remotesensing-11-02042" class="html-bibr">1</a>].</p>
Full article ">Figure 5
<p>MSBAS results, 1993–2013, for the images detailed in <a href="#remotesensing-11-02042-t001" class="html-table">Table 1</a>. (<b>a</b>) Vertical cumulative component of deformation in centimeters, 1993–2013; (<b>b</b>) east-west cumulative component of deformation in centimeters, 1993–2013; (<b>c</b>) time series of vertical and east-west components shown in panels (<b>a</b>) and (<b>b</b>) at location of maximum subsidence, identified with the green dot. The reference location for MSBAS processing is located at 4532380, 4335351 (14.23°N, 40.94°E). Modified from [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 6
<p>Source location, depth and shape for deflation period of 1993–1999. (<b>a</b>) 3D perspective; (<b>b</b>) map view of source below Campi Flegrei caldera; (<b>c</b>) EW vertical profile; (<b>d</b>) NS vertical profile [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 7
<p>(<b>a</b>) Observed vertical displacement rate (left), cm/yr, for the subsidence period, 1993–1999; modelled vertical displacement rate from inversion (center); and residual of observed and modelled displacements (right). (<b>b</b>) Observed EW displacement rate (left), cm/yr, 1993–1999; modelled EW displacement from inversion (center); and residual of observed and modelled displacements (right). Images were saturated at the corresponding scales in order to improve comparison and highlight differences among panels [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 8
<p>Source location, depth and shape for deflation period of 1999–2000. (<b>a</b>) 3D perspective; (<b>b</b>) map view of source below Campi Flegrei caldera; (<b>c</b>) EW vertical profile; (<b>d</b>) NS vertical profile [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 9
<p>(<b>a</b>) Observed vertical displacement rate (left), cm/yr, for the subsidence period, 1999–2000; modelled vertical displacement rate from inversion (center); and residual of observed and modelled displacements (right). (<b>b</b>) Observed EW displacement rate (left), cm/yr, 1999–2000; modelled EW displacement from inversion (center); and residual of observed and modelled displacements (right). Images were saturated at the corresponding scales in order to improve comparison and highlight differences among panels [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 10
<p>Source location, depth and shape for deflation period of 2000–2005. (<b>a</b>) 3D perspective; (<b>b</b>) map view of source below Campi Flegrei caldera; (<b>c</b>) EW vertical profile; (<b>d</b>) NS vertical profile [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 11
<p>(<b>a</b>) Observed vertical displacement rate (left), cm/yr, for the subsidence period, 2000–2005; modelled vertical displacement rate from inversion (center); and residual of observed and modelled displacements (right). (<b>b</b>) Observed EW displacement rate (left), cm/yr, 2000–2005; modelled EW displacement from inversion (center); and residual of observed and modelled displacements (right). Images were saturated at the corresponding scales in order to improve comparison and highlight differences among panels [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 12
<p>Source location, depth and shape for deflation period of 2005–2007. (<b>a</b>) 3D perspective; (<b>b</b>) map view of source below Campi Flegrei caldera; (<b>c</b>) EW vertical profile; (<b>d</b>) NS vertical profile [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 13
<p>(<b>a</b>) Observed vertical displacement rate (left), cm/yr, for the subsidence period, 2005–2007; modelled vertical displacement rate from inversion (center); and residual of observed and modelled displacements (right). (<b>b</b>) Observed EW displacement rate (left), cm/yr, 2005–2007; modelled EW displacement from inversion (center); and residual of observed and modelled displacements (right). Images were saturated at the corresponding scales in order to improve comparison and highlight differences among panels [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 14
<p>Source location, depth and shape for deflation period of 2007–2013. (<b>a</b>) 3D perspective; (<b>b</b>) map view of source below Campi Flegrei caldera; (<b>c</b>) EW vertical profile; (<b>d</b>) NS vertical profile [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 15
<p>(<b>a</b>) Observed vertical displacement rate (left), cm/yr, for the subsidence period, 2007–2013; modelled vertical displacement rate from inversion (center); and residual of observed and modelled displacements (right). (<b>b</b>) Observed EW displacement rate (left), cm/yr, 2007–2013; modelled EW displacement from inversion (center); and residual of observed and modelled displacements (right). Images were saturated at the corresponding scales in order to improve comparison and highlight differences among panels [<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>].</p>
Full article ">Figure 16
<p>Planar view (<b>a</b>) and EW vertical cut view (<b>b</b>) of the main elements from the real time modelling process covering the inflation period and the eruption on 13 May 2008, and possible connections between the source bodies and the location of the earthquakes (blue circles) for 12–14 May 2008. Contours correspond to surface topography. Gray triangles indicate the location of the GPS stations. Shaded gray area outlines the position of the High Velocity Body (HVB) [<a href="#B100-remotesensing-11-02042" class="html-bibr">100</a>]. In green, the cumulated sources during the yearlong recharging phase and in orange the sources active during the day of the eruption. Cannavò et al. [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>] suggested fast magma ascent from the reservoir level (2–3 km depth bsl) along paths indicated by the source geometries and the earthquake locations (dashed lines in the figures) [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 17
<p>(<b>a</b>) Best windowing: in blue, the mean standard deviations (over all stations and components) of the estimated displacement time series from GPS solutions, for different window sizes. (<b>b</b>) Mean autocorrelation function, with shaded error bar, for all the 1-Hz GPS time series; note the steep decay of the autocorrelation after lags of a few hundred seconds. Modified from [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 18
<p>Model for the inflation phase. Cumulated magmatic body (in green) modelled for the period 1 January 2007 to 12 May 2008 and time evolution (in red) for the sub-period that corresponds to the months of June and July 2007, when the system was subject to a recharging episode, which appears as an ascending high-pressure body. Parallelepiped sources represent active point pressure sources within the same volume. Blue circles correspond to location of the earthquakes for 12–14 May 2008 [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 19
<p>Time sequence of pressurized source models (described by aggregation of parallelepiped cells which map active point pressure sources) corresponding to key instants from the real time inversions during the 13 May 2008, eruption of Mount Etna volcano at different UTC times: (<b>a</b>) at 7:00, (<b>b</b>) at 8:30, (<b>c</b>) at 11:30, and (<b>d</b>) at 15:00. Red volumes are positive-pressure sources while blue volumes are negative pressure sources. Structures marked in orange correspond to the cumulated sources describing the main source structures across the sequence. Contours correspond to surface topography. Gray triangles indicate the location of the GPS stations. Blue circles correspond to location of the earthquakes for 12–14 May 2008 [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 20
<p>Graphic summary of the sources modeled in the studied 2008 Mt. Etna eruption. <b>HVB</b>: High velocity body, <b>F</b>: the dyke as modelled by the present study and previous works [<a href="#B102-remotesensing-11-02042" class="html-bibr">102</a>], <b>M</b>: main source body for the inflation phase preceding the eruption. UTM coordinates and depth are expressed in meters. The azimuth view is 40° [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 21
<p>3D uncertainty maps for the considered GPS network of 10 stations, representing, respectively, (<b>a</b>) the pressure changes, (<b>b</b>), the depth changes, and (<b>c</b>) the horizontal deviations required to produce a surface deformation with quadratic mean value 1 mm. Map (<b>d</b>) represents the minimum horizontal deviation for an isolated body of 1 km<sup>3</sup> with a pressure change given in panel (<b>a</b>) to produce a deformation at the GPS stations with rms magnitude of 1 mm [<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>].</p>
Full article ">Figure 22
<p>(<b>a</b>) Location of the Alto Guadalentín Basin, the Bajo Guadalentín Basin and the Guadalentín River that formed the two basins. Black lines depict main faults in the area. The locations and names of the main cities in the area are shown. The topography has been obtained from the MDT05 2015 CC-BY 4.0 digital elevation model [<a href="#B111-remotesensing-11-02042" class="html-bibr">111</a>]. (<b>b</b>) Subsidence area detected in previous studies by means of DInSAR techniques along the Alto Guadalentín Basin. Subsidence rates have a maximum of 16 cm/yr for the period 2006–2011, located ~4 km south-west of the city of Lorca. The black stars are damage locations due to the M = 5.1 May 2011 Lorca earthquake. Red lines are main faults (AMF, Alhama de Murcia Fault). The contour lines indicate 2 cm/yr DInSAR subsidence due to groundwater pumping. (<b>c</b>) Location of the monitoring GNSS control stations deployed in the area of Alto Guadalentín. The network consists of 33 monitoring stations (blue circles show their location) and covers an area of about 70 km<sup>2</sup>. The network is designed to allow high-accuracy GNSS surveys and also includes two existing continuous GNSS stations. Main population centers are depicted with white stars. Modified from [<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>].</p>
Full article ">Figure 23
<p>Displacement rates determined from GNSS observations. Results correspond to the period November 2015–February 2017. (<b>a</b>) Annual vertical displacement rates (subsidence), measured with standard confidence bars. (<b>b</b>) Average annual horizontal displacements with standard confidence regions. Additional results are shown in the Supplementary Information from [<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>]. (<b>c</b>) and (<b>d</b>) show the results obtained from the A-DInSAR processing using the CPT technique. Both geometries, ascending and descending, have been processed using a multilook window of 3 × 13 pixels (azimuth × range), which generates a square pixel of about 60 × 60 meters in ground resolution. The coherence method has been used for pixel selection. Results are shown for the period November 2015–February 2017. (<b>c</b>) Line of Sight (LOS) velocity values obtained for the ascending orbit. (<b>d</b>) LOS velocity values for the descending orbit. Black dots locate the GNSS stations. Modified from [<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>].</p>
Full article ">Figure 24
<p>East-West and Vertical displacements obtained by A-DInSAR. (<b>a</b>) Horizontal (East-West) and (<b>b</b>) vertical (Up-Down) displacement rates estimations obtained by decomposition of the LOS detected velocity using ascending and descending orbits. GNSS displacements are also plotted with arrows to compare. Results are shown for the period November 2015–February 2017. Modified from [<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>].</p>
Full article ">Figure 25
<p>Representation of the inversion results obtained for the 1D, 2D, 3D and 2D+3D considered data sets. (<b>a</b>) Obtained source for Case A; (<b>b</b>) for Case B; (<b>c</b>) for Case C; (<b>d</b>) Results for Case D; (<b>e</b>) for Case E; (<b>f</b>) for Case F; (<b>g</b>) for Case G; (<b>i</b>) for Case I and (<b>j</b>) for Case J. Blue color indicates negative pressure value cells, produced by water extraction. White color indicates positive pressure change cells. These positive pressure sources adjust the errors and the effects of other deformation sources, different from water extraction (e.g., of tectonic origin) [<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>].</p>
Figure 25 Cont.">
Full article ">Figure 26
<p>Schematic flow diagram of the described inversion methodology [<a href="#B1-remotesensing-11-02042" class="html-bibr">1</a>,<a href="#B5-remotesensing-11-02042" class="html-bibr">5</a>,<a href="#B17-remotesensing-11-02042" class="html-bibr">17</a>,<a href="#B53-remotesensing-11-02042" class="html-bibr">53</a>,<a href="#B54-remotesensing-11-02042" class="html-bibr">54</a>]. Starting from the data set, the medium characteristics, and its 3D gridding, a 3D model of the anomalous sources is obtained via a growth process, using the direct model equations and complementary conditions.</p>
Full article ">
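">
The growth process named in the flow diagram can be illustrated with a toy greedy inversion: source cells are activated one at a time, each step choosing the cell and pressure sign that most reduce the data misfit. This is a deliberately simplified sketch of the idea, not the implementation from the cited works; the Green's-function matrix, step strengths, and stopping rule here are all illustrative.

```python
import numpy as np

def grow_sources(G, d, n_steps, strengths=(-1.0, 1.0)):
    """Toy growth inversion.

    G: (n_data, n_cells) response of each cell to a unit pressure change.
    d: observed data vector.
    At each step, the cell/sign pair minimizing the residual misfit is grown.
    """
    m = np.zeros(G.shape[1])
    for _ in range(n_steps):
        r = d - G @ m                      # current residual
        best = None
        for j in range(G.shape[1]):
            for s in strengths:
                cost = np.sum((r - s * G[:, j]) ** 2)
                if best is None or cost < best[0]:
                    best = (cost, j, s)
        _, j, s = best
        m[j] += s                          # grow the selected cell
    return m

# Synthetic demo: two active cells with opposite pressure signs.
rng = np.random.default_rng(1)
G = rng.normal(size=(50, 10))
m_true = np.zeros(10)
m_true[2], m_true[7] = 1.0, -1.0
d = G @ m_true
m_est = grow_sources(G, d, n_steps=2)
```

With near-orthogonal random columns, two growth steps suffice to pick out the two true cells; in the real methodology the grid, the direct-model equations, and the complementary conditions replace these synthetic choices.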
20 pages, 12659 KiB  
Article
Analysis of Changes and Potential Characteristics of Cultivated Land Productivity Based on MODIS EVI: A Case Study of Jiangsu Province, China
by Weiyi Xu, Jiaxin Jin, Xiaobin Jin, Yuanyuan Xiao, Jie Ren, Jing Liu, Rui Sun and Yinkang Zhou
Remote Sens. 2019, 11(17), 2041; https://doi.org/10.3390/rs11172041 - 29 Aug 2019
Cited by 30 | Viewed by 4685
Abstract
Cultivated land productivity is a basic guarantee of food security. This study extracted the multiple cropping index (MCI) and the most active days (MAD, i.e., the days on which the EVI exceeded a threshold) from crop growth EVI curves to analyse the changes and potential characteristics of cultivated land productivity in Jiangsu Province during 2001–2017. The results are as follows: (1) The MCI of 83.8% of the cultivated land in Jiangsu remained unchanged; the cultivated land with a changed MCI (16.2%) was mainly concentrated in the southern and eastern coastal areas, and the main cropping systems were single and double seasons. (2) The changes in cultivated land productivity were significant and showed a clear spatial distribution. The areas where the productivity of the single cropping system changed occupied 67.8% of the total cultivated land under that system, and the areas of decrease (46.5%) were concentrated in southern Jiangsu. (3) For the double cropping system, the areas of changed productivity accounted for 82.7% and 73.3% of the cultivated land for the first and second crops, respectively. The areas of decrease were distributed in central Jiangsu. In addition, the productivity of the first crop showed an overall (72%) increasing trend, and the areas of increase for the second crop (40.8%) were found in northern Jiangsu. (4) During 2001–2017, cultivated land productivity greatly improved in Jiangsu. In the areas where productivity increased, the proportions of cultivated land with a productivity potential space greater than 20% exceeded 60% and 90% for the single and double cropping systems, respectively. In the areas where productivity decreased, more than 25% and 75% of the cultivated land had a productivity potential space greater than 80% for the single and double cropping systems, respectively. These results show that productivity still has much room for development in Jiangsu.
This study provides new insight into cultivated land productivity and offers references for guiding agricultural production. Full article
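The two indicators described in the abstract can be sketched in a few lines: MAD as the count of days on which the EVI exceeds a threshold, and the productivity trend as the slope and significance of a linear regression of annual MAD over time. This is a hedged illustration, not the authors' code; the threshold value, the EVI series, and the synthetic annual MAD values below are all made up.

```python
import numpy as np
from scipy import stats

def most_active_days(evi, threshold=0.4):
    """Count the days on which the EVI exceeds the activity threshold."""
    evi = np.asarray(evi, dtype=float)
    return int(np.sum(evi > threshold))

def productivity_trend(years, annual_mad):
    """Linear trend of annual MAD; slope > 0 with p < 0.05 indicates a significant increase."""
    res = stats.linregress(years, annual_mad)
    return res.slope, res.pvalue

# Illustrative daily EVI curve for one growing season
evi_curve = [0.2, 0.35, 0.5, 0.62, 0.7, 0.55, 0.38, 0.25]
mad = most_active_days(evi_curve)  # days with EVI > 0.4

# Synthetic 2001-2017 MAD series with a built-in upward trend plus noise
years = np.arange(2001, 2018)
annual_mad = 90 + 1.5 * (years - 2001) + np.random.default_rng(0).normal(0, 2, years.size)
slope, p = productivity_trend(years, annual_mad)
```

In the paper, pixels whose regression is significant (<span class="html-italic">p</span> &lt; 0.05) are the ones mapped as areas of changed productivity; the same slope/significance pair drives Figures 6, 9, and 12.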
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of Jiangsu Province in China.</p>
Full article ">Figure 2
<p>Study framework.</p>
Full article ">Figure 3
<p>Changes in the MCI (<b>a</b>) and main cropping systems (<b>b</b>) under the invariant area of the MCI in Jiangsu Province during the period 2001–2017.</p>
Full article ">Figure 4
<p>Spatial distribution of the MAD of single cropping system. Partial results (T<sub>1</sub>, T<sub>3</sub>, T<sub>5</sub>, T<sub>7</sub>, T<sub>9</sub>, and T<sub>11</sub>) are shown. T<sub>1</sub>: 2001–2007; T<sub>2</sub>: 2002–2008; T<sub>3</sub>: 2003–2009; …; T<sub>11</sub>: 2011–2017.</p>
Full article ">Figure 5
<p>Changes and statistical results of the MAD of the single cropping system. Partial results (P<sub>1</sub>, P<sub>4</sub>, P<sub>7</sub>, and P<sub>10</sub>) are shown. The statistical chart shows the proportion of cultivated land in which the MAD changed from P<sub>1</sub> to P<sub>10</sub>. P<sub>1</sub>: T<sub>2</sub>–T<sub>1</sub>; P<sub>2</sub>: T<sub>3</sub>–T<sub>2</sub>; …; P<sub>10</sub>: T<sub>11</sub>–T<sub>10</sub>.</p>
Full article ">Figure 6
<p>Slope, <span class="html-italic">p</span>, productivity changes, and potential space of the single cropping system. (<b>a</b>) the slope of the linear regression equation; (<b>b</b>) the significance of the productivity changes (<span class="html-italic">p</span> &lt; 0.05); (<b>c</b>) the productivity changes; (<b>d</b>,<b>e</b>) the productivity potential space of the single cropping system in areas with a significant change in productivity.</p>
Full article ">Figure 7
<p>Spatial distribution of the first crop MAD during a double season. Partial results (T<sub>1</sub>, T<sub>3</sub>, T<sub>5</sub>, T<sub>7</sub>, T<sub>9</sub>, and T<sub>11</sub>) are shown. T<sub>1</sub>: 2001–2007; T<sub>2</sub>: 2002–2008; T<sub>3</sub>: 2003–2009; …; T<sub>11</sub>: 2011–2017.</p>
Full article ">Figure 8
<p>Changes and statistical results of the first crop MAD during a double season. Partial results (P<sub>1</sub>, P<sub>4</sub>, P<sub>7</sub>, and P<sub>10</sub>) are shown. The statistical chart shows the proportion of cultivated land in which the first crop MAD changed from P<sub>1</sub> to P<sub>10</sub>. P<sub>1</sub>: T<sub>2</sub>–T<sub>1</sub>; P<sub>2</sub>: T<sub>3</sub>–T<sub>2</sub>; …; P<sub>10</sub>: T<sub>11</sub>–T<sub>10</sub>.</p>
Full article ">Figure 9
<p>Slope, <span class="html-italic">p</span>, productivity changes, and potential space of the first crop during a double season. (<b>a</b>) the slope of the linear regression equation; (<b>b</b>) the significance of the productivity changes (<span class="html-italic">p</span> &lt; 0.05); (<b>c</b>) the productivity changes of the first crop; (<b>d</b>,<b>e</b>) the productivity potential space of the first crop in the areas where productivity changed.</p>
Full article ">Figure 10
<p>Spatial distribution of the second crop MAD during a double season. Partial results (T<sub>1</sub>, T<sub>3</sub>, T<sub>5</sub>, T<sub>7</sub>, T<sub>9</sub>, and T<sub>11</sub>) are shown. T<sub>1</sub>: 2001–2007; T<sub>2</sub>: 2002–2008; T<sub>3</sub>: 2003–2009; …; T<sub>11</sub>: 2011–2017.</p>
Full article ">Figure 11
<p>Changes and statistical results of the second crop MAD during a double season. Partial results (P<sub>1</sub>, P<sub>4</sub>, P<sub>7</sub>, and P<sub>10</sub>) are shown. The statistical chart shows the proportion of cultivated land in which the second crop MAD changed from P<sub>1</sub> to P<sub>10</sub>. P<sub>1</sub>: T<sub>2</sub>–T<sub>1</sub>; P<sub>2</sub>: T<sub>3</sub>–T<sub>2</sub>; …; P<sub>10</sub>: T<sub>11</sub>–T<sub>10</sub>.</p>
Full article ">Figure 12
<p>Slope, <span class="html-italic">p</span>, productivity changes, and potential space of the second crop MAD during a double season. (<b>a</b>) the slope of the linear regression equation; (<b>b</b>) the significance of the productivity changes (<span class="html-italic">p</span> &lt; 0.05); (<b>c</b>) the productivity changes in the second crop; (<b>d</b>,<b>e</b>) the potential space of the second crop in the areas with significant changes in productivity.</p>
Full article ">