Remote Sens., Volume 14, Issue 3 (February-1 2022) – 375 articles

Cover Story: The degradation of forest roads in Canada was documented by identifying relevant spatiotemporal variables with (1) predictive models of gravel forest road degradation and (2) topography, roughness, and vegetation indices obtained from Airborne Laser Scanning and Sentinel-2 optical data to spatialise it. The field approach (n = 207) showed that after five years without maintenance, the rate of degradation of a road, regardless of its width, increased exponentially, exacerbated by a steep slope gradient and loss of road surface. The remote sensing approach provides valuable tools to document the state of gravel forest road degradation, supplying critical information for maintaining and sustaining access to Canada’s boreal forest. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 196762 KiB  
Article
A Voxel-Based Individual Tree Stem Detection Method Using Airborne LiDAR in Mature Northeastern U.S. Forests
by Jeff L. Hershey, Marc E. McDill, Douglas A. Miller, Brennan Holderman and Judd H. Michael
Remote Sens. 2022, 14(3), 806; https://doi.org/10.3390/rs14030806 - 8 Feb 2022
Cited by 6 | Viewed by 4595
Abstract
This paper describes a new method for detecting individual tree stems that was designed to perform well in the challenging hardwood-dominated, mixed-species forests common to the northeastern U.S., where canopy height-based methods have proven unreliable. Most prior research in individual tree detection has been performed in homogenous coniferous or conifer-dominated forests with limited hardwood presence. The study area in central Pennsylvania, United States, includes 17+ tree species and contains over 90% hardwoods. Existing methods have shown reduced performance as the proportion of hardwood species increases, due in large part to the crown-focused approaches they have employed. Top-down approaches are not reliable in deciduous stands due to the inherent complexity of the canopy and tree crowns in such stands. This complexity makes it difficult to segment trees and accurately predict tree stem locations based on detected crown segments. The proposed voxel column-based approach has advantages over both traditional canopy height model-based methods and computationally demanding point-based solutions. The method was tested on 1125 reference trees ≥10 cm diameter at breast height (DBH), and it detected 68% of all reference trees and 87% of medium and large (sawtimber-sized) trees ≥28 cm DBH. Significantly, the commission rate (false predictions) was negligible, as most raw false positives were confirmed in follow-up field visits to be either small trees below the threshold for recording or trees that were otherwise missed during the initial ground survey. Minimizing false positives was a priority in tuning the method. Follow-up in situ evaluation of individual omission and commission instances was facilitated by the high spatial accuracy of the predicted tree locations generated by the method. The mean and maximum predicted-to-reference tree distances were 0.59 m and 2.99 m, respectively, with over 80% of matches within 1 m. A new tree-matching method utilizing linear integer programming is presented that enables rigorous, repeatable matching of predicted and reference trees and performance evaluation. Results indicate this new tree detection method has the potential to be operationalized both for traditional forest management activities and for providing the more frequent and scalable inventories required by a growing forest carbon offsets industry.
(This article belongs to the Section Forest Remote Sensing)
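The tree-matching step described above translates directly into a small optimization problem. The sketch below, assuming SciPy ≥ 1.9 for scipy.optimize.milp, shows one way to pose one-to-one matching of reference and predicted stems within a 3 m search radius as a linear integer program; it illustrates the idea only and is not the authors' TreeMatch implementation.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds
from scipy.spatial.distance import cdist

def match_trees(ref_xy, pred_xy, max_dist=3.0, reward=100.0):
    """One-to-one matching of reference and predicted tree locations.

    Candidate pairs are limited to a max_dist radius; the integer program
    maximises the number of matches first and minimises total matched
    distance second (reward must exceed any candidate distance)."""
    d = cdist(ref_xy, pred_xy)                  # pairwise planar distances
    pairs = np.argwhere(d <= max_dist)          # candidate (ref, pred) pairs
    if len(pairs) == 0:
        return []
    cost = d[pairs[:, 0], pairs[:, 1]] - reward # negative cost => prefer matching
    n = len(pairs)

    # Each reference tree and each predicted tree may be used at most once.
    rows = []
    for k, trees in enumerate((ref_xy, pred_xy)):
        for t in range(len(trees)):
            row = np.zeros(n)
            row[pairs[:, k] == t] = 1.0
            rows.append(row)
    cons = LinearConstraint(np.array(rows), 0, 1)

    res = milp(c=cost, constraints=cons,
               integrality=np.ones(n), bounds=Bounds(0, 1))
    chosen = np.flatnonzero(np.round(res.x) == 1)
    return [(int(i), int(j)) for i, j in pairs[chosen]]
```

Subtracting a constant reward larger than any candidate distance makes the solver prefer more matches first and shorter total distance second, mirroring the minimum-total-distance objective described in the abstract; unmatched predictions correspond to candidate false positives.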
Show Figures

Figure 1. Relative locations of the study sites and layout of study plots.
Figure 2. Visualization of the manual data collection process and key data points.
Figure 3. Aggregate tree count by species for the four one-hectare study plots. ACRU—Acer rubrum, QUMO—Quercus montana, QUAL—Q. alba, PIST—Pinus strobus, QUVE—Q. velutina, NYSY—Nyssa sylvatica, QURU—Q. rubra, SNAG—dead tree, BELE—Betula lenta, COFL—Cornus florida, CASP—Carya species, ACPE—A. pensylvanicum, QUCO—Q. coccinea, FAGR—Fagus grandifolia, LITU—Liriodendron tulipifera, ACSA—A. saccharum, TIAM—Tilia americana, TSCA—Tsuga canadensis.
Figure 4. Distribution of study area trees by diameter at breast height (DBH).
Figure 5. TopCon Total Station ES-105 in use at the study site.
Figure 6. Open traverse through study plots utilizing GNSS anchor points.
Figure 7. Conceptual illustration of using the point density distribution to determine a target height range in the point cloud that excludes tree crowns and ground foliage so that tree stems can be more readily distinguished.
Figure 8. Portion of a voxelized point cloud (left) with visible voxel columns, and a sample of the raw voxel data containing coordinates, vertical height increment, and number of points in that voxel (right).
Figure 9. Visual summary of the logic and rules for the two-pass voxel-based approach to identifying predicted tree locations from voxel columns.
Figure 10. Summary of duplicate point merging steps in ArcGIS Pro. (A) The predicted tree locations generated by the two-pass approach, (B) 0.5 m buffers around all prediction points, and (C) overlapping buffers converted to merged polygons and new predicted tree locations.
Figure 11. Overview of the tree matching process. A 3 m radius (C) was used to generate a table of potential matches between ground-measured reference trees and predicted trees. Reference trees (A and B) each have multiple potential matches with predicted trees (1, 2 and 3). The objective is to match two predicted trees to the reference trees such that the total distance between matched trees is minimized. An optimal solution can be identified via linear integer programming using the TreeMatch software. In this example, reference trees A and B are matched to predicted trees 1 and 2, respectively, while 3 is considered a false positive pending field revisit and review.
Figure 12. A multi-stem tree (left), recorded as four separate trees in the tree survey but identified as one tree by our method, and a double stem (right), recorded as two trees in the survey but detected as one tree by our method.
Figure 13. Examples of downed trees discovered through follow-up field visits, which were vital for gaining context and analyzing false negatives and false positives.
Figure 14. Flowchart of key steps of the tree detection algorithm.
Figure 15. Tree detection performance for various DBH ranges.
Figure 16. Tree detection performance by species, with tree counts in parentheses. ACSA—Acer saccharum, ACPE—A. pensylvanicum, TSCA—Tsuga canadensis, QURU—Quercus rubra, QUAL—Q. alba, QUCO—Q. coccinea, QUVE—Q. velutina, FAGR—Fagus grandifolia, LITU—Liriodendron tulipifera, QUMO—Q. montana, ACRU—A. rubrum, CASP—Carya species, NYSY—Nyssa sylvatica, PIST—Pinus strobus, BELE—Betula lenta, SNAG—dead tree, COFL—Cornus florida.
Figure 17. Fork above DBH resulting in two detections but cataloged as one tree per the field data collection protocol.
Figure 18. Eastern hemlock branches resulted in a false positive (left), as did the branching pattern of an American beech (right). These are the two false positives where phantom trees were detected.
Figure 19. Summary of false negatives by DBH range.
Figure 20. Summary of false negatives by species. ACRU—Acer rubrum, PIST—Pinus strobus, NYSY—Nyssa sylvatica, SNAG—dead tree, QUMO—Quercus montana, BELE—Betula lenta, QUVE—Q. velutina, COFL—Cornus florida, QUAL—Q. alba, CASP—Carya species, QURU—Q. rubra, QUCO—Q. coccinea, FAGR—Fagus grandifolia, LITU—Liriodendron tulipifera.
Figure 21. Example of a short snag missed due to the height ranges used for the voxel rules.
Figure 22. A 54 cm black oak not detected, potentially due to being surrounded by eastern white pines, which likely contributed to diffusion of the LiDAR pulses.
18 pages, 29114 KiB  
Article
Spatiotemporal Hybrid Random Forest Model for Tea Yield Prediction Using Satellite-Derived Variables
by S Janifer Jabin Jui, A. A. Masrur Ahmed, Aditi Bose, Nawin Raj, Ekta Sharma, Jeffrey Soar and Md Wasique Islam Chowdhury
Remote Sens. 2022, 14(3), 805; https://doi.org/10.3390/rs14030805 - 8 Feb 2022
Cited by 31 | Viewed by 4788
Abstract
Crop yield forecasting is critical for enhancing food security and ensuring an appropriate food supply. It is important to complete this activity with high precision at the regional and national levels to facilitate speedy decision-making. Tea is a major cash crop that contributes significantly to economic development, with a market of USD 200 billion in 2020 that is expected to reach over USD 318 billion by 2025. As a developing country, Bangladesh can take a greater part in this industry and increase its exports through its tea yield and production, given its favorable climatic features and land quality. Regrettably, the tea yield in Bangladesh has not increased significantly since 2008, unlike in many other countries, despite suitable climatic and land conditions, which is why quantifying the yield is imperative. This study developed a novel spatiotemporal hybrid DRS–RF model with a dragonfly optimization (DR) algorithm and support vector regression (S) as a feature selection approach. This study used satellite-derived hydro-meteorological variables between 1981 and 2020 from twenty stations across Bangladesh to address the spatiotemporal dependency of the predictor variables for the tea yield (Y). The results illustrated that the proposed DRS–RF hybrid model improved tea yield forecasting over other standalone machine learning approaches, with the smallest relative error value (11%). This study indicates that integrating the random forest model with the dragonfly algorithm and SVR-based feature selection improves prediction performance. This hybrid approach can also support food risk management in other countries.
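A rough sense of the SVR-based feature selection plus random forest pipeline can be given in a few lines. The sketch below uses scikit-learn and entirely synthetic stand-in data; the single-feature SVR scoring rule and all variable names are illustrative assumptions, and the paper's dragonfly search over feature subsets is omitted.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def svr_feature_ranking(X, y):
    """Score each candidate predictor by the cross-validated r2 of a
    single-feature SVR model (a stand-in for the paper's SVR-based
    feature selection)."""
    scores = [cross_val_score(SVR(), X[:, [j]], y, cv=5, scoring="r2").mean()
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]          # best predictors first

# Hypothetical data: rows = station-years, columns = hydro-met variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.2, size=200)

top = svr_feature_ranking(X, y)[:5]          # keep the five best predictors
rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV r2 with selected features:",
      cross_val_score(rf, X[:, top], y, cv=5, scoring="r2").mean())
```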
Show Figures

Figure 1. A comparison of tea yield between 2008 and 2018 over the neighboring countries of Bangladesh.
Figure 2. The study area and the 20 selected stations used to extract the predictor variables to develop the hybrid DRS–RF model.
Figure 3. An integrated workflow of the present study to develop DRS–RF integrated with DR and support vector regression for tea yield prediction.
Figure 4. (a) The selected predictors with their respective stations (subscripted) using a butterfly optimization algorithm; (b) correlation coefficient (r) of the SVR model for feature selection; (c) the input combinations prepared by selecting the best resulting variables one by one in ascending order.
Figure 5. Box plots of the proposed hybrid model (DRS–RF) along with its respective standalone counterparts in predicting tea yield in terms of correlation coefficient (r) and MAPE (%).
Figure 6. The RRMSE of the proposed model and other comparison models, and the respective change in percentage from the standalone model.
Figure 7. Scatter plot of predicted vs. observed Y using the proposed hybrid model and comparison models. A least-squares regression line and coefficient of determination (R²) with a linear fit equation are shown in each subpanel.
Figure 8. Empirical Cumulative Distribution Function (CDF) of prediction error |FE| of Y generated by the proposed DRS–RF vs. benchmark models.
16 pages, 10559 KiB  
Technical Note
Local Freeman Decomposition for Robust Imaging and Identification of Subsurface Anomalies Using Misaligned Full-Polarimetric GPR Data
by Haoqiu Zhou, Xuan Feng, Zejun Dong, Cai Liu, Wenjing Liang and Yafei An
Remote Sens. 2022, 14(3), 804; https://doi.org/10.3390/rs14030804 - 8 Feb 2022
Cited by 9 | Viewed by 2543
Abstract
A full-polarimetric ground penetrating radar (FP-GPR) uses an antenna array to detect subsurface anomalies. Compared to traditional GPR, FP-GPR can obtain more abundant information about the subsurface. However, in field FP-GPR measurements, the arrival times of the received electromagnetic (EM) waves from different channels cannot be strictly aligned, due to human operation errors and limitations in equipment craftsmanship. Small misalignments between the radargrams acquired from different channels of an FP-GPR can lead to erroneous identification results from the classic Freeman decomposition (FD) method. Here, we propose a local Freeman decomposition (LFD) method to enhance the robustness of the classic FD method when handling misaligned FP-GPR data. Tests on three typical targets demonstrate that misalignments severely interfere with the imaging and identification results of the classic FD method for the plane and dihedral scatterers. In contrast, the proposed LFD method produces smooth images and accurate identification results. In addition, the identification of the volume scatterer is not affected by misalignments. A test of ice-fracture detection further verifies the capability of the LFD method in field measurements. Due to the different relative magnitudes of the permittivity of the media on the two sides of the interfaces, the ice surface and ice fracture show the features of surface-like and double-bounce scattering, respectively. However, the definition of double-bounce scattering here differs from the definition in polarimetric synthetic aperture radar (SAR). Finally, a quantitative analysis shows that the sensitivities of the FD and LFD methods to misalignments are related to both the type of target and the polarized mode of the misaligned data. The tolerable range of the LFD method for misalignments is approximately ±0.2 times the wavelength of the EM wave, much wider than that of the FD method. In most cases, the LFD method can guarantee an accurate identification result.
(This article belongs to the Special Issue Latest Results on GPR Algorithms, Applications and Systems)
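For context, the classic Freeman (Freeman-Durden) decomposition that LFD hardens against misalignment separates the measured covariance terms into surface, double-bounce, and volume scattering powers. The sketch below is a textbook per-pixel formulation with a simple multilook window, assuming NumPy/SciPy; it is not the authors' LFD code, which modifies how the decomposition is applied locally.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def freeman_decomposition(shh, shv, svv, win=5):
    """Classic Freeman-Durden three-component decomposition of complex
    scattering channels. Returns surface, double-bounce and volume powers
    (a textbook sketch, not the paper's local FD variant)."""
    def avg(a):  # local multilook average of a complex image
        return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)

    c11 = avg(shh * np.conj(shh)).real        # <|Shh|^2>
    c33 = avg(svv * np.conj(svv)).real        # <|Svv|^2>
    c22 = avg(shv * np.conj(shv)).real        # <|Shv|^2>
    c13 = avg(shh * np.conj(svv))             # <Shh Svv*>

    fv = 3.0 * c22                            # volume model: <|Shv|^2> = fv/3
    pv = 8.0 * fv / 3.0

    # Remove the volume contribution from the co-polarised terms.
    c11r = c11 - fv
    c33r = c33 - fv
    c13r = c13 - fv / 3.0

    # The sign of Re<Shh Svv*> decides the dominant remaining mechanism:
    # surface-like (fix alpha = -1) or double-bounce (fix beta = 1).
    eps = 1e-12
    num = c11r * c33r - np.abs(c13r) ** 2
    fd = num / (c11r + c33r + 2.0 * c13r.real + eps)   # surface-dominant case
    fs = num / (c11r + c33r - 2.0 * c13r.real + eps)   # double-bounce case

    surface = c13r.real >= 0
    ps = np.where(surface, c11r + c33r - 2.0 * fd, 2.0 * fs)
    pd = np.where(surface, 2.0 * fd, c11r + c33r - 2.0 * fs)
    return ps, pd, pv
```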
Show Figures

Figure 1. An antenna array for FP-GPR [8]. H and V represent the horizontal and the vertical polarization, respectively.
Figure 2. The ultrawide-bandwidth (UWB) stepped-frequency FP-GPR system. (a) Photo of the system; (b) operation flow chart of the system.
Figure 3. The typical targets used in the measurements. (a) The metallic plate; (b) the metallic dihedral; (c) the volume scatterer with many branches.
Figure 4. The FP-GPR data of the three typical targets before and after Kirchhoff migrations. (a–c) The radargrams of the metallic plate, the metallic dihedral, and the volume scatterer with many branches, respectively. (d–f) The results after the Kirchhoff migrations.
Figure 5. The misaligned signals of the metallic plate. (a) S_HH signals; (b) S_HV signals; (c) S_VV signals.
Figure 6. The single traces of FD and LFD results for misaligned data. (a) The results of the classic FD method; (b) the results of the proposed LFD method.
Figure 7. The RGB images of FD and LFD results for misaligned data. (a) Results of the classic FD method; (b) results of the proposed LFD method. The white dashed lines denote the general contours of the targets.
Figure 8. The FP-GPR system and the ice fracture detection test. (a) The polarimetric antenna array used in the field measurement; (b) the field FP-GPR measurement on the ice fracture.
Figure 9. The field FP-GPR data of the ice fracture. (a) Data before Kirchhoff migrations; (b) data after Kirchhoff migrations.
Figure 10. The schematic diagram for analyzing the obtained FP-GPR data in the field ice fracture detection test.
Figure 11. The single traces of FD and LFD results for the misaligned data. (a) The results of the classic FD method; (b) the results of the proposed LFD method.
Figure 12. The imaging and identification results of field FP-GPR data using the two methods. (a) The result using the classic FD method; (b) the result using the proposed LFD method. The white dashed lines denote the general contours of the targets.
Figure 13. The accuracy curves of the FD and LFD results when increasing the degree of misalignment. (a) The plate scatterer; (b) the dihedral scatterer; (c) the volume scatterer.
Figure 14. The test producing cases of misinterpretation due to misalignments. (a–d) The FD results when the misalignments are set to 0, 0.1λ, 0.2λ, and 0.3λ. (e–h) The LFD results. The white dashed lines denote the general contours of the targets.
25 pages, 4152 KiB  
Article
Forest Disturbance Detection with Seasonal and Trend Model Components and Machine Learning Algorithms
by Jonathan V. Solórzano and Yan Gao
Remote Sens. 2022, 14(3), 803; https://doi.org/10.3390/rs14030803 - 8 Feb 2022
Cited by 8 | Viewed by 4013
Abstract
Forest disturbances reduce the extent of natural habitats, biodiversity, and carbon sequestered in forests. With the implementation of the international framework Reduce Emissions from Deforestation and forest Degradation (REDD+), it is important to improve the accuracy of estimates of the extent of forest disturbances. Time series analyses, such as Breaks for Additive Season and Trend (BFAST), have been frequently used to map tropical forest disturbances with promising results. Previous studies suggest that, in addition to magnitude of change, disturbance detection accuracy could be enhanced by using other components of BFAST that describe additional aspects of the model, such as its goodness-of-fit, NDVI seasonal variation, temporal trend, historical length of observations and data quality, as well as by using separate thresholds for distinct forest types. The objective of this study is to determine whether the BFAST algorithm can benefit from using these model components in a supervised scheme to improve forest disturbance detection accuracy. Random forests and support vector machines algorithms were trained and verified using 238 points in three different datasets: all-forest, tropical dry forest, and temperate forest. The results show that the highest accuracy was achieved by the support vector machines algorithm using the all-forest dataset. Although the increase in accuracy of the latter model vs. a magnitude threshold model is small, i.e., 0.14% for sample-based accuracy and 0.71% for area-weighted accuracy, the standard error of the estimated total disturbed forest area was 4352.59 ha smaller, while the annual disturbance rate was also smaller by 1262.2 ha year−1. The implemented approach can be useful for obtaining more precise estimates of forest disturbance, as well as its associated carbon emissions.
(This article belongs to the Special Issue Advances in the Remote Sensing of Forest Cover Change)
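The supervised scheme amounts to feeding BFAST model components into an off-the-shelf classifier. A minimal scikit-learn sketch with synthetic stand-in features follows; the feature list in the comments is an assumption based on the components named in the abstract, not the paper's exact table.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: one row per sample point; assumed columns are
# BFAST-derived components such as break magnitude, model goodness-of-fit,
# seasonal amplitude, trend slope, history length, and data quality.
rng = np.random.default_rng(1)
X = rng.normal(size=(238, 6))
y = rng.integers(0, 2, size=238)            # 1 = disturbance, 0 = stable

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```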
Show Figures

Figure 1. (A) Study area location and spatial distribution of the temperate and tropical dry forest. (B) Illustrations of sample points manually classified as disturbance and non-disturbance in the tropical dry forest (TDF) and temperate forest (TF) using Sentinel-2 natural color composites available in Google Earth Engine. Here the sample points are depicted by circles with yellow outlines.
Figure 2. Flowchart of the method.
Figure 3. Variable importance scaled to a 0–100 range for the random forests (A) and support vector machines (B) models.
Figure 4. ROC curves for the RF and SVM models in the all-forest, TDF, and TF datasets. AUC values are shown in percentages.
Figure 5. Visual comparison of the disturbance predicted by the most accurate models using different datasets. The points indicate the validation data for the disturbance class (orange) and non-disturbance class (blue). (A) Complete study area disturbance prediction using the all-forest baseline model. Examples of the disturbed areas detected using different models and datasets are presented by row: (B) all-forest, (C) tropical dry forest (TDF), and (D) temperate forest (TF). The three best models are in columns 1: baseline model, 2: random forests (RF), and 3: support vector machines (SVM). The three maps in row B illustrate the disturbance detected by the three models using the all-forest dataset; similarly, the three maps in row C illustrate the disturbance with the TDF dataset and row D with the TF dataset.
Figure 6. Example of areas with correct disturbance detection ((A) TDF and (C) TF) and false detection ((B) TDF and (D) TF) in the baseline model, but correctly classified as non-disturbance in the SVM all-forest model. Each point corresponds to an observed NDVI value in the time series, while its color indicates its correspondence to the historical (green) or monitoring period (red). In turn, the blue solid line shows the model fitted to the data in the historical period and projected to the monitoring window. The dashed lines show the NDVI behavior through time in the historical (green) and monitoring periods (red). Finally, the black dashed line indicates the start of the monitoring period, while the solid yellow line indicates the date of the detected breakpoint.
20 pages, 6520 KiB  
Article
Retrieving Freeze/Thaw Cycles Using Sentinel-1 Data in Eastern Nunavik (Québec, Canada)
by Yueli Chen, Lingxiao Wang, Monique Bernier and Ralf Ludwig
Remote Sens. 2022, 14(3), 802; https://doi.org/10.3390/rs14030802 - 8 Feb 2022
Cited by 7 | Viewed by 2468
Abstract
In the terrestrial cryosphere, freeze/thaw (FT) state transitions play an important and measurable role in climatic, hydrological, ecological, and biogeochemical processes in permafrost landscapes. Active and passive microwave remote sensing has shown a principal capacity to provide effective monitoring of landscape FT dynamics. This study presents a seasonal threshold approach, which examines the time series progression of remote sensing measurements relative to signatures acquired during seasonal frozen and thawed reference states. This approach is used to estimate the FT state from the Sentinel-1 database and is applied and evaluated for the region of Eastern Nunavik (Québec, Canada), including an optimization process for the threshold. In situ measurements from the meteorological station network were used for validation. Overall, acceptable estimation accuracy (>70%) was achieved in most tests; on the best-performing sites, an accuracy higher than 90% was reached. The performance of the seasonal threshold approach over the study region is further discussed with consideration of land cover, spatial heterogeneity, and soil depth. This work is dedicated to providing more accurate data to capture the spatiotemporal heterogeneity of freeze/thaw transitions and to improving our understanding of related processes in permafrost landscapes.
(This article belongs to the Section Earth Observation Data)
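The seasonal threshold idea is compact enough to sketch. Assuming the common formulation in which backscatter is scaled between per-site frozen and thawed references, a minimal version looks like this; the threshold value and the example series are illustrative, and the paper optimizes the threshold per site.

```python
import numpy as np

def seasonal_threshold_ft(sigma0, frozen_ref, thawed_ref, threshold=0.5):
    """Classify each acquisition as frozen (0) or thawed (1).

    Backscatter is scaled between site-specific winter and summer
    references; the scaled value is compared against a threshold."""
    delta = (sigma0 - frozen_ref) / (thawed_ref - frozen_ref)
    return (delta >= threshold).astype(int)

# Usage with a hypothetical VH backscatter time series (dB) for one site.
ts = np.array([-22.1, -21.8, -19.5, -17.0, -16.4, -16.8, -20.9])
states = seasonal_threshold_ft(ts, frozen_ref=-22.0, thawed_ref=-16.5)
print(states)   # [0 0 0 1 1 1 0]
```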
Show Figures

Graphical abstract
Figure 1. GlobPermafrost zonation, permafrost extent after Brown et al. (1997), and difference between borehole and modelled MAGT for Canada and Alaska; yellow frame: the Québec–Labrador Peninsula [37].
Figure 2. Nunavik map with all test sites (base map: MODIS land cover type yearly global 500 m; https://lpdaac.usgs.gov/products/mcd12q1v006/, accessed on 10 December 2021).
Figure 3. Backscatter signal time series of chosen test sites. Left: stations 2, 6, and 36; right: stations 4, 31, and 38; blue, VV polarization; green, VH polarization.
Figure 4. Backscatter signal references of all test sites. Left: comparison of references of different polarizations in the same season (top: winter, bottom: summer); right: comparison of references of the same polarization in different seasons (top: VH, bottom: VV).
Figure 5. Best-performing threshold at different test soil depths at each site (0 cm, green; 2 cm, blue; 5 cm, yellow; 10 cm, red).
Figure 6. Accuracy comparison of results based on differently preprocessed data.
Figure 7. Comparison of the results from the two different polarizations.
Figure 8. Environmental heterogeneity at each site from RGB and NDVI signals.
Figure 9. Accuracy at different test soil depths, based on datasets from two preprocessing methods, clustered by four land-cover categories.
Figure 10. Accuracy at different test soil depths: comparison between two polarizations, clustered by four land-cover categories.
Figure 11. Best-performing threshold at different soil depths, clustered by four land-cover categories.
21 pages, 5972 KiB  
Article
Genetic Programming Approach for the Detection of Mistletoe Based on UAV Multispectral Imagery in the Conservation Area of Mexico City
by Paola Andrea Mejia-Zuluaga, León Dozal and Juan C. Valdiviezo-N.
Remote Sens. 2022, 14(3), 801; https://doi.org/10.3390/rs14030801 - 8 Feb 2022
Cited by 6 | Viewed by 3624
Abstract
The mistletoe Phoradendron velutinum (P. velutinum) is a pest that spreads rapidly and uncontrollably in Mexican forests, and it has become a serious problem as a cause of the decline of 23.3 million hectares of conifers and broadleaves in the country. The lack of adequate phytosanitary control has negative social, economic, and environmental impacts. However, pest management is a challenging task due to the difficulty of early detection for proper control of mistletoe infestations. Automating the detection of this pest is important due to its rapid spread and the high costs of field identification tasks. This paper presents a Genetic Programming (GP) approach for the automatic design of an algorithm to detect mistletoe using multispectral aerial images. Our study area is located in a conservation area of Mexico City, in the San Bartolo Ameyalco community. Images of 148 hectares were acquired by means of an Unmanned Aerial Vehicle (UAV) carrying a sensor sensitive to the R, G, B, red edge, and near-infrared bands, with an average spatial resolution of less than 10 cm per pixel. As a result, it was possible to obtain an algorithm capable of classifying mistletoe P. velutinum at its flowering stage for the specific case of the study area, with an Overall Accuracy (OA) of 96% and a fitness value based on weighted Cohen’s Kappa (kw) equal to 0.45 on the test data set. Additionally, our method’s performance was compared with two traditional image classification methods: in the first, a classical spectral index, named Intensive Pigment Index of Structure 2 (SIPI2), was considered for the detection of P. velutinum; the second considers the well-known Support Vector Machine classification algorithm (SVM). We also compare the accuracy of the best GP individual with two additional indices obtained during the solution analysis. According to our experimental results, our GP-based algorithm outperforms the aforementioned methods for the identification of P. velutinum.
(This article belongs to the Special Issue Detecting Anomalies and Tracking Biodiversity for Forest Monitoring)
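The core GP machinery, band terminals combined by arithmetic operators into candidate detector expressions, can be sketched without any GP library. The evaluator below is a hedged illustration: the example tree is an NDVI-like expression, not the paper's evolved solution in Equation (4), and the protected division is a common GP convention assumed here.

```python
import numpy as np

SAFE_EPS = 1e-6

def div(a, b):
    """Protected division, a standard guard in GP function sets."""
    return a / (np.abs(b) + SAFE_EPS)

def evaluate(tree, bands):
    """Recursively evaluate a GP expression tree on per-pixel band arrays.

    A tree is either a terminal name ('R', 'G', 'B', 'REG', 'NIR') or a
    tuple (op, left, right) with an arithmetic operator name."""
    if isinstance(tree, str):
        return bands[tree]
    op, left, right = tree
    ops = {"add": np.add, "sub": np.subtract, "mul": np.multiply, "div": div}
    return ops[op](evaluate(left, bands), evaluate(right, bands))

# Hypothetical 2 x 2 multispectral chip with the five UAV bands.
rng = np.random.default_rng(3)
bands = {k: rng.random((2, 2)) for k in ("R", "G", "B", "REG", "NIR")}
candidate = ("div", ("sub", "NIR", "R"), ("add", "NIR", "R"))  # NDVI-like tree
score_map = evaluate(candidate, bands)       # thresholding would follow
```

In a full GP run, trees like `candidate` would be generated, scored against labeled pixels with the weighted-kappa fitness, and recombined over generations; only the evaluation step is shown here.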
Show Figures

Graphical abstract
Figure 1. Location of the study area and flight polygon distribution for Unmanned Aerial Vehicle (UAV) image collection.
Figure 2. Complete diagram of the operation of the algorithm to detect mistletoe in multispectral images. In the Image Transformation stage, we illustrate, as an example, the internal nodes or functions and the leaf nodes or terminals required by the Genetic Programming (GP) algorithm to implement the NDVI index.
Figure 3. Diagram of the main steps of Genetic Programming.
Figure 4. Frequency distribution of the use of terminals in the best individuals evolved by GP. The bar diagram shows how many of the best individuals have used the corresponding terminals, i.e., the R, G, B, REG, and NIR bands; the line graph shows the number of times the terminals have been used in total within the best-solution individuals.
Figure 5. Frequency distribution of the use of functions in the best individuals evolved by GP. The bar diagram shows how many of the best individuals have used the corresponding functions (arithmetic operators), while the line graph shows the number of times the functions have been used in total within the best-solution individuals.
Figure 6. Example of the partial results obtained by applying the first factor of the best solution, shown in Equation (4), on an image. Panels (a–g) show the performance (visual result) of each term in the equation expressed below the corresponding image.
Figure 7. Example of the partial results obtained by applying the second factor of the best solution, shown in Equation (4), on an image. Panels (a–g) show the performance (visual result) of each term in the equation expressed below the corresponding image.
Figure 8. Example of the result of the best solution, shown in Equation (4), when applied on an image. Panels (a–c) show the performance (visual result) of each term in the equation expressed below the corresponding image.
Figure 9. Classification map of Phoradendron velutinum (P. velutinum) with the best GP algorithm. A section of the Conservation Area of Mexico City is shown, in which the detection of Phoradendron velutinum was carried out employing the best individual generated by GP (Equation (4)). The main areas with P. velutinum infestation are shown in red, and P. velutinum in situ GPS sampling points are represented by blue circles.
24 pages, 11692 KiB  
Article
Spectral-Spatial Residual Network for Fusing Hyperspectral and Panchromatic Remote Sensing Images
by Rui Zhao and Shihong Du
Remote Sens. 2022, 14(3), 800; https://doi.org/10.3390/rs14030800 - 8 Feb 2022
Cited by 5 | Viewed by 2502
Abstract
Fusing hyperspectral and panchromatic remote sensing images can produce images with high resolution in both the spectral and spatial domains, compensating for the lack of remote sensing images that are simultaneously high-resolution hyperspectral and panchromatic. In this paper, a spectral–spatial residual network (SSRN) model is established for the intelligent fusion of hyperspectral and panchromatic remote sensing images. Firstly, spectral–spatial deep feature branches are built to extract the representative spectral and spatial deep features, respectively. Secondly, an enhanced multi-scale residual network is established for the spatial deep feature branch, and an enhanced residual network is established for the spectral deep feature branch; this operation enhances the spectral and spatial deep features. Finally, the method establishes spectral–spatial deep feature alignment to circumvent the independence of the spectral and spatial deep features. The proposed model was evaluated on three groups of real-world hyperspectral and panchromatic image datasets, collected with a ZY-1E sensor and located at Baiyangdian, Chaohu and Dianchi, respectively. The experimental results and quality evaluation values, including RMSE, SAM, SCC, spectral curve comparison, PSNR, SSIM, ERGAS and the Q metric, confirm the superior performance of the proposed model compared with state-of-the-art methods, including the AWLP, CNMF, GIHS, MTF_GLP, HPF and SFIM methods.
(This article belongs to the Section AI Remote Sensing)
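Two of the quality metrics listed, RMSE and the spectral angle mapper (SAM), are standard in fusion studies and easy to state precisely. A minimal sketch of their common formulations follows; it is an illustration, not the paper's evaluation code.

```python
import numpy as np

def spectral_angle_mapper(ref, fused):
    """Mean spectral angle (radians) between reference and fused images.

    ref and fused are (rows, cols, bands) arrays; the angle is computed
    per pixel between the two spectral vectors, then averaged."""
    dot = np.sum(ref * fused, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def rmse(ref, fused):
    """Root-mean-square error over all pixels and bands."""
    return float(np.sqrt(np.mean((ref - fused) ** 2)))
```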
Show Figures

Figure 1. Schematic diagram of the spectral–spatial deep feature branch.
Figure 2. Schematic diagram of multi-scale residual enhancement for spatial deep features and residual enhancement for spectral deep features.
Figure 3. Spectral–spatial deep feature alignment.
Figure 4. The Baiyangdian region dataset: (a) panchromatic image, (b) RGB 3D cube of hyperspectral image, (c) RGB 3D cube of ground truth image.
Figure 5. The Chaohu region dataset: (a) panchromatic image, (b) RGB 3D cube of hyperspectral image, (c) RGB 3D cube of ground truth image.
Figure 6. The Dianchi region dataset: (a) panchromatic image, (b) RGB 3D cube of hyperspectral image, (c) RGB 3D cube of ground truth image.
Figure 7. RGB images of the ground truth, the compared methods, and the proposed SSRN method on the Baiyangdian region dataset. (a) Ground truth, (b) AWLP, (c) CNMF, (d) GIHS, (e) MTF_GLP, (f) HPF, (g) SFIM and (h) SSRN.
Figure 8. RGB images of the ground truth, the compared methods, and the proposed SSRN method on the Chaohu region dataset. (a) Ground truth, (b) AWLP, (c) CNMF, (d) GIHS, (e) MTF_GLP, (f) HPF, (g) SFIM and (h) SSRN.
Figure 9. RGB images of the ground truth, the compared methods, and the proposed SSRN method on the Dianchi region dataset. (a) Ground truth, (b) AWLP, (c) CNMF, (d) GIHS, (e) MTF_GLP, (f) HPF, (g) SFIM and (h) SSRN.
Figure 10. Quality evaluation for the compared and proposed fusion methods on the Baiyangdian region dataset. (a) RMSE, (b) SAM, (c) PSNR, (d) SSIM, (e) SCC, (f) spectral curve comparison, (g) ERGAS and (h) Q metric.
Figure 11. Quality evaluation for the compared and proposed fusion methods on the Chaohu region dataset. (a) RMSE, (b) SAM, (c) PSNR, (d) SSIM, (e) SCC, (f) spectral curve comparison, (g) ERGAS and (h) Q metric.
Figure 12. Quality evaluation for the compared and proposed fusion methods on the Dianchi region dataset. (a) RMSE, (b) SAM, (c) PSNR, (d) SSIM, (e) SCC, (f) spectral curve comparison, (g) ERGAS and (h) Q metric.
Figure 13. Reflectance of four pixels in the Baiyangdian dataset, comparing the hyperspectral image and the SSRN fusion result. (a) Pixel (30, 30) in the hyperspectral image versus pixel (360, 360) in the SSRN fusion result. (b) Pixel (30, 270) versus pixel (360, 3240). (c) Pixel (270, 30) versus pixel (3240, 360). (d) Pixel (270, 270) versus pixel (3240, 3240).
Figure 14. Reflectance of four pixels in the Chaohu dataset, comparing the hyperspectral image and the SSRN fusion result. (a) Pixel (30, 30) in the hyperspectral image versus pixel (360, 360) in the SSRN fusion result. (b) Pixel (30, 270) versus pixel (360, 3240). (c) Pixel (270, 30) versus pixel (3240, 360). (d) Pixel (270, 270) versus pixel (3240, 3240).
Figure 15. Reflectance of four pixels in the Dianchi dataset, comparing the hyperspectral image and the SSRN fusion result. (a) Pixel (30, 30) in the hyperspectral image versus pixel (360, 360) in the SSRN fusion result. (b) Pixel (30, 270) versus pixel (360, 3240). (c) Pixel (270, 30) versus pixel (3240, 360). (d) Pixel (270, 270) versus pixel (3240, 3240).
17 pages, 13195 KiB  
Article
Early Detection of Basal Stem Rot Disease in Oil Palm Tree Using Unmanned Aerial Vehicle-Based Hyperspectral Imaging
by Junichi Kurihara, Voon-Chet Koo, Cheaw Wen Guey, Yang Ping Lee and Haryati Abidin
Remote Sens. 2022, 14(3), 799; https://doi.org/10.3390/rs14030799 - 8 Feb 2022
Cited by 32 | Viewed by 6353
Abstract
Early detection of basal stem rot (BSR) disease in oil palm trees is important for the sustainable production of palm oil on the limited land available for plantations in Southeast Asia. However, previous studies based on satellite and aircraft hyperspectral remote sensing could not discriminate oil palm trees in the early stage of BSR disease from healthy or late-stage trees. In this study, hyperspectral imaging of oil palm trees from an unmanned aerial vehicle (UAV) and machine learning using a random forest algorithm were employed to classify four infection categories of BSR disease: healthy, early-stage, late-stage, and dead trees. Concentric disk segmentation was applied for tree crown segmentation at the sub-plant scale, and recursive feature elimination was used for feature selection. The results revealed that classification performance for the early-stage trees is maximized at specific tree crown segments, and that only a few spectral bands in the red-edge region are sufficient to classify the infection categories. These findings will be useful for future UAV-based multispectral imaging to efficiently cover wide areas of oil palm plantations for the early detection of BSR disease.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
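Recursive feature elimination with a random forest is directly available in scikit-learn. A short sketch with synthetic stand-in data follows; the band count, label coding, and parameter values are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Hypothetical stand-in data: one row per tree-crown segment, columns are
# normalized reflectance bands; labels are the four infection categories.
rng = np.random.default_rng(2)
X = rng.random((120, 60))                    # 60 hyperspectral bands
y = rng.integers(0, 4, size=120)             # 0=healthy ... 3=dead

rf = RandomForestClassifier(n_estimators=300, random_state=0)
selector = RFE(rf, n_features_to_select=5, step=5).fit(X, y)
print("selected band indices:", np.flatnonzero(selector.support_))
```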
Show Figures

Figure 1. Location of the studied oil palm plantation site (upper left) and a high spatial resolution aerial map of the site enclosed by the white dotted line (center). Categories of healthy, early-stage, late-stage, and dead trees classified by field survey and visual inspection are indicated by green, yellow, orange, and red circles, respectively.
Figure 2. Examples of tree crown images extracted from the high spatial resolution aerial map.
Figure 3. Concentric disk segmentation of tree crowns.
Figure 4. A flow chart of the data collection, data preprocessing, and machine learning methods in this study.
Figure 5. Reflectance spectra of segment E in the same oil palm tree crown acquired in different scenes. (a) Original reflectance spectra and (b) normalized reflectance spectra. Labels of the spectra indicate their flight IDs and scene IDs.
Figure 6. Mean normalized reflectance spectra of oil palm tree crowns by infection category in different segments: (a) segment A; (b) segment B; (c) segment C; (d) segment D; (e) segment E. Error bars present the standard deviation of the corresponding dataset.
Figure 7. Mean precision, recall, and F1-score of classification models trained by different feature vectors: (a) NR, (b) SR, (c) NDSI, and (d) all the features.
Figure 8. Overall accuracy and Cohen’s kappa coefficient of classification models trained by different feature vectors.
Figure 9. Overall accuracy variations with the number of features selected by RFE. Data for numbers of features between 20 and 1089 were omitted.
16 pages, 12130 KiB  
Article
Spatio-Temporal Quality Indicators for Differential Interferometric Synthetic Aperture Radar Data
by Yismaw Wassie, S. Mohammad Mirmazloumi, Michele Crosetto, Riccardo Palamà, Oriol Monserrat and Bruno Crippa
Remote Sens. 2022, 14(3), 798; https://doi.org/10.3390/rs14030798 - 8 Feb 2022
Cited by 5 | Viewed by 3316
Abstract
Satellite-based interferometric synthetic aperture radar (InSAR) is an invaluable technique for detecting and monitoring changes on the surface of the earth. Its high spatial coverage and weather-friendly, remote nature are among the advantages of the tool. The multi-temporal differential InSAR (DInSAR) methods in particular estimate the spatio-temporal evolution of deformation by incorporating information from multiple SAR images. However, the opportunities offered by DInSAR techniques are accompanied by challenges that affect the final outputs. Resolving the inherent ambiguities of interferometric phases, especially in areas with a high spatio-temporal deformation gradient, represents the main challenge. This makes quality indices important DInSAR data processing tools for achieving good processing outcomes; often, such indices are not provided with the deformation products. In this work, we propose four scores associated with (i) measurement points, (ii) dates of time series, (iii) interferograms and (iv) images involved in the processing. These scores are derived from a redundant set of interferograms and are calculated based on the consistency of the unwrapped interferometric phases in the frame of a least-squares adjustment. The scores reflect the occurrence of phase unwrapping errors and represent valuable input for the analysis and exploitation of DInSAR results. The proposed tools were tested on 432,311 points, 1795 interferograms and 263 Sentinel-1 single look complex images by employing the small baseline technique in the PSI processing chain, PSIG, of the geomatics division of the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC). The results illustrate the importance of the scores, mainly in the interpretation of DInSAR outputs.
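The least-squares consistency idea behind the scores can be sketched in a few lines: with a redundant interferogram network, per-image phases are estimated by least squares and the residuals expose unwrapping errors. The NumPy illustration below is a simplification of the idea, not the PSIG implementation, and the score definitions in the paper differ in detail.

```python
import numpy as np

def interferogram_scores(unw_phase, pairs, n_images):
    """Least-squares consistency check of a redundant interferogram network.

    unw_phase: (n_ifg, n_pts) unwrapped phases; pairs: list of (i, j) image
    indices per interferogram. Per-point image phases are estimated by LSQ
    with the first image fixed to zero, and each interferogram is scored by
    the RMS of its residuals (unwrapping errors inflate the residuals)."""
    A = np.zeros((len(pairs), n_images - 1))
    for k, (i, j) in enumerate(pairs):       # model: phase_ij = phi_j - phi_i
        if j > 0:
            A[k, j - 1] = 1.0
        if i > 0:
            A[k, i - 1] = -1.0
    phi, *_ = np.linalg.lstsq(A, unw_phase, rcond=None)
    resid = unw_phase - A @ phi
    return np.sqrt(np.mean(resid ** 2, axis=1))   # one score per interferogram
```

Analogous aggregations of the same residuals over points, dates, or images yield point-, date-, and image-level scores in the spirit of the four proposed indicators.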
Show Figures

Figure 1. The spatial (b) and temporal (c) baseline information of selected interferograms. The temporal resolution of the processed interferograms is either 6 or 12 days. Nodes and edges in (a) refer to images and interferograms, respectively.
Figure 2. Example of an unwrapped interferogram classified in C3 (a) and plot of the scores of interferograms associated with image 44, the image classified in C3 (b).
Figure 3. Subsets of processed images of cumulative phases with different scores (in radar geometry): image 43 classified in C1 (a) and image 44 classified in C3 (b). The scatter plot in (c) is obtained from the phase information of image 43 (horizontal axis) and image 44 (vertical axis).
Figure 4. Improvement in the TS of phases of a PS before (blue) and after (black) excluding erroneous images. The images and the point were identified based on S_im and S_pt, respectively.
Figure 5. Map of point scores of Venice, Italy: (a) PSs classified in C1 (green), C2 (yellow) and C3 (maroon), (b) bar graph of the number of PSs per class and (c) an example of a phase TS with an unwrapping error taken from C3.
Figure 6. A comparison of geocoded cumulative phase information at the final image (a) with the quality scores (b) for the PSs taken from the sub-area of the Venice lagoon, Italy.
Figure 7. TS measurements and the corresponding scores are indicated in blue and orange, respectively. In (a), all the TS scores belong to C1, indicating the measurement point is reliable. In (b–d), we notice TS scores from all the classes.
Figure 8. Spatial distribution of classified PSs per class per threshold. Maps on each row correspond to maps from each class (C1 in green, C2 in yellow and C3 in maroon) and the columns correspond to the threshold parameters.
Figure 9. The plot of the number of PSs per threshold per class (a) and the total number of PSs per threshold (b).
18 pages, 8287 KiB  
Article
Fractional Fourier Transform-Based Tensor RX for Hyperspectral Anomaly Detection
by Lili Zhang, Jiachen Ma, Baozhi Cheng and Fang Lin
Remote Sens. 2022, 14(3), 797; https://doi.org/10.3390/rs14030797 - 8 Feb 2022
Cited by 16 | Viewed by 2596
Abstract
Anomaly targets in a hyperspectral image (HSI) are often multi-pixel, rather than single-pixel, objects. Therefore, algorithms using a test point vector may ignore the spatial characteristics of the test point. In addition, hyperspectral anomaly detection (AD) algorithms usually use the original spectral signatures. Under a fractional Fourier transform (FrFT), signals in the fractional Fourier domain (FrFD) possess complementary characteristics of both the original reflectance spectrum and its Fourier transform. In this paper, a tensor RX (TRX) algorithm based on FrFT (FrFT-TRX) is proposed for hyperspectral AD. First, the fractional order of the FrFT is selected by fractional Fourier entropy (FrFE) maximization. Then, the HSI is transformed into the FrFD by FrFT. Next, TRX is employed in the FrFD. Finally, according to the optimal spatial dimensions of the target and background tensors, the optimal AD result is achieved by adjusting the fractional order. TRX employs a test point tensor, making better use of the spatial characteristics of the test point. TRX in the FrFD exploits the complementary advantages of the intermediate domain to increase discrimination between target and background. Six existing algorithms are used for comparison in order to verify the AD performance of the proposed FrFT-TRX over five real HSIs. The experimental results demonstrate the superiority of the proposed algorithm.
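As a baseline for the RX family of detectors the paper builds on, the global RX score is the Mahalanobis distance of each pixel spectrum from the scene background statistics. A minimal NumPy sketch of this baseline GRX follows; the tensor and FrFT extensions are the paper's contribution and are not reproduced here.

```python
import numpy as np

def global_rx(hsi):
    """Global RX anomaly detector.

    hsi is a (rows, cols, bands) array; the score of each pixel is the
    Mahalanobis distance of its spectrum from the scene mean, using the
    pseudo-inverse of the global covariance for numerical safety."""
    r, c, b = hsi.shape
    X = hsi.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # per-pixel distance
    return scores.reshape(r, c)
```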
Show Figures

Figure 1. Diagram of the proposed FrFT-TRX algorithm.
Figure 2. Data L and two-dimensional diagrams of detection results: (a) 100th band of data L; (b) ground-truth map; (c) GRX; (d) LRX; (e) KRX; (f) FrFE-RX; (g) FrFE-LRX; (h) PCA-TRX; and (i) FrFT-TRX.
Figure 3. ROC curves and AUC values for data L.
Figure 4. Separability graphs for data L.
Figure 5. Data C and two-dimensional diagrams of detection results: (a) 100th band of data C; (b) ground-truth map; (c) GRX; (d) LRX; (e) KRX; (f) FrFE-RX; (g) FrFE-LRX; (h) PCA-TRX; and (i) FrFT-TRX.
Figure 6. ROC curves and AUC values for data C.
Figure 7. Separability graphs for data C.
Figure 8. Data P and two-dimensional diagrams of detection results: (a) 100th band of data P; (b) ground-truth map; (c) GRX; (d) LRX; (e) KRX; (f) FrFE-RX; (g) FrFE-LRX; (h) PCA-TRX; and (i) FrFT-TRX.
Figure 9. ROC curves and AUC values for data P.
Figure 10. Separability graphs for data P.
Figure 11. Data T and two-dimensional diagrams of detection results: (a) 100th band of data T; (b) ground-truth map; (c) GRX; (d) LRX; (e) KRX; (f) FrFE-RX; (g) FrFE-LRX; (h) PCA-TRX; and (i) FrFT-TRX.
Figure 12. ROC curves and AUC values for data T.
Figure 13. Separability graphs for data T.
Figure 14. Data S and two-dimensional diagrams of detection results: (a) 100th band of data S; (b) ground-truth map; (c) GRX; (d) LRX; (e) KRX; (f) FrFE-RX; (g) FrFE-LRX; (h) PCA-TRX; and (i) FrFT-TRX.
Figure 15. ROC curves and AUC values for data S.
Figure 16. Separability graphs for data S.
Figure 17. AUC values versus p for data L.
Figure 18. AUC values versus p for data C.
Figure 19. AUC values versus p for data P.
Figure 20. AUC values versus p for data T.
Figure 21. AUC values versus p for data S.
22 pages, 8985 KiB  
Article
A Novel Speckle Suppression Method with Quantitative Combination of Total Variation and Anisotropic Diffusion PDE Model
by Jiamu Li, Zijian Wang, Wenbo Yu, Yunhua Luo and Zhongjun Yu
Remote Sens. 2022, 14(3), 796; https://doi.org/10.3390/rs14030796 - 8 Feb 2022
Cited by 11 | Viewed by 2520
Abstract
Speckle noise seriously degrades the usability of synthetic aperture radar (SAR) images. Speckle suppression aims to smooth homogeneous regions while preserving edges and texture. This paper proposes a novel speckle suppression method that combines total variation and partial differential equation denoising models. Taking full account of the local statistics of the image, a quantization technique, distinct from conventional edge detection, is built on the image's coefficient of variation. A quantizer is accordingly designed to respond to both noise level and edge strength; it automatically determines the threshold of the diffusion coefficient and controls the weighting between the total variation filter and the anisotropic diffusion partial differential equation filter. A series of experiments tests the performance of the quantizer and the proposed filter, and the results demonstrate the superiority of the proposed method on both synthetic images and real SAR images. Full article
(This article belongs to the Special Issue Advances in SAR Image Processing and Applications)
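As a rough illustration of the idea in the abstract, the sketch below weights a smoothing filter against an edge-preserving anisotropic diffusion, using the local coefficient of variation as the quantizer. This is a minimal sketch under stated assumptions: a uniform filter stands in for the TV output, a basic Perona–Malik step stands in for the ADPDE filter, and the window size and cv_noise parameter are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_cv(img, size=7):
    """Local coefficient of variation (std/mean) in a sliding window."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    return np.sqrt(var) / (mean + 1e-8)

def perona_malik_step(img, kappa=0.1, lam=0.2):
    """One explicit anisotropic-diffusion (PDE) update with 4-neighbor fluxes."""
    grads = [np.roll(img, 1, 0) - img, np.roll(img, -1, 0) - img,
             np.roll(img, 1, 1) - img, np.roll(img, -1, 1) - img]
    return img + lam * sum(np.exp(-(g / kappa) ** 2) * g for g in grads)

def blended_filter(img, cv_noise=0.25, steps=20):
    """Blend strong smoothing and edge-preserving diffusion by local CV."""
    w = np.clip(local_cv(img) / cv_noise, 0.0, 1.0)  # ~0 homogeneous, ~1 near edges
    smooth = uniform_filter(img, 5)  # stand-in for the TV filter output
    diffused = img.copy()
    for _ in range(steps):
        diffused = perona_malik_step(diffused)
    # homogeneous regions get the smoother; edges keep the diffusion result
    return (1.0 - w) * smooth + w * diffused

rng = np.random.default_rng(0)
noisy = rng.gamma(4.0, 0.25, (64, 64))  # multiplicative-style speckle, mean ~1
result = blended_filter(noisy)
```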
Figures:
Graphical abstract.
Figure 1: The generation principle of speckle: the upper and lower rectangles represent the SAR image and the corresponding ground area, respectively; the orange and blue boxes denote the minimum resolution units on the ground. Each resolution unit is divided into 25 cells, for example, and each cell contains at least one scatterer whose reflection intensity is represented by the color depth.
Figure 2: Relationship between the diffusion coefficient and the threshold: for a fixed gradient |∇I| = 120, a larger threshold Ψ gives a larger diffusion coefficient and hence stronger edge retention, and vice versa.
Figure 3: Adaptive window size. In a homogeneous region, the window of pixel p has a small radius; as the position changes, the radius increases when q is reached. The highlighted pixels form the window boundary.
Figure 4: Effect of the adaptive threshold on the diffusion coefficient in the ADPDE model: (a) a row of pixel intensities corrupted by speckle; (b) diffusion coefficients computed with different fixed thresholds and with the adaptive threshold.
Figure 5: Overview of the proposed QAD method. The input image I0 is first evaluated quantitatively by the proposed size-adaptive quantizer and then filtered by the TV and ADPDE methods. The adaptive threshold Ψ_AT is obtained from the quantizer output; after the TV filter and the threshold-adaptive ADPDE produce their results, a weighting controller integrates them by (21).
Figure 6: Test results of different approaches on various edge-type synthetic signals corrupted by speckle noise with a mean of 1 and different standard deviations. The Sobel detector makes the most mistakes, and the detection result of the ROA detector is not stable enough.
Figure 7: Monte Carlo results on edge-like signals with speckle noise for (a) the Sobel detector, (b) the ROA detector, and (c) the proposed quantizer.
Figure 8: Speckle suppression on synthetic images: for each row, the original image followed by the speckle-noised image and the filtering results of QAD, SRAD, ROAPDE, WNNM, DA-Frost, EDS, the method in [25], and RGF, with details shown in the corner of each image; the results of the proposed method are highlighted by red dotted boxes.
Figures 9–11: 1-D data comparisons for synthetic images 1–3: Monte Carlo results of applying QAD, SRAD, ROAPDE, WNNM, DA-Frost, EDS, the method in [25], and RGF.
Figures 12–14: Speckle suppression results for X-, S-, and C-band SAR images: (a) the original image; (b–i) filtering results of QAD, SRAD, ROAPDE, WNNM, DA-Frost, EDS, the method in [25], and RGF, with details displayed beside each image.
Figure 15: Speckle suppression results for an ultrasonic image, with the same layout as Figures 12–14.
Figure 16: Normalized FR results; the lowest bar indicates the best speckle suppression performance, and the proposed method outperforms the others in all cases.
25 pages, 8824 KiB  
Article
DGS-SLAM: A Fast and Robust RGBD SLAM in Dynamic Environments Combined by Geometric and Semantic Information
by Li Yan, Xiao Hu, Leyang Zhao, Yu Chen, Pengcheng Wei and Hong Xie
Remote Sens. 2022, 14(3), 795; https://doi.org/10.3390/rs14030795 - 8 Feb 2022
Cited by 43 | Viewed by 6250
Abstract
Visual Simultaneous Localization and Mapping (VSLAM) is a prerequisite for robots to accomplish fully autonomous movement and exploration in unknown environments. Many impressive VSLAM systems have emerged, but most rely on the static world assumption, which limits their application in real dynamic scenarios. To improve the robustness and efficiency of the system in dynamic environments, this paper proposes a dynamic RGB-D SLAM based on a combination of geometric and semantic information (DGS-SLAM). First, a dynamic object detection module based on the multinomial residual model is proposed, which executes the motion segmentation of the scene by combining the motion residual information of adjacent frames with the potential motion information from the semantic segmentation module. Second, a camera pose tracking strategy using the feature point classification results is designed to achieve robust tracking. Finally, according to the results of dynamic segmentation and camera tracking, a semantic segmentation module based on a semantic frame selection strategy is designed for extracting potential moving targets in the scene. Extensive evaluation on the public TUM and Bonn datasets demonstrates that DGS-SLAM achieves higher robustness and speed than state-of-the-art dynamic RGB-D SLAM systems in dynamic scenes. Full article
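To illustrate the residual-based classification described in the abstract, here is a hedged NumPy sketch: features whose reprojection residual under the estimated camera ego-motion exceeds a pixel threshold are labeled dynamic. The function name, the single fixed threshold, and the demo values are illustrative; the paper's multinomial residual model is more elaborate.

```python
import numpy as np

def classify_features(pts_prev_3d, pts_curr_px, T, K, thresh_px=3.0):
    """Label matched features as static/dynamic from reprojection residuals.

    pts_prev_3d : (N, 3) points in the previous camera frame
    pts_curr_px : (N, 2) matched pixel locations in the current frame
    T           : (4, 4) estimated camera motion (previous -> current)
    K           : (3, 3) camera intrinsics
    """
    pts_h = np.hstack([pts_prev_3d, np.ones((len(pts_prev_3d), 1))])
    cam = (T @ pts_h.T).T[:, :3]       # points expressed in the current camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division to pixel coordinates
    residual = np.linalg.norm(proj - pts_curr_px, axis=1)
    return np.where(residual < thresh_px, 'static', 'dynamic'), residual

# Demo: identity motion and pinhole intrinsics; the second feature has moved
K = np.array([[525., 0., 320.], [0., 525., 240.], [0., 0., 1.]])
T = np.eye(4)
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
static_px = np.array([[320.0, 240.0], [451.25, 240.0]])    # exact projections
moved_px = static_px + np.array([[0.0, 0.0], [15.0, 0.0]])  # object motion in the image
labels, res = classify_features(pts3d, moved_px, T, K)
print(labels)  # ['static' 'dynamic']
```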
Figures:
Graphical abstract.
Figure 1: Overview of DGS-SLAM.
Figure 2: Flowchart of the dynamic object module; red arrows mark the input of the semantic segmentation module, black arrows the execution procedure of the dynamic object detection module.
Figure 3: Input and output of K-means: (a) the input depth image; (b) the clustering result.
Figure 4: Comparison of two ORB matching strategies: (a) camera pose tracking by original ORB matching; (b) tracking using the static feature points of the previous frame. The red box marks a region containing dynamic objects; coarse tracking uses only static feature points from the previous frame, reducing the effect of dynamic objects on the matching result.
Figure 5: Principle of the data residual: the dynamics of feature points can be evaluated using camera ego-motion.
Figure 6: Building the connectivity graph. C1 to C6 are the clusters after K-means clustering; the table on the right shows the connectivity graph, where blue cells indicate spatial connectivity and purple cells indicate spatial connectivity plus spatial correlation.
Figure 7: Example results of the residual model: (a) the input RGB frame; (b) the residual grayscale map, in which the maximum residual is assigned the top gray value and each other cluster a gray value proportional to the ratio of its residual to the maximum.
Figure 8: Example residual histogram in a highly dynamic scene.
Figure 9: Mask dilation: (a) the original dynamic object mask; (b) the dilated mask; (c,d) dynamic feature points before and after dilation. Outliers on the boundaries of dynamic objects are removed (red: dynamic features; green: static features).
Figure 10: Feature point classification: blue features belong to the unknown subset, green features to the static subset, and red features to the dynamic subset.
Figure 11: Input and output of YOLACT++: (a) the input RGB image; (b) the YOLACT++ output; (c) the mask of potential moving objects.
Figure 12: ATE distribution for ORB-SLAM3 (green) and DGS-SLAM (blue) on two highly dynamic sequences: (a) the 1341846647–1341846664 s part of fr3/w_rpy; (b) the 1341846434–1341846460 s part of fr3/w_half. The RGB images on the blue lines show the dynamic objects in the scenes.
Figure 13: APE distribution for ORB-SLAM3 and different DGS-SLAM configurations on (a) fr3/s_static, (b) fr3/w_rpy, and (c) fr3/w_static. The rectangular portions cover three quarters of the APE data, the other portions the remainder; the top of each graph (horizontal line or small black dot) marks the maximum APE and the bottom line the minimum.
Figure 14: Rows show the input RGB images, the masks from the instance segmentation network, the feature point classification by DynaSLAM (red dynamic, green static), and the feature point classification by DGS-SLAM.
Figure 15: Estimated camera trajectories of (a) ORB-SLAM3, (b) DynaSLAM, (c) RDS-SLAM, and (d) DGS-SLAM on fr3/walking_halfsphere, fr3/w_rpy, fr3/w_static, and fr3/w_xyz (rows). Red lines show the difference between the ground truth and the estimated trajectory.
Figure 16: Dynamic object detection on the Bonn RGB-D dynamic dataset for (a) the balloon2 and (b) the crowd2 sequences; rows show the input RGB images, the instance segmentation masks, and the dynamic regions (red) segmented by DynaSLAM and DGS-SLAM.
Figure 17: Feature point classification on moving_obstructing_box.
16 pages, 4476 KiB  
Article
Remotely Sensed Winter Habitat Indices Improve the Explanation of Broad-Scale Patterns of Mammal and Bird Species Richness in China
by Likai Zhu and Yuanyuan Guo
Remote Sens. 2022, 14(3), 794; https://doi.org/10.3390/rs14030794 - 8 Feb 2022
Cited by 4 | Viewed by 2589
Abstract
Climate change is transforming winter environmental conditions rapidly. Shifts in snow regimes and freeze/thaw cycles that are unique to the harsh winter season can strongly influence ecological processes and biodiversity patterns of mammals and birds. However, the role of the winter environment in structuring a species richness pattern is generally downplayed, especially in temperate regions. Here we developed a suite of winter habitat indices at 500 m spatial resolution by fusing MODIS snow products and NASA MEaSUREs daily freeze/thaw records from passive microwave sensors and tested how these indices could improve the explanation of species richness patterns across China. We found that the winter habitat indices provided unique and mutually complementary environmental information compared to the commonly used Dynamic Habitat Indices (DHIs). Winter habitat indices significantly increased the explanatory power for species richness of all mammal and bird groups. Particularly, winter habitat indices contributed more to the explanation of bird species than mammals. Regarding the independent contribution, winter season length made the largest contributions to the explained variance of winter birds (30%), resident birds (27%), and mammals (18%), while the frequency of snow-free frozen ground contributed the most to the explanation of species richness for summer birds (23%). Our research provides new insights into the interpretation of broad-scale species diversity, which has great implications for biodiversity assessment and conservation. Full article
(This article belongs to the Special Issue Remote Sensing of Ecosystems in Cold Regions)
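The model comparison summarized in the abstract (a full GLM with the winter habitat indices versus a reduced GLM, judged by the drop in deviance with a chi-squared test, as in Figure 6) could be set up roughly as follows. This is a sketch on synthetic data: only the formula terms come from the paper, while the Poisson family, sample size, and all values are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical grid-cell table; the column names follow the formulas in Figure 6.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    'richness': rng.poisson(20, n),
    'dem': rng.normal(1500, 800, n),
    'cumDHI': rng.normal(0, 1, n), 'minDHI': rng.normal(0, 1, n),
    'varDHI': rng.normal(0, 1, n), 'WinterL': rng.normal(120, 40, n),
    'Dnsc': rng.normal(30, 10, n), 'SnowVar': rng.normal(0, 1, n),
})

full = smf.glm('richness ~ dem + cumDHI + minDHI + varDHI + WinterL + Dnsc + SnowVar',
               df, family=sm.families.Poisson()).fit()
reduced = smf.glm('richness ~ dem + cumDHI + minDHI + varDHI',
                  df, family=sm.families.Poisson()).fit()

# Chi-squared test on the deviance drop (3 added terms -> 3 degrees of freedom)
p_value = chi2.sf(reduced.deviance - full.deviance, 3)
print('p =', p_value)
```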
Figures:
Figure 1: Geographical features of the study area.
Figure 2: Flowchart of the data processing used to derive the winter habitat indices from remote sensing products.
Figure 3: Spatial patterns of species richness across China: (a) mammals; (b) winter birds; (c) summer birds; (d) resident birds. Numbers indicate the eco-geographic regions of China: I, cold temperate zone; II, humid/semi-humid medium temperate zone; III, semi-arid medium and warm temperate zone; IV, humid/semi-humid warm temperate zone; V, north subtropical zone; VI, middle subtropical zone; VII, south subtropical zone; VIII, tropical zone; IX, arid medium and warm temperate zone; X, Tibetan Plateau zone.
Figure 4: Spatial patterns of the remotely sensed winter habitat indices across China: (a) winter season length; (b) snow cover duration; (c) frequency of snow-free frozen ground; (d) snow variability (eco-geographic regions as in Figure 3).
Figure 5: Spearman correlations between the remote sensing indices (WinterL, Dsc, Dnsc, and SnowVar) and the 19 WorldClim bioclimatic variables, showing how the winter habitat indices represent environmental information that potentially affects species richness; blank grids are insignificant at the 0.001 level.
Figure 6: Explanatory power of the full models with winter habitat indices (richness ~ dem + cumDHI + minDHI + varDHI + WinterL + Dnsc + SnowVar) versus the reduced models (richness ~ dem + cumDHI + minDHI + varDHI). Goodness of fit is measured by the deviance of a generalized linear model; the ANOVA method compares the two models, and "*" marks a significant increase in explanatory power under the chi-squared test. The remotely sensed winter habitat indices increase the goodness of fit for the models of all groups.
Figure 7: Contributions of the winter habitat indices to the explained variance for (a) mammals, (b) resident birds, (c) summer birds, and (d) winter birds.
18 pages, 89379 KiB  
Article
A Maritime Cloud-Detection Method Using Visible and Near-Infrared Bands over the Yellow Sea and Bohai Sea
by Yun-Jeong Choi, Hyun-Ju Ban, Hee-Jeong Han and Sungwook Hong
Remote Sens. 2022, 14(3), 793; https://doi.org/10.3390/rs14030793 - 8 Feb 2022
Cited by 5 | Viewed by 2893
Abstract
Accurate cloud-masking procedures to distinguish cloud-free pixels from cloudy pixels are essential for optical satellite remote sensing. Many studies on satellite-based cloud-detection have been performed using the spectral characteristics of clouds in terms of reflectance and temperature. This study proposes a cloud-detection method using reflectance in four bands: 0.56 μm, 0.86 μm, 1.38 μm, and 1.61 μm. Methodologically, we present a conversion relationship between the normalized difference water index (NDWI) and the green band in the visible spectrum for thick cloud detection using moderate-resolution imaging spectroradiometer (MODIS) observations. NDWI consists of reflectance at the 0.56 and 0.86 μm bands. For thin cloud detection, the 1.38 and 1.61 μm bands were applied with empirically determined threshold values. Case study analyses for the four seasons from 2000 to 2019 were performed for the sea surface area of the Yellow Sea and Bohai Sea. In the case studies, the comparison of the proposed cloud-detection method with the MODIS cloud mask (CM) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation data indicated a probability of detection of 0.933, a false-alarm ratio of 0.086, and a Heidke Skill Score of 0.753. Our method demonstrated an additional important benefit in distinguishing clouds from sea ice or yellow dust, compared to the MODIS CM products, which usually misidentify the latter as clouds. Consequently, our cloud-detection method could be applied to a variety of low-orbit and geostationary satellites with 0.56, 0.86, 1.38, and 1.61 μm bands. Full article
(This article belongs to the Special Issue Advances in Ocean Remote Sensing through Data and Algorithm Fusion)
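A schematic version of the proposed two-part mask, a thick-cloud test based on the NDWI and green band plus thin-cloud thresholds on the 1.38 and 1.61 μm bands, might look like the sketch below. The regression coefficients, the direction of the thick-cloud inequality, and the threshold values are placeholders, not the paper's fitted values.

```python
import numpy as np

def cloud_mask(r056, r086, r138, r161, a, b, t138, t161):
    """Thick-cloud test from the NDWI-green-band relationship plus thin-cloud
    thresholds on the 1.38 and 1.61 um bands. All inputs are reflectance arrays.

    a, b       : assumed coefficients of the NDWI-green regression (cf. Figure 5)
    t138, t161 : assumed empirical reflectance thresholds
    """
    ndwi = (r056 - r086) / (r056 + r086 + 1e-8)
    thick = ndwi < a * r056 + b  # assumed form of the departure-from-clear-sky test
    thin = (r138 > t138) | (r161 > t161)
    return thick | thin
```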
Figures:
Figure 1: Example of MODIS CM misidentifying sea ice as clouds: (a) MODIS RGB image using the 0.56, 0.86, and 1.6 μm bands [36]; (b) MODIS CM (MOD35_L2) on 3 January 2013, 03:15 UTC.
Figure 2: MODIS RGB image on 1 June 2017, 02:15 UTC, with the pink box indicating clouds.
Figure 3: Flowchart of the proposed cloud detection method.
Figure 4: Study area including the Yellow Sea and Bohai Sea.
Figure 5: (a) Distribution of cloud pixels and (b) regression between NDWI and MODIS R(0.56 μm) for cloud pixels on 1 June 2017, 02:15 UTC. Pink pixels in (a) are the clouds selected from the RGB image; grey pixels are all MODIS pixels on that date. Blue pixels in (b) are cloud pixels, and the red line is the regression for cloud pixels.
Figure 6: Threshold-dependent statistical scores from comparing MODIS CM with the proposed CM for the (a) 1.38 and (b) 1.61 μm bands, using the test dataset in the pink box of Figure 2 (1 June 2017, 02:15 UTC). Cyan, green, yellow, blue, and red lines indicate CC, RMSE, bias, HSS, and POD-FAR, respectively.
Figure 7: Case studies for (a) 29 April 2014, 02:20 UTC (spring); (b) 12 June 2015, 02:15 UTC (summer); (c) 4 October 2019, 02:15 UTC (autumn); and (d) 6 January 2016, 02:15 UTC (winter). Columns show the MODIS RGB image; the cloud mask using only the NDWI-green-band relationship, only the R(1.38 μm) method, and only the R(1.61 μm) method; and the proposed combined method.
Figure 8: Quantitative comparison of the proposed CM and MODIS CM for the four cases in Figure 7. Light cyan: both CMs detect cloud; red: only MODIS CM detects cloud; green: only the proposed CM detects cloud; blue: both CMs detect cloud-free sea surface; tan: land mask.
Figure 9: Four collocated MODIS/CALIPSO cases: (a) spring, 25 March 2020; (b) summer, 15 July 2020; (c) autumn, 26 September 2020; (d) winter, 16 January 2021. Panels show the MODIS RGB image, the proposed CM, and MODIS CM; CALIPSO laser footprints are overlaid on the second and third panels, with cyan and blue indicating cloud and no-cloud pixels, respectively.
Figure 10: CALIPSO VFM plots for the cases in Figure 9. The legend entries denote no signal (complete signal attenuation), subsurface, surface, stratospheric aerosol, tropospheric aerosol, cloud, and clear air.
Figure 11: Temporal variation of POD, FAR, and HSS between MODIS CM and the proposed CM from 2000 to 2019.
Figure 12: RGB images of mixed clouds and sea ice, the proposed CM, and MODIS CM (columns) on (a) 6 January 2011, 02:25 UTC; (b) 6 January 2016, 02:15 UTC; and (c) 5 January 2018, 02:50 UTC.
Figure 13: RGB images of mixed clouds and dust, the proposed CM, and MODIS CM (columns) on (a) 2 April 2001, 02:25 UTC; (b) 8 April 2003, 02:15 UTC; and (c) 16 April 2015, 02:20 UTC.
22 pages, 49094 KiB  
Article
Soil Moisture Estimation for Winter-Wheat Waterlogging Monitoring by Assimilating Remote Sensing Inversion Data into the Distributed Hydrology Soil Vegetation Model
by Xiaochun Zhang, Xu Yuan, Hairuo Liu, Hongsi Gao and Xiugui Wang
Remote Sens. 2022, 14(3), 792; https://doi.org/10.3390/rs14030792 - 8 Feb 2022
Cited by 10 | Viewed by 3068
Abstract
Waterlogging crop disasters are caused by continuous and excessive soil water in the upper layer of soil. In order to enable waterlogging monitoring, it is important to collect continuous and accurate soil moisture data. The distributed hydrology soil vegetation model (DHSVM) is selected as the basic hydrological model for soil moisture estimation and winter-wheat waterlogging monitoring. To handle the error accumulation of the DHSVM and the poor continuity of remote sensing (RS) inversion data, an agro-hydrological model that assimilates RS inversion data into the DHSVM is used for winter-wheat waterlogging monitoring. The soil moisture content maps retrieved from satellite images are assimilated into the DHSVM by the successive correction method. Moreover, in order to reduce the modeling error accumulation, monthly and real-time RS inversion maps that truly reflect local soil moisture distributions are regularly assimilated into the agro-hydrological modeling process each month. The results show that the root mean square errors (RMSEs) of the simulated soil moisture value at two in situ experiment points were 0.02077 and 0.02383, respectively, which were 9.96% and 12.02% of the measured value. From the accurate and continuous soil moisture results based on the agro-hydrological assimilation model, the waterlogging-damaged ratio and grade distribution information for winter-wheat waterlogging were extracted. The results indicate that there were almost no high-damaged-ratio and severe waterlogging damage areas in Lixin County, which was consistent with the local field investigation. Full article
(This article belongs to the Special Issue Remote Sensing in Agricultural Hydrology and Water Resources Modeling)
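The successive correction step used to assimilate the RS inversion maps into the DHSVM can be illustrated with a minimal NumPy sketch: each pass spreads the observation-minus-model increments over the grid with distance-based weights. The Gaussian (Cressman/Barnes-style) weighting, influence radius, and iteration count are assumptions for illustration, not the paper's operational settings.

```python
import numpy as np

def successive_correction(model_field, obs_xy, obs_val, grid_xy, radius=6.0, n_iter=3):
    """Nudge a gridded model field toward point observations.

    grid_xy[i] must correspond to model_field.ravel()[i]. Each iteration
    spreads the observation-minus-model increments over the grid with
    Gaussian distance weights.
    """
    field = model_field.astype(float).copy()
    d2 = np.sum((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2, axis=2)
    w = np.exp(-d2 / radius ** 2)  # (grid cells, observations)
    nearest = [np.argmin(np.sum((grid_xy - p) ** 2, axis=1)) for p in obs_xy]
    for _ in range(n_iter):
        innov = obs_val - field.ravel()[nearest]  # innovations at the sites
        incr = w @ innov / (w.sum(axis=1) + 1e-8)
        field += incr.reshape(field.shape)
    return field

# Demo on a 20 x 20 grid with two soil-moisture observations
ny, nx = 20, 20
yy, xx = np.mgrid[0:ny, 0:nx]
grid_xy = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
background = np.full((ny, nx), 0.25)
analysis = successive_correction(background, np.array([[5.0, 5.0], [15.0, 12.0]]),
                                 np.array([0.32, 0.18]), grid_xy)
```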
Figures:
Figure 1: (a) Location of the Xifei River Basin; (b) location of Lixin County; (c) locations of the smart soil moisture monitoring instruments (SSMMI).
Figure 2: Simulation results at the SSMMI B location with different (a) spatial and (b) temporal resolutions.
Figure 3: Soil moisture simulated by the DHSVM model without assimilation.
Figure 4: Improvement from assimilating RS inversion maps for SSMMI A: the effect of assimilating (a) one, (b) two, (c) three, and (d) four RS inversion maps.
Figure 5: Final improvement of the agro-hydrological assimilation model for (a) SSMMI A and (b) SSMMI B.
Figure 6: Accuracy verification of the agro-hydrological assimilation model for (a) SSMMI A and (b) SSMMI B.
Figure 7: Monthly samples of the soil moisture distributions produced by the agro-hydrological assimilation model in Lixin County.
Figure 8: Waterlogging-damaged ratio map for the wheat growth period from October 2020 to May 2021.
Figure 9: Grade distribution maps of waterlogging damage, 7–17 June 2021.
Figure 10: Area change of the three damage grades, 7–17 June 2021.
Figure 11: Daily rainfall during the calibration and verification periods.
28 pages, 53843 KiB  
Article
Development of an App and Teaching Concept for Implementation of Hyperspectral Remote Sensing Data into School Lessons Using Augmented Reality
by Claudia Lindner, Andreas Rienow, Karl-Heinz Otto and Carsten Juergens
Remote Sens. 2022, 14(3), 791; https://doi.org/10.3390/rs14030791 - 8 Feb 2022
Cited by 11 | Viewed by 3901
Abstract
For the purpose of expanding STEM (science, technology, engineering, mathematics) education with remote sensing (RS) data and methods, an augmented reality (AR) app was developed in combination with a worksheet and lesson plan. Data from the Hyperspectral Imager for the Coastal Ocean (HICO) were searched for topics applicable to STEM curricula; a suitable topic was found in a harmful algal bloom in Lake Erie, USA, in 2011. Spectral shape algorithms were applied to differentiate between less harmful green and more harmful blue algae in the lake. The data were pre-processed to reduce their size significantly without losing too much information and then integrated into an app developed in Unity with the Vuforia extension. The app is designed to let students browse and understand the raw data as RGB imagery and a tangible hyperspectral cube, as well as to analyze algae maps derived from it. It runs on Android smartphones with minimized data usage, making it less dependent on school funding and the socioeconomic background of students. Using educational concepts such as active and collaborative learning, moderate constructivism, and scientific inquiry, the data were integrated into a lesson about environmental problems enhanced by the AR app. The app and worksheet were evaluated in two advanced geography courses (n = 36) and found to be complex, but doable and understandable, for the target group of German high school students in their final two school years. Thus, hyperspectral data can be used for STEM lessons using AR technology on students' smartphones, with several limitations in both the technology used and the knowledge that can be gained. Full article
(This article belongs to the Collection Teaching and Learning in Remote Sensing)
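The "spectral shape" approach mentioned in the abstract measures how far the reflectance at a centre band departs from the straight line between two flanking bands. Below is a minimal sketch; the band centres, reflectance values, and the sign convention (taking the negative of the spectral shape as a cyanobacteria-style index) are assumptions for illustration rather than the paper's exact algorithm.

```python
def spectral_shape(r_minus, r_center, r_plus, lam_minus, lam_center, lam_plus):
    """Departure of the centre-band reflectance from the straight-line
    baseline between the two flanking bands."""
    frac = (lam_center - lam_minus) / (lam_plus - lam_minus)
    baseline = r_minus + (r_plus - r_minus) * frac
    return r_center - baseline

# Hypothetical reflectances around 681 nm; a cyanobacteria-style index is
# often taken as the negative of the spectral shape at this band.
ss = spectral_shape(0.021, 0.018, 0.024, 665.0, 681.0, 709.0)
print('index =', -ss)
```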
Figures:
Figure 1: Workflow of the study.
Figure 2: HICO bands on the electromagnetic spectrum and band 36 (553 nm), cut and resized to fit the area of interest, but not oriented. Scene H2011246173456 (data source: NASA Ocean Color portal).
Figure 3: Location of the utilized HICO scene H201124617359 in Lake Erie as a true-color image (data source: NASA Ocean Color portal).
Figure 4: Distribution of sample sites (data source: CIGLR).
Figure 5: Correlation between extracted phycocyanin and total cyanobacteria cell count from the CIGLR data (data source: CIGLR).
Figure 6: Correlation between extracted phycocyanin and particulate microcystin from the CIGLR data (data source: CIGLR).
Figure 7: Map of microcystin content derived from phycocyanin content derived from HICO images; not oriented, to better fit the screen in the app (Section 2.2) (data source: NASA Ocean Color portal).
Figure 8: Map of microcystin content derived from chlorophyll a content derived from HICO images; not oriented (Section 2.2) (data source: NASA Ocean Color portal).
Figure 9: Map of extraction points for exemplary spectral signatures: (1) one of the highest chlorophyll values; (2) one of the highest phycocyanin values with a low chlorophyll value; (3) one of the darkest water points with low chlorophyll and phycocyanin values; (4) urban surface in Toledo without vegetation; (5) green fields on the peninsula between Lake Erie and Lake St. Clair; (6) near the Toledo water pump intake; (7) the western corner of Sandusky Bay with medium chlorophyll and very low phycocyanin values; (8) near the Ottawa County water pump intake. (Data source: NASA Ocean Color portal).
Figure 10: Implementation of the RGB viewer.
Figure 11: Spectral signatures as seen in the app UI and their origins; the circle arrow resets all signatures when they get too cluttered (numbering as in Figure 9).
Figure 12: Implementation of the image stack with special rainbow coloring and edge transparency.
Figure 13: Worksheet "Algal bloom in the water supply": (P1) cover sheet, doubling as the marker image for the maps in Figure 14; (P2) tasks 0 (preparatory) to 3; (P4) introduction text to be read in task 1 and the respective marker for the HDEV video; (P7) optional task 5 using the image stack of Figure 12.
Figure 14: Microcystin distribution maps based on chlorophyll a and phycocyanin (English version of the app); students can zoom in to individual pixels by positioning their smartphone closer to the marker image.
Figure 15: Subjective answers regarding the understanding of the app.
Figure 16: Subjective answers regarding the students' opinion of the app.
Figure 17: Answers from both test groups about the properties of hyperspectral data.
Figure 18: Answers from both test groups about the properties of spectral signatures.
Figure 19: Answers from both test groups about the use of hyperspectral data in algal blooms.
Figure 20: Answers from the Q1 group on whether they understood the conclusive task that combines all knowledge gathered through the worksheet and app.
Figure 21: Grades of the Q1 class: evaluation results compared with the written grades achieved in the same semester.
Figure 22: Grades of the Q2 class: evaluation results compared with the written grades achieved in the same semester.
13 pages, 2478 KiB  
Article
Wildfire Dynamics along a North-Central Siberian Latitudinal Transect Assessed Using Landsat Imagery
by Yury Dvornikov, Elena Novenko, Mikhail Korets and Alexander Olchev
Remote Sens. 2022, 14(3), 790; https://doi.org/10.3390/rs14030790 - 8 Feb 2022
Cited by 6 | Viewed by 3173
Abstract
The history of wildfires along a latitudinal transect from forest–tundra to middle taiga in North-Central Siberia was reconstructed for the period from 1985 to 2020 using Landsat imagery. The transect passed through four key regions (75 × 75 km²) with different climate and landscape conditions, which allowed us to evaluate regional wildfire dynamics and estimate differences in post-fire forest recovery. The Level-2A Landsat data (TM, ETM+, and OLI) were used to derive (i) burned area (BA) locations, (ii) timing of wildfire occurrence (date, month, or season), (iii) fire severity, and (iv) trends in post-fire vegetation recovery. We used pre-selected and pre-processed scenes suitable for BA mapping taken within four consecutive time intervals covering the entire period of analysis (1985–2020). Pre- and post-fire dynamics of forest vegetation were described using spectral indices, i.e., NBR and NDVI. We found that during the last three decades, the maximum BA occurred in the southernmost Vanavara region, where ≈58% of the area burned. Total BA gradually decreased to the northwest, with a minimum in the Igarka region (≈1%). Nearly half of these BAs appeared between summer 2013 and autumn 2020 due to a higher frequency of hot and dry weather. The most severe wildfires were detected in the northeasternmost Tura region. Analysis of NDVI and NBR dynamics showed that the mean period of post-fire vegetation recovery ranged between 20 and 25 years; the vegetation recovery time at BAs with repeated wildfires and high severity was significantly longer. Full article
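For reference, the NBR used in the severity analysis (and its change, dNBR) and the NDVI are standard Landsat band ratios; the sketch below writes them out for NIR, red, and SWIR2 reflectance arrays. The function and array names are illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-8)

def nbr(nir, swir2):
    """Normalized Burn Ratio."""
    return (nir - swir2) / (nir + swir2 + 1e-8)

def dnbr(nir_pre, swir2_pre, nir_post, swir2_post):
    """Pre-fire minus post-fire NBR; larger values indicate higher burn severity."""
    return nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
```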
Figures:
Graphical abstract.
Figure 1: Selected key regions along a latitudinal transect in North-Central Siberia (basemap: ESRI©).
Figure 2: Temporal variability in total BA (bar chart) and number of wildfires (line chart) across the four key regions.
Figure 3: Burned areas within the key regions detected from Landsat scenes: (a) Vanavara (green: 4 July 1994; yellow: 4 July 2000; orange: 24 July 2013; red: July 2020); (b) Tura (green: 5 August 2001; yellow: 29 July 2007; orange: 20 July 2013; red: 30 July 2020); (c) Turukhansk (green: 29 August 1987; yellow: 16 August 1994; orange: 18 August 2009; red: 21 August 2019); (d) Igarka (green: 17 July 1998; red: 18 July 2019). Basemap: OpenStreetMap©; datum WGS-84; projection UTM Zone 48N (a), Zone 47N (b), Zone 45N (c,d). Black triangles mark the district centers.
Figure 4: Temporal NDVI variability of BAs burned (i) between 1990 and 1995, (ii) before 1990, and (iii) twice, before 1990 and between 2015 and 2020. The dashed blue line is the median NDVI across 31 undisturbed sites within the four key regions.
Figure 5: dNBR observed within BAs (n = 91) across all study regions; recurrent wildfires are fire episodes with a return interval of less than 20 years. The blue dashed line is the median dNBR over all BAs.
23 pages, 9265 KiB  
Article
Extracting Urban Road Footprints from Airborne LiDAR Point Clouds with PointNet++ and Two-Step Post-Processing
by Haichi Ma, Hongchao Ma, Liang Zhang, Ke Liu and Wenjun Luo
Remote Sens. 2022, 14(3), 789; https://doi.org/10.3390/rs14030789 - 8 Feb 2022
Cited by 17 | Viewed by 3834
Abstract
In this paper, a novel framework for the automatic extraction of road footprints from airborne LiDAR point clouds in urban areas is proposed. The extraction process consists of three phases. First, road points are extracted with the deep learning model PointNet++, where the features of the input data include not only attributes selected from the raw LiDAR points, such as 3D coordinate values and intensity, but also the digital number (DN) of co-registered images and generated geometric features that describe a strip-like road. Second, the road points from PointNet++ are post-processed based on graph-cut and constrained triangulated irregular networks, which greatly reduces both commission and omission errors. Finally, collinearity and width similarity are proposed to estimate the connection probability of road segments, thereby improving the connectivity and completeness of the road network represented by centerlines. Experiments conducted on the Vaihingen data show that the proposed framework outperformed others in terms of completeness and correctness; in addition, some narrower residential streets of 2 m width, which have normally been neglected by previous studies, were extracted. The completeness and correctness of the extracted road points were 84.7% and 79.7%, respectively, while the completeness and correctness of the extracted centerlines were 97.0% and 86.3%, respectively. Full article
(This article belongs to the Special Issue Optical Remote Sensing Applications in Urban Areas II)
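The completeness and correctness figures quoted above are the standard producer's and user's accuracies over matched road points. A small sketch of the definitions follows; the example counts are illustrative numbers chosen only so the printed scores match the reported road-point results, not counts taken from the paper.

```python
def completeness(tp, fn):
    """Fraction of the reference road that was detected (producer's accuracy)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Fraction of the extracted road that is correct (user's accuracy)."""
    return tp / (tp + fp)

# Illustrative counts reproducing the reported road-point scores (0.847, ~0.797)
print(completeness(847, 153), correctness(847, 216))
```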
Figures:
Figure 1: Top view of the Vaihingen dataset: (a) optical image; (b) zoomed-in training area and (c) testing area, corresponding to the blue and yellow boxes in (a), respectively.
Figure 2: Workflow of the proposed framework.
Figure 3: Locations of a reference point and virtual points.
Figure 4: Main direction of a reference point.
Figure 5: Strip-like descriptor of roads and squares (red point: the reference point; yellow points: virtual points with intensity similar to the reference point; blue points: other virtual points; red lines: the main directions; dotted line: outlines of roads, squares, and parking lots).
Figure 6: Strip descriptors of points: (a) T_pt_length; (b) T_ot_div; (c) optical image.
Figure 7: Initial road point classification using PointNet++.
Figure 8: The deep feature learning architecture of PointNet++ [34].
Figure 9: Road points in the testing area: (a) true road points obtained by hand labeling; (b) initial road points extracted with geometric PointNet++; (c,d) zoomed-in views of the selected area in (b).
Figure 10: Road point smoothing via graph-cut [45].
Figure 11: Node selection: (a) nodes selected by the KNN algorithm; (b) flattened cuboid; (c) nodes selected by the improved method.
Figure 12: Shape difference between a road and a parking lot: (a) local road; (b) road network; (c) parking lot.
Figure 13: Triangulated irregular network (TIN): (a) TIN formed by Delaunay triangulation; (b) result of (a) after step 2 described in the text is performed.
Figure 14: Road point extraction results: (a) road points extracted by PointNet++ with geometric features; (b) smoothing with graph-cut; (c) clustered non-road point removal; (d) road points obtained by hand editing; (e) optical image of the corresponding area.
Figure 15: Collinearity measurement.
Figure 16: Centerline extraction results: (a) road points after two-step post-processing; (b) centerlines extracted by [21]; (c) centerlines extracted by the proposed method; (d) true centerlines obtained by hand editing.
Figure 17: Road point extraction results: (a) true roads obtained by hand editing; (b) PointNet++ with 6 output classes (only road points displayed); (c) PointNet++ with only two output classes, road and non-road (only road points displayed); (d) input data first filtered with TIN, then classified into two classes with the 16-D vector representing the input points; (e) as (d), but with the 9 strip descriptors dropped from the vector.
Figure 18: Road point extraction after two-step post-processing: (a) graph-cut smoothing of the road points in Figure 17d; (b) clustered non-road point removal from (a) based on CTINs.
Figure 19: Centerline extraction result: (a) true centerlines hand-edited from Figure 17a; (b) duplicate of Figure 18b for comparison; (c) centerlines extracted from (b) by the proposed framework.
Figure 20: Road points extracted at different point densities: (a) true road points; (b) 4 points/m²; (c) 1.0 points/m²; (d) 0.25 points/m².
23 pages, 105652 KiB  
Article
Time-Varying Surface Deformation Retrieval and Prediction in Closed Mines through Integration of SBAS InSAR Measurements and LSTM Algorithm
by Bingqian Chen, Hao Yu, Xiang Zhang, Zhenhong Li, Jianrong Kang, Yang Yu, Jiale Yang and Lu Qin
Remote Sens. 2022, 14(3), 788; https://doi.org/10.3390/rs14030788 - 8 Feb 2022
Cited by 27 | Viewed by 4227
Abstract
After a coal mine is closed, the coal rock mass could undergo weathering deterioration and strength reduction due to factors such as stress and groundwater, which in turn changes the stress and bearing capacity of the fractured rock mass in the abandoned goaf, leading to secondary or multiple surface deformations in the goaf. Currently, the spatiotemporal evolution pattern of the surface deformation of closed mines remains unclear, and there is no integrated monitoring and prediction model for closed mines. Therefore, this study proposed to construct an integrated monitoring and prediction model for closed mines using small baseline subset (SBAS) interferometric synthetic aperture radar (InSAR) and a deep learning-based long short-term memory (LSTM) neural network algorithm to capture the evolution pattern of spatiotemporal surface deformation of closed mines and predict it dynamically. Taking a closed mine in the western part of Xuzhou, China, as an example, and based on Sentinel-1A SAR data between 21 December 2015 and 11 January 2021, SBAS InSAR technology was used to obtain the spatiotemporal evolution pattern of the surface during the five years after mine closure. The results showed that the ground surface subsided in the early stage of mine closure and then uplifted. Over five years, the maximum subsidence rate in the study area was −43 mm/a, with a cumulative maximum subsidence of 310 mm; the maximum uplift rate was 29 mm/a, with a cumulative maximum uplift of 135 mm. Moreover, the maximum tilt and curvature were 3.5 mm/m and 0.19 mm/m², respectively, which are beyond the safety thresholds of buildings; thus, continuous monitoring is necessary. Based on the evolution pattern of surface deformation, a surface deformation prediction model was proposed by integrating SBAS InSAR and an LSTM neural network. The experimental results showed that the LSTM neural network can accurately predict the deformation trend, with a maximum root mean square error (RMSE) of 5.1 mm. Finally, the relationship between the residual surface deformation and the time after mine closure was analyzed, and the mechanisms of surface subsidence and uplift were discussed, providing a theoretical reference for better understanding the surface deformation process of closed mines and for preventing surface deformation. Full article
(This article belongs to the Special Issue SAR in Big Data Era II)
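The prediction stage could be prototyped with a small Keras LSTM trained on sliding windows of one point's deformation series, along the lines of the sketch below. The synthetic series, lookback length, network size, and training split are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, lookback=10):
    """Turn a 1-D deformation time series into (samples, lookback, 1) windows."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

# Hypothetical cumulative-deformation series (mm) at one SBAS InSAR point
t = np.arange(150)
series = -0.3 * t + 5 * np.sin(t / 10) + np.random.default_rng(2).normal(0, 1, 150)
X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X[:120], y[:120], epochs=50, verbose=0)   # train on the early epochs

pred = model.predict(X[120:], verbose=0).ravel()    # forecast the held-out tail
rmse = float(np.sqrt(np.mean((pred - y[120:]) ** 2)))
print('RMSE (mm):', rmse)
```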
Figures:
Graphical abstract.
Figure 1: Structure of the LSTM recurrent unit. The red A is the cell state at the previous moment X(t−1), the black A the cell state at the next moment X(t+1), and the middle cell the cell state at the current moment X(t).
Figure 2: Flowchart of the integrated SBAS InSAR-LSTM model.
Figure 3: Study area: (a) location of the study area; (b) distribution of the working panels, shown as quadrilaterals of different colors. The blue line is the mine boundary, and the red dots are second-order leveling monitoring points.
Figure 4: SAR image spatiotemporal baseline diagram; the red dot is the common master image and the green dots are the slave images.
Figure 5: Average annual deformation rate in the LOS direction between 21 December 2015 and 11 January 2021.
Figure 6: Vertical deformation time series between 21 December 2015 and 11 January 2021. A: area of maximum surface subsidence rate at the Pangzhuang Mine; C: main uplift area at the junction of the Jiahe and Pangzhuang Mines; D: surface uplift area at the Pangzhuang Mine; E: area at the Jiahe Mine that began subsiding within 150 days of mine closure (21 December 2015 to 23 May 2016).
Figure 7: Distribution of tilt in each mine between 21 December 2015 and 11 January 2021; red stars mark the points that need particular attention.
Figure 8: Distribution of curvature in each mine between 21 December 2015 and 11 January 2021.
Figure 9: Time series at the maximum deformation points of each mine: (a) maximum uplift; (b) maximum subsidence; (c,d) maximum tilt; (e,f) maximum curvature. Dotted lines mark the turning points of the deformation trend.
Figure 10: (a) PS InSAR average annual LOS deformation rate between 21 December 2015 and 11 January 2021; (b) fit of the deformation rates of 13,544 common points obtained by PS InSAR and SBAS InSAR.
Figure 11: Comparison of SBAS InSAR results and leveling data: (a) comparison for the periods 18 November-18 December 2020 and 18 December 2020-11 January 2021; (b) fit of SBAS InSAR results against leveling data. The leveling data were collected with a Trimble DiNi03 electronic level with an accuracy of ±0.3 mm per kilometer round trip.
Figure 12: Predictions of (a–e) LSTM, (f–j) BP, and (k–o) SVR at 20 August 2020, 25 September 2020, 31 October 2020, 6 December 2020, and 11 January 2021.
Figure 13: Residual errors of (a–e) LSTM, (f–j) BP, and (k–o) SVR at the same five dates.
Figure 14: Relationship between the cumulative subsidence or uplift of each working panel and the mining termination time: (a–c) cumulative uplift; (d–h) cumulative subsidence; (e–g) first subsidence and then uplift.
Figure 15: Schematic diagram of changes in the overlying strata and surface; I, II, and III are the bent zone, fractured zone, and caving zone, respectively. (a) Original state of the three zones formed by coal mining; (b) changes in the overlying strata and surface under load; (c) changes under the influence of groundwater.
18 pages, 3352 KiB  
Article
Radiometric Assessment of ICESat-2 over Vegetated Surfaces
by Amy Neuenschwander, Lori Magruder, Eric Guenther, Steven Hancock and Matt Purslow
Remote Sens. 2022, 14(3), 787; https://doi.org/10.3390/rs14030787 - 8 Feb 2022
Cited by 18 | Viewed by 3355
Abstract
The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is providing global elevation measurements to the science community. ICESat-2 measures the height of the Earth's surface using a photon-counting laser altimeter, ATLAS (Advanced Topographic Laser Altimeter System). As a photon-counting system, the number of reflected photons per shot, or radiometry, is primarily a function of the transmitted laser energy, solar elevation, surface reflectance, and atmospheric scattering and attenuation. In this paper, we explore the relationship between detected scattering and attenuation in the atmosphere and the observed radiometry for three general forest types, as well as the radiometry for day versus night acquisitions. Through this analysis, we found that ATLAS strong-beam radiometry exceeds the pre-launch design cases for boreal and tropical forests but falls short of the predicted radiometry over temperate forests by approximately half a photon. The weak beams, in contrast, exceed all pre-launch conditions by a factor of two to six over all forest types. We also observe that the signal radiometry from day acquisitions is lower than that from night acquisitions by 10% and 40% for the strong and weak beams, respectively. We also found that the detection ratio between each beam pair was lower than the predicted 4:1 value. Finally, this research presents the concept of ICESat-2 radiometric profiles; these profiles provide a path for calculating vegetation structure. The results from this study are intended to be informative and perhaps serve as a benchmark for filtering or analysis of the ATL08 data products over vegetated surfaces. Full article
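The notion of radiometry used above (signal photons per transmitted shot) can be made concrete with a small sketch. The arrays, class labels, and shot grouping below are hypothetical stand-ins, not the actual ATL08 product fields.

```python
# Sketch: per-shot radiometry from classified photon returns.
# The input arrays and class labels are hypothetical stand-ins, not ATL08 fields.
import numpy as np

rng = np.random.default_rng(0)
n_photons = 10_000
shot_id = rng.integers(0, 1_000, n_photons)   # which laser shot each photon belongs to
label = rng.choice(["ground", "canopy", "noise"], n_photons, p=[0.4, 0.3, 0.3])

n_shots = shot_id.max() + 1

def photons_per_shot(cls):
    # Count signal photons of one class per shot, then average over all shots.
    counts = np.bincount(shot_id[label == cls], minlength=n_shots)
    return counts.mean()

for cls in ("ground", "canopy"):
    print(cls, round(photons_per_shot(cls), 2))
print("total signal", round(photons_per_shot("ground") + photons_per_shot("canopy"), 2))
```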
Show Figures
Figure 1. Beam configuration for the ATLAS instrument on ICESat-2. Beams 1, 3, and 5 are strong beams; beams 2, 4, and 6 are weak beams with one-quarter of the energy. Spot 7 on the diagram is a virtual point representing the sub-satellite point.
Figure 2. Study areas used in this research, representing three general forest types: boreal, temperate, and tropical.
Figure 3. Profiles of the labeled photons from the ATL08 algorithm for three different forest types: (a) Alberta, boreal forest; (b) Germany, temperate forest; and (c) Tapajos, tropical forest. The strong beams are on the top row and the weak beams on the bottom row.
Figure 4. Radiometric histograms for ground (top), canopy (center), and total (bottom) from the Alberta data set, representing boreal forest in no-snow conditions. The histograms for the strong beams are shown on the left and those for the weak beams on the right.
Figure 5. Radiometric histograms for ground (top), canopy (center), and total (bottom) from the Germany/Czech Republic data set, representing temperate forest in non-snow conditions. The histograms for the strong beams are shown on the left and those for the weak beams on the right.
Figure 6. Radiometric histograms for ground (top), canopy (center), and total (bottom) from the Tapajos data set, representing tropical forest. The histograms for the strong beams are shown on the left and those for the weak beams on the right.
Figure 7. Illustration highlighting the role of reflectance in canopy cover determination.
Figure 8. Scatter plots of ground radiometry versus canopy radiometry for each of the six study locations. These plots represent the observed ATL08 segment radiometric profiles.
Figure 9. Radiometric profile for Finland in non-snow conditions during November (left) and in snow conditions (right).
24 pages, 3136 KiB  
Article
Potential of Airborne LiDAR Derived Vegetation Structure for the Prediction of Animal Species Richness at Mount Kilimanjaro
by Alice Ziegler, Hanna Meyer, Insa Otte, Marcell K. Peters, Tim Appelhans, Christina Behler, Katrin Böhning-Gaese, Alice Classen, Florian Detsch, Jürgen Deckert, Connal D. Eardley, Stefan W. Ferger, Markus Fischer, Friederike Gebert, Michael Haas, Maria Helbig-Bonitz, Andreas Hemp, Claudia Hemp, Victor Kakengi, Antonia V. Mayr, Christine Ngereza, Christoph Reudenbach, Juliane Röder, Gemma Rutten, David Schellenberger Costa, Matthias Schleuning, Axel Ssymank, Ingolf Steffan-Dewenter, Joseph Tardanico, Marco Tschapka, Maximilian G. R. Vollstädt, Stephan Wöllauer, Jie Zhang, Roland Brandl and Thomas Nauss
Remote Sens. 2022, 14(3), 786; https://doi.org/10.3390/rs14030786 - 8 Feb 2022
Cited by 2 | Viewed by 4641
Abstract
The monitoring of species and functional diversity is of increasing relevance for the development of strategies for the conservation and management of biodiversity. Therefore, reliable estimates of the performance of monitoring techniques across taxa become important. Using a unique dataset, this study investigates the potential of airborne LiDAR-derived variables characterizing vegetation structure as predictors of animal species richness on the southern slopes of Mount Kilimanjaro. To disentangle the structural LiDAR information from co-factors related to elevational vegetation zones, LiDAR-based models were compared with the predictive power of elevation models. Seventeen taxa and four feeding guilds were modeled, and the standardized study design allowed a comparison across the assemblages. Results show that most taxa (14) and feeding guilds (3) are predicted best by elevation in terms of normalized RMSE values, but only for three of these taxa and two of these feeding guilds is the difference from the other models significant. Generally, modeling performances between the different models vary only slightly for each assemblage. For the remaining assemblages, structural information contributed at most little additional predictive performance. In summary, LiDAR observations can be used for predicting animal species richness. However, the effort and cost of aerial surveys are not always in proportion to the prediction quality, especially when the species distribution follows zonal patterns and elevation information yields similar results. Full article
(This article belongs to the Special Issue Machine Learning Methods for Environmental Monitoring)
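The modeling workflow referenced in the abstract (PLSR with forward feature selection, scored by cross-validated RMSE normalized by the standard deviation) can be sketched as follows; the random data and the stopping rule are illustrative assumptions rather than the study's exact protocol.

```python
# Sketch: PLSR with greedy forward feature selection, scored by cross-validated
# RMSE normalized by the standard deviation of the target (RMSE/sd).
# Random data stands in for the LiDAR structure metrics and richness counts.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))                    # 10 candidate structural variables
y = X[:, 0] * 2 + X[:, 3] + rng.normal(size=60)  # richness driven by vars 0 and 3

def cv_nrmse(cols):
    model = PLSRegression(n_components=min(2, len(cols)))
    mse = -cross_val_score(model, X[:, cols], y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    return np.sqrt(mse) / y.std()

selected, remaining, best = [], list(range(X.shape[1])), np.inf
while remaining:
    scores = {c: cv_nrmse(selected + [c]) for c in remaining}
    c, s = min(scores.items(), key=lambda kv: kv[1])
    if s >= best:            # stop once adding a variable no longer helps
        break
    selected.append(c); remaining.remove(c); best = s
print("selected variables:", selected, "RMSE/sd:", round(best, 3))
```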
Show Figures
Figure 1. Study area with sampling plots. Colors of symbols show different land covers; shapes show the different flight missions from 2015 and 2016. The background image indicates the large-scale vegetation zones along the elevational gradient (background: Google Maps [51]).
Figure 2. The model training (upper right loop) uses partial least squares regression (PLSR) and a forward feature selection with 20-fold cross-validation. Validation is carried out by predicting the values of the testing plots. The division into testing and training plots (outer loop) follows a repeated stratified sampling approach, with randomly chosen resamples of one plot per land cover for testing, leaving the rest of the plots for training. Validation is based on the median root mean square error (RMSE) of the individual resamples, normalized by the standard deviation of these RMSE values.
Figure 3. Modeling performances for each taxon (a) and feeding guild (b) in terms of the root mean square error normalized by the standard deviation (RMSE/sd). Smaller values indicate better model performance. Colors represent the different model types. Taxa are grouped into "elevation", "structure" and "combination" depending on which of the three models shows the best median RMSE/sd. Stars indicate that the best model is significantly better than both of the other models. Within the groups, taxa and feeding guilds are sorted by descending RMSE/sd. The boxes include the median and the interquartile range (IQR), with notches indicating roughly the 95% confidence interval. Whiskers extend to ±1.5 times the IQR, and points indicate single error values outside this range.
Figure 4. Modeling performance for the residuals of the elevation model for each taxon (a) or feeding guild (b) as the root mean square error normalized by the standard deviation (RMSE/sd). Taxa (a) and feeding guilds (b) are sorted by increasing median modeling performance and therefore increasing influence of vegetation structure on the target variable (i.e., decreasing median RMSE/sd). For a description of the plot elements, see Figure 3.
Figure A1. Variable selection for the structure model of each taxon in forest. Colors show how often variables were included during 20-fold cross-validation. Structural variables are sorted by the total number of times they were selected. See Woellauer et al. [54] for variable details.
Figure A2. Variable selection of structural variables for the residual model and combined model of each taxon in forest. Colors show how often variables were included during 20-fold cross-validation. Structural variables are sorted by the total number of times they were selected. See Woellauer et al. [54] for variable details.
Figure A3. Variable selection for the structure model of each taxon in non-forest. Colors show how often variables were included during 20-fold cross-validation. Structural variables are sorted by the total number of times they were selected. See Woellauer et al. [54] for variable details.
Figure A4. Variable selection of structural variables for the residual model and combined model of each taxon in non-forest. Colors show how often variables were included during 20-fold cross-validation. Structural variables are sorted by the total number of times they were selected. See Woellauer et al. [54] for variable details.
17 pages, 3786 KiB  
Article
An Investigation of a Multidimensional CNN Combined with an Attention Mechanism Model to Resolve Small-Sample Problems in Hyperspectral Image Classification
by Jinxiang Liu, Kefei Zhang, Suqin Wu, Hongtao Shi, Yindi Zhao, Yaqin Sun, Huifu Zhuang and Erjiang Fu
Remote Sens. 2022, 14(3), 785; https://doi.org/10.3390/rs14030785 - 8 Feb 2022
Cited by 32 | Viewed by 5010
Abstract
The convolutional neural network (CNN) method has been widely used in the classification of hyperspectral images (HSIs). However, the efficiency and accuracy of HSI classification are inevitably degraded when only small samples are available. This study proposes a multidimensional CNN model named MDAN, constructed with an attention mechanism, to achieve ideal CNN classification performance within the framework of few-shot learning. In this model, a three-dimensional (3D) convolutional layer obtains spatial–spectral features from the 3D volumetric data of the HSI. Subsequently, two-dimensional (2D) and one-dimensional (1D) convolutional layers further learn spatial and spectral features efficiently at an abstract level. Based on the widely used convolutional block attention module (CBAM), this study investigates a convolutional block self-attention module (CBSM) to improve accuracy by changing the connections between attention blocks. The CBSM is used with the 2D convolutional layer for better HSI classification performance. The MDAN model is applied to HSI classification, and its performance is evaluated against the support vector machine (SVM), 2D CNN, 3D CNN, 3D–2D–1D CNN, and CBAM. The findings of this study indicate that the MDAN model achieves overall classification accuracies of 97.34%, 96.43%, and 92.23% for the Salinas, WHU-Hi-HanChuan, and Pavia University datasets, respectively, when only 1% of the HSI data were used for training. The training and testing times of the MDAN model are close to those of the 3D–2D–1D CNN, which has the highest efficiency among all comparative CNN models. The attention module CBSM introduced into MDAN achieves an overall accuracy about 1% higher than that of the CBAM model. The performance of the two proposed methods is superior to the other models in terms of both efficiency and accuracy. The results show that the combination of multidimensional CNNs and attention mechanisms is well suited to small-sample problems in HSI classification. Full article
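Since the paper's CBSM is a rewiring of the standard convolutional block attention module, a sketch of the conventional CBAM-style channel and spatial attention that it modifies may help; the layer sizes below are illustrative, and the exact CBSM connections are not reproduced here.

```python
# Sketch of CBAM-style channel and spatial attention in Keras. The CBSM of the
# paper changes how these blocks are connected; only the standard CBAM form,
# with illustrative sizes, is shown here.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    ch = x.shape[-1]
    shared1 = layers.Dense(ch // reduction, activation="relu")
    shared2 = layers.Dense(ch)
    avg = shared2(shared1(layers.GlobalAveragePooling2D()(x)))
    mx = shared2(shared1(layers.GlobalMaxPooling2D()(x)))
    w = layers.Activation("sigmoid")(layers.Add()([avg, mx]))
    return layers.Multiply()([x, layers.Reshape((1, 1, ch))(w)])

def spatial_attention(x, kernel=7):
    avg = tf.reduce_mean(x, axis=-1, keepdims=True)   # average over channels
    mx = tf.reduce_max(x, axis=-1, keepdims=True)     # max over channels
    w = layers.Conv2D(1, kernel, padding="same",
                      activation="sigmoid")(layers.Concatenate()([avg, mx]))
    return layers.Multiply()([x, w])

inp = layers.Input(shape=(11, 11, 64))    # e.g., a patch of 2D-conv features
out = spatial_attention(channel_attention(inp))
model = tf.keras.Model(inp, out)
model.summary()
```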
Show Figures

Graphical abstract
Fig">
Figure 1. Architecture of the MDAN model; notably, the attention module CBSM is used to obtain the modified Feature 4.
Figure 2. Structure of CBSM.
Figure 3. Ground truth of the three datasets: (a) SA; (b) WHU; (c) PU. Note that the category numbers denoted by the color bar on the right are the same category numbers as in Tables 1–3.
Figure 4. Ground truth and classification results of the SA dataset: (a) ground truth; (b) SVM; (c) 2D CNN; (d) 3D CNN; (e) 3D–2D–1D CNN; (f) 3D–2D–CBAM–1D CNN; (g) MDAN (3D–2D–CBSM–1D CNN). Classes with lower classification accuracy are highlighted by the yellow and red circles.
Figure 5. Ground truth and classification results of the WHU dataset: (a) ground truth; (b) SVM; (c) 2D CNN; (d) 3D CNN; (e) 3D–2D–1D CNN; (f) 3D–2D–CBAM–1D CNN; (g) MDAN (3D–2D–CBSM–1D CNN). Classes with lower classification accuracy are highlighted by the yellow and red circles.
Figure 6. Ground truth and classification results of the PU dataset: (a) ground truth; (b) SVM; (c) 2D CNN; (d) 3D CNN; (e) 3D–2D–1D CNN; (f) 3D–2D–CBAM–1D CNN; (g) MDAN (3D–2D–CBSM–1D CNN). Classes with lower classification accuracy are highlighted by the yellow and red circles.
Figure 7. Confusion matrices of MDAN on the three selected datasets: (a) confusion matrix for the SA dataset; (b) confusion matrix for the WHU dataset; (c) confusion matrix for the PU dataset.
Figure 8. Accuracy and loss convergence versus epochs of the proposed MDAN model on the WHU, SA, and PU datasets: (a) accuracies; (b) loss.
24 pages, 66358 KiB  
Article
Integration of DInSAR Time Series and GNSS Data for Continuous Volcanic Deformation Monitoring and Eruption Early Warning Applications
by Brianna Corsa, Magali Barba-Sevilla, Kristy Tiampo and Charles Meertens
Remote Sens. 2022, 14(3), 784; https://doi.org/10.3390/rs14030784 - 8 Feb 2022
Cited by 11 | Viewed by 4634
Abstract
With approximately 800 million people globally living within 100 km of a volcano, it is essential that we build reliable observation systems capable of delivering early warnings to potentially impacted nearby populations. Global Navigation Satellite System (GNSS) and satellite Synthetic Aperture Radar (SAR) measurements document comprehensive ground motions or ruptures near, and at, the Earth's surface and may be used to detect and analyze natural hazard phenomena. These datasets may also be combined to improve the accuracy of deformation results. Here, we prepare a differential interferometric SAR (DInSAR) time series and integrate it with GNSS data to create a fused dataset with enhanced accuracy of 3D ground motions over the Island of Hawaii from November 2015 to April 2021. We present a comparison of the raw datasets against the fused time series and give a detailed account of observed ground deformation leading to the May 2018 and December 2020 volcanic eruptions. Our results provide important new estimates of the spatial and temporal dynamics of the 2018 Kilauea volcanic eruption. The methodology presented here can easily be repeated over any region of interest where a SAR scene overlaps with GNSS data. The results will contribute to diverse geophysical studies, including but not limited to the classification of precursory movements leading to major eruptions and the advancement of early warning systems. Full article
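One building block of the fusion described above is interpolating scattered GNSS displacements onto the DInSAR grid, for which the study uses ordinary kriging. A minimal sketch with the pykrige package follows; the station coordinates, toy displacement field, and spherical variogram model are assumptions.

```python
# Sketch: ordinary kriging of scattered GNSS displacements onto a grid,
# using the pykrige package. Station coordinates, displacements, and the
# spherical variogram model are illustrative assumptions.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(2)
lon = rng.uniform(-155.9, -154.9, 48)   # ~48 stations, as in the study
lat = rng.uniform(19.1, 20.1, 48)
# Toy vertical-displacement signal (mm) centered near Kilauea's longitude/latitude.
up_mm = 100 * np.exp(-((lon + 155.3) ** 2 + (lat - 19.4) ** 2) / 0.05)

ok = OrdinaryKriging(lon, lat, up_mm, variogram_model="spherical")
grid_lon = np.linspace(-155.9, -154.9, 100)
grid_lat = np.linspace(19.1, 20.1, 100)
z, ss = ok.execute("grid", grid_lon, grid_lat)  # interpolated field + variance
print(z.shape, float(z.max()))
```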
Show Figures

Graphical abstract
Fig">
Figure 1. Region of study over the Big Island of Hawaii. The red outline shows the extent of the SAR scenes used for this study, Path 87 Frame 526, downloaded from the Alaska Satellite Facility Vertex portal [21]. The yellow box shows the cropped outline of each interferogram used when generating the time series. Grey circles and colored diamonds indicate GNSS station locations, which were used to create the kriging-interpolated GNSS maps in Section 2.3. Twenty-four-hour final-solution GNSS time series data, from the stations listed in Table 1 and aligned to the local, fixed Pacific Plate reference frame, were obtained through the Nevada Geodetic Laboratory (NGL), University of Nevada, Reno (http://geodesy.unr.edu/ (accessed on 1 November 2021)). Stations are maintained by the USGS HVO [22,23], and data are archived and distributed by the UNAVCO GAGE facility. We take a closer look at the time series at the diamond GNSS locations in Section 3.2, where the blue diamond is the NUPM GNSS station, the purple diamond corresponds to the CRIM station, orange represents the MKEA station, and green is the BLBP station. The blue triangle shows the location of the Mauna Loa volcano summit, and the yellow polygon indicates the East Rift Zone. Background image from Google Earth/Data SIO, NOAA, U.S. Navy, NGA, GEBCO/Data LDEO-Columbia, NSF, NOAA; imagery date: 13 December 2015.
Figure 2. Example of an unwrapped LOS phase interferogram between 23 April 2018 and 4 June 2018 over the Big Island of Hawaii (Path 87 Frame 526), in units of radians, from Stage 1 of the automated processing routine based on GMTSAR. Warm colors and positive values are concentrated along the ERZ and represent an increase in slant range, corresponding to ground motion away from the satellite over this time; cool colors and negative values represent a decrease in slant range, meaning the ground moved towards the satellite. This color convention is reversed when the units of radians are converted to millimeters of deformation.
Figure 3. Cumulative LOS displacement DInSAR time series results for Sentinel-1A/B Path 87 Frame 526 data over the Big Island of Hawaii from November 2015 to April 2021, showing twelve of the 275 time-steps of the 5.5-year time series. Each submap corresponds to the total deformation between (a) 11 November 2015 and (b) 9 May 2016; (c) 5 November 2016; (d) 4 May 2017; (e) 18 November 2017; (f) 5 May 2018; (g) 7 November 2018; (h) 6 May 2019; (i) 14 November 2019; (j) 12 May 2020; (k) 8 November 2020; and (l) 13 April 2021. Once the phase is converted to units of millimeters, the sign convention in GIAnT changes: warm, positive colors represent regions of uplift, and cool, negative colors correspond to subsidence.
Figure 4. Interpolated GNSS displacement maps generated using 48 Hawaiian GNSS stations and the ordinary kriging algorithm. Each submap corresponds to the total deformation between 11 November 2015 and (a) 23 December 2016; (b) 18 November 2017; (c) 28 June 2018; (d) 14 November 2019; (e) 12 May 2020; and (f) 13 April 2021.
Figure 5. Integrated DInSAR+GNSS 3D cumulative displacement maps from November 2015 to April 2021 in the east, north, and up components of motion. Each submap corresponds to the total deformation between 22 November 2015 and (a) 5 November 2016; (b) 14 August 2017; (c) 22 February 2018; (d) 5 May 2018; (e) 28 June 2018; (f) 31 December 2018; (g) 14 November 2019; (h) 14 December 2020; and (i) 13 April 2021. Subplots (a–c) are pre-eruption (the eruption occurred on 4 May 2018). In subplot (a), the blue triangle corresponds to the summit of Mauna Loa, the purple diamond represents the summit of Kilauea, and the yellow polygon overlays the ERZ.
Figure 6. Comparison of (a) the final time-step of the integrated time series converted to LOS with (b) the final cumulative LOS DInSAR scene and (c) the final cumulative GNSS interpolated map, also converted to LOS.
Figure 7. Comparison of (a) the final time-step from the integrated time series with (b) the final time-steps of the GNSS variograms interpolated with kriging in the east, north, and up directions of motion.
Figure 8. Integrated results compared with the original, raw GNSS time series in the (a) east, (b) north, and (c) up components of motion at the CRIM GNSS station (19.395°N, −155.274°W). (d) DInSAR LOS time series at the same pixel, over the CRIM station. Yellow vertical lines indicate the May 2018 and December 2020 volcanic eruptions. The inset in subfigure (a) shows the location of the CRIM station in Hawaii.
Figure 9. Integrated results compared with the original, raw GNSS time series in the (a) east, (b) north, and (c) up components of motion at the NUPM GNSS station (19.385°N, −155.175°W). (d) DInSAR LOS time series at the same pixel, over the NUPM station. The results clearly distinguish the Mw 6.9 earthquake rupture in 2018 and continued motion due to volcanic activity. Yellow lines are as in Figure 8. The inset in subfigure (a) shows the location of the NUPM station in Hawaii.
Figure 10. Integrated results compared with the original, raw GNSS time series in the (a) east, (b) north, and (c) up components of motion at the MKEA GNSS station (19.801°N, −155.456°W). Motion in the east–west and north–south directions is slightly more constrained, while motion in the up–down direction is significantly transformed after combining the DInSAR and GNSS datasets. (d) DInSAR LOS time series at the same pixel, over the MKEA station. Yellow lines are as in Figure 8. The inset in subfigure (a) shows the location of the MKEA station in Hawaii.
Figure 11. Integrated results compared with the original, raw GNSS time series in the (a) east, (b) north, and (c) up components of motion at the BLBP GNSS station (19.355°N, −155.711°W). (d) DInSAR LOS time series at the same pixel, over the BLBP station. Yellow lines are as in Figure 8. The inset in subfigure (a) shows the location of the BLBP station in Hawaii.
11 pages, 6526 KiB  
Communication
Investigation of Turbulent Tidal Flow in a Coral Reef Channel Using Multi-Look WorldView-2 Satellite Imagery
by George Marmorino
Remote Sens. 2022, 14(3), 783; https://doi.org/10.3390/rs14030783 - 8 Feb 2022
Cited by 5 | Viewed by 2208
Abstract
The general topic here is the application of high-resolution satellite imagery to the study of ocean phenomena having horizontal length scales of several meters to a few kilometers. The present study investigates whether multiple images acquired quite closely in time can be used to derive a spatial map of the surface current in situations where the near-surface hydrodynamics are dominated by bed-generated turbulence and associated wave–current interaction. The approach is illustrated using imagery of turbulent tidal flow in a channel through the outer part of the Great Barrier Reef. The main result is that currents derived from the imagery are found to reach speeds of nearly 4 m/s during a flooding tide—three times larger than published values for other parts of the Reef. These new findings may have some impact on our understanding of the transport of tracers and particles over the shelf. Full article
(This article belongs to the Special Issue Remote Sensing of the Aquatic Environments-Part II)
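The core PIV step implied by the abstract, estimating the displacement of surface texture between two closely spaced looks and converting it to velocity, can be sketched with phase cross-correlation; the synthetic patches, 2 m pixel size, and sign handling below are illustrative assumptions.

```python
# Sketch: one PIV interrogation step via phase cross-correlation, as an
# illustration of deriving currents from two closely spaced satellite looks.
# Window contents are synthetic; the 2 m pixel size is an assumed value.
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(3)
look1 = rng.normal(size=(128, 128))              # interrogation window at time t1
shift_true = (5, -3)                             # pixels of water-surface advection
look2 = np.roll(look1, shift_true, axis=(0, 1))  # same texture, displaced

shift, error, _ = phase_cross_correlation(look1, look2, upsample_factor=10)
pixel_size_m, dt_s = 2.0, 11.6
velocity = -shift * pixel_size_m / dt_s          # negate: returned shift registers look2 back onto look1
speed = float(np.hypot(*velocity))
print("estimated shift (px):", shift, "speed (m/s): %.2f" % speed)
```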
Show Figures
Figure 1. Historical LANDSAT-8 image of a part of the Great Barrier Reef, showing flow circulation patterns made visible by material suspended in the water. The image was collected near the end of a flood tide on 23 August 2013, at 00:01 UTC. The focus of the present study is Molar Reef, which is labeled.
Figure 2. (a) Molar Reef as seen by the WorldView-2 satellite at 23:37 UTC 22 August 2020. Shown are data from bands 5, 3, and 2 (red, green, and blue wavelengths), but the color ranges have been stretched to reveal both water-surface scattering and water color changes across the channel. The red rectangle shows the analysis area, which is aligned with the main channel between Molar Reef and the unnamed reef lying to the southeast. (b) Bathymetry at 100-m spatial resolution [17]. The scale bar indicates depth in meters.
Figure 3. Predictions of water surface elevation and surface current speed coinciding with WorldView-2 data collected at 23:37 UTC 22 August 2020 (a,b) and 23:59 UTC 4 September 2020 (c,d), as indicated by the two dashed vertical lines. The data shown were extracted from the "ereefs" model archive at the center of the study area shown in Figure 2.
Figure 4. Example of the evolution of a boil (feature "A") over time. (a–i) Approximate Lagrangian views of the same area of water surface, as captured in nine consecutive looks from the mid-flood case (Table 1, 22 August 2020). Times shown are relative to look 1; data are from the panchromatic band. Each panel has a size of 256 m by 256 m, which is the same size as the interrogation window used in the PIV calculations. Local water depth is 36 m.
Figure 5. Velocity field at mid-flood as derived from PIV analysis of an image pair (looks 6 and 7) having a time separation of Δt = t2 − t1 = 11.6 s (see Table 1). (a) Red-edge imagery at time t1; (b) velocity vectors; (c) velocity magnitude. Black areas are a land mask. The longest vector in (b) has a length of 4.37 m/s. The dashed rectangle in (c) shows the area used for spatial averaging.
Figure 6. Velocity field at late flood as derived from PIV analysis of an image pair (looks 5 and 6) having a time separation of Δt = t2 − t1 = 12.6 s. (a) Red-edge imagery at time t1; (b) velocity vectors; (c) velocity magnitude. Large, relatively bright and dark patches in the imagery are atmospheric artifacts. The longest vector in (b) has a length of 2.83 m/s. Note that the magnitude scaling differs from Figure 5, as the upper limit here is only 3 m/s.
Figure 7. Ensemble-averaged result for the mid-flood case: (a) velocity vectors; (b) velocity magnitude; (c) root-mean-square values. The results derive from PIV analyses of eight consecutive image pairs (see Table 1, 22 August 2020).
19 pages, 4355 KiB  
Article
Snow Coverage Mapping by Learning from Sentinel-2 Satellite Multispectral Images via Machine Learning Algorithms
by Yucheng Wang, Jinya Su, Xiaojun Zhai, Fanlin Meng and Cunjia Liu
Remote Sens. 2022, 14(3), 782; https://doi.org/10.3390/rs14030782 - 8 Feb 2022
Cited by 15 | Viewed by 5326
Abstract
Snow coverage mapping plays a vital role not only in studying hydrology and climatology, but also in investigating crop disease overwintering for smart agriculture management. This work investigates snow coverage mapping by learning from Sentinel-2 satellite multispectral images via machine-learning methods. To this end, the largest dataset for snow coverage mapping (to the best of our knowledge), with three typical classes (snow, cloud and background), is first collected and labeled via the semi-automatic classification plugin in QGIS. Then, both random forest-based conventional machine learning and U-Net-based deep learning are applied to this semantic segmentation challenge. The effects of various input band combinations are also investigated so that the most suitable one can be identified. Experimental results show that (1) both conventional machine-learning and advanced deep-learning methods significantly outperform the existing rule-based Sen2Cor product for snow mapping; (2) U-Net generally outperforms the random forest, since both spectral and spatial information is incorporated in U-Net via convolution operations; (3) the best spectral band combination for U-Net is B2, B11, B4 and B9. It is concluded that a U-Net-based deep-learning classifier with four informative spectral bands is suitable for snow coverage mapping. Full article
(This article belongs to the Special Issue Remote Sensing for Smart Agriculture Management)
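As a concrete baseline for the pixel-level classification discussed above, the sketch below computes the NDSI used in the paper's data exploration and fits a random-forest classifier on band vectors; the band cube and labels are synthetic stand-ins for Sentinel-2 L2A scenes and QGIS-derived masks.

```python
# Sketch: NDSI computation and a random-forest pixel classifier, illustrating
# the conventional baseline explored in the paper. Band arrays and labels are
# synthetic stand-ins for Sentinel-2 L2A reflectances and labeled masks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
h, w, n_bands = 64, 64, 12
cube = rng.uniform(0, 1, (h, w, n_bands))   # stand-in for a 12-band scene
labels = rng.integers(0, 3, (h, w))         # 0 background, 1 cloud, 2 snow

# NDSI from the green (B3) and SWIR (B12) bands, as defined in the paper.
b3, b12 = cube[..., 2], cube[..., 11]
ndsi = (b3 - b12) / (b3 + b12 + 1e-9)

# Pixel-wise random forest on flattened band vectors.
X = cube.reshape(-1, n_bands)
y = labels.ravel()
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
pred = rf.predict(X).reshape(h, w)
print("NDSI range: %.2f to %.2f" % (ndsi.min(), ndsi.max()), "| pred shape:", pred.shape)
```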
Show Figures
Figure 1. The entire workflow is divided into data collection, data labeling, data exploration, model training and model evaluation.
Figure 2. U-Net architecture used in this study. The blue boxes represent different multi-channel feature maps, with the numbers at the top and left edge of each box indicating the number of channels and the feature size (width and height), respectively. Each white box represents a copied feature map. Arrows of different colors denote different operations.
Figure 3. Loss curves for the training data (blue) and validation data (orange) during the training of (A) U-Net-RGB, (B) U-Net-4bands and (C) U-Net-12bands. The dashed line indicates the epoch with the smallest validation loss, and the loss on the y-axis is the weighted cross-entropy.
Figure 4. Geographical distribution of the 40 selected sites, denoted by empty triangles, with different colors representing scenes obtained from different years: cyan, red and green denote scenes from 2019, 2020 and 2021, respectively.
Figure 5. Visualization of all 40 scenes via the RGB bands, with the numbers above being the scene capture dates.
Figure 6. Labeled classification masks of all 40 collected scenes. The three target classes are represented by different colors: black denotes background, red denotes cloud and cyan denotes snow.
Figure 7. Boxplots comparing the bottom-of-atmosphere corrected reflectance of the 12 spectral bands from Sentinel-2 L2A products for background (white), cloud (red) and snow (cyan). Note: the outliers of each boxplot are not displayed.
Figure 8. NDSI distribution of snow (cyan), cloud (red) and background (black) pixels, where the NDSI is defined as NDSI = (B3 − B12)/(B3 + B12).
Figure 9. Feature selection. (A) Forward sequential feature selection, where the x-axis tick names denote sequentially adding the specified bands to the model inputs. (B) Backward sequential feature selection, where the x-axis tick names denote sequentially removing the specified bands from the model inputs.
Figure 10. Classification performance comparisons for different models applied to the training dataset images (n = 34) based on (A) precision, (B) F1 score, (C) recall and (D) IoU. The bars in three different colors (violet, green and blue) represent models with input subsets made up of the RGB bands, the informative four bands and all 12 bands, respectively. Bars without texture denote the random forest model, while bars with diagonal texture denote the U-Net model. Note: the evaluation was performed at the image level; therefore, the validation dataset paths are also included.
Figure 11. Classification performance comparisons for different models applied to the testing dataset images (n = 6) based on (A) precision, (B) F1 score, (C) recall and (D) IoU. The bars in three different colors (violet, green and blue) represent models with input subsets made up of the RGB bands, the informative four bands and all 12 bands, respectively. Bars without texture denote the random forest model, while bars with diagonal texture denote the U-Net model.
Figure 12. Visual comparisons of the classification performance in six independent scenes for different methods. Each row represents an independent test scene, and each column represents a different method. Except for the plots in the first column, the three target classes are represented by different colors: black denotes background, red denotes cloud and cyan denotes snow.
21 pages, 3252 KiB  
Article
Finding Mesolithic Sites: A Multichannel Ground-Penetrating Radar (GPR) Investigation at the Ancient Lake Duvensee
by Erica Corradini, Daniel Groß, Tina Wunderlich, Harald Lübke, Dennis Wilken, Ercan Erkul, Ulrich Schmölcke and Wolfgang Rabbel
Remote Sens. 2022, 14(3), 781; https://doi.org/10.3390/rs14030781 - 8 Feb 2022
Cited by 6 | Viewed by 3539
Abstract
The shift to the early Holocene in northern Europe is strongly associated with major environmental and climatic changes that influenced hunter-gatherers’ activities and occupation during the Mesolithic period. The ancient lake Duvensee (10,000–6500 cal. BCE) has been studied for almost a century, providing archaeological sites consisting of bark mats and hazelnut-roasting hearths situated on small sand banks deposited by the glacier. No method is yet available to locate these features before excavation. Therefore, a key method for understanding the living conditions of hunter-gatherer groups is to reconstruct the paleoenvironment with a focus on the identification of areas that could possibly host Mesolithic camps and well-preserved archaeological artefacts. We performed a 16-channel MALÅ Imaging Radar Array (MIRA) system survey aimed at understanding the landscape surrounding the find spot Duvensee WP10, located in a hitherto uninvestigated part of the bog. Using an integrated approach of high-resolution ground radar mapping and targeted excavations enabled us to derive a 3D spatio-temporal landscape reconstruction of the investigated sector, including paleo-bathymetry, stratigraphy, and shorelines around the Mesolithic camps. Additionally, we detected previously unknown islands as potential areas for yet unknown dwelling sites. We found that the growth rates of the islands were in the order of approximately 0.3 m2/yr to 0.7 m2/yr between the late Preboreal and the Subboreal stages. The ground-penetrating radar surveying performed excellently in all aspects of near-surface landscape reconstruction as well as in identifying potential dwellings; however, the direct identification of small-scale artefacts, such as fireplaces, was not successful because of their similarity to natural structures. Full article
(This article belongs to the Special Issue Advanced Ground Penetrating Radar Theory and Applications II)
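Reading a GPR time slice as a depth, as done throughout the survey interpretation, rests on a simple two-way travel-time conversion. The sketch below illustrates it; the relative permittivity of the water-saturated organic sediments is an assumed value, chosen only so that roughly 14–15 ns corresponds to about 50 cm.

```python
# Sketch: converting GPR two-way travel time to depth, the step behind reading
# a ~14-15 ns time slice as ~50 cm depth. The relative permittivity value is
# an illustrative assumption for water-saturated organic sediments.
import numpy as np

C = 0.2998              # speed of light in vacuum, m/ns
eps_r = 20.0            # assumed relative permittivity of the peat/gyttja
v = C / np.sqrt(eps_r)  # wave velocity in the sediment, m/ns

twt_ns = np.array([5.0, 10.0, 14.5, 25.0])  # two-way travel times, ns
depth_m = v * twt_ns / 2.0                  # divide by 2: down and back up
for t, d in zip(twt_ns, depth_m):
    print(f"{t:5.1f} ns -> {d:.2f} m")
```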
Show Figures
Figure 1. Area of investigation including archaeological and geophysical research. (a) Location of the Duvensee area, focused on the extent of the lake during the early Holocene (white line) [6]. (b) Dating and location of the excavated sites together with the positions of the former islands based on the geophysical results reported in [10] (orange dashed lines). The numbers refer to the names of each dwelling site. (c) Area of interest with the archaeological excavations carried out between 2018 and 2020. (d) Focus on the archaeological trenches. (e) GPR measurements with the 16-channel 400 MHz MALÅ Imaging Radar Array (MIRA).
Figure 2. (a) Overview of the archaeological excavation results: the dense find scatter (finds) is clearly visible, as are the different features recovered during the excavation. In sector 1, the hazelnut-roasting feature is represented by a sand lens that shows some outwash in its eastern part; in sector 4, the fireplace is visible as well as the ancient shoreline. (b) Picture from the excavation showing the fireplace.
Figure 3. Comparison between GPR results and stratigraphy. (a) A GPR profile together with stratigraphy. The transitions between the sediment layers are indicated with dashed lines and the sediments with different colors. The yellow line (Interface1) indicates the transition between coarse organic sediments (i.e., peat and coarse detritus gyttja) and finer organic sediments (i.e., fine detritus gyttja and calcareous gyttja). The green line (Interface2) indicates the transition between fine organic sediments (i.e., fine detritus gyttja and calcareous gyttja) and the clayish-loamy sediments. The third reflection (Interface3) represents the transition between the clayish-loamy layer and the basal sands. (b) Interpreted model for the development of the Duvensee bog during the Mesolithic period (according to [50] and [17]).
Figure 4. (a) Interpretation of GPR depth-slices and profiles in the investigated area. Top: the visible features are indicated with a green box and dashed blue lines, while the locations of the GPR profiles are indicated with the letters AB and CD. (b) Two examples of GPR profiles intersecting the main features in the depth-slices. The dashed colored lines represent the main reflections (red marks the transition between the clayish-loamy layer and the basal sand deposit; the yellow line represents the transition between the coarse organic sediments and the underlying fine organic sediments, according to [10] and [17]). The black dashed rectangle shows a small, rounded reflection, which probably corresponds to a small sand hill. The yellow dot symbolizes the location of the archaeological excavations carried out between 2018 and 2020.
Figure 5. Archaeological excavations together with amplitude maps at the same depth. Bright and dark blue colors indicate high and low amplitude values, respectively. The main archaeological features are reported as interpretation. (a) Focus on the 14–15 ns time slice, which corresponds to a depth of about 50 cm. (b) Focus on the archaeological trenches and their interpretation. The shoreline is indicated between trenches 2 and 4, and a high-amplitude anomaly is highlighted with a dashed red line.
Figure 6. Profile location and interpretation. (a) Location of the two GPR profiles intersecting the archaeological excavations (Profile_35 and Profile_70). These lines correspond to the GPS trace, which is located in the middle of the GPR antenna. (b) Archaeological sketch of the vertical section in trench 1 (AB). The different sediments are indicated with colors, and the roasting facility is marked in cyan. (c) Location of the different channels associated with Profile_35 and interpretation of portions CD and EF. The main interfaces are indicated with dashed lines, and the sediments are annotated in the radargram. Interface1, the bottom of the peat, is depicted with a yellow dashed line. (d) Focus on Profile_35 and comparison with the stratigraphy delivered by the excavation. The shape of the former island is discernible.
Figure 7. Location and interpretation of Profile_70. (a) Location of the different channels associated with Profile_70 and the archaeological features correlated with the shoreline and a tree stump. (b) Archaeological vertical section displaying the sediments with different colors and numbers. (c) Interpretation of Profile_70 considering channel 14 and channel 8.
Figure 8. Distribution of Mesolithic camps and landscape reconstruction of the investigated area. (a) Updated map presenting the dating and location of the excavated sites together with the positions of the former islands; the newly identified island locations are also reported. The numbers refer to the names of each dwelling site, and the different colors display the time of occupation (modified after [10]). (b) Two-dimensional contour map of Interface3, marking the transition between the clayish-loamy deposits and the basal sand (colors range from low (blue) to high (brown) areas). The colored stars represent the different island clusters, with red stars indicating the southernmost island concentration and black stars the three small islands in the middle of the investigated area (cluster 7); the upper cyan star indicates the big island 8. (c) Three-dimensional reconstruction of the investigated area with a hypothetical water level and the occupation of island 6 by Mesolithic hunter-gatherers.
17 pages, 4409 KiB  
Article
Mapping Forest Restoration Probability and Driving Archetypes Using a Bayesian Belief Network and SOM: Towards Karst Ecological Restoration in Guizhou, China
by Li Peng, Shuang Zhou and Tiantian Chen
Remote Sens. 2022, 14(3), 780; https://doi.org/10.3390/rs14030780 - 8 Feb 2022
Cited by 9 | Viewed by 2770
Abstract
To address ecological threats such as land degradation in karst regions, several ecological restoration projects have been implemented for improved vegetation coverage. Forests are the most important type of vegetation. However, the evaluation of forest restoration is often uncertain, primarily owing to the complexity of the underlying factors and the lack of information related to future changes in forest coverage. To address this issue, a systematic case study of Guizhou Province, China, was carried out. First, three archetypes of driving factors were recognized through the self-organizing map (SOM) algorithm: the high-strength ecological archetype, the marginal archetype, and the high-strength archetype dominated by human influence. Then, the probability of forest restoration in the context of ecological restoration was predicted using Bayesian belief networks in an effort to reduce the uncertainty of the evaluation. Results show that the overall probability of forest restoration in the study area ranged from 22.27% to 99.29%, which is quite high. The findings from regions with different landforms suggest that the forest restoration probabilities of karst regions at both the grid and regional scales were lower than those of non-karst regions. However, this difference was insignificant, mainly because ecological restoration in the karst regions accelerated local forest restoration and decreased the ecological impact. The proposed method of clustering driving factors based on restoration, as well as the method of predicting restoration probability, has reference value for forest management and the layout of ecological restoration projects in the mid-latitude ecotone. Full article
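The first step described above, grouping driver profiles into archetypes with a self-organizing map, can be sketched with the minisom package; the driver matrix and the 1 × 3 map (one node per archetype) are illustrative assumptions.

```python
# Sketch: clustering socio-environmental driver profiles with a self-organizing
# map via the minisom package. The driver matrix and 3-node map size are
# illustrative; the study derived three archetypes from its own variables.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(5)
drivers = rng.normal(size=(500, 8))   # 500 grid cells x 8 z-scored drivers

som = MiniSom(1, 3, input_len=8, sigma=0.5, learning_rate=0.5, random_seed=5)
som.random_weights_init(drivers)
som.train_random(drivers, num_iteration=2000)

archetype = np.array([som.winner(x)[1] for x in drivers])  # 0, 1 or 2
print("cells per archetype:", np.bincount(archetype, minlength=3))
```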
Show Figures
Figure 1. Location and landform type of the study area.
Figure 2. Correlation matrix showing the selected variables for forest restoration.
Figure 3. Trends in forest coverage variation from 2005 to 2018.
Figure 4. Self-organizing map of the socio-environmental factors.
Figure 5. Z-score-normalized values of drivers characterizing the three archetypes of socio-environmental factors: zero represents the mean of the factors; the natural drivers are presented in blue and the human drivers in red.
Figure 6. Bayesian conceptual network and parametric results of the forest restoration model. (a) Bayesian conceptual network of the forest restoration model; (b) parametric results of the forest restoration model.
Figure 7. ROC curve of the slope of NDVI change.
Figure 8. Sensitivity analysis of the driving factors.
Figure 9. Distribution of the forest-restoration probability.
Figure 10. Analysis of the probability of forest restoration in karst and non-karst regions at the (a) grid and (b) regional scales.
15 pages, 4052 KiB  
Article
Strain-Rates from GPS Measurements in the Ordos Block, China: Implications for Geodynamics and Seismic Hazards
by Shoubiao Zhu
Remote Sens. 2022, 14(3), 779; https://doi.org/10.3390/rs14030779 - 7 Feb 2022
Cited by 4 | Viewed by 2594
Abstract
A number of devastating earthquakes have occurred around the Ordos Block in recent history. To study where the next major event may occur around the Ordos Block, much work has been done, particularly on the investigation of the Earth's surface strain rates based on GPS measurements. However, striking differences exist between the results of different authors, although they used almost the same GPS data. Therefore, we validated the method for calculating GPS strain rates developed by Zhu et al. (2005, 2006) and found that the method is feasible and has high precision. With this approach and updated GPS data, we calculated the strain rates in the region around the Ordos Block. The computed results show that the total strain rates in the interior of the block are very small, and that high values are mainly concentrated in the peripheral zones of the Ordos Block and along large-scale active faults, such as the Haiyuan fault, closely consistent with geological and geophysical observations. Additionally, the strain rate results demonstrate that all rifted grabens on the margins of the Ordos Block exhibit extensional deformation. Finally, based on the strain rates, seismicity, and tectonic structures, we identify areas of high future earthquake risk around the Ordos Block, located at the westernmost end of the Weihe Graben, at both the eastern and westernmost ends of the Hetao Graben, and in the middle of the Shanxi Graben. Hence, this work contributes to a better understanding of the geodynamics and to seismic hazard assessment. Full article
(This article belongs to the Special Issue Geodetic Observations for Earth System)
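The strain-rate quantities mapped in this study (principal strain rates, maximum shear, and the second invariant SR) can be illustrated by fitting a local velocity gradient to neighboring GPS stations by least squares; the synthetic station data below, and the particular SR definition used, are assumptions, not the method of Zhu et al.

```python
# Sketch: least-squares estimation of a local 2D strain-rate tensor from GPS
# velocities of neighboring stations, with principal strain rates, maximum
# shear, and one common second-invariant (SR) definition. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
xy_km = rng.uniform(-100, 100, (12, 2))              # station positions, km
L_true = np.array([[10e-9, 25e-9], [5e-9, -15e-9]])  # velocity gradient, 1/yr
vel = xy_km @ L_true.T * 1e6 + rng.normal(0, 0.2, (12, 2))  # mm/yr, with noise

# Fit v = v0 + L x per component (positions converted km -> mm so L is in 1/yr).
A = np.c_[np.ones(len(xy_km)), xy_km * 1e6]
coef_e, *_ = np.linalg.lstsq(A, vel[:, 0], rcond=None)
coef_n, *_ = np.linalg.lstsq(A, vel[:, 1], rcond=None)
L = np.array([coef_e[1:], coef_n[1:]])               # estimated gradient, 1/yr

E = 0.5 * (L + L.T)                                  # strain-rate tensor
e1, e2 = np.linalg.eigvalsh(E)                       # principal strain rates
max_shear = 0.5 * (e2 - e1)
second_invariant = np.sqrt(np.sum(E * E))            # one common SR definition
print(f"principal: {e1:.2e}, {e2:.2e}  max shear: {max_shear:.2e}  SR: {second_invariant:.2e}")
```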
Show Figures
Figure 1. Geological framework and distribution of major earthquakes (M ≥ 6.0) in and around the Ordos Block over the last 1000 years. In the figure: DNF, Daihai North Edge Fault; ONF, Ordos North Edge Fault; LSPF, Langshan Piedmont Fault; DBF, Dengkou-Benjing Fault; XGF, Xiaoguanshan Fault; LPF, Liupanshan Fault; QMF, Qishan-Mazhao Fault; WBF, Weihe Basin Fault; JHF, Jinhuo Fault.
Figure 2. GPS velocities in the Ordos Block with respect to the stable Eurasian plate from 1991 to 2016, with error ellipses at the 70% confidence level (data from Wang and Shen [13]). The arrows indicate the movement direction of the ground surface.
Figure 3. Comparison of the real strain rates with the computed counterparts based on the same data. (a) Distribution map of principal strain rates calculated analytically from Equations (1)–(6); (b) principal strain rates computed with the method developed by Zhu et al. [9,10].
Figure 4. Spatial distribution of principal strain rates in the Ordos Block with a grid size of 0.25° × 0.25° (outward arrows stand for tension; inward arrows for compression). The red arrows indicate tensile deformation in a compressive environment.
Figure 5. Sketch map explaining the mechanism of extensional deformation under a compressive stress environment. The east Liupanshan fault is a thrust fault dipping SW, while the west Liupanshan fault is a back-thrust fault dipping NE.
Figure 6. Spatial contour map of the maximum shear strain rates in the Ordos Block.
Figure 7. Spatial contour map of the second invariants of the strain rate tensors (SR) in and around the Ordos Block.
Figure 8. Spatial distribution of SR and the most likely regions for future major earthquakes, represented by A1, A2, A3, and A4 with dashed circles. In the figure: DNF, Daihai North Edge Fault; ONF, Ordos North Edge Fault; QMF, Qishan-Mazhao Fault; QLF, Qinling Fault; DBF, Dengkou-Benjing Fault; LSPF, Langshan Piedmont Fault; HPF, Huoshan Piedmont Fault; WQF, Western Qinling Fault.
22 pages, 4996 KiB  
Review
The Role of Remote Sensing Data and Methods in a Modern Approach to Fertilization in Precision Agriculture
by Dorijan Radočaj, Mladen Jurišić and Mateo Gašparović
Remote Sens. 2022, 14(3), 778; https://doi.org/10.3390/rs14030778 - 7 Feb 2022
Cited by 53 | Viewed by 8416
Abstract
The precision fertilization system is the basis for upgrading conventional intensive agricultural production, achieving both high and high-quality yields while minimizing negative impacts on the environment. This research presents the application of both conventional and modern prediction methods in precision fertilization by integrating agronomic components with the spatial component of interpolation and machine learning. While conventional methods were a cornerstone of soil prediction in past decades, the challenge of processing larger and more complex data has reduced their present-day viability. Their disadvantages were addressed: lower prediction accuracy, a lack of robustness to the properties of the input soil sample values, and the requirement for extensive, costly, and time-consuming soil sampling. Specific conventional methods (ordinary kriging, inverse distance weighted) and modern machine learning methods (random forest, support vector machine, artificial neural networks, decision trees) were evaluated according to their popularity in relevant studies indexed in the Web of Science Core Collection over the past decade. Reflecting the shift towards increased prediction accuracy and computational efficiency, an overview of state-of-the-art remote sensing methods for improving precision fertilization was completed, with an emphasis on open data and global satellite missions. In the analyzed studies, state-of-the-art remote sensing techniques enabled hybrid interpolation, in which predictions of the sampled data are supported by remote sensing covariates such as high-resolution multispectral, thermal, and radar satellite imagery or unmanned aerial vehicle (UAV)-based imagery. A representative comparison of the conventional and modern approaches to precision fertilization was performed on 121 samples of phosphorus pentoxide (P2O5) and potassium oxide (K2O) from a common agricultural parcel in Croatia; it visually and quantitatively confirmed the superior prediction accuracy and retained local heterogeneity of the modern approach. The research concludes that remote sensing data and methods play a significant role in improving fertilization in precision agriculture today and will be increasingly important in the future. Full article
(This article belongs to the Special Issue Remote Sensing for Water Resources Assessment in Agriculture)
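As a minimal illustration of the "conventional" side of the comparison, the Python sketch below implements inverse distance weighted (IDW) interpolation of point soil samples (e.g., P2O5 or K2O values) onto a prediction grid. The power parameter and function names are our illustrative choices, not the review's exact configuration; a "modern" counterpart would instead regress the sampled values on remote sensing covariates with a machine learning model such as scikit-learn's RandomForestRegressor.

```python
import numpy as np

def idw(sample_xy, sample_val, grid_xy, power=2.0):
    """Inverse distance weighted interpolation.

    sample_xy  : (n, 2) coordinates of soil samples
    sample_val : (n,)   measured values (e.g., P2O5, K2O)
    grid_xy    : (m, 2) prediction locations
    Each prediction is a weighted mean of the samples, with
    weights 1 / distance**power.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)      # guard against zero distance at sample points
    w = 1.0 / d**power
    return (w @ sample_val) / w.sum(axis=1)
```

Because the weights depend only on distance, IDW smooths toward local sample means, which is one reason the review finds it can lose local heterogeneity relative to covariate-driven machine learning predictions.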
Graphical abstract
Figure 1. The importance of remote sensing and the most frequently used spatial interpolation methods in precision fertilization, according to the number of scientific papers indexed in the Web of Science Core Collection database.
Figure 2. The integration of agronomic and spatial components in the three main steps of fertilization in precision agriculture.
Figure 3. Primary steps in the prediction of soil properties using the conventional and modern approaches.
Figure 4. The most frequently used conventional interpolation methods, according to the number of scientific papers indexed in the Web of Science Core Collection.
Figure 5. The most frequently used kriging methods, according to the number of scientific papers indexed in the Web of Science Core Collection.
Figure 6. The most frequently used remote sensing data in precision fertilization, according to the number of scientific papers indexed in the Web of Science Core Collection database.
Figure 7. The most widely used remote sensing data in the modern approach to precision fertilization.
Figure 8. The most frequently used remote sensing methods in precision fertilization, according to the number of scientific papers indexed in the Web of Science Core Collection database.
Figure 9. Comparative presentation of interpolation results using common parameters of OK and IDW for P2O5.
Figure 10. Comparative presentation of interpolation results using common parameters of OK and IDW for K2O.
Figure 11. A comparative display of modern soil prediction against the most accurate results of the conventional approach on a representative soil sample set.
23 pages, 6978 KiB  
Article
Object Tracking in Satellite Videos Based on Correlation Filter with Multi-Feature Fusion and Motion Trajectory Compensation
by Yaosheng Liu, Yurong Liao, Cunbao Lin, Yutong Jia, Zhaoming Li and Xinyan Yang
Remote Sens. 2022, 14(3), 777; https://doi.org/10.3390/rs14030777 - 7 Feb 2022
Cited by 20 | Viewed by 3376
Abstract
As a new type of Earth observation satellite, video satellites can continuously monitor an area of the Earth and acquire dynamic, abundant information through video imaging, making it possible to track various objects of interest on the Earth's surface. Motivated by these capabilities, this paper presents a novel method for tracking fast-moving objects in satellite videos based on the kernelized correlation filter (KCF) embedded with multi-feature fusion and motion trajectory compensation. The contributions of the proposed algorithm are threefold. First, a multi-feature fusion strategy is proposed to describe an object comprehensively, which is challenging for single-feature approaches. Second, a subpixel positioning method is developed to calculate the object's position and overcome the poor tracking accuracy caused by inaccurate object localization. Third, an adaptive Kalman filter (AKF) is introduced to compensate and correct the KCF tracker's results and reduce bounding box drift, addressing the moving-object occlusion problem. Built on the correlation filtering tracking framework and combined with the above improvements, our algorithm improves tracking accuracy by at least 17% on average and the success rate by at least 18% on average compared to the KCF algorithm. Hence, our method effectively addresses the poor tracking accuracy caused by complex backgrounds and object occlusion. The experiments use satellite videos from the Jilin-1 satellite constellation and highlight the proposed algorithm's strong results against current state-of-the-art trackers in terms of success rate, precision, and robustness. Full article
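To make the second and third contributions more concrete, the sketch below pairs a standard parabolic subpixel refinement of a correlation-response peak with a plain constant-velocity Kalman filter that fuses the tracker's measured center with a motion prediction. Both are generic textbook versions under our own naming and default noise settings; the paper's adaptive Kalman filter additionally adjusts its weighting (e.g., from response quality), which is not reproduced here.

```python
import numpy as np

def subpixel_peak(resp, py, px):
    """Quadratic refinement of an interior response-map peak at (py, px):
    fit a parabola through the peak and its neighbors along each axis."""
    def refine(m1, m0, p1):
        den = m1 - 2.0 * m0 + p1
        return 0.0 if den == 0.0 else 0.5 * (m1 - p1) / den
    dy = refine(resp[py - 1, px], resp[py, px], resp[py + 1, px])
    dx = refine(resp[py, px - 1], resp[py, px], resp[py, px + 1])
    return py + dy, px + dx

class ConstantVelocityKF:
    """Constant-velocity Kalman filter over the object center (x, y).
    State s = [x, y, vx, vy]; one step per video frame (dt = 1)."""
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.s = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0   # x += vx, y += vy
        self.H = np.eye(2, 4)               # we observe (x, y) only
        self.Q = q * np.eye(4)              # process noise (illustrative value)
        self.R = r * np.eye(2)              # measurement noise (illustrative value)

    def step(self, measured_xy):
        # Predict from the motion model.
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the correlation filter's measured center.
        z = np.asarray(measured_xy, dtype=float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ (z - self.H @ self.s)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]                   # compensated center position
```

During occlusion, a tracker of this kind can skip the correction step and coast on the prediction alone, which is the intuition behind motion trajectory compensation.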
Figure 1. Satellite videos. (a) A small object of about 10 × 12 pixels. (b) The moving object is occluded. (c) The object is similar to its background.
Figure 2. Flowchart of the proposed algorithm. CF and AKF abbreviate correlation filter and adaptive Kalman filter, respectively. The Euclidean distance is computed between the object center positions of adjacent frames.
Figure 3. Feature visualization of various layers from the VGG-19 network. (a) Input image. (b) Conv2-3 layer features. (c) Conv3-4 layer features. (d) Conv4-4 layer features. (e) Conv5-4 layer features. (f) Fused image features.
Figure 4. Feature visualization of small objects. (a) Features of all 256 channels. (b) The first 30 features.
Figure 5. Graph of the feature response.
Figure 6. Visualization of an object occlusion process. (a) Unoccluded object. (b) Partially occluded object. (c) End of object occlusion.
Figure 7. Ablation study on all the unoccluded video sequences. The legends in the precision and success plots give the precision and AUC score per object tracker, respectively. (a) Precision plots; (b) success plots.
Figure 8. Ablation study on all the occluded video sequences. The legends in the precision and success plots give the precision and AUC score per object tracker, respectively. (a) Precision plots; (b) success plots.
Figure 9. Distribution of the SDM values of the response patch. An obvious unimodal distribution is visible, so an appropriate threshold can be selected to distinguish occlusion from non-occlusion.
Figure 10. Success plot per threshold; the legend gives the AUC per threshold.
Figure 11. Visualization of some tracking results for the occluded object. The values in parentheses after the upper-case letters are the current frame's SDM values for the images labeled with the corresponding lower-case letters. (a) Occlusion process of the Car3 sequence. (b) Occlusion process of the Car4 sequence. (c) Occlusion process of the Car5 sequence. (d) Occlusion process of the Car6 sequence.
Figure 12. Precision plots of eight video sequences, with and without occlusion. The legend in each precision plot gives the corresponding precision score per object tracker.
Figure 13. Success plots of eight video sequences, with and without occlusion. The legend in each success plot gives the corresponding AUC per object tracker.
Figure 14. Screenshots of some tracking results without occlusion. In each frame, the colored bounding boxes are the results of the different trackers, and the red number in the top-left corner is the frame number within the satellite video.