Remote Sens., Volume 12, Issue 5 (March-1 2020) – 153 articles

Cover Story: Primary production by marine phytoplankton is one of the largest fluxes of carbon on our planet. In the past few decades, considerable progress has been made in estimating global primary production at high spatial and temporal scales by combining in situ measurements of primary production with remote sensing observations of phytoplankton biomass. Here, we address one of the major challenges in this approach by improving the assignment of appropriate model parameters that define the photosynthetic response of phytoplankton cells. A global database of over 9,000 in situ photosynthesis–irradiance measurements and a 20-year record of climate quality satellite observations were used to assess global primary production and its variability between 1998 and 2018.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and use the free Adobe Reader to open it.
29 pages, 8314 KiB  
Article
Taking the Motion out of Floating Lidar: Turbulence Intensity Estimates with a Continuous-Wave Wind Lidar
by Felix Kelberlau, Vegar Neshaug, Lasse Lønseth, Tania Bracchi and Jakob Mann
Remote Sens. 2020, 12(5), 898; https://doi.org/10.3390/rs12050898 - 10 Mar 2020
Cited by 35 | Viewed by 9358
Abstract
Due to their motion, floating wind lidars overestimate turbulence intensity (TI) compared to fixed lidars. We show how the motion of a floating continuous-wave velocity–azimuth display (VAD) scanning lidar in all six degrees of freedom influences the TI estimates, and present a method to compensate for it. The approach presented here uses line-of-sight measurements of the lidar and high-frequency motion data. The compensation algorithm takes into account the changing radial velocity, scanning geometry, and measurement height of the lidar beam as the lidar moves and rotates. It also incorporates a strategy to synchronize lidar and motion data. We test this method with measurement data from a ZX300 mounted on a Fugro SEAWATCH Wind LiDAR Buoy deployed offshore and compare its TI estimates, with and without motion compensation, to measurements taken by a fixed land-based reference wind lidar of the same type located nearby. Results show that the TI values of the floating lidar without motion compensation are around 50% higher than the reference values. The motion compensation algorithm detects the amount of motion-induced TI and successfully removes it from the measurement data. Motion compensation leads to good agreement between the TI estimates of floating and fixed lidar under all investigated wind conditions and sea states.
(This article belongs to the Special Issue Advances in Atmospheric Remote Sensing with Lidar)
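The data-synchronization step described in the abstract, finding the timing offset at which the variance of the motion-compensated signal is minimal (the "sweet spot" of Figure 4), can be sketched as a simple lag scan. This is an illustrative NumPy sketch, not the authors' implementation: the real algorithm also corrects the changing scanning geometry and measurement height, whereas here compensation is reduced to subtracting the buoy velocity projected onto the line of sight, and the function name and inputs are hypothetical.

```python
import numpy as np

def find_timing_offset(v_los, v_buoy_los, lags, dt=0.02):
    """Scan candidate sample shifts between the lidar and MRU records and
    return the shift (in seconds) that minimises the standard deviation of
    the compensated signal, i.e. the synchronization "sweet spot".

    v_los      : line-of-sight velocities measured by the lidar (1-D array)
    v_buoy_los : buoy velocity projected onto the same line of sight (1-D array)
    lags       : list of integer sample shifts to try
    dt         : sampling interval in seconds
    """
    # For each candidate lag, shift the motion record, subtract it from the
    # lidar record, and measure the residual variability.
    stds = [np.std(v_los - np.roll(v_buoy_los, k)) for k in lags]
    best = lags[int(np.argmin(stds))]
    return best * dt
```

With a synthetic constant wind plus a sinusoidal buoy motion delayed by 8 samples, the scan recovers an offset of -0.16 s, the same order as the offset reported in the paper.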
Figure 1. Visualization of the SEAWATCH Wind LiDAR Buoy in pitched orientation. Shown are the global right-handed north-west-up (NWU) coordinate system and the north-east-down (NED) reference frame of the motion reference unit (MRU) (gray); unit vectors e_x, e_y, and e_z along the rotated body coordinate axes of the MRU (blue); unit vectors e_θ0, e_θ270, and e_h defining the lidar frame of reference (red); the line-of-sight (LOS) unit vector e_LOS,θ0 for the azimuth offset angle θ0 (green); and the LOS unit vector e_LOS,θ for an arbitrary θ (yellow). Additionally, the separation vector d between the MRU and the lidar prism is shown, as are the nominal and real azimuth angles (θ and θ_r) and zenith angles (φ and φ_r). (Sketch not to scale.)
Figure 2. Overview of the influence of motion on line-of-sight estimates and reconstructed wind vectors of a velocity–azimuth display (VAD) scanning floating lidar system. Shown are examples of translational motion with amplitude v̂ = 1 m s⁻¹ oscillating with frequency (a) f_v ≪ 1 Hz and (b) f_v = 1 Hz, and rotational motion of 10.5° peak angle oscillating with (c) f_v ≪ 1 Hz and (d) f_v = 1 Hz, where 1 Hz is the rotation frequency of the lidar prism. Green lines (dashed in c,d) are the radial velocity components of constant horizontal wind blowing in the θ = 0° direction with a magnitude of U = 5 m s⁻¹, as a function of the lidar azimuth angle θ. Blue lines are the influence of translational motion. Red lines are the total line-of-sight velocities. Color shades represent different phases of the oscillatory motion. Circle and cross markers represent the reconstructed wind vectors after conventional VAD processing, where the position on the y-axis is the magnitude and the position on the x-axis is the wind direction Θ. More information in Section 2.3.1.
Figure 3. Comparison of turbulence intensity (TI) estimates based on wind-data time series from internal vs. emulated data processing. Only three height levels are shown for clarity. The dash-dotted lines bound a ±0.01 interval parallel to the dashed y = x line.
Figure 4. Standard deviation of the motion-compensated horizontal wind speed σ_u,hor as a function of the timing offset between MRU and lidar data. σ_u,hor is the mean over all height levels for one arbitrary ten-minute interval. The absolute minimum at −0.16 s indicates the sweet spot that corresponds to the real offset between the two datasets.
Figure 5. Timing offset at which the sweet spot from Figure 4 is found, for all available ten-minute intervals.
Figure 6. Map indicating the locations of the floating lidar unit 593 and the land-based fixed reference lidar unit 495. The elevation difference above sea level and the geometry of the measurement cones are shown for all measurement heights. The selected offshore wind sector [135°, 250°] is indicated in dark blue. (Map data adapted from www.kartverket.no.)
Figure 7. Average of measured horizontal mean wind velocities from the floating lidar with (red) and without (blue) motion compensation, as well as from the fixed reference lidar (green), sorted by measurement height.
Figure 8. Average TI for all measurements, sorted by measurement height. Blue circle markers indicate TI based on uncompensated measurements from the floating lidar. Red cross markers show corresponding values with motion compensation. Green square markers stand for values from the land-based fixed reference lidar for comparison. Bar plots show the motion-induced TI as the difference between measurements with the floating lidar and the fixed lidar (green), compared to the amount of motion-induced TI detected by the algorithm (red). The number of available measurement values at each height is given.
Figure 9. TI from all measurement heights, binned by mean wind velocity. Legend as in Figure 8, plus markers for the mean tilt amplitude ᾱ and mean translational velocity v̄, which scale with the right-hand y-axis.
Figure 10. TI from all measurement heights, binned by ᾱ, the mean tilt angle of the buoy. Legend as in Figure 8, plus markers for the horizontal mean wind velocity U and the relative emulation error ε, which refer to the right-hand y-axis.
Figure 11. Top: (a) overview of the individual error between TI measured by the reference lidar and by the uncompensated (blue) and compensated (red) floating lidar. Bottom: close-up views of two examples from the plot above where the motion-induced turbulence is particularly high (b) and low (c). (d) Probability density functions (PDF) of the error.
Figure 12. Scatter plot of turbulence intensities from the floating lidar, uncompensated (blue) and compensated (red), vs. the land-based reference lidar. Deming regression lines are given in corresponding colors; the equations of the regression lines and their standard deviations are listed. The black dashed line is the y = x line. Some data points lie outside the plotted area.
14 pages, 1679 KiB  
Letter
Effectiveness of Innovative Educational Practices with Flipped Learning and Remote Sensing in Earth and Environmental Sciences—An Exploratory Case Study
by Juan Antonio López Núñez, Jesús López Belmonte, Antonio José Moreno Guerrero and Santiago Pozo Sánchez
Remote Sens. 2020, 12(5), 897; https://doi.org/10.3390/rs12050897 - 10 Mar 2020
Cited by 34 | Viewed by 6236
Abstract
The rapid advancements in the technological field, especially in education, have led to the incorporation of remote sensing into learning spaces. This innovation requires active and effective teaching methods, among which is flipped learning. The objective of this research was to analyze the effectiveness of flipped learning relative to the traditional expository methodology in the second year of high school. The research follows a quantitative methodology based on a quasi-experimental design of descriptive and correlational type. Data collection was carried out through an ad hoc questionnaire applied to a sample of 59 students. Student's t-test for independent samples was applied to compare the means of the experimental group and the control group. The results show that the flipped learning method was rated more highly than the traditional teaching method on all the variables analyzed, except academic results, where the difference was minimal. It is concluded that flipped learning improves instructional processes for high school students who have used remote sensing in training practices. Therefore, the combination of flipped learning and remote sensing is considered effective for teaching content related to environmental sciences at this educational level.
(This article belongs to the Collection Teaching and Learning in Remote Sensing)
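The group comparison described in the abstract, Student's t-test for two independent samples, can be reproduced with the standard pooled-variance formula. The sketch below uses only the Python standard library; the function name and the toy scores are illustrative, not the study's data.

```python
import statistics as st

def students_t_independent(a, b):
    """Student's t statistic for two independent samples with pooled
    variance, as used to compare experimental and control group means.
    Returns (t, degrees_of_freedom)."""
    na, nb = len(a), len(b)
    # pooled (unbiased) variance estimate across both groups
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    t = (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2
```

The resulting t is then compared against the t-distribution with the returned degrees of freedom to decide whether the difference between group means is significant.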
Figure 1. UAV (unmanned aerial vehicle) used for remote sensing.
Figure 2. Aerial view of landforms and the coast: (a) UAVs allow students to appreciate the complexity of the terrestrial landscape and the characteristics of elevations and depressions in the lithosphere. They also allow in-depth observation of the great relief forms: ancient massifs, sedimentary basins, sedimentary plains, and recently formed mountain ranges. (b) UAVs used for remote sensing give students first-hand knowledge of the two main phenomena of marine erosion: sea currents and waves. This tool also makes it possible to observe the effects of marine erosion: cliffs, abrasion platforms, coastlines, sea caves, and peninsulas.
Figure 3. Comparison between control groups and experimental groups.
19 pages, 7362 KiB  
Article
Estimation of Hourly Rainfall during Typhoons Using Radar Mosaic-Based Convolutional Neural Networks
by Chih-Chiang Wei and Po-Yu Hsieh
Remote Sens. 2020, 12(5), 896; https://doi.org/10.3390/rs12050896 - 10 Mar 2020
Cited by 16 | Viewed by 4483
Abstract
Taiwan is located at the junction of the tropical and subtropical climate zones, adjacent to the Eurasian continent and the Pacific Ocean. The island frequently experiences typhoons that engender severe natural disasters and damage. Therefore, efficiently estimating typhoon rainfall in Taiwan is essential. This study examined the efficacy of typhoon rainfall estimation. Radar images released by the Central Weather Bureau were used to estimate instantaneous rainfall. Two proposed neural network-based architectures, namely a radar mosaic-based convolutional neural network (RMCNN) and a radar mosaic-based multilayer perceptron (RMMLP), were used to estimate typhoon rainfall, and the commonly applied Marshall–Palmer Z-R relationship (Z-R_MP) and a reformulated Z-R relationship at each site (Z-R_station) were adopted to construct benchmark models. Monitoring stations in Hualien, Sun Moon Lake, and Taichung were selected as the experimental stations in Eastern, Central, and Western Taiwan, respectively. This study compared the performance of the models in predicting rainfall at the three stations, with the following results: at the Hualien station, the estimations of the RMCNN, RMMLP, Z-R_MP, and Z-R_station models were mostly identical to the observed rainfall, and all models estimated an increase during peak rainfall on the hyetographs, but the peak values were underestimated. At the Sun Moon Lake and Taichung stations, however, the estimations of the four models were considerably inconsistent in terms of overall rainfall rates, peak rainfall, and peak rainfall arrival times on the hyetographs. The relative root mean squared error for overall rainfall rates across all stations was smallest for RMCNN (0.713), followed by RMMLP (0.848), Z-R_MP (1.030), and Z-R_station (1.392). Moreover, RMCNN yielded the smallest relative error for peak rainfall (0.316), followed by RMMLP (0.379), Z-R_MP (0.402), and Z-R_station (0.688). RMCNN also had the smallest relative error for the peak rainfall arrival time (1.507 h), followed by RMMLP (2.673 h), Z-R_MP (2.917 h), and Z-R_station (3.250 h). The results revealed that the RMCNN model in combination with radar images could efficiently estimate typhoon rainfall.
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)
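The Z-R_MP benchmark mentioned in the abstract rests on the classical Marshall–Palmer power law Z = a·R^b with a = 200 and b = 1.6, where Z is the linear reflectivity factor (mm⁶ m⁻³) and R the rain rate (mm/h). A minimal sketch of inverting that relation from radar reflectivity in dBZ (the function name is illustrative; a site-specific (a, b) pair would correspond to a Z-R_station-style variant):

```python
import math

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law Z = a * R**b to get rain rate R (mm/h)
    from radar reflectivity in dBZ. Defaults give the Marshall-Palmer
    relation (a=200, b=1.6)."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)
```

For example, a reflectivity of about 23 dBZ (Z = 200) corresponds to a rain rate of 1 mm/h under the Marshall–Palmer parameters.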
Figure 1. Geography of Taiwan and the locations of study sites and radar stations.
Figure 2. Historical typhoon paths.
Figure 3. Reflectivity images of analyzed typhoons.
Figure 4. Architecture of the proposed radar mosaic-based multilayer perceptron (RMMLP).
Figure 5. Architecture of the radar mosaic-based convolutional neural network (RMCNN).
Figure 6. Calibration of the number of neuron nodes and the learning rate at Hualien (a,b), Sun Moon Lake (c,d), and Taichung (e,f).
Figure 7. Rainfall hyetographs and model estimations for the (a) Hualien, (b) Sun Moon Lake, and (c) Taichung stations.
Figure 8. Scatterplots of observations versus estimations using the RMCNN, RMMLP, Marshall–Palmer Z-R relationship (Z-R_MP), and reformulated Z-R relationship at each site (Z-R_station) models for Hualien (a–d), Sun Moon Lake (e–h), and Taichung (i–l).
Figure 9. Absolute errors for RMCNN, RMMLP, Z-R_MP, and Z-R_station: (a–d) MAE and (e–h) RMSE.
Figure 10. Relative errors for RMCNN, RMMLP, Z-R_MP, and Z-R_station: (a–d) rMAE and (e–h) rRMSE.
Figure 11. Performance of RMCNN, RMMLP, Z-R_MP, and Z-R_station in terms of (a) the relative error of peak rainfall (RE_peak) and (b) the absolute time error of peak rainfall (AT_peak).
25 pages, 32443 KiB  
Article
Remote Sensing Derived Indices for Tracking Urban Land Surface Change in Case of Earthquake Recovery
by Sahar Derakhshan, Susan L. Cutter and Cuizhen Wang
Remote Sens. 2020, 12(5), 895; https://doi.org/10.3390/rs12050895 - 10 Mar 2020
Cited by 12 | Viewed by 5286
Abstract
The study of post-disaster recovery requires an understanding of the reconstruction process and the growth trend of the impacted regions. In the case of earthquakes, while remote sensing has been applied for response and damage assessment, its application has not been investigated thoroughly for monitoring recovery dynamics in spatially and temporally explicit dimensions. Tracking change in the built environment through time is essential for post-disaster recovery modeling, and remote sensing is particularly useful for obtaining this information when other sources of data are scarce or unavailable. However, the longitudinal study of repeated observations of built-up areas has its own complexities and limitations, so a model is needed to overcome these barriers and extract the temporal variations from before to after the disaster event. In this study, a method is introduced that uses three spectral indices, UI (urban index), NDVI (normalized difference vegetation index), and MNDWI (modified normalized difference water index), in conditional algebra to build a knowledge-based classifier for extracting urban/built-up features. This method enables more precise distinction of features under environmental and socioeconomic variability by providing flexibility in defining the indices' thresholds with conditional algebra statements according to local characteristics. The proposed method is applied and implemented in three earthquake cases: New Zealand in 2010, Italy in 2009, and Iran in 2003. The overall accuracies of all built-up/non-urban classifications range from 92% to 96.29%, and the Kappa values vary from 0.79 to 0.91. The annual analysis of each case, spanning from 10 years pre-event through the immediate post-event period to the present (2019), demonstrates the inter-annual change in urban/built-up land surface in the three cases. The results allow a deeper understanding of how each earthquake impacted the region and how urban growth was altered after the disaster.
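The conditional-algebra classifier described in the abstract combines three standard spectral indices with boolean conditions. The sketch below uses the usual index definitions (NDVI from NIR/red, MNDWI from green/SWIR1, UI from SWIR2/NIR); the threshold values are purely illustrative, since the paper's point is that they are tuned to local characteristics of each study area.

```python
import numpy as np

def builtup_mask(nir, red, green, swir1, swir2,
                 ui_min=-0.1, ndvi_max=0.2, mndwi_max=0.0):
    """Knowledge-based built-up classifier in the spirit of the paper:
    combine UI, NDVI, and MNDWI with conditional (boolean) algebra.
    Threshold defaults are hypothetical placeholders."""
    ndvi = (nir - red) / (nir + red)          # vegetation signal
    mndwi = (green - swir1) / (green + swir1)  # water signal
    ui = (swir2 - nir) / (swir2 + nir)         # urban signal
    # built-up where the urban signal is high and vegetation/water are low
    return (ui > ui_min) & (ndvi < ndvi_max) & (mndwi < mndwi_max)
```

Applied to reflectance arrays for a built-up, a vegetated, and a water pixel, only the first satisfies all three conditions.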
Figure 1. Area of interest (AOI) for each case with the boundary of the selected Landsat scene: (a) Christchurch, New Zealand; (b) L'Aquila, Italy; (c) Bam, Iran (basemap source: Esri 2020).
Figure 2. Spectral profiles of four classes in (a) Christchurch, New Zealand; (b) L'Aquila, Italy; and (c) Bam, Iran.
Figure 3. Spectral profiles of four classes in the constructed RGB images for 100 random sample pixels in (a) Christchurch, New Zealand (2017); (b) L'Aquila, Italy (2017); and (c) Bam, Iran (2018).
Figure 4. Comparison of (a) the constructed RGB image (R: UI, G: NDVI, B: MNDWI) (2017); (b) the binary image (0: non-urban, 1: built-up) (2017); and (c) an aerial image for the case of Christchurch, New Zealand (Esri, DigitalGlobe 2020).
Figure 5. Comparison of (a) the constructed RGB image (R: UI, G: NDVI, B: MNDWI) (2017); (b) the binary image (0: non-urban, 1: built-up) (2017); and (c) an aerial image for the case of L'Aquila, Italy (Esri, DigitalGlobe 2020).
Figure 6. Comparison of (a) the constructed RGB image (R: UI, G: NDVI, B: MNDWI) (2018); (b) the binary image (0: non-urban, 1: built-up) (2018); and (c) an aerial image for the case of Bam, Iran (Esri, DigitalGlobe 2020).
Figure 7. Urban development for Christchurch, New Zealand, for (a) pre-event years (with event year) and (b) post-event years.
Figure 8. Annual % urban land-surface change for Christchurch, New Zealand (with 7.71% error variation depicted by the dashed line).
Figure 9. Urban development for L'Aquila, Italy, for (a) pre-event years (with event year) and (b) post-event years.
Figure 10. Annual % urban land-surface change for L'Aquila, Italy (with 8% error variation depicted by the dashed line).
Figure 11. Urban development for Bam, Iran, for (a) pre-event years (with event year) and (b) post-event years.
Figure 12. Annual % urban land-surface change for Bam, Iran (with 8% error variation depicted by the dashed line).
Figure 13. Comparative overview of annual % built-up land-surface change (year of the event = 0).
Figure 14. (a) Annual % urban/built-up land-surface change for Christchurch, New Zealand, by district; (b) district boundaries in the AOI for Christchurch, New Zealand.
13 pages, 7241 KiB  
Letter
Research on Post-Earthquake Landslide Extraction Algorithm Based on Improved U-Net Model
by Peng Liu, Yongming Wei, Qinjun Wang, Yu Chen and Jingjing Xie
Remote Sens. 2020, 12(5), 894; https://doi.org/10.3390/rs12050894 - 10 Mar 2020
Cited by 97 | Viewed by 6624
Abstract
Seismic landslides are the most common and most destructive earthquake-triggered geological hazards. They are large in scale and occur simultaneously in many places. Obtaining landslide information quickly after an earthquake is therefore key to disaster mitigation and relief. Survey results show that most landslide-information extraction methods involve too much manual participation, resulting in a low degree of automation and an inability to provide effective information for earthquake rescue in time. To solve these problems and improve the efficiency of landslide identification, this paper proposes an automatic landslide identification method based on an improved U-Net model, in which the intelligent extraction of post-earthquake landslide information is realized through the automatic extraction of hierarchical features. The main innovations of this paper are as follows: (1) On the basis of the three RGB bands, three new bands carrying spatial information, DSM, slope, and aspect, are added, increasing the number of feature parameters of the training samples. (2) The U-Net model structure is rebuilt by adding residual learning units to the up-sampling and down-sampling processes, to solve the problem that the traditional U-Net model, with its shallow structure, cannot fully extract the characteristics of the six-channel landslide data. Finally, the new method is applied to Jiuzhaigou County, Sichuan Province, China. The results show that the accuracy of the new method is 91.3%, which is 13.8% higher than that of the traditional U-Net model, demonstrating that the new method is effective and feasible for the automatic extraction of post-earthquake landslides.
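The residual learning units added to the U-Net structure follow the standard ResNet idea: the block learns a correction F(x) and adds an identity shortcut, y = relu(F(x) + x). The NumPy sketch below illustrates only that shortcut pattern; dense weight matrices stand in for the convolutional layers of the actual model, and the function names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2):
    """Two-layer residual learning unit: the residual branch F(x) is two
    weight layers with a ReLU in between, and the input is added back via
    the identity shortcut before the final activation."""
    f = relu(x @ w1) @ w2      # residual branch F(x)
    return relu(f + x)         # identity shortcut, then activation
```

With zero-initialised weights the branch contributes nothing and the unit reduces to the identity (for non-negative inputs), which is exactly why residual units ease the training of deeper encoder/decoder stacks.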
Figure 1. U-Net model structure diagram.
Figure 2. (a) Residual learning unit; (b) two-layer residual learning unit.
Figure 3. Structure of the improved U-Net network.
Figure 4. Flowchart of the U-Net model.
Figure 5. Location of the study area.
Figure 6. Locations of the test (a) and training (b) areas.
Figure 7. Dataset production.
Figure 8. Extraction results. (a) Landslide distribution map obtained by manual visual interpretation; (b) results of U-Net + three channels; (c) results of U-Net + six channels; (d) results of U-Net + six channels + ResNet.
Figure 9. (a,b) Incorrect extractions; (c) missed detections.
24 pages, 8239 KiB  
Article
X-Net-Based Radar Data Assimilation Study over the Seoul Metropolitan Area
by Ji-Won Lee, Ki-Hong Min, Young-Hee Lee and GyuWon Lee
Remote Sens. 2020, 12(5), 893; https://doi.org/10.3390/rs12050893 - 10 Mar 2020
Cited by 15 | Viewed by 5084
Abstract
This study investigates the ability of the high-resolution Weather Research and Forecasting (WRF) model to simulate summer precipitation with assimilation of X-band radar network data (X-Net) over the Seoul metropolitan area. Numerical data assimilation (DA) experiments with X-Net (S- and X-band Doppler radar) radial velocity and reflectivity data for three events of convective systems along the Changma front are conducted. In addition to the conventional assimilation of radar data, which focuses on assimilating the radial velocity and reflectivity of precipitation echoes, this study assimilates null-echoes and analyzes the effect of null-echo data assimilation on short-term quantitative precipitation forecasting (QPF). A null-echo is defined as a region with non-precipitation echoes within the radar observation range. The model removes excessive humidity and four types of hydrometeors (wet and dry snow, graupel, and rain) based on the radar reflectivity by using a three-dimensional variational (3D-Var) data assimilation technique within the WRFDA system. Some procedures for preprocessing radar reflectivity data and using null-echoes in this assimilation are discussed. Numerical experiments with conventional radar DA over-predicted the precipitation. However, experiments with additional null-echo information removed excessive water vapor and hydrometeors and suppressed erroneous model precipitation. The results of statistical model verification showed improvements in the analysis and objective forecast scores, reducing the amount of over-predicted precipitation. An analysis of a contoured frequency by altitude diagram (CFAD) and time–height cross-sections showed that increased hydrometeors throughout the data assimilation period enhanced precipitation formation, and reflectivity under the melting layer was simulated similarly to the observations during the peak precipitation times. In addition, overestimated hydrometeors were reduced through null-echo data assimilation. Full article
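The null-echo adjustment described above, removing hydrometeors and excess humidity where the radar sees no precipitation, might look roughly like this. This is a toy NumPy sketch; the 0 dBZ threshold and 95% relative-humidity cap are illustrative assumptions, not the WRFDA operators.

```python
import numpy as np

def apply_null_echo(reflectivity_dbz, hydrometeors, qv, qv_sat,
                    null_thresh_dbz=0.0, rh_cap=0.95):
    """Where the radar reports a null-echo (non-precipitation echo), zero the
    hydrometeor mixing ratios and cap water vapor below saturation."""
    null = reflectivity_dbz < null_thresh_dbz          # null-echo mask
    cleaned = {name: np.where(null, 0.0, q) for name, q in hydrometeors.items()}
    qv_adj = np.where(null, np.minimum(qv, rh_cap * qv_sat), qv)
    return cleaned, qv_adj
```

In the paper this removal is applied to all four hydrometeor types (wet and dry snow, graupel, rain) within the 3D-Var analysis; here a plain dictionary of arrays stands in for the model fields.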
(This article belongs to the Special Issue Precipitation and Water Cycle Measurements Using Remote Sensing)
Figure 1: Locations of the S-band radars (blue dots), X-band radars (red dots), automated weather station (AWS) sites (gray dots), and radiosondes (yellow dots), with the radar coverage areas in circles.
Figure 2: Flow chart for assimilating radar reflectivity with null-echo observation operators.
Figure 3: (a) Domain configuration and topography (shaded) of D01, D02, and D03; (b) the border lines of the city and provinces used in D03 (red line for Seoul, orange line for Gyeonggi, blue line for Hwanghae).
Figure 4: Synoptic analysis for (a) 0000 UTC 2 July 2017, (b) 1200 UTC 22 July 2017, and (c) 0000 UTC 28 August 2018.
Figure 5: Comparison of (a) the radar reflectivity (dBZ), the reflectivity of (b) CTRL, (c) DA, and (d) DA_NP, and the difference in qv between (e) DA and CTRL and between (f) DA_NP and CTRL at 0900 UTC 2 July 2017.
Figure 6: Same as Figure 5 except for 2100 UTC 22 July 2017.
Figure 7: Same as Figure 5 except for 0900 UTC 28 August 2018.
Figure 8: Cumulative precipitation (mm) distribution for Case 1 from the (a) AWSs, (b) CTRL, (c) DA, and (d) DA_NP at D03.
Figure 9: Same as Figure 8 except for Case 2.
Figure 10: Same as Figure 8 except for Case 3.
Figure 11: Verification statistics of the (a) accuracy, (b) critical success index (CSI), and (c) equitable threat score (ETS) for CTRL (black lines), DA (red lines), and DA_NP (blue lines).
Figure 12: Vertical profile biases of (a) water vapor mixing ratio, (b) temperature, (c) U wind component, and (d) V wind component for CTRL (black lines), DA (red lines), and DA_NP (blue lines) at 1200 UTC 2 July 2017 (solid lines) and 1200 UTC 28 August 2018 (dashed lines).
Figure 13: Contoured frequency by altitude diagram (CFAD) percentiles of the (a) Kwanak Mountain (KWK) observed and (b) CTRL, (c) DA, and (d) DA_NP simulated radar reflectivity for Case 2. The horizontal black dotted line in panels (b), (c), and (d) represents the model's 0 °C height. The cumulative reflectivity frequencies of the 25th, 50th, and 75th percentiles are marked with black solid lines, the average reflectivity factor in linear units (mm⁶·m⁻³) is marked by a white solid line, and the average reflectivity is marked by a red solid line (KWK) and a blue solid line (models).
Figure 14: Same as Figure 13 except for the Case 3 forecast period.
Figure 15: Time–height cross-sections for Case 2 at the KWK radar site for (a) the observations, (b) CTRL, (c) DA, and (d) DA_NP.
Figure 16: Same as Figure 15 except for the Case 3 forecast period.
25 pages, 14879 KiB  
Article
Combining InfraRed Thermography and UAV Digital Photogrammetry for the Protection and Conservation of Rupestrian Cultural Heritage Sites in Georgia: A Methodological Application
by William Frodella, Mikheil Elashvili, Daniele Spizzichino, Giovanni Gigli, Luka Adikashvili, Nikoloz Vacheishvili, Giorgi Kirkitadze, Akaki Nadaraia, Claudio Margottini and Nicola Casagli
Remote Sens. 2020, 12(5), 892; https://doi.org/10.3390/rs12050892 - 10 Mar 2020
Cited by 44 | Viewed by 6876
Abstract
The rock-cut city of Vardzia is an example of the extraordinary rupestrian cultural heritage of Georgia. The site, Byzantine in age, was carved in the steep tuff slopes of the Erusheti mountains, and due to its peculiar geological characteristics, it is particularly vulnerable to weathering and degradation, as well as frequent instability phenomena. These problems determine serious constraints on the future conservation of the site, as well as the safety of the visitors. This paper focuses on the implementation of a site-specific methodology, based on the integration of advanced remote sensing techniques, such as InfraRed Thermography (IRT) and Unmanned Aerial Vehicle (UAV)-based Digital Photogrammetry (DP), with traditional field surveys and laboratory analyses, with the aim of mapping the potential criticality of the rupestrian complex on a slope scale. The adopted methodology proved to be a useful tool for the detection of areas of weathering and degradation on the tuff cliffs, such as moisture and seepage sectors related to the ephemeral drainage network of the slope. These insights provided valuable support for the design and implementation of sustainable mitigation works, to be profitably used in the management plan of the site of Vardzia, and can be used for the protection and conservation of rupestrian cultural heritage sites characterized by similar geological contexts. Full article
Figure 1: The rock-cut city of Vardzia (a), geographic location of the study area (b), the Mtkvari river valley and the Vardzia complex location, including the InfraRed Thermography (IRT) camera installation point (c), the monastery complex: arched structures (d), frescoed chapels (e).
Figure 2: (a) Vardzia area geological map, (b) schematic cross-section, (c) stereoplot diagrams of discontinuities collected along the rock mass (modified after Margottini et al., 2015).
Figure 3: Work plan of the adopted methodology.
Figure 4: Weather microclimate data recorded by the weather station of Vardzia: (a) daily cumulated rainfall recorded during 2016 in Vardzia and weather conditions during IRT surveys S1 and S2, (b) air temperature and relative humidity recorded during 2016.
Figure 5: 3D geological map of the Vardzia slope (obtained with the Unmanned Aerial Vehicle (UAV)-based Digital Photogrammetry (DP) 3D surface).
Figure 6: Vardzia slope structural setting and slope instabilities: subvertical joint set (a), planar failure (b), direct fall (a) and topple (c) along the contact between the Upper Breccia and White Tuffs. White Tuffs: weathered surfaces in correspondence of planar failures (e–f: high-angle joint set parallel to the slope) and wedge failures (g).
Figure 7: (a) Moisture stations, (b) VC4 microclimate observation station data.
Figure 8: IRT data acquired in S1: (a) mosaicked thermograms acquired on 16 July 2016 at 19:00, including surface temperature profiles Li1-3, (b) corresponding classified image.
Figure 9: IRT data acquired in S2: (a) mosaicked thermograms acquired on 20 November at 18:40, including ST profiles Li1-3, (b) corresponding classified image, (c) corresponding visible image.
Figure 10: Front view of the modeled drainage network on the Vardzia slope 3D surface.
Figure 11: UAV-DP products: (a) 2D orthoprojection of Vardzia's hydrographic network on the upper slope sector and cliff face, (b) detail of the runnel-retaining wall sector impacted by boulders.
Figure 12: Mosaicked 3D surface temperature maps of the Vardzia monastery, obtained by merging the single thermograms with the UAV-DP slope surface.
Figure 13: Field evidence of water erosion and sediment transport: (a) bottom view of the main streams (1-2-3), rock slope cuts of Stream 1 (b), 2 (c) and 3 (d), view of the erosional–transport features in the slope break between the upper slope and cliff face for Stream 1 (e–h), 2 (f–i) and 3 (g–l).
Figure 14: Conservation criticalities due to water runoff and infiltration: intense water runoff during a rainfall event on 23 May 2014 on the slope face (a) and slope toe (b) (courtesy of the director of the site), detail of a collapsed block in AOI5 (c: before the collapse, photo taken on 25 May 2015) and after the collapse (d). Details of rock collapses that occurred after rainfall events: 11 July 2016 (e) and 19 September 2016 (f).
Figure 15: Close-ups of the criticalities in the cave systems acquired on S1: mosaicked thermogram (a) and corresponding photo (b) of a cave system characterized by a persistent and spaced fracture system, mosaicked thermograms of an intensively fractured cave system (c) with corresponding photo (d), mosaicked thermogram of a single cave (e) and corresponding photo (f) characterized by spalling of the rock surface in the roof of the room, controlled by rock weathering and stress degradation.
Figure 16: (a) Contribution to the General Master Plan of the proposed mitigation measures for the whole Vardzia monastery: system of surface water collection and runnel-retaining walls built along the upper sector of the monastery slope rock wall (modified after [17]), sectors characterized by instability on the rock cliff with related anchor type, (b) water-draining structures in Vardzia formed by the runnel-retaining wall system, (c) detail of a scattered boulder on the upper slope sector.
23 pages, 6223 KiB  
Article
Exploration for Object Mapping Guided by Environmental Semantics using UAVs
by Reem Ashour, Tarek Taha, Jorge Manuel Miranda Dias, Lakmal Seneviratne and Nawaf Almoosa
Remote Sens. 2020, 12(5), 891; https://doi.org/10.3390/rs12050891 - 10 Mar 2020
Cited by 9 | Viewed by 4215
Abstract
This paper presents a strategy to autonomously explore unknown indoor environments, focusing on 3D mapping of the environment and performing grid level semantic labeling to identify all available objects. Unlike conventional exploration techniques that utilize geometric heuristics and information gain theory on an occupancy grid map, the work presented in this paper considers semantic information, such as the class of objects, in order to gear the exploration towards environmental segmentation and object labeling. The proposed approach utilizes deep learning to map 2D semantically segmented images into 3D semantic point clouds that encapsulate both occupancy and semantic annotations. A next-best-view exploration algorithm is employed to iteratively explore and label all the objects in the environment using a novel utility function that balances exploration and semantic object labeling. The proposed strategy was evaluated in a realistically simulated indoor environment, and results were benchmarked against other exploration strategies. Full article
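A utility of the kind described, balancing exploration against semantic object labeling, can be sketched as a convex combination of the two terms. The weighting and normalization here are illustrative assumptions, not the paper's exact utility function.

```python
def view_utility(n_unknown, n_low_confidence, n_visible, alpha=0.5):
    """Score a candidate viewpoint as a convex combination of a volumetric
    exploration term (unknown voxels in view) and a semantic labeling term
    (object voxels whose class confidence is still low)."""
    if n_visible == 0:
        return 0.0
    exploration = n_unknown / n_visible        # information-gain term
    semantic = n_low_confidence / n_visible    # object-labeling term
    return alpha * exploration + (1.0 - alpha) * semantic
```

A next-best-view planner would evaluate this score for each sampled viewpoint and move toward the maximizer; with `alpha = 1.0` it degenerates to a purely geometric exploration utility.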
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1: Proposed semantically-aware exploration for object labeling, and the 3D mapping system architecture.
Figure 2: Pyramid scene parsing network (PSPNet). Extracted from [28].
Figure 3: RGB image and semantically segmented image.
Figure 4: Semantic labels of the ADE20K dataset in BGR format.
Figure 5: Point cloud generation.
Figure 6: Max fusion.
Figure 7: Right: an octree example where the free voxels are shaded white and occupied voxels are shaded black. Left: volumetric model that shows the corresponding tree representation on the right [41].
Figure 8: General components of the NBV method.
Figure 9: Overview of the functional concept of the proposed semantic exploration planner. At every iteration, a random tree of finite depth is sampled in the known free space of robot configurations. All branches of this tree are evaluated in terms of a combined information gain related to exploration of unknown areas and a semantic-information-based class confidence value as represented in the occupancy map. The best branch is identified, and the step to its first viewpoint is conducted by the robot. Subsequently, the whole procedure is repeated in a receding-horizon fashion.
Figure 10: SVV utility function illustration: occupied (black), free (green), unknown (gray), rays (red lines), sensor range (dashed blue), visible unknown (pink cubes).
Figure 11: Simulation environment.
Figure 12: Generated semantic maps using different utility functions after 120 iterations. The colors correspond to semantic classes, the green line segments represent the path travelled, and the red arrows represent the viewpoints along the path.
Figure 13: Volumetric coverage reached by different utilities using the same number of iterations.
Figure 14: Number of voxels labelled for each object using different utility functions.
Figure 15: Volumetric coverage (proposed utility SVOI where the object of interest is the person).
Figure 16: Person detection at the voxel level.
24 pages, 3451 KiB  
Article
Can Landsat-Derived Variables Related to Energy Balance Improve Understanding of Burn Severity From Current Operational Techniques?
by Alfonso Fernández-Manso, Carmen Quintano and Dar A. Roberts
Remote Sens. 2020, 12(5), 890; https://doi.org/10.3390/rs12050890 - 10 Mar 2020
Cited by 6 | Viewed by 3657
Abstract
Forest managers rely on accurate burn severity estimates to evaluate post-fire damage and to establish revegetation policies. Burn severity estimates based on reflective data acquired from sensors onboard satellites are increasingly complementing field-based ones. However, fire not only induces changes in reflected and emitted radiation measured by the sensor, but also on energy balance. Evapotranspiration (ET), land surface temperature (LST) and land surface albedo (LSA) are greatly affected by wildfires. In this study, we examine the usefulness of these elements of energy balance as indicators of burn severity and compare the accuracy of burn severity estimates based on them to the accuracy of widely used approaches based on spectral indexes. We studied a mega-fire (more than 450 km2 burned) in Central Portugal, which occurred from 17 to 24 June 2017. The official burn severity map acted as a ground reference. Variations induced by fire during the first year following the fire event were evaluated through changes in ET, LST and LSA derived from Landsat data and related to burn severity. Fisher’s least significant difference test (ANOVA) revealed that ET and LST images could discriminate three burn severity levels with statistical significance (uni-temporal and multi-temporal approaches). Burn severity was estimated from ET, LST and LSA using thresholding. Accuracy of ET and LST based on burn severity estimates was adequate (κ = 0.63 and 0.57, respectively), similar to the accuracy of the estimate based on dNBR (κ = 0.66). We conclude that Landsat-derived surface energy balance variables, in particular ET and LST, in addition to acting as useful indicators of burn severity for mega-fires in Mediterranean ecosystems, may provide critical information about how energy balance changes due to fire. Full article
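The thresholding step that turns a change image into burn severity levels can be sketched as follows. The break values below follow commonly cited USGS dNBR ranges and are hypothetical here, not the thresholds calibrated for ET, LST, or LSA in the paper.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized burn ratio from near-infrared and shortwave-infrared bands."""
    return (nir - swir2) / (nir + swir2)

def classify_burn_severity(dnbr, thresholds=(0.1, 0.27, 0.44, 0.66)):
    """Threshold pre-fire minus post-fire NBR (dNBR) into five severity
    levels: 0 = unburned ... 4 = high severity."""
    return np.digitize(dnbr, thresholds)
```

The same `np.digitize` pattern applies when the input is a change image of ET or LST instead of dNBR, with thresholds chosen against the ground-reference severity map.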
Figure 1: Left: location of the study area; right: vegetation species map (upper), climatic map (center), altitude map (lower).
Figure 2: Temporal evolution of evapotranspiration (ET) (upper row), land surface temperature (LST) (second row), land surface albedo (LSA) (third row), normalized difference vegetation index (NDVI) (fourth row) and normalized burn ratio (NBR) (lower row) images. Left column: 06/15/17; second column: 07/01/17; third column: 08/02/17; fourth column: 09/19/17; right column: 08/05/18.
Figure 3: Burn severity maps, initial assessment. Left: based on ET (uni-temporal perspective); right: based on dNBR_i (multi-temporal perspective).
22 pages, 8756 KiB  
Article
Novel Soil Moisture Estimates Combining the Ensemble Kalman Filter Data Assimilation and the Method of Breeding Growing Modes
by Yize Li, Hong Shu, B. G. Mousa and Zhenhang Jiao
Remote Sens. 2020, 12(5), 889; https://doi.org/10.3390/rs12050889 - 10 Mar 2020
Cited by 4 | Viewed by 3722
Abstract
Soil moisture plays an important role in climate prediction and drought monitoring. Data assimilation, as a method of integrating multi-geographic spatial data, plays an increasingly important role in estimating soil moisture. Model prediction error, an important part of the background field information, occupies a position that could not be ignored in data assimilation. The model prediction error in data assimilation consists of three parts: forcing data error, initial field error, and model error. However, the influence of model error in current data assimilation methods has not been completely considered in many studies. Therefore, we proposed a theoretical framework of the ensemble Kalman filter (EnKF) data assimilation based on the breeding of growing modes (BGM) method. This framework used the BGM method to perturb the initial field error term w of EnKF, and the EnKF data assimilation to assimilate the data to obtain the soil moisture analysis value. The feasibility and superiority of the proposed framework were verified, taking into consideration breeding length and ensemble size through experiments. We conducted experiments and evaluated the accuracy of the BGM and the Monte Carlo (MC) methods. The experiment showed that the BGM method could improve the estimation accuracy of the assimilated soil moisture and solve the problem of model error which is not fully expressed in data assimilation. This study can be widely used in data assimilation and has a significant role in weather forecast and drought monitoring. Full article
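The breeding step that perturbs the EnKF initial field can be sketched for a generic model as below. This is a minimal toy, with the rescaling amplitude and cycle count as illustrative assumptions; `model_step` stands in for one VIC forecast over the breeding length.

```python
import numpy as np

def breed_growing_mode(model_step, x0, seed_pert, n_cycles=3, amplitude=0.01):
    """Breed a perturbation: repeatedly run control and perturbed forecasts,
    take their difference, and rescale it to a fixed amplitude. The bred
    vector converges toward the fastest-growing error direction."""
    bred = np.asarray(seed_pert, dtype=float)
    state = np.asarray(x0, dtype=float)
    for _ in range(n_cycles):
        control = model_step(state)
        perturbed = model_step(state + bred)
        diff = perturbed - control               # growing error direction
        bred = amplitude * diff / np.linalg.norm(diff)  # rescale to fixed size
        state = control                          # control becomes next initial field
    return bred
```

The bred vector then replaces the random (Monte Carlo) perturbation of the EnKF initial ensemble, which is the substitution the proposed framework evaluates.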
Figure 1: The theoretical framework of the ensemble Kalman filter (EnKF) data assimilation based on the breeding of growing modes (BGM) method. VIC indicates the variable infiltration capacity model.
Figure 2: The process of the breeding of growing modes. "I" indicates the initial field, "P" indicates a random perturbation, "D" indicates the difference between the perturbation prediction value and the control forecast value.
Figure 3: Study area and the spatial distribution of the in situ soil moisture sites.
Figure 4: The root mean square error (RMSE) values between the CCI data and: (a) the VIC simulation; the BGM method using breeding lengths of (b) 24 h; (c) 48 h; and (d) 72 h.
Figure 5: The correlation coefficient (R) values between the CCI data and: (a) the VIC simulation; the BGM method using breeding lengths of (b) 24 h; (c) 48 h; and (d) 72 h.
Figure 6: The RMSE values between the CCI data and the assimilated data with ensemble sizes of: (a) 20; (b) 50; (c) 100; (d) 200; and (e) 500.
Figure 7: The R values between the CCI data and the assimilated data with ensemble sizes of: (a) 20; (b) 50; (c) 100; (d) 200; and (e) 500.
Figure 8: The RMSE between the CCI data and the assimilated data using: (a) the BGM method; and (b) the Monte Carlo (MC) method.
Figure 9: The R between the CCI data and the assimilated data using: (a) the BGM method; and (b) the MC method.
Figure 10: The deviation in RMSEs between the data assimilations using the BGM and the MC methods: (a) spatial distribution of the RMSE difference values over the study area; (b) RMSE difference values by grid cell number.
Figure 11: The RMSE (a) and R (b) of the data assimilation using the BGM method, the MC method and the VIC simulation experiments.
17 pages, 5340 KiB  
Article
Assessing the Link between Human Modification and Changes in Land Surface Temperature in Hainan, China Using Image Archives from Google Earth Engine
by Lixia Chu, Francis Oloo, Helena Bergstedt and Thomas Blaschke
Remote Sens. 2020, 12(5), 888; https://doi.org/10.3390/rs12050888 - 10 Mar 2020
Cited by 23 | Viewed by 4969
Abstract
In many areas of the world, population growth and land development have increased demand for land and other natural resources. Coastal areas are particularly susceptible since they are conducive for marine transportation, energy production, aquaculture, marine tourism and other activities. Anthropogenic activities in the coastal areas have triggered unprecedented land use change, depletion of coastal wetlands, loss of biodiversity, and degradation of other vital ecosystem services. The changes can be particularly drastic for small coastal islands with rich biodiversity. In this study, the influence of human modification on land surface temperature (LST) for the coastal island Hainan in Southern China was investigated. We hypothesize that for this island, footprints of human activities are linked to the variation of land surface temperature, which could indicate environmental degradation. To test this hypothesis, we estimated LST changes between 2000 and 2016 and computed the spatio-temporal correlation between LST and human modification. Specifically, we classified temperature data for the four years 2000, 2006, 2012 and 2016 into 5 temperature zones based on their respective mean and standard deviation values. We then assessed the correlation between each temperature zone and a human modification index computed for the year 2016. Apart from this, we estimated mean, maximum and the standard deviation of annual temperature for each pixel in the 17 years to assess the links with human modification. The results showed that: (1) The mean LST temperature in Hainan Island increased with fluctuations from 2000 to 2016. (2) The moderate temperature zones were dominant in the island during the four years included in this study. (3) A strong positive correlation of 0.72 between human modification index and mean and maximum LST temperature indicated a potential link between human modification and mean and maximum LST temperatures over the 17 years of analysis. (4) The mean value of human modification index in the temperature zones in 2016 showed a progressive rise with 0.24 in the low temperature zone, 0.33 in the secondary moderate, 0.45 in the moderate, 0.54 in the secondary high and 0.61 in the high temperature zones. This work highlighted the potential value of using large and multi-temporal earth observation datasets from cloud platforms to assess the influence of human activities in sensitive ecosystems. The results could contribute to the development of sustainable management and coastal ecosystems conservation plans. Full article
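The mean/standard-deviation zoning might be sketched as below. The exact break points (mean ± 0.5·std and mean ± std) are an assumption; the abstract states only that the five zones were derived from each year's mean and standard deviation.

```python
import numpy as np

def lst_temperature_zones(lst):
    """Classify an LST image into five zones from its own mean and standard
    deviation: low, secondary moderate, moderate, secondary high, high."""
    m, s = float(lst.mean()), float(lst.std())
    breaks = [m - s, m - 0.5 * s, m + 0.5 * s, m + s]
    return np.digitize(lst, breaks)  # 0 = low ... 4 = high
```

Because the breaks are computed per image, the zoning adapts to each year's temperature distribution, which is what makes zones from different years comparable.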
Figure 1: Map of the study area, Hainan.
Figure 2: Flow diagram of the main steps of the analysis. The steps in blue outline the retrieval of LST dynamics and the human modification index within GEE archives. The steps highlighted in green outline the analysis of the correlation between LST, human modification and land use/cover in a standard desktop GIS platform.
Figure 3: The spatial (a) and temporal annual mean (b) temperature changes during 2000 to 2016.
Figure 4: The LST pattern changes in the four years 2000 (a), 2006 (b), 2012 (c), and 2016 (d).
Figure 5: Comparison of mean (a), standard deviation (b), and maximum temperature (c) from 17 years of data against the human modification data (d).
Figure 6: The distribution of human modification in the temperature zones in 2016.
Figure 7: Correlation between temperature (mean (a), maximum (b) and standard deviation (c)) and the human modification index.
Figure 8: The land use/land cover on Hainan Island (data source: Finer Resolution Observation and Monitoring—Global land cover dataset 2017 v1 version [53]).
Figure 9: Variation of the human modification index and mean land surface temperature by land cover class.
18 pages, 64457 KiB  
Article
Pars pro toto—Remote Sensing Data for the Reconstruction of a Rounded Chalcolithic Site from NE Romania: The Case of Ripiceni–Holm Settlement (Cucuteni Culture)
by Andrei Asăndulesei, Felix Adrian Tencariu and Ionut Cristi Nicu
Remote Sens. 2020, 12(5), 887; https://doi.org/10.3390/rs12050887 - 10 Mar 2020
Cited by 14 | Viewed by 5122
Abstract
Prehistoric sites in NE Romania are facing major threats more than ever, both from natural and human-induced hazards. One of the main reasons are the climate change determined natural disasters, but human-induced activities should also not be neglected. The situation is critical for Chalcolithic sites, with a very high density in the region and minimal traces at the surface, that are greatly affected by one or more natural hazards and/or anthropic interventions. The case study, Ripiceni–Holm, belonging to Cucuteni culture, is one of the most important Chalcolithic discoveries in the region. It is also the first evidence from Romania of a concentric arrangement of buildings in the proto-urban mega-sites tradition in Cucuteni-Trypillia cultural complex, and a solid piece of evidence in terms of irreversible natural and anthropic destruction. Using archival cartographic material, alongside non-destructive and high-resolution airborne sensing and ground-based geophysical techniques (LiDAR, total field and vertical gradient magnetometry), we managed to detect diachronic erosion processes for 31 years, to identify a complex internal spatial organization of the actual site and to outline a possible layout of the initial extent of the settlement. The erosion was determined with the help of the DSAS tool and highlighted an average erosion rate of 0.96 m/year. The main results argue a high percent of site destruction (approximately 45%) and the presence of an active shoreline affecting the integrity of the cultural layer. Full article
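The End Point Rate reported above (an average of about 0.96 m/year over the 31 years between 1984 and 2015) reduces to a one-line computation per transect. This is a sketch of the DSAS definition, not the tool itself; transect distances are signed, with shoreline retreat negative.

```python
def end_point_rate(d_old, d_new, year_old, year_new):
    """DSAS End Point Rate: net shoreline movement (newest minus oldest
    shoreline position along a transect, in metres) divided by the elapsed
    years. Negative values mean the shore retreated (erosion)."""
    return (d_new - d_old) / (year_new - year_old)
```

DSAS computes this per transect and the paper reports the transect average; since every transect here is erosional, the Shoreline Change Envelope and Net Shoreline Movement coincide, as the Figure 3 note observes.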
Show Figures

Graphical abstract
Figure 1. (A) The location of the study area in Europe, (B) NE Romania, and (C,D) Stânca-Costești reservoir micro-region (spatial data sources: (A) ESRI European boundaries, (B) 0.5 m/pixel LiDAR hillshade, and (C,D) digital elevation model provided by the Romanian Water Administration).
Figure 2. Cartographic material used in the study: (A) 3rd Military Mapping Survey of Austria, measurements from 1895 in the area, scale 1:200,000; (B) Romanian Military Maps, scale 1:20,000, edition 1939; (C) Topographic plans, scale 1:5000, edition 1976; (D) Topographic maps, scale 1:25,000, edition 1984; orthophotos, scale 1:5000, editions (E) 2005 and (F) 2012; field surveys with a differential GNSS system from 2012 and 2015; (G) LiDAR hillshade derived from the 2013 DEM, belonging to the Romanian Water Administration; (H) Archaeological site view from the south.
Figure 3. (A) Shoreline Change Envelope, (B) Net Shoreline Movement and (C) End Point Rate calculated parameters showing shoreline dynamics between 1984 and 2015 (note that the SCE and NSM are similar due to the exclusively negative values of the transects; no accretion is attested).
Figure 4. Details with shoreline limits from (A) 2005 and 2015; (B,C) in situ cultural layer and (D) archaeological remains scattered on the beach in the north-eastern part of the settlement.
Figure 5. (A) Magnetic map (−15/+15 nT, white/black) for the site in question; (B) Interpretation of the magnetogram.
Figure 6. (A) Spatial organization of the magnetic features and possible layout of the initial site; (B) Magnetic map (−15/+15 nT, white/black) of the Ripiceni–Popoaia archaeological site (Cucuteni culture), which has a planimetry similar to the case study.
Figure 7. Shoreline forecast for the next 10 and 20 years showing endangered areas of the Ripiceni–Holm archaeological site.
16 pages, 3189 KiB  
Article
Adapting Satellite Soundings for Operational Forecasting within the Hazardous Weather Testbed
by Rebekah B. Esmaili, Nadia Smith, Emily B. Berndt, John F. Dostalek, Brian H. Kahn, Kristopher White, Christopher D. Barnet, William Sjoberg and Mitchell Goldberg
Remote Sens. 2020, 12(5), 886; https://doi.org/10.3390/rs12050886 - 10 Mar 2020
Cited by 21 | Viewed by 5017
Abstract
In this paper, we describe how researchers and weather forecasters work together to make satellite sounding data sets more useful in severe weather forecasting applications through participation in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) and the Joint Polar Satellite System (JPSS) Proving Ground and Risk Reduction (PGRR) program. The HWT provides a forum for collaboration to improve products ahead of widespread operational deployment. We found that the utilization of NOAA-Unique Combined Atmospheric Processing System (NUCAPS) soundings improved when product developers and forecasters communicated directly to overcome misunderstandings and to refine user requirements. Here we share our adaptive strategy for (1) assessing when and where NUCAPS soundings improved operational forecasts by using real convective case studies and (2) working to increase NUCAPS utilization by improving existing products through direct, face-to-face interaction. Our goal is to discuss the lessons we learned and to share both our successes and challenges in working with the weather forecasting community in designing, refining, and promoting novel products. We foresee that our experience with the NUCAPS product development life cycle may be relevant to other communities, who can then build on these strategies to transition their products from research to operations (and operations back to research) within the satellite meteorology community. Full article
Show Figures

Graphical abstract
Figure 1. High-level flow chart of the step-wise NOAA-Unique Combined Atmospheric Processing System (NUCAPS) retrieval algorithm that outputs temperature (T), moisture (q) and trace gases. In Advanced Weather Interactive Processing Systems (AWIPS), NUCAPS retrievals of T, q and ozone (O3) are color-coded as red, yellow and green to indicate if and when they failed quality control checks. Steps B and D, shown in yellow with black text, are regression steps; if they fail, they will be flagged as yellow in AWIPS, and these retrievals should be used with caution. Steps A, C, and E, shown in red with white text, are cloud clearing or retrieval stages of the algorithm. If any of these fails, the retrieval is unlikely to yield meaningful results, and it will be flagged red in AWIPS. The entire algorithm runs regardless of whether any one step passes or fails.
Figure 2. The four NUCAPS products demonstrated in the 2019 Hazardous Weather Testbed Experimental Forecast Program: baseline NUCAPS soundings in (a) plan view with quality flags. The NSHARP display of (b) baseline NUCAPS soundings and (c) modified soundings northeast of Bismarck, ND on May 15, 2019 ahead of a low-level moisture gradient; (d) gridded NUCAPS showing 2FHAG temperature on June 3, 2019; and (e) NUCAPS-Forecast on May 10, 2019 showing CAPE gradients five hours past initialization.
Figure 3. Responses to "How helpful were the following NUCAPS products to making your forecast(s)?".
Figure 4. Responses to the questions (a) "Did you use NUCAPS products as a component in your decision to issue a warning or Special Weather Statement?" and (b) "Which product(s) factored into your decision process?".
Figure 5. Responses to the question "Which of the following NUCAPS profiles did you use?".
Figure 6. Responses to the question "If convection initiated, did NUCAPS-Forecast provide skill in determining the eventual convective intensity, convective mode, and type of severe weather produced?".
Figure 7. Responses to the question "Did any of the following prevent you from using NUCAPS products in your analysis?".
Figure 8. Responses to the question "How often would you use NUCAPS in the future?".
17 pages, 11917 KiB  
Article
Individual Tree Detection in a Eucalyptus Plantation Using Unmanned Aerial Vehicle (UAV)-LiDAR
by Juan Picos, Guillermo Bastos, Daniel Míguez, Laura Alonso and Julia Armesto
Remote Sens. 2020, 12(5), 885; https://doi.org/10.3390/rs12050885 - 10 Mar 2020
Cited by 62 | Viewed by 8413
Abstract
The present study addresses tree counting in a Eucalyptus plantation, the most widely planted hardwood in the world. Unmanned aerial vehicle (UAV) light detection and ranging (LiDAR) was used to estimate the number of Eucalyptus trees. LiDAR-based estimation of Eucalyptus is a challenge due to the trees' irregular shape and multiple trunks. To overcome this difficulty, the layer of the point cloud containing the stems was automatically classified and extracted according to height thresholds, and those points were horizontally projected. Two different procedures were applied to these points. One is based on creating a buffer around each single point and merging the overlapping resulting polygons. The other consists of a two-dimensional raster calculated from a kernel density estimation with an axis-aligned bivariate quartic kernel. Results were assessed against a manual interpretation of the LiDAR point cloud. The two methods yielded detection rates (DR) of 103.7% and 113.6%, respectively. Results of applying a local maxima filter to the canopy height model (CHM) depend strongly on the algorithm and the CHM pixel size. Additionally, the height of each tree was calculated from the CHM. Estimates of tree height produced from the CHM were sensitive to spatial resolution. A resolution of 2.0 m produced an R2 of 0.99 and a root mean square error (RMSE) of 0.34 m. A finer resolution of 0.5 m produced an R2 of 0.99 and an RMSE of 0.44 m. The quality of the results is a step toward precision forestry in eucalypt plantations. Full article
(This article belongs to the Special Issue Individual Tree Detection and Characterisation from UAV Data)
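The first counting procedure, buffering each projected stem point and dissolving the overlapping polygons, amounts to single-linkage clustering of the stem returns: two circular buffers of radius r merge whenever their centres lie within 2r of each other, and each merged group counts as one tree. A rough, self-contained sketch of that idea (the points and radius below are invented; the paper's actual GIS buffer-and-dissolve workflow is not reproduced here):

```python
def count_trees(points, buffer_radius):
    """Count trees by merging overlapping point buffers via union-find.

    Two buffers of radius r overlap when their centres are within 2*r,
    so each connected group of stem returns is counted as one tree.
    """
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    limit = (2 * buffer_radius) ** 2
    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= limit:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(len(points))})

# Hypothetical projected stem returns: two tight clusters plus one isolated point.
pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (10.0, 0.0)]
print(count_trees(pts, buffer_radius=0.25))  # → 3
```

The quadratic pairwise loop is fine for a toy example; a production version would use a spatial index, which is effectively what the GIS dissolve operation provides.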
Show Figures

Graphical abstract
Figure 1. Location of the studied plots (aerial image from Plan Nacional de Ortofotografía Aérea (PNOA) 2016, https://pnoa.ign.es).
Figure 2. Flow chart for the three methods of individual tree detection.
Figure 3. Extraction of the stem layer in a sample row of trees: (a) point cloud; (b) stem layer extraction from the point cloud; (c) horizontal projection of the extracted layer.
Figure 4. Overview of the Method 1 and Method 2 creation steps: (a) point cloud of a sample row of trees; (b) dissolve applied to overlapping polygons obtained by buffering stem points; (c) standardized density grid on the horizontal projection of the points.
Figure 5. Example of the tree height measuring method.
Figure 6. Detail of an aerial image superimposed with the laser returns corresponding to the stem layer (aerial image from PNOA 2016, https://pnoa.ign.es). Method 1 yielded density values closer to the reference value for both plots. An example of the results for a tree line with both methods is shown in Figure 7.
Figure 7. Overview of the individual tree detection (ITD) results obtained by Method 1 and Method 2: (a) point cloud of a sample row of trees; (b) buffering on the horizontal projection of the points and the located individuals; (c) standardized density raster on the horizontal projection of the points and the located individuals.
Figure 8. Examples of points that led to commission errors: (a) and (b) regarding canopy returns; (c) and (d) regarding snag trees.
Figure 9. False negative due to a merge of buffers.
Figure 10. Sample of the position of detected trees for each method.
Figure 11. Estimated vs. observed tree height for two different resolutions of the canopy height model: (a) 0.5 m; (b) 2.0 m.
2 pages, 144 KiB  
Editorial
Editorial for the Special Issue “ASTER 20th Anniversary”
by Yasushi Yamaguchi and Michael Abrams
Remote Sens. 2020, 12(5), 884; https://doi.org/10.3390/rs12050884 - 10 Mar 2020
Viewed by 2422
Abstract
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument on NASA's Terra spacecraft. Full article
(This article belongs to the Special Issue ASTER 20th Anniversary)
18 pages, 26233 KiB  
Article
Temporal Variation and Spatial Structure of the Kuroshio-Induced Submesoscale Island Vortices Observed from GCOM-C and Himawari-8 Data
by Po-Chun Hsu, Chia-Ying Ho, Hung-Jen Lee, Ching-Yuan Lu and Chung-Ru Ho
Remote Sens. 2020, 12(5), 883; https://doi.org/10.3390/rs12050883 - 9 Mar 2020
Cited by 17 | Viewed by 4328
Abstract
The dynamics of ocean current-induced island wakes have been an important issue in global oceanography. Green Island, a small island off the southeast coast of Taiwan on the Kuroshio path, was selected as the study area to better understand the spatial structure and temporal variation of the well-organized vortices formed by the interaction between the Kuroshio and the island. Sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration data derived from the Himawari-8 satellite and the second-generation global imager (SGLI) of the global change observation mission (GCOM-C) were used in this study. The spatial SST and Chl-a variations along designed observation lines and the cooling zone transitions on the left and right sides of the vortices were investigated using 250 m spatial resolution GCOM-C data. The Massachusetts Institute of Technology general circulation model (MITgcm) simulation confirmed that the positive and negative vortices detached sequentially within a few hours. In addition, a total of 101 vortices from July 2015 to December 2019 were identified in the 1-h temporal resolution Himawari-8 imagery. The average vortex propagation speed was 0.95 m/s. A total of 38 cases of two successive vortices suggested an average vortex shedding period of 14.8 h at an average incoming surface current speed of 1.15 m/s off Green Island, and the results agreed with the ideal Strouhal-Reynolds number fitting curve. Combining satellite observation and numerical model simulation, this study demonstrates that the structure of the wake area can change quickly and that the water at each observation station may mix under different vorticity states. Full article
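The reported shedding period and incoming speed tie directly into the Strouhal number, St = f·D/U, with shedding frequency f = 1/T, obstacle cross-stream width D, and incoming flow speed U. A quick sketch using the paper's mean period (14.8 h) and current speed (1.15 m/s); note that the 6 km cross-stream width used for Green Island here is an assumed, purely illustrative value, not a figure from the paper:

```python
def strouhal(width_m, period_s, speed_ms):
    """Strouhal number St = f * D / U, with shedding frequency f = 1 / T,
    obstacle cross-stream width D, and incoming flow speed U."""
    return width_m / (period_s * speed_ms)

# Paper values: mean shedding period 14.8 h, mean incoming current 1.15 m/s.
# The 6 km island width is an assumption for illustration only.
st = strouhal(width_m=6_000.0, period_s=14.8 * 3600.0, speed_ms=1.15)
print(round(st, 3))  # → 0.098
```

With the Reynolds number estimated from the same U and D, such (Re, St) pairs are what the paper compares against the ideal Strouhal-Reynolds fitting curve.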
Show Figures

Graphical abstract
Figure 1. The bottom topography around Green Island and the path of the Kuroshio (red line and arrow).
Figure 2. (a) The cruise experiment results with (b,c) velocity (U component positive to the east, V component positive to the north), (d) temperature, (e) salinity, and (f) Chl-a from stations A1 to A7 on 10 November 2012.
Figure 3. Case of the island vortex obtained from the global change observation mission (GCOM-C) second-generation global imager (SGLI) data taken at 02:12 (UTC), 25 April 2019. (a) Sea surface temperature (SST) (°C); (b,c) zoom in on vortices of (a); (d) Chl-a (mg/m³); (e,f) zoom in on vortices of (d). The black arrows in (a) are the current velocities from the OSCAR data. The first arrow (22.33°N, 121.33°E) has a speed of 0.58 m/s, and the second arrow (22.67°N, 121.67°E) has a speed of 0.66 m/s.
Figure 4. SST and chlorophyll-a (Chl-a) of L1 to L10 in Figure 3e.
Figure 5. Case of the island vortex obtained from the GCOM-C SGLI image taken at 02:22 (UTC), July 13, 2019. (a,b) SST (°C); (c,d) Chl-a (mg/m³); (e) SST and Chl-a values of L1 to L4 in (b). The black arrows in (a) are the current velocities from the OSCAR data. The first arrow (22.33°N, 121.33°E) has a speed of 0.46 m/s, and the second arrow (22.67°N, 121.67°E) has a speed of 0.59 m/s.
Figure 6. Two different spatially distributed vortices with (a,b) Chl-a (mg/m³) and (c,d) SST (°C). The two cases were obtained from GCOM-C SGLI data taken at 02:07 (UTC) 27 July 2018 (left) and at 02:04 (UTC) 21 June 2019 (right); (e,f) SST and Chl-a values of L1 and L2.
Figure 7. Results of the MITgcm numerical model over one day. The background is the dimensionless parameter Ro.
Figure 8. Case of the island vortex train with (a) SST (°C) and (b) Chl-a (mg/m³) obtained from GCOM-C SGLI data taken at 02:07 (UTC), 27 July 2018. (c,d) zoom in on the SST and Chl-a of vortex 3; (e,f) same as (c,d), but for vortex 4; (g–j) SST and Chl-a values of L1 to L4. The black arrow (22.67°N, 121.67°E) in (a) is the current velocity from the OSCAR data, with a speed of 0.51 m/s.
Figure 9. The 24-h continuous Himawari-8 SST images from 21:00 UTC on 12 July 2016 to 20:00 UTC on 13 July 2016. Red stars and red dots represent the center positions of the two vortex cases.
Figure 10. (a) Trajectories of the 101 vortex cases, (b) the distribution probability (%) of the vortices for the 101 cases, and (c) a histogram of the propagation speed statistics for the 101 vortex cases.
Figure 11. The Strouhal number (St) versus the Reynolds number (Re) diagram. The point for this study is expressed as the mean value with one standard deviation.
Figure 12. The OSCAR sea surface current velocity from 2010 to 2019: (a) the annual mean, (b) summer, (c) winter, and (d) the average of the incoming current velocity for each month.
Figure 13. The probability distribution of the Chl-a concentration (>0.15 mg/m³) in different seasons.
Figure 14. The island wake development from the MITgcm simulation for different Reynolds numbers: (a) Re = 70, (b) Re = 118, (c) Re = 156. The sub-image represents the change in speed (m/s) along 22.6°N.
26 pages, 6942 KiB  
Article
Fusing China GF-5 Hyperspectral Data with GF-1, GF-2 and Sentinel-2A Multispectral Data: Which Methods Should Be Used?
by Kai Ren, Weiwei Sun, Xiangchao Meng, Gang Yang and Qian Du
Remote Sens. 2020, 12(5), 882; https://doi.org/10.3390/rs12050882 - 9 Mar 2020
Cited by 51 | Viewed by 6212
Abstract
The China GaoFen-5 (GF-5) satellite sensor, which was launched in 2018, collects hyperspectral data with 330 spectral bands, a 30 m spatial resolution, and a 60 km swath width. Its competitive advantages compared to other on-orbit or planned sensors are its number of bands, spectral resolution, and swath width. Unfortunately, its applications may be undermined by its relatively low spatial resolution. Therefore, fusion of GF-5 data with high spatial resolution multispectral data is required to enhance its spatial resolution while preserving its spectral fidelity. This paper presents a comprehensive evaluation of fusing GF-5 hyperspectral data with three typical multispectral data sources (i.e., GF-1, GF-2 and Sentinel-2A (S2A)), based on quantitative metrics, classification accuracy, and computational efficiency. Datasets covering three study areas of China were used to design numerous experiments, and the performances of nine state-of-the-art fusion methods were compared. Experimental results show that the LANARAS (a method proposed by Lanaras et al.), adaptive Gram–Schmidt (GSA), and modulation transfer function (MTF)-generalized Laplacian pyramid (GLP) methods are more suitable for fusing GF-5 with GF-1 data; the MTF-GLP and GSA methods are recommended for fusing GF-5 with GF-2 data; and GSA and smoothing filter-based intensity modulation (SFIM) can be used to fuse GF-5 with S2A data. Full article
(This article belongs to the Special Issue Advanced Techniques for Spaceborne Hyperspectral Remote Sensing)
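Of the recommended methods, SFIM is the simplest to state: each hyperspectral band, upsampled to the multispectral grid, is modulated by the ratio of the sharp multispectral band to its low-pass-filtered version, so fine spatial detail is injected while the local radiometry of the hyperspectral band is preserved. A toy NumPy sketch under that textbook definition (the 3×3 box filter and the random data are arbitrary choices; this is not the implementation benchmarked in the paper):

```python
import numpy as np

def sfim(hs_band, ms_band, k=3):
    """Smoothing filter-based intensity modulation (SFIM):
    fused = HS * MS / lowpass(MS), computed per pixel.

    hs_band: hyperspectral band already resampled to the MS grid.
    ms_band: co-registered high-resolution multispectral band.
    k:       side length of the box (mean) filter used as the low-pass.
    """
    pad = k // 2
    padded = np.pad(ms_band, pad, mode="edge")
    low = np.zeros_like(ms_band, dtype=float)
    h, w = ms_band.shape
    for dy in range(k):          # simple k x k box filter
        for dx in range(k):
            low += padded[dy:dy + h, dx:dx + w]
    low /= k * k
    return hs_band * ms_band / np.maximum(low, 1e-12)  # guard against division by zero

rng = np.random.default_rng(0)
ms = rng.uniform(0.2, 1.0, (8, 8))   # toy sharp multispectral band
hs = np.full((8, 8), 0.5)            # toy flat low-resolution band
fused = sfim(hs, ms)
print(fused.shape)                   # (8, 8)
```

A useful sanity check on the formula: where the multispectral band is spatially flat, the ratio is 1 and SFIM returns the hyperspectral band unchanged, which is exactly the spectral-fidelity property the abstract emphasizes.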
Show Figures

Graphical abstract
Figure 1. Study areas: (a) Yellow River Estuary; (b) Taihu Lake; (c) Poyang Lake.
Figure 2. Flow chart of the implemented tests on image fusion methods. GSA: adaptive Gram–Schmidt; MTF-GLP: modulation transfer function-generalized Laplacian pyramid; SFIM: smoothing filter-based intensity modulation; CNMF: coupled nonnegative matrix factorization; FUSE: fast fusion based on the Sylvester equation; LANARAS: the method proposed by Lanaras et al.; MAP-SMM: maximum a posteriori-stochastic mixing model; HCM: hybrid color mapping; Two-CNN-Fu: two-branch convolutional neural network.
Figure 3. Use of the least squares method to calculate the regression coefficient matrix between the independent variables and the dependent variables.
Figure 4. Experimental results for the Taihu Lake-1 dataset, presenting the original GF-1 and GF-5 data and the resulting image of each tested fusion method.
Figure 5. Classification results for the Taihu Lake-1 dataset, presenting the original GF-1 and GF-5 data and the resulting image of each tested fusion method.
Figure 6. Classification accuracy of GF-1, GF-5 and fused images: (a) Taihu Lake-1 area; (b) Taihu Lake-2 area; (c) Poyang Lake-1 area.
Figure 7. Experimental results for the Taihu Lake-3 dataset, presenting the original GF-2 and GF-5 data and the resulting image of each tested fusion method.
Figure 8. Classification results for the Taihu Lake-3 dataset, presenting the original GF-2 and GF-5 data and the resulting image of each tested fusion method.
Figure 9. Classification accuracy of GF-2, GF-5 and fused images: (a) Taihu Lake-3 area; (b) Taihu Lake-4 area; (c) Taihu Lake-5 area.
Figure 10. Experimental results for the Taihu Lake-6 dataset, presenting the original S2A and GF-5 data and the resulting image of each tested fusion method.
Figure 11. Classification results for the Taihu Lake-6 dataset, presenting the original S2A and GF-5 data and the resulting image of each tested fusion method.
Figure 12. Classification accuracy of S2A, GF-5 and fused images: (a) Yellow River Estuary area; (b) Poyang Lake-2 area; (c) Taihu Lake-6 area.
26 pages, 18945 KiB  
Article
Estimating Ground-Level Particulate Matter in Five Regions of China Using Aerosol Optical Depth
by Qiaolin Zeng, Jinhua Tao, Liangfu Chen, Hao Zhu, SongYan Zhu and Yang Wang
Remote Sens. 2020, 12(5), 881; https://doi.org/10.3390/rs12050881 - 9 Mar 2020
Cited by 10 | Viewed by 3451
Abstract
Aerosol optical depth (AOD) has been widely used to estimate near-surface particulate matter (PM). In this study, ground-measured data from the Campaign on Atmospheric Aerosol Research network of China (CARE-China) and the Aerosol Robotic Network (AERONET) were used to evaluate the accuracy of Visible Infrared Imaging Radiometer Suite (VIIRS) AOD data for different aerosol types. Four aerosol types were considered (dust, smoke, urban, and uncertain), along with a fifth "type" for unclassified (i.e., total) aerosols. The correlation for the dust aerosol type was the worst (R2 = 0.15), whereas the correlations for the smoke and urban types were better (R2 values of 0.69 and 0.55, respectively). A mixed-effects model was used to estimate PM2.5 concentrations in Beijing–Tianjin–Hebei (BTH), Sichuan–Chongqing (SC), the Pearl River Delta (PRD), the Yangtze River Delta (YRD), and the Middle Yangtze River (MYR) regions using both the classified and unclassified aerosol type methods. The results suggest that cross validation (CV) with the classified aerosol types yields higher correlation coefficients than with the unclassified aerosol type. For example, the CV R2 values for the dust, smoke, urban, uncertain, and unclassified aerosol types in BTH were 0.76, 0.85, 0.82, 0.82, and 0.78, respectively. Compared with the daily ground-measured PM2.5 concentrations, the air quality levels estimated using the classified aerosol type method were consistent, and the relative error was low (most RE values were within ±20%). The classified aerosol type method improved the accuracy of the PM2.5 estimation compared to the unclassified method, although there was overestimation or underestimation in some regions. The seasonal distribution of PM2.5 was also analyzed: PM2.5 concentrations were high during winter, low during summer, and moderate during spring and autumn. Spatially, higher PM2.5 concentrations were predominantly distributed in areas of human activity and industrial areas.
Full article
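The mixed-effects model underlying these estimates lets the intercept and slope of the AOD-PM2.5 relation vary by day, since the aerosol vertical profile and hygroscopic growth change daily. As a loose, hypothetical stand-in for the jointly fitted fixed and random effects, one can fit an ordinary least-squares line per day to mimic day-specific coefficients (the data below are invented for illustration):

```python
import numpy as np

def daily_fits(days, aod, pm25):
    """Fit an OLS line PM2.5 ~ AOD separately for each day.

    A crude stand-in for a mixed-effects model with day-specific random
    intercepts and slopes: here each day's coefficients are fully free.
    Returns {day: (slope, intercept)}.
    """
    coefs = {}
    for d in np.unique(days):
        m = days == d
        slope, intercept = np.polyfit(aod[m], pm25[m], 1)
        coefs[int(d)] = (slope, intercept)
    return coefs

# Invented observations: day 2 shows a weaker AOD-PM2.5 slope than day 1.
days = np.array([1, 1, 1, 2, 2, 2])
aod  = np.array([0.2, 0.5, 0.8, 0.3, 0.6, 0.9])
pm25 = np.array([20., 50., 80., 15., 30., 45.])   # ug/m^3
fits = daily_fits(days, aod, pm25)
print(fits[1][0], fits[2][0])  # slopes of roughly 100 and 50
```

A true mixed-effects fit would shrink these per-day coefficients toward a shared mean, which matters when some days have few collocated AOD-PM2.5 pairs.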
Show Figures

Graphical abstract
Figure 1. The locations of the five regions in China: Beijing–Tianjin–Hebei (BTH), Sichuan–Chongqing (SC), Pearl River Delta (PRD), Yangtze River Delta (YRD), and Middle Yangtze River (MYR), and the locations of the ground-measured aerosol optical depth (AOD).
Figure 2. Scatter plots between the different aerosol types of VIIRS AOD and ground-measured AOD. The X axis is the ground-measured AOD, and the Y axis is the VIIRS AOD.
Figure 3. Cross validation of the predicted vs. observed PM2.5 concentrations for different aerosol types: (a) dust, (b) smoke, (c) urban, (d) uncertain, and (e) all.
Figure 4. The relative error (RE) in the PM2.5 concentrations for the ground-measured and satellite retrieval results using the classified and unclassified aerosol type methods in Beijing–Tianjin–Hebei (BTH), Sichuan–Chongqing (SC), Pearl River Delta (PRD), Yangtze River Delta (YRD), and Middle Yangtze River (MYR). The red dashed, green solid, and dark lines are the RE values calculated via the ground-measured, classified, and unclassified methods, respectively. The red, green, and dark rectangles are the PM2.5 values obtained using the ground-measured, classified, and unclassified methods, respectively.
Figure 5. The distributions of PM2.5 obtained using the classified and unclassified aerosol type methods in Beijing–Tianjin–Hebei (BTH), Sichuan–Chongqing (SC), Pearl River Delta (PRD), Yangtze River Delta (YRD), and Middle Yangtze River (MYR).
Figure 6. Spatiotemporal distributions of PM2.5 concentrations in Beijing–Tianjin–Hebei (BTH).
Figure 7. Spatiotemporal distributions of PM2.5 concentrations in Sichuan–Chongqing (SC).
Figure 8. Spatiotemporal distributions of PM2.5 concentrations in the Yangtze River Delta (YRD).
Figure 9. Spatiotemporal distributions of PM2.5 concentrations in the Middle Yangtze River (MYR).
Figure 10. Spatiotemporal distributions of PM2.5 concentrations in the Pearl River Delta (PRD).
27 pages, 5947 KiB  
Article
Quantifying Information Content in Multispectral Remote-Sensing Images Based on Image Transforms and Geostatistical Modelling
by Ying Zhang, Jingxiong Zhang and Wenjing Yang
Remote Sens. 2020, 12(5), 880; https://doi.org/10.3390/rs12050880 - 9 Mar 2020
Cited by 1 | Viewed by 3411
Abstract
Quantifying information content in remote-sensing images is fundamental for information-theoretic characterization of remote sensing information processes, with the images being usually information sources. Information-theoretic methods, being complementary to conventional statistical methods, enable images and their derivatives to be described and analyzed in terms of information as defined in information theory rather than data per se. However, accurately quantifying images’ information content is nontrivial, as information redundancy due to spectral and spatial dependence needs to be properly handled. There has been little systematic research on this, hampering wide applications of information theory. This paper seeks to fill this important research niche by proposing a strategy for quantifying information content in multispectral images based on information theory, geostatistics, and image transformations, by which interband spectral dependence, intraband spatial dependence, and additive noise inherent to multispectral images are effectively dealt with. Specifically, to handle spectral dependence, independent component analysis (ICA) is performed to transform a multispectral image into one with statistically independent image bands (not spectral bands of the original image). The ICA-transformed image is further normal-transformed to facilitate computation of information content based on entropy formulas for Gaussian distributions. Normal transform facilitates straightforward incorporation of spatial dependence in entropy computation for the aforementioned double-transformed image bands with inter-pixel spatial correlation modeled via variograms. Experiments were undertaken using Landsat ETM+ and TM image subsets featuring different dominant land cover types (i.e., built-up, agricultural, and hilly). 
The experimental results confirm that the proposed methods provide more objective estimates of information content than otherwise when spectral dependence, spatial dependence, or non-normality is not accommodated properly. The differences in information content between image subsets obtained with ETM+ and TM were found to be about 3.6 bits/pixel, indicating the former’s greater information content. The proposed methods can be adapted for information-theoretic analyses of remote sensing information processes. Full article
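As a toy illustration of the entropy bookkeeping described above: decorrelate the bands, then sum per-component Gaussian entropies. This sketch uses the covariance eigen-decomposition (a PCA-style stand-in for the paper's ICA step) and omits the normal transform and variogram-based spatial correction; the function names are ours, not the authors'.

```python
import numpy as np

def gaussian_entropy_bits(variance):
    """Differential entropy (bits) of a 1-D Gaussian: 0.5 * log2(2*pi*e*sigma^2)."""
    return 0.5 * np.log2(2.0 * np.pi * np.e * variance)

def image_information_bits(bands):
    """Per-pixel information (bits) of a multiband image under a Gaussian model.

    bands: array of shape (n_bands, n_pixels).  The joint Gaussian entropy
    0.5 * log2((2*pi*e)^n * det(Cov)) is computed via the covariance
    eigenvalues, which automatically discounts inter-band correlation.
    """
    eigvals = np.linalg.eigvalsh(np.cov(bands))
    return float(sum(gaussian_entropy_bits(v) for v in eigvals if v > 0))
```

For perfectly correlated bands the covariance determinant collapses and the estimate drops sharply, which is exactly the spectral redundancy the paper's ICA step is designed to handle.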
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The flowchart describing the methods for quantifying information content in multispectral images.</p>
Full article ">Figure 2
<p>The three study sites: built-up, agricultural, and hilly.</p>
Full article ">Figure 3
<p>Three Landsat ETM+ image subsets over sites with different dominant land cover types: (<b>a</b>) built-up, (<b>b</b>) agricultural, and (<b>c</b>) hilly.</p>
Full article ">Figure 4
<p>Three Landsat TM image subsets over the same sites as in <a href="#remotesensing-12-00880-f003" class="html-fig">Figure 3</a> with different dominant land cover types: (<b>a</b>) built-up, (<b>b</b>) agricultural, and (<b>c</b>) hilly.</p>
Full article ">Figure 5
<p>Estimated information content in Landsat ETM+ and Landsat TM image subsets over sites with different dominant land cover types: (<b>a</b>) built-up, (<b>b</b>) agricultural, and (<b>c</b>) hilly.</p>
Full article ">Figure 6
<p>Differences in information content between Landsat ETM+ and Landsat TM image subsets, estimated by different methods.</p>
Full article ">Figure A1
<p>The experimental variograms (blue crosses) and the corresponding models fitted (red lines) for the built-up image subset: (<b>a</b>) independent component analysis (ICA)- and normal-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>″~<b>Z</b><sub>6</sub>″), (<b>b</b>) ICA-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), (<b>c</b>) maximum noise fraction (MNF)-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), and (<b>d</b>) original image bands (<b>Z</b><sub>1</sub>~<b>Z</b><sub>6</sub>).</p>
Figure A1 Cont.">
Full article ">Figure A2
<p>The experimental variograms (blue crosses) and the corresponding models fitted (red lines) for the agricultural image subset: (<b>a</b>) ICA- and normal-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>″~<b>Z</b><sub>6</sub>″), (<b>b</b>) ICA-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), (<b>c</b>) MNF-transformed image bands (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), and (<b>d</b>) original image bands (<b>Z</b><sub>1</sub>~<b>Z</b><sub>6</sub>).</p>
Figure A2 Cont.">
Full article ">Figure A3
<p>The experimental variograms (blue crosses) and the corresponding models fitted (red lines) for the hilly image subset: (<b>a</b>) ICA- and normal-transformed image <span class="html-italic">bands</span> (<b>Z</b><sub>1</sub>″~<b>Z</b><sub>6</sub>″), (<b>b</b>) ICA-transformed image bands (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), (<b>c</b>) MNF-transformed image bands (<b>Z</b><sub>1</sub>′~<b>Z</b><sub>6</sub>′), and (<b>d</b>) original image bands (<b>Z</b><sub>1</sub>~<b>Z</b><sub>6</sub>).</p>
Figure A3 Cont.">
Full article ">
17 pages, 6311 KiB  
Article
Accuracy Verification of Airborne Large-Footprint Lidar based on Terrain Features
by Weiqi Lian, Shaoning Li, Guo Zhang, Yanan Wang, Xinyang Chen and Hao Cui
Remote Sens. 2020, 12(5), 879; https://doi.org/10.3390/rs12050879 - 9 Mar 2020
Cited by 5 | Viewed by 3596
Abstract
Accuracy verification of airborne large-footprint lidar data is important for proper data application but is difficult when ground-based laser detectors are not available. Therefore, we developed a novel method for lidar accuracy verification based on the broadened echo pulse caused by signal saturation over water. When an aircraft trajectory crosses both water and land, this phenomenon and the change in elevation between land and water surfaces can be used to verify the plane and elevation accuracy of the airborne large-footprint lidar data in conjunction with a digital surface model (DSM). Due to the problem of echo pulse broadening, the center-of-gravity (COG) method was proposed to optimize the processing flow. We conducted a series of experiments on terrain features (i.e., the intersection between water and land) in Xiangxi, Hunan Province, China. Verification results show that the elevation accuracy obtained in our experiments was better than 1 m and the plane accuracy was better than 5 m, which is well within the design requirements. Although this method requires specific terrain conditions for optimum applicability, the results can lead to valuable improvements in the flexibility and quality of lidar data collection. Full article
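The center-of-gravity (COG) timing idea can be sketched in a few lines (a generic illustration; `cog_time` is our name, not code from the paper). For a saturated, flat-topped echo, the amplitude-weighted mean of the sample times still recovers the pulse centre, where a Gaussian peak fit would be biased by the clipped top.

```python
import numpy as np

def cog_time(t, amplitude, threshold=0.0):
    """Centre-of-gravity reception time of an echo pulse:
    t_cog = sum(t_i * a_i) / sum(a_i), over samples above a noise threshold."""
    t = np.asarray(t, dtype=float)
    a = np.asarray(amplitude, dtype=float)
    mask = a > threshold
    return float(np.sum(t[mask] * a[mask]) / np.sum(a[mask]))

# A symmetric Gaussian pulse clipped at 80% of its peak (signal saturation):
# the COG still sits at the true centre, t = 50.
t = np.arange(0.0, 101.0)
pulse = np.exp(-((t - 50.0) ** 2) / (2.0 * 5.0 ** 2))
saturated = np.minimum(pulse, 0.8)
```

The same helper applies unchanged to unsaturated pulses, which is why it can be used uniformly across the water-to-land footprint sequence.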
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Map of airborne large-footprint lidar measurement areas used in this study. Area 1 was used to verify the elevation accuracy, whereas Areas 2–4 were used to verify the plane accuracy.</p>
Full article ">Figure 2
<p>Flow chart of the proposed lidar data processing method.</p>
Full article ">Figure 3
<p>Schematic diagram of the echo wave pattern at signal saturation for (<b>a</b>) the transmitted pulse and (<b>b</b>) the received pulse. The solid red line is a Gaussian-fitted waveform, the dotted red line is the time of pulse reception obtained by the Gaussian fitting method, the solid black line is a broadened pulse resulting from signal saturation, and the time corresponding to the dotted black line is the pulse reception time obtained by the center-of-gravity (COG) method.</p>
Full article ">Figure 4
<p>The COG method.</p>
Full article ">Figure 5
<p>Schematic diagram of the laser-tight geometric positioning model.</p>
Full article ">Figure 6
<p>Schematic of laser spot locations at the intersection between water and land. The red arrow indicates the landward direction.</p>
Full article ">Figure 7
<p>(<b>a</b>–<b>l</b>) are echo waveform diagrams corresponding to each spot from A to I in <a href="#remotesensing-12-00879-f006" class="html-fig">Figure 6</a>, showing the changes in the footprint spot waveform from water to land. The red dotted line marks the onset of the spot saturation phenomenon and denotes the echo peak position of the water surface. The waveform marked by the green rectangle may be either a land echo or noise, and the waveform marked by the blue rectangle may be either a water echo or noise. The waveform marked by the orange rectangle is the echo waveform from land.</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>Schematic of an aircraft trajectory that is nearly parallel to the shore and its resulting waveform.</p>
Full article ">Figure 9
<p>Example of lidar flight trajectory over unplanted wintertime fields.</p>
Full article ">Figure 10
<p>Elevation accuracy verification results for flat terrain. Gray histogram shows the difference between the measured (blue) and digital surface model (DSM, orange) elevation. The red circle is the data error caused by a ground object in the DSM data.</p>
Full article ">Figure 11
<p>Elevation accuracy verification results in water areas. Gray histogram shows the difference between the measured (blue) and DSM (orange) elevation values. Red circles are data errors caused by abnormal laser triggering.</p>
Full article ">Figure 12
<p>Comparison of the measured elevation values of the two methods in flat areas. The measured elevation values obtained by the COG method (blue) do not differ much from the values obtained by the Gaussian fitting method (gray).</p>
Full article ">Figure 13
<p>Comparison of the measured elevation values of the two methods in water areas. The measured elevation values obtained by the COG method (blue) are significantly higher than the values obtained by the Gaussian fitting method (gray).</p>
Full article ">Figure 14
<p>Schematic diagram of the footprint location. (<b>a</b>) shows the footprint location under ideal conditions. (<b>b</b>,<b>c</b>) show the situation with the largest deviation. The red vertical line is the central location of the water footprint spot and the land spot. The black vertical line is the water–land intersection.</p>
Full article ">
22 pages, 9443 KiB  
Article
A Bayesian Three-Cornered Hat (BTCH) Method: Improving the Terrestrial Evapotranspiration Estimation
by Xinlei He, Tongren Xu, Youlong Xia, Sayed M. Bateni, Zhixia Guo, Shaomin Liu, Kebiao Mao, Yuan Zhang, Huaize Feng and Jingxue Zhao
Remote Sens. 2020, 12(5), 878; https://doi.org/10.3390/rs12050878 - 9 Mar 2020
Cited by 31 | Viewed by 6236
Abstract
In this study, a Bayesian-based three-cornered hat (BTCH) method is developed to improve the estimation of terrestrial evapotranspiration (ET) by integrating multisource ET products without using any a priori knowledge. Ten long-term (30 years) gridded ET datasets from statistical or empirical, remotely-sensed, and land surface models over contiguous United States (CONUS) are integrated by the BTCH and ensemble mean (EM) methods. ET observations from eddy covariance towers (ETEC) at AmeriFlux sites and ET values from the water balance method (ETWB) are used to evaluate the BTCH- and EM-integrated ET estimates. Results indicate that BTCH performs better than EM and all the individual parent products. Moreover, the trend of BTCH-integrated ET estimates, and their influential factors (e.g., air temperature, normalized differential vegetation index, and precipitation) from 1982 to 2011 are analyzed by the Mann–Kendall method. Finally, the 30-year (1982 to 2011) total water storage anomaly (TWSA) in the Mississippi River Basin (MRB) is retrieved based on the BTCH-integrated ET estimates. The TWSA retrievals in this study agree well with those from the Gravity Recovery and Climate Experiment (GRACE). Full article
(This article belongs to the Special Issue Remote Sensing and Modeling of the Terrestrial Water Cycle)
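BTCH builds on the classic three-cornered hat idea, which is easy to sketch: for three products observing the same signal with mutually independent errors, Var(x<sub>i</sub> − x<sub>j</sub>) = σ<sub>i</sub>² + σ<sub>j</sub>², so the three pairwise-difference variances can be solved for the individual error variances without a reference truth. A minimal, non-Bayesian illustration (the function names are ours, and the paper's Bayesian formulation generalizes this):

```python
import numpy as np

def three_cornered_hat(x1, x2, x3):
    """Classic three-cornered hat: recover each series' error variance from
    the variances of the pairwise differences, assuming independent errors."""
    v12, v13, v23 = np.var(x1 - x2), np.var(x1 - x3), np.var(x2 - x3)
    s1 = (v12 + v13 - v23) / 2.0   # Var(e1)
    s2 = (v12 + v23 - v13) / 2.0   # Var(e2)
    s3 = (v13 + v23 - v12) / 2.0   # Var(e3)
    return s1, s2, s3

def inverse_variance_merge(series, variances):
    """Merge the products with weights proportional to 1/variance."""
    w = 1.0 / np.asarray(variances)
    return np.average(series, axis=0, weights=w)
```

Inverse-variance merging is one simple weighting choice consistent with the weights shown in Figure 6 of the paper; it down-weights the noisier parent products.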
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The land cover types and location of the 15 AmeriFlux stations over the contiguous United States (CONUS). DBF, EBF, and ENF represent deciduous broadleaf forest, evergreen broadleaf forest, and evergreen needleleaf forest, respectively.</p>
Full article ">Figure 2
<p>Spatial distribution of 30-year averaged evapotranspiration (ET) from the Bayesian-based three-cornered hat (BTCH) method.</p>
Full article ">Figure 3
<p>Maps of 30-year averaged normalized differential vegetation index (NDVI) and precipitation over the contiguous United States.</p>
Full article ">Figure 4
<p>Comparison of monthly BTCH-integrated, ensemble mean (EM), and ten parent evapotranspiration (ET) datasets with flux tower observations. There are 144, 336, and 300 samples for the deciduous broadleaf forest (DBF), evergreen needleleaf forest (ENF), and cropland, respectively.</p>
Full article ">Figure 5
<p>Comparison of annual BTCH-integrated, ensemble mean (EM), and ten parent evapotranspiration (ET) dataset with ET from water balance method from 1982 to 2011. There are 360 samples over the contiguous United States.</p>
Full article ">Figure 6
<p>Weight (%) of each evapotranspiration (ET) product in the Bayesian-based three-cornered hat (BTCH) method.</p>
Full article ">Figure 7
<p>The spatial pattern of trends in air temperature (Ta), normalized differential vegetation index (NDVI), precipitation (<span class="html-italic">P</span>), and evapotranspiration (ET) over the contiguous United States from 1982 to 2011. The black points represent 95% level of significance.</p>
Full article ">Figure 8
<p>Plots of 30-year-averaged evapotranspiration (ET) against air temperature (Ta), precipitation (<span class="html-italic">P</span>), and normalized differential vegetation index (NDVI) over the 12 river forecast centers (RFCs). The red line represents the fitted linear regression.</p>
Full article ">Figure 9
<p>Partial correlation coefficients (R) between the annual evapotranspiration (ET) and air temperature (Ta), precipitation (<span class="html-italic">P</span>), and normalized differential vegetation index (NDVI).</p>
Full article ">Figure 10
<p>Annual variations of BTCH-integrated evapotranspiration (ET), soil moisture, and precipitation anomalies for 12 river forecast centers (RFCs) from 1982 to 2011.</p>
Full article ">Figure 11
<p>Time series of the total water storage anomaly (TWSA) retrievals over the Mississippi River Basin (MRB) from 1982 to 2011 (top). Comparison of the TWSA estimates with those of the Gravity Recovery and Climate Experiment (GRACE) from 2003 to 2011 (bottom).</p>
Full article ">Figure 12
<p>Monthly variations of evapotranspiration (ET) estimates from Bayesian-based three-cornered hat (BTCH) for different time windows, and ensemble mean (EM). BTCH1, BTCH2, BTCH3, BTCH6, and BTCH12 denote 1-month, 2-month, 3-month, 6-month, and 12-month time windows, respectively. The black circles are observations from eddy covariance flux towers.</p>
Full article ">Figure 13
<p>Comparison of evapotranspiration (ET) estimates (1982 to 2011) from the ensemble mean (EM) and Bayesian-based three-cornered hat (BTCH) methods for different time windows.</p>
Full article ">Figure 14
<p>The uncertainties of evapotranspiration (ET) products from the ensemble mean (EM) and Bayesian-based three-cornered hat (BTCH) methods with different time windows over the contiguous United States.</p>
Full article ">
18 pages, 14778 KiB  
Article
Using Training Samples Retrieved from a Topographic Map and Unsupervised Segmentation for the Classification of Airborne Laser Scanning Data
by Zhishuang Yang, Wanshou Jiang, Yaping Lin and Sander Oude Elberink
Remote Sens. 2020, 12(5), 877; https://doi.org/10.3390/rs12050877 - 9 Mar 2020
Cited by 7 | Viewed by 3859
Abstract
The labeling of point clouds is the fundamental task in airborne laser scanning (ALS) point clouds processing. Many supervised methods have been proposed for the point clouds classification work. Training samples play an important role in the supervised classification. Most of the training samples are generated by manual labeling, which is time-consuming. To reduce the cost of manual annotating for ALS data, we propose a framework that automatically generates training samples using a two-dimensional (2D) topographic map and an unsupervised segmentation step. In this approach, input point clouds, at first, are separated into the ground part and the non-ground part by a DEM filter. Then, a point-in-polygon operation using polygon maps derived from a 2D topographic map is used to generate initial training samples. The unsupervised segmentation method is applied to reduce the noise and improve the accuracy of the point-in-polygon training samples. Finally, the super point graph is used for the training and testing procedure. A comparison with the point-based deep neural network Pointnet++ (average F1 score 59.4%) shows that the segmentation based strategy improves the performance of our initial training samples (average F1 score 65.6%). After adding the intensity value in unsupervised segmentation, our automatically generated training samples have competitive results with an average F1 score of 74.8% for ALS data classification while using the ground truth training samples the average F1 score is 75.1%. The result shows that our framework is feasible to automatically generate and improve the training samples with low time and labour costs. Full article
(This article belongs to the Special Issue Laser Scanning and Point Cloud Processing)
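The point-in-polygon operation used to seed the training labels can be illustrated with the standard ray-casting test (a generic sketch, not the authors' implementation): cast a horizontal ray from the query point and count how many polygon edges it crosses; an odd count means the point lies inside.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test: count crossings of a ray from
    (x, y) toward +x with the polygon edges; an odd count means inside.

    polygon: list of (x, y) vertex tuples in order (closed implicitly)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

In the paper's setting, each ALS point (projected to 2D) would be tested against the building, vegetation, water, and road polygons derived from the topographic map to receive its initial label.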
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Workflow for the point-in-polygon operation.</p>
Full article ">Figure 2
<p>(<b>a</b>) Ground truth result; (<b>b</b>) Corresponding point-in-polygon result. Building, navy blue; tree, red; water, brown; terrain, green; bridge, cyan; and unlabeled, black.</p>
Full article ">Figure 3
<p>Incorrect labels in the point-in-polygon results. (<b>a</b>) Unlabeled building and tree points caused by a dataset registration problem; (<b>b</b>) Mislabeled tree points, as they lie inside a building polygon; (<b>c</b>) Unlabeled building points caused by missing building polygons; (<b>d</b>) Unlabeled tree points caused by missing vegetation polygons. Building, navy blue; tree, red; terrain, green; and unlabeled, black.</p>
Full article ">Figure 4
<p>Unsupervised segmentation results. (<b>a</b>) Segmentation result for the area left unlabeled by a dataset registration problem; (<b>b</b>) Segmentation result for tree points mislabeled because they lie inside a building polygon; (<b>c</b>) Segmentation result for unlabeled building points caused by missing building polygons; (<b>d</b>) Segmentation result for unlabeled tree points caused by missing vegetation polygons.</p>
Full article ">Figure 5
<p>(<b>a</b>) Segmentation without intensity; (<b>b</b>) Segmentation with intensity.</p>
Full article ">Figure 6
<p>Experiment data in Rotterdam city. (<b>a</b>) AHN3 point clouds data; (<b>b</b>) Four classes derived from the BGT topographic map (polygons from the testing area are not used in the prediction work).</p>
Full article ">Figure 7
<p>Performance of our framework in the training area. (<b>a</b>) Initial point-in-polygon labels; (<b>b</b>) Unsupervised segmentation result; (<b>c</b>) Classification result by SPG. Building, navy blue; tree, red; terrain, green; unlabeled, black.</p>
Full article ">Figure 8
<p>Comparison between segment-based strategy and point-based strategy in the testing area using initial point-in-polygon training samples.</p>
Full article ">Figure 9
<p>(<b>a</b>) Classification results without intensity; (<b>b</b>) Classification results with intensity. Building, navy blue; tree, red; water, brown; terrain, green; and bridge, cyan.</p>
Full article ">Figure 10
<p>Classification results for the testing area using different training samples.</p>
Full article ">Figure 11
<p>(<b>a</b>) Prediction result by our framework in the testing area; (<b>b</b>) Prediction result by Pointnet++ in the testing area; (<b>c</b>) The segmentation result. Building, navy blue; tree, red; and terrain, green.</p>
Full article ">Figure 12
<p>(<b>a</b>) The point-in-polygon labels on training area; (<b>b</b>) Prediction result by our framework in the training area. Building, navy blue; tree, red; and terrain, green.</p>
Full article ">Figure 13
<p>(<b>a</b>) Water in the training area; (<b>b</b>) Water in the testing area. Building, navy blue; tree, red; water, brown; and terrain, green.</p>
Full article ">
23 pages, 21187 KiB  
Article
Determining the Suitable Number of Ground Control Points for UAS Images Georeferencing by Varying Number and Spatial Distribution
by Valeria-Ersilia Oniga, Ana-Ioana Breaban, Norbert Pfeifer and Constantin Chirila
Remote Sens. 2020, 12(5), 876; https://doi.org/10.3390/rs12050876 - 9 Mar 2020
Cited by 43 | Viewed by 6070
Abstract
Currently, products that are obtained by Unmanned Aerial Systems (UAS) image processing based on structure-from-motion photogrammetry (SfM) are being investigated for use in high precision projects. Independent of the georeferencing process being done directly or indirectly, Ground Control Points (GCPs) are needed to increase the accuracy of the obtained products. A minimum of three GCPs is required to bring the results into a desired coordinate system through the indirect georeferencing process, but it is well known that increasing the number of GCPs will lead to a higher accuracy of the final results. The aim of this study is to find the suitable number of GCPs for deriving high-precision results and to assess the effect of a systematic or stratified random GCP distribution on the accuracy of the georeferencing process and the final products, respectively. The case study involves an urban area of about 1 ha that was photographed with a low-cost UAS, namely, the DJI Phantom 3 Standard, at 28 m above ground. The camera was oriented in a nadiral position and 300 points were measured using a total station in a local coordinate system. The UAS images were processed using the 3DF Zephyr software performing a full BBA with a variable number of GCPs, i.e., from four up to 150, while the number and the spatial location of check points (ChPs) was kept constant, i.e., 150 for each independent distribution. In addition, the systematic and stratified random distributions of GCP and ChP spatial positions were analysed. Furthermore, the point clouds and the mesh surfaces that were automatically derived were compared with a terrestrial laser scanner (TLS) point cloud, also considering three test areas: two inside the area defined by GCPs and one outside it. The results provide a clear overview of the number of GCPs needed for the indirect georeferencing process with minimal influence on the final results. 
The RMSE can be reduced by up to 50% when switching from four to 20 GCPs, whereas a higher number of GCPs only slightly improves the results. Full article
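The accuracy statistics above reduce to residuals measured at the 150 ChPs; the RMSE bookkeeping is a one-liner per axis, with the per-axis values combined in quadrature for a 3-D positional error (helper names are ours, for illustration):

```python
import math

def rmse(residuals):
    """Root-mean-square error of scalar residuals
    (e.g. dX, dY, or dZ at the check points)."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def rmse_3d(dx, dy, dz):
    """Combined 3-D positional RMSE from per-axis residual lists:
    sqrt(RMSE_x^2 + RMSE_y^2 + RMSE_z^2)."""
    return math.sqrt(rmse(dx) ** 2 + rmse(dy) ** 2 + rmse(dz) ** 2)
```

Computing these separately per GCP scenario (4, 8, 20, ... 150 GCPs) and per distribution (systematic vs. stratified random) reproduces the kind of comparison reported in the paper.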
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of the study area: (<b>a</b>) Regional context; and, (<b>b</b>) Faculty building.</p>
Full article ">Figure 2
<p>Flow chart of the research methodology.</p>
Full article ">Figure 3
<p>(<b>a</b>) The spatial distribution of the 300 ground control points (GCPs) and check points (ChPs) on the Unmanned Aerial Systems (UAS)-derived orthophoto; (<b>b</b>) Description of the study area.</p>
Full article ">Figure 4
<p>(<b>a</b>) Flight lines; (<b>b</b>) Number of overlapping images; and, (<b>c</b>) Camera positions for the North to South flight (red) and for the West to East flight (blue) and the number of visible cameras for each GCP and ChP respectively.</p>
Full article ">Figure 4 Cont.
<p>(<b>a</b>) Flight lines; (<b>b</b>) Number of overlapping images; and, (<b>c</b>) Camera positions for the North to South flight (red) and for the West to East flight (blue) and the number of visible cameras for each GCP and ChP respectively.</p>
Full article ">Figure 5
<p>The Terrestrial Laser Scanner (TLS) station points.</p>
Full article ">Figure 6
<p>TLS point clouds acquisition using the Maptek I-Site 8820 terrestrial laser scanner, from station point C (<b>a</b>) and A (<b>b</b>).</p>
Full article ">Figure 7
<p>The spheres used for indirect georeferencing of the “E” TLS point cloud.</p>
Full article ">Figure 8
<p>The Hausdorff distances calculated for (<b>a</b>) the parking lot and (<b>b</b>) the histogram.</p>
Full article ">Figure 9
<p>The final TLS point cloud resulted after georeferencing four individual point clouds.</p>
Full article ">Figure 10
<p>(<b>a</b>) First grid overlaid over the GCPs-ChPs area; (<b>b</b>) Second grid overlaid over the GCPs-ChPs area.</p>
Full article ">Figure 11
<p>The spatial distribution of four GCPs towards the interior/exterior rectangular boundary.</p>
Full article ">Figure 12
<p>The residual errors variation for the four GCPs scenarios.</p>
Full article ">Figure 13
<p>The residuals trend of the 150 ChPs for all GCPs scenarios (4, 8, 20, 25, 50, 75, 100, 125, 150 GCPs): (<b>a</b>) Systematic distribution and (<b>b</b>) Stratified random distribution.</p>
Full article ">Figure 14
<p>The spatial distribution of GCPs and ChPs and the residuals for all scenarios: (<b>a</b>) systematic distribution and (<b>b</b>) stratified random distribution.</p>
Full article ">Figure 14 Cont.
<p>The spatial distribution of GCPs and ChPs and the residuals for all scenarios: (<b>a</b>) systematic distribution and (<b>b</b>) stratified random distribution.</p>
Full article ">Figure 15
<p>Residuals of the 150 ChPs for all scenarios: (<b>a</b>) systematic distribution and (<b>b</b>) stratified random distribution.</p>
Full article ">Figure 16
<p>The representative surfaces used for quality assessment located inside (Control area 1: parking lot, Control area 2: roof) and outside (Control area 3: parking lot) the GCPs-ChPs area.</p>
Full article ">
23 pages, 4873 KiB  
Article
The Least Square Adjustment for Estimating the Tropical Peat Depth Using LiDAR Data
by Bambang Kun Cahyono, Trias Aditya and Istarno
Remote Sens. 2020, 12(5), 875; https://doi.org/10.3390/rs12050875 - 9 Mar 2020
Cited by 8 | Viewed by 4935
Abstract
High-accuracy peat maps are essential for peatland restoration management, but they are costly, labor-intensive, and require an extensive amount of peat drilling data. This study offers a new method to create an accurate peat depth map while reducing field drilling data by up to 75%. Ordinary least square (OLS) adjustments were used to estimate the elevation of the mineral soil surface based on the surrounding soil parameters. Orthophoto and Digital Terrain Models (DTMs) from LiDAR data of Tebing Tinggi Island, Riau, were used to determine morphology, topography, and spatial position parameters to define the DTM and its coefficients. Peat depth prediction models involving 100%, 50%, and 25% of the field points were developed using the OLS computations and compared against the field survey data. Raster operations in a GIS were used in processing the DTM to produce peat depth estimations. The results show that the soil map produced from OLS provided peat depth estimations with no significant difference from the field depth data, with a mean absolute error of ±1 m. The use of LiDAR data and the OLS method provides a cost-effective methodology for estimating peat depth and mapping for the purpose of supporting peat restoration. Full article
(This article belongs to the Special Issue Remote Sensing of Peatlands II)
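The estimator named in the abstract is an ordinary least squares adjustment. A minimal sketch via the normal equations follows; the design matrix and values are hypothetical, and the paper's actual model uses morphology, topography, and spatial position covariates:

```python
def ols_fit(X, y):
    """Ordinary least squares: solve (X^T X) b = X^T y by Gaussian
    elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (A[i][p] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b

# Hypothetical design matrix (intercept + one covariate) fitting y = 2 + 3x.
coef = ols_fit([[1, 0], [1, 1], [1, 2]], [2, 5, 8])
```

The estimated mineral soil elevation would then be subtracted from the LiDAR surface DTM to yield peat depth, which is how the paper derives its depth maps.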
Show Figures

Graphical abstract
Figure 1
<p>Aerial photograph visualization of the Peat Hydrological Unit (PHU) area of Tebing Tinggi Island with the river and stream (blue line) and street features (red line) added.</p>
Figure 2
<p>Mineral soil elevation above the local Mean Sea Level (MSL).</p>
Figure 3
<p>Topographic conditions (depressionless digital terrain model (DTM) above the local MSL) and the distribution of peat measurements on file.</p>
Figure 4
<p>Mineral soil elevation resulting from the estimation model involving all covariates. Note: the elevation is above the local MSL.</p>
Figure 5
<p>Mineral soil position from the estimation results obtained by ignoring gravity parameters and outliers. Note: the elevation is above the local MSL.</p>
Figure 6
<p>Estimation results of the mineral soil elevation after eliminating four insignificant parameters and removing outliers. Note: the elevation is above the local MSL.</p>
Figure 7
<p>(<b>a</b>) Positions of the profile line and (<b>b</b>) the longitudinal profile view along the profile line. Note: cross-section 1, cross-section 2, and cross-section 3 can be seen in <a href="#app1-remotesensing-12-00875" class="html-app">Appendix A</a>.</p>
Figure 8
<p>Longitudinal profile of the estimation results of mineral soil elevation based on 25%, 50%, and 100% of data, which are overlaid onto the peat surface topography and the real mineral soil elevation.</p>
Figure A1
<p>Cross-section view along the lines of Cross-section 1, Cross-section 2, and Cross-section 3 based on estimation 1, estimation 2, and estimation 3 results, which are overlaid on the real mineral soil surface and peat soil surface (topography).</p>
Figure A2
<p>Cross-section view of Cross-section 1, Cross-section 2, and Cross-section 3 lines of mineral soil elevation based on 25%, 50%, and 100% of the data, which are overlaid on the peat surface topography and the real mineral soil elevation.</p>
24 pages, 4236 KiB  
Article
Morphometric Analysis for Soil Erosion Susceptibility Mapping Using Novel GIS-Based Ensemble Model
by Alireza Arabameri, John P. Tiefenbacher, Thomas Blaschke, Biswajeet Pradhan and Dieu Tien Bui
Remote Sens. 2020, 12(5), 874; https://doi.org/10.3390/rs12050874 - 9 Mar 2020
Cited by 64 | Viewed by 11999
Abstract
The morphometric characteristics of the Kalvārī basin were analyzed to prioritize sub-basins based on their susceptibility to erosion by water, using remote sensing data and a GIS. The morphometric parameters (MPs)—linear, relief, and shape—of the drainage network were calculated using data from the Advanced Land-observing Satellite (ALOS) phased-array L-type synthetic-aperture radar (PALSAR) digital elevation model (DEM) with a spatial resolution of 12.5 m. Interferometric synthetic aperture radar (InSAR) was used to generate the DEM. These parameters revealed the network’s texture, morpho-tectonics, geometry, and relief characteristics. A novel ensemble multiple-criteria decision-making (MCDM) model combining the complex proportional assessment of alternatives (COPRAS) with the analytical hierarchy process (AHP) was used to rank sub-basins and to identify the major MPs that significantly influence erosion landforms of the Kalvārī drainage basin. The results show that in evolutionary terms this is a youthful landscape. Rejuvenation has influenced the erosional development of the basin, but lithology and relief, structure, and tectonics have determined the drainage patterns of the catchment. Results of the AHP model indicate that slope and drainage density influence erosion in the study area. The COPRAS-AHP ensemble model results reveal that sub-basin 1 is the most susceptible to soil erosion (SE) and that sub-basin 5 is least susceptible. The ensemble model was compared to the two individual models using the Spearman correlation coefficient test (SCCT) and the Kendall Tau correlation coefficient test (KTCCT). To evaluate the prediction accuracy of the ensemble model, its results were compared to results generated by the modified Pacific Southwest Inter-Agency Committee (MPSIAC) model in each sub-basin. Based on SCCT and KTCCT, the ensemble model was better at ranking sub-basins than the MPSIAC model, which indicated that sub-basins 1 and 4, with mean sediment yields of 943.7 and 456.3 m<sup>3</sup> km<sup>−2</sup> year<sup>−1</sup>, respectively, have the highest and lowest SE susceptibility in the study area. The sensitivity analysis revealed that the most sensitive parameters of the MPSIAC model are slope (R<sup>2</sup> = 0.96), followed by runoff (R<sup>2</sup> = 0.95). The comparison with MPSIAC shows that the ensemble model has high prediction accuracy. The method tested here has been shown to be an effective tool to improve sustainable soil management. Full article
(This article belongs to the Special Issue Remote Sensing of Soil Erosion)
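The COPRAS ranking step used in the ensemble can be sketched as below; the decision matrix, weights, and benefit/cost flags are placeholders, not the study's morphometric values:

```python
def copras_rank(matrix, weights, benefit):
    """COPRAS ranking sketch. matrix[i][j] scores alternative i on criterion
    j; benefit[j] is True if larger is better. Returns 1-based ranks
    (rank 1 = highest relative significance Q)."""
    m, n = len(matrix), len(matrix[0])
    col_sum = [sum(matrix[i][j] for i in range(m)) for j in range(n)]
    norm = [[weights[j] * matrix[i][j] / col_sum[j] for j in range(n)]
            for i in range(m)]
    s_plus = [sum(norm[i][j] for j in range(n) if benefit[j]) for i in range(m)]
    s_minus = [sum(norm[i][j] for j in range(n) if not benefit[j])
               for i in range(m)]
    if any(s_minus):
        s_min = min(s_minus)
        denom = sum(s_min / s for s in s_minus)
        q = [s_plus[i] + s_min * sum(s_minus) / (s_minus[i] * denom)
             for i in range(m)]
    else:
        q = s_plus
    order = sorted(range(m), key=lambda i: -q[i])
    ranks = [0] * m
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks
```

In the paper, the criterion weights come from the AHP pairwise comparisons, which is what makes the model an ensemble of the two methods.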
Show Figures

Graphical abstract
Figure 1
<p>Location of the study area in Iran.</p>
Figure 2
<p>Flowchart of research in the study area.</p>
Figure 3
<p>Interferometric synthetic aperture radar (InSAR) data-processing procedure for digital elevation model (DEM) production.</p>
Figure 4
<p>Ground control points in the study area.</p>
Figure 5
<p>Sub-basins of the Kalvārī basin.</p>
Figure 6
<p>Stream order in the Kalvārī basin.</p>
Figure 7
<p>(<b>a</b>) Correlation between stream orders and logarithm of the number of streams in sub-basins (Horton’s first law); (<b>b</b>) correlation between stream orders and logarithm of stream length in sub-basins (Horton’s second law); and (<b>c</b>) relationship between area and stream length in the Kalvārī Basin.</p>
Figure 8
<p>Relative importance of geomorphometric parameters using the AHP model.</p>
Figure 9
<p>Prioritization of sub-basins for conservation programs.</p>
Figure 10
<p>Validation of the ensemble model with the MPSIAC model.</p>
21 pages, 5358 KiB  
Article
The Potential of Space-Based Sea Surface Salinity on Monitoring the Hudson Bay Freshwater Cycle
by Wenqing Tang, Simon H. Yueh, Daqing Yang, Ellie Mcleod, Alexander Fore, Akiko Hayashi, Estrella Olmedo, Justino Martínez and Carolina Gabarró
Remote Sens. 2020, 12(5), 873; https://doi.org/10.3390/rs12050873 - 9 Mar 2020
Cited by 10 | Viewed by 4115
Abstract
Hudson Bay (HB) is the largest semi-inland sea in the Northern Hemisphere, connecting with the Arctic Ocean through the Foxe Basin and the northern Atlantic Ocean through the Hudson Strait. HB is covered by ice and snow in winter, which completely melt in summer. For about six months each year, satellite remote sensing of sea surface salinity (SSS) is possible over open water. SSS links freshwater contributions from river discharge, sea ice melt/freeze, and surface precipitation/evaporation. Given the strategic importance of HB, SSS has great potential for monitoring the HB freshwater cycle and studying its relationship with climate change. However, SSS retrieved in polar regions (poleward of 50°) from currently operational space-based L-band microwave instruments has large uncertainty (~1 psu), mainly due to sensitivity degradation in cold water (<5°C) and sea ice contamination. This study analyzes SSS from the NASA Soil Moisture Active Passive (SMAP) and European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) missions in the context of HB freshwater contents. We found that the main source of the year-to-year SSS variability is sea ice melting, in particular the onset time and places of ice melt in the first couple of months of the open water season. The freshwater contribution from the surface forcing P-E is smaller in magnitude than the sea ice contribution but persists on a longer time scale through the whole open water season. River discharge is comparable with P-E in magnitude but peaks before ice melt. The spatial and temporal variations of freshwater contents largely exceed the remotely sensed SSS uncertainty. This fact justifies the use of remotely sensed SSS for monitoring the HB freshwater cycle. Full article
(This article belongs to the Section Ocean Remote Sensing)
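The freshwater bookkeeping described in the abstract (surface forcing, river discharge, and ice melt freshening the surface layer) can be sketched under a simple dilution assumption that ignores advection; the function, parameter names, and numbers below are illustrative, not the paper's budget:

```python
def salinity_tendency(sss, mld_m, p_minus_e=0.0, river=0.0, ice_melt=0.0):
    """Approximate freshening rate dS/dt (psu/day) when freshwater fluxes
    (each in m/day of equivalent freshwater) are diluted over a mixed
    layer of depth mld_m (m); horizontal advection is ignored here."""
    freshwater = p_minus_e + river + ice_melt
    return -sss * freshwater / mld_m

# Illustrative numbers: 1 cm/day of ice melt freshening a 10 m layer at 30 psu.
rate = salinity_tendency(30.0, 10.0, ice_melt=0.01)
```

Even this toy form shows why ice melt dominates the early open-water season: a centimeter-per-day melt flux produces a salinity drop far larger than the ~1 psu retrieval uncertainty within weeks.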
Show Figures

Graphical abstract
Figure 1
<p>Map of the Hudson Bay (HB) system schematically divided into five sub-domains: the eastern Hudson Bay and James Bay (green), the Hudson Bay interior (cyan), the western Hudson Bay boundary (orange), the Foxe Basin (yellow), and the Hudson Strait (red). Circles are the locations where daily discharge rates are available, with color indicating river groups as described in the text.</p>
Figure 2
<p>SMAP Sea Surface Salinity (JPL V4.3) from June to December for 2015, 2016, 2017 and 2018 in the Hudson Bay including the Foxe Basin and the Hudson Strait. (Similar maps for other SSS products are provided in <a href="#app1-remotesensing-12-00873" class="html-app">supplemental materials</a>.)</p>
Figure 3
<p>Time series of daily discharge rate combined for each group (as defined in Section 3.2 and color coded in <a href="#remotesensing-12-00873-f001" class="html-fig">Figure 1</a>): (<b>a</b>) river_JeHB, (<b>b</b>) river_wHB, (<b>c</b>) river_sHS. Red curves are daily climatology based on three years of data, except for the west HB group, where only the first two years of data are currently available.</p>
Figure 4
<p>Surface freshwater flux (P-E) of June to December from 2015 to 2018 in the Hudson Bay System. Missing values, masked as grey, are caused by missing E values.</p>
Figure 5
<p>Time series of P-E (black), P (red) and -E (green) integrated over the Hudson Bay area from January 1, 2015 to December 31, 2018. Thick lines indicate integration in areas where P and E are valid independently, while thin lines indicate integration of P and E over areas where both are valid (E data has spatial gaps).</p>
Figure 6
<p>Freshwater flux from sea ice change (I<sub>local</sub>) from June to December for 2015, 2016, 2017, and 2018 in the Hudson Bay. I<sub>local</sub> is calculated assuming an ice thickness of 1 m, considering only areas which are critical to SSS retrieval, i.e., where the daily sea ice concentration is less than 3% in at least one of the two adjacent days involved in the calculation of I<sub>local</sub>; the monthly I<sub>local</sub> shown is the average of all daily I<sub>local</sub> in the month.</p>
Figure 7
<p>The time series (January 1, 2015 to December 31, 2018) of the daily freshwater from sea ice changes (I<sub>local</sub>, left axis) integrated over the area in the sub-domain of the Hudson Bay system where at least one of the SIC values involved in the calculation is less than 20% (black), 10% (red), 5% (green), and 3% (blue). Overplotted is the daily time series of SIC (right axis) averaged over the same area. A 30-day moving average is applied to SIC before calculating I<sub>local</sub>.</p>
Figure 8
<p>Surface currents from HYCOM averaged for August 2016. Red lines indicate the locations of the Hudson Bay gateways, where salt advection is estimated (<a href="#remotesensing-12-00873-f009" class="html-fig">Figure 9</a>).</p>
Figure 9
<p>(<b>a</b>) Time series of salt transport (H<sub>adv</sub>*Vol) into the Hudson Bay through channels at the northern Hudson Bay defined in the text for G1 (red), G2 (green), G3 (blue) and the total (black). (<b>b</b>) Time series of SMAP SSS averaged over G1 (red), G2 (green), G3 (blue) and JHB (the James Bay and the Hudson Bay, black). (<b>c</b>) Time series of HYCOM surface current projected onto the normal of the gateway sections and averaged over G1 (red), G2 (green), and G3 (blue).</p>
Figure 10
<p>(top) Daily time series of SSS (black) and dS/dt (red) averaged over the James and Hudson Bay (JHB) area with a 30-day moving average applied. (bottom) Daily time series of salinity tendency in JHB associated with freshwater contributions from surface forcing (red), sea ice changes (green), river discharge (blue), horizontal advection (grey), and their total (black). A 3-7-15-7-3 day moving average filter is applied to the daily time series of dS/dt. Note the river discharge R for 2018 is the climatology of previous years due to lack of data after 2017; I<sub>local</sub> used a C<sub>cut</sub> of 3% (see <a href="#sec4dot4-remotesensing-12-00873" class="html-sec">Section 4.4</a> for definition); P-E is obtained by P and E independently integrated over the area (i.e., (P-E)<sub>1</sub> in <a href="#sec4dot3-remotesensing-12-00873" class="html-sec">Section 4.3</a>); and H<sub>adv</sub> is the total salt advection into HB through G1, G2, and G3 (see <a href="#sec4dot5-remotesensing-12-00873" class="html-sec">Section 4.5</a>).</p>
20 pages, 4172 KiB  
Article
Multi-scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images
by Ronghua Shang, Jiyu Zhang, Licheng Jiao, Yangyang Li, Naresh Marturi and Rustam Stolkin
Remote Sens. 2020, 12(5), 872; https://doi.org/10.3390/rs12050872 - 9 Mar 2020
Cited by 81 | Viewed by 7690
Abstract
Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of a complicated background, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in the target sizes. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation in remote sensing images. It is an encoding and decoding structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds the channel attention mechanism to fuse semantic features. The high- and low-level semantic information are concatenated to generate global features via global average pooling. These global features are used as channel weights to acquire adaptive weight information for each channel by the fully connected layer. To accomplish an efficient fusion, these tuned weights are applied to the fused features. Performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed using the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other existing networks, with overall accuracy reaching 89.4% and 88.2%, respectively, and with average F1 scores reaching 90.4% and 86.7%, respectively. Full article
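The channel-attention idea behind the AFM (global average pooling, a fully connected layer, and sigmoid channel weights rescaling the concatenated features) can be sketched in pure Python; the feature maps and FC weights below are toy values, and the actual module operates on deep feature tensors inside the network:

```python
import math

def adaptive_fuse(low, high, fc_weights):
    """Channel-attention fusion sketch: concatenate low- and high-level
    channels, global-average-pool each channel, pass the pooled vector
    through one fully connected layer with a sigmoid, and rescale each
    channel by its attention weight."""
    feats = low + high  # concatenation along the channel axis
    gap = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feats]
    logits = [sum(w * g for w, g in zip(wrow, gap)) for wrow in fc_weights]
    attn = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [[[a * v for v in row] for row in ch] for a, ch in zip(attn, feats)]

# Two 1x1 toy channels and a hypothetical identity FC layer.
fused = adaptive_fuse([[[1.0]]], [[[2.0]]], [[1.0, 0.0], [0.0, 1.0]])
```

The point of the design is that the weights are data-dependent: channels whose pooled response is more informative are amplified before the final fusion.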
Show Figures

Figure 1
<p>The structure of our proposed multi-scale context extraction module. It contains three parts, namely A, B, and C. Part A extracts global information. Part B is the parallel connection of atrous convolutions with different dilation rates. Part C is the feature map itself. GAP stands for global average pooling. Con<math display="inline"><semantics> <mrow> <mn>1</mn> <mo>×</mo> <mn>1</mn> </mrow> </semantics></math> represents a 1 × 1 convolution layer. UP denotes the upsample operation. Concat means that the features are concatenated according to channel.</p>
Figure 2
<p>Structure of the adaptive fusion module. A is a low-level feature map. B is a high-level feature map. B’ is a feature map obtained from B. C is a feature map of A and B’ combined by channel. D is a feature map obtained from C by changing the channels. E is the feature map adjusted with channel weights. F is the final fusion feature map.</p>
Figure 3
<p>The overall structure of the proposed multi-scale adaptive feature fusion network (MANet). Part A is the backbone network. Part B is the multi-scale context extraction module. Part C is the high-level and low-level feature adaptive fusion module.</p>
Figure 4
<p>Images of the Potsdam and the Vaihingen dataset and their corresponding labels.</p>
Figure 5
<p>Precision-recall (PR) curves for each category of the seven models on the Potsdam dataset.</p>
Figure 6
<p>Visual comparison of seven models on the Potsdam dataset.</p>
Figure 7
<p>PR curves for each category of the seven models on the Vaihingen dataset.</p>
Figure 8
<p>Visual comparison of seven models on the Vaihingen dataset.</p>
Figure 9
<p>Example from the Zurich dataset.</p>
24 pages, 2952 KiB  
Article
Estimating the Growing Stem Volume of Chinese Pine and Larch Plantations based on Fused Optical Data Using an Improved Variable Screening Method and Stacking Algorithm
by Xinyu Li, Zhaohua Liu, Hui Lin, Guangxing Wang, Hua Sun, Jiangping Long and Meng Zhang
Remote Sens. 2020, 12(5), 871; https://doi.org/10.3390/rs12050871 - 9 Mar 2020
Cited by 30 | Viewed by 4165
Abstract
Accurately estimating growing stem volume (GSV) is very important for forest resource management. The GSV estimation is affected by remote sensing images, variable selection methods, and estimation algorithms. Optical images have been widely used for modeling key attributes of forest stands, including GSV and aboveground biomass (AGB), because of their easy availability, large coverage and related mature data processing and analysis technologies. However, the low data saturation level and the difficulty of selecting feature variables from optical images often impede the improvement of estimation accuracy. In this research, two GaoFen-2 (GF-2) images, a Landsat 8 image, and fused images created by integrating GF-2 bands with the Landsat multispectral image using the Gram–Schmidt method were first used to derive various feature variables and obtain various datasets or data scenarios. A DC-FSCK approach that integrates feature variable screening and a combination optimization procedure based on the distance correlation coefficient and k-nearest neighbors (kNN) algorithm was proposed and compared with the stepwise regression analysis (SRA) and random forest (RF) for feature variable selection. The DC-FSCK considers the self-correlation and combination effect among feature variables so that the selected variables can improve the accuracy and saturation level of GSV estimation. To validate the proposed approach, six estimation algorithms were examined and compared, including Multiple Linear Regression (MLR), kNN, Support Vector Regression (SVR), RF, eXtreme Gradient Boosting (XGBoost) and Stacking. The results showed that compared with GF-2 and Landsat 8 images, overall, the fused image (Red_Landsat) of GF-2 red band with Landsat 8 multispectral image improved the GSV estimation accuracy of Chinese pine and larch plantations. The Red_Landsat image also performed better than other fused images (Pan_Landsat, Blue_Landsat, Green_Landsat and Nir_Landsat). 
For most of the combinations of the datasets and estimation models, the proposed variable selection method DC-FSCK led to more accurate GSV estimates compared with SRA and RF. In addition, in most of the combinations obtained by the datasets and variable selection methods, the Stacking algorithm performed better than other estimation models. More importantly, the combination of the fused image Red_Landsat with the DC-FSCK and Stacking algorithm led to the best performance of GSV estimation with the greatest adjusted coefficients of determination, 0.8127 and 0.6047, and the smallest relative root mean square errors of 17.1% and 20.7% for Chinese pine and larch, respectively. This study provided new insights on how to choose suitable optical images, variable selection methods and optimal modeling algorithms for the GSV estimation of Chinese pine and larch plantations. Full article
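The distance correlation coefficient at the core of the DC-FSCK screening can be sketched as below (the subsequent kNN-based combination optimization is not reproduced); the sample values are illustrative:

```python
import math

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D variables
    (0 in the limit for independent variables, 1 for an exact
    linear relationship)."""
    n = len(x)

    def centered(v):
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        row_mean = [sum(row) / n for row in d]
        col_mean = [sum(d[i][j] for i in range(n)) / n for j in range(n)]
        grand = sum(row_mean) / n
        return [[d[i][j] - row_mean[i] - col_mean[j] + grand
                 for j in range(n)] for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2
    dvar_x = sum(a * a for row in A for a in row) / n ** 2
    dvar_y = sum(b * b for row in B for b in row) / n ** 2
    if dvar_x * dvar_y == 0.0:
        return 0.0
    return (max(dcov2, 0.0) / math.sqrt(dvar_x * dvar_y)) ** 0.5
```

Unlike Pearson correlation, this statistic also detects nonlinear dependence, which is why it is attractive for screening spectral feature variables against GSV.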
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) and (<b>b</b>) The location of the study area in North China and Inner Mongolia; and (<b>c</b>) the spatial distribution of larch and Chinese pine plots.</p>
Figure 2
<p>Methodological framework of forest GSV estimation for Chinese pine and larch plantations.</p>
Figure 3
<p>The scatter graphs between the observed and estimated GSV values of the Chinese pine plots using three datasets and three variable screening methods (SRA, RF, and DC-FSCK): (<b>a</b>,<b>b</b>,<b>c</b>) are the GSV estimated by the GF-2, Landsat 8, and fused image Red_Landsat, respectively; (<b>1</b>,<b>2</b>,<b>3</b>) are the GSV estimated using the variable selection methods SRA, RF, and DC-FSCK, respectively. Each graph corresponds to the best estimation model with the smallest RMSEr value for each data scenario in <a href="#remotesensing-12-00871-t006" class="html-table">Table 6</a>. The black diagonal line is the theoretical best-fit reference line, and the blue parallel dashed lines are the estimation residual reference lines with a 50% deviation from the sample mean.</p>
Figure 4
<p>The scatter graphs between the observed and estimated GSV values of the larch plots using three datasets and three variable screening methods (SRA, RF, and DC-FSCK): (<b>a</b>,<b>b</b>,<b>c</b>) are the GSV estimated by the GF-2, Landsat 8, and fused image Red_Landsat, respectively; (<b>1</b>,<b>2</b>,<b>3</b>) are the GSV estimated using the variable selection methods SRA, RF, and DC-FSCK, respectively. Each graph corresponds to the best estimation model with the smallest RMSEr value for each data scenario in <a href="#remotesensing-12-00871-t006" class="html-table">Table 6</a>. The black diagonal line is the theoretical best-fit reference line, and the blue parallel dashed lines are the estimation residual reference lines with a 50% deviation from the sample mean.</p>
Figure 5
<p>The GSV maps of (<b>a1–a3</b>) Chinese pine and (<b>b1–b3</b>) larch plantations in the study area estimated based on the fused image Red_Landsat using the variables selected by the SRA, RF, and DC-FSCK methods, respectively. Each map corresponds to the best estimation model with the smallest RMSEr value for each data scenario in <a href="#remotesensing-12-00871-t006" class="html-table">Table 6</a>.</p>
21 pages, 9488 KiB  
Article
A Novel Stereo Matching Algorithm for Digital Surface Model (DSM) Generation in Water Areas
by Wenhuan Yang, Xin Li, Bo Yang and Yu Fu
Remote Sens. 2020, 12(5), 870; https://doi.org/10.3390/rs12050870 - 8 Mar 2020
Cited by 16 | Viewed by 4737
Abstract
Image dense matching has become one of the most widely used means for DSM generation due to its good performance in both accuracy and efficiency. However, for water areas, the most common ground object, accurate disparity estimation remains a challenge even for excellent image dense matching methods, as represented by semi-global matching (SGM), due to the poor texture. For this reason, a great deal of manual editing is always inevitable before practical applications. The main reason for this is the lack of uniqueness of the matching primitives, of fixed size and shape, used by those methods. In this paper, we propose a novel DSM generation method, namely semi-global and block matching (SGBM), to achieve accurate disparity and height estimation in water areas by adaptive block matching instead of pixel matching. First, the water blocks are extracted by seed point growth, and an adaptive block matching strategy considering geometrical deformations, called end-block matching (EBM), is adopted to achieve accurate disparity estimation. Then, the disparity of all other pixels beyond these water blocks is obtained by SGM. Last, the median height of all pixels within the same block is selected as the final height for this block after forward intersection. Experiments are conducted on ZiYuan-3 (ZY-3) stereo images, and the results show that the DSM generated by our method in water areas has high accuracy and visual quality. Full article
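The core intuition of block matching (comparing a whole poorly textured block along the epipolar line instead of a small fixed window per pixel) can be sketched with a sum-of-absolute-differences cost; the geometric deformation handling of EBM is omitted, and the scanline values are synthetic:

```python
def block_disparity(left, right, row, x0, x1, d_max):
    """Disparity for a whole block [x0, x1) on one scanline, chosen to
    minimize the sum of absolute differences (SAD) of the entire block."""
    best_d, best_cost = 0, float("inf")
    for d in range(d_max + 1):
        if x0 - d < 0:  # block would leave the right image
            break
        cost = sum(abs(left[row][x] - right[row][x - d]) for x in range(x0, x1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# A synthetic scanline pair: the textured block [2, 5) is shifted by 2 px.
left = [[0, 0, 5, 7, 9, 0, 0, 0]]
right = [[5, 7, 9, 0, 0, 0, 0, 0]]
```

Matching the whole block makes the cost curve have a single clear minimum even where per-pixel windows would be ambiguous, which is the "Fixed Window" vs. "Adaptive Window" contrast the paper illustrates in Figure 1.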
Show Figures

Graphical abstract
Figure 1
<p>The comparison of “Fixed Window” and “Adaptive Window” as the matching primitive. The matching cost curves were obtained with a disparity range (0,20) along the epipolar line. For the “Fixed Window”, there is no obvious minimum, while the “Adaptive Window” has an obvious minimum.</p>
Figure 2
<p>Flowchart of SGBM.</p>
Figure 3
<p>(<b>a</b>) Square template for seed point detection. (<b>b</b>) 4-neighborhood and 8-neighborhood.</p>
Figure 4
<p>Geometrical deformation process. The red frame is the extracted block, which is divided into three sub-blocks. The yellow frames are the end-blocks of the middle sub-block. <span class="html-italic">TH</span> is a given value.</p>
Figure 5
<p>End-block matching (EBM) strategy.</p>
Figure 6
<p>Experimental image pair blocks. (<b>a</b>) Image pair block 1. (<b>b</b>) Image pair block 2.</p>
Figure 7
<p>The seed points on the top pyramid image and ROIs on all pyramid images; s is the downsampling scale. (<b>a</b>) Image pair block 1. (<b>b</b>) Image pair block 2.</p>
Figure 8
<p>The comparison of the water masks and the ROIs for image pair block 1. (<b>a</b>) FROM-GLC10 water masks. (<b>b</b>) ROIs.</p>
Figure 9
<p>The comparison of the water masks and the ROIs for image pair block 2. (<b>a</b>) FROM-GLC10 water masks. (<b>b</b>) ROIs.</p>
Figure 10
<p>Left image ROIs embedded into the right image. (<b>a</b>) Image pair block 1. (<b>b</b>) Image pair block 2.</p>
Figure 11
<p>The image pair block 1 ROI matching results. (<b>a</b>) Block matching. (<b>b</b>) Sub-block matching. (<b>c</b>) End-block matching.</p>
Figure 12
<p>The image pair block 2 ROI matching results. (<b>a</b>) Block matching. (<b>b</b>) Sub-block matching. (<b>c</b>) End-block matching.</p>
Figure 13
<p>The ROI matching results. (<b>a</b>) Image pair block 1. (<b>b</b>) Image pair block 2.</p>
Figure 14
<p>The DSMs generated by SGM, Geomatica2018 and the proposed SGBM. (<b>a</b>) Image pair block 1. (<b>b</b>) Image pair block 2.</p>
26 pages, 8962 KiB  
Article
A Precise Indoor Visual Positioning Approach Using a Built Image Feature Database and Single User Image from Smartphone Cameras
by Ming Li, Ruizhi Chen, Xuan Liao, Bingxuan Guo, Weilong Zhang and Ge Guo
Remote Sens. 2020, 12(5), 869; https://doi.org/10.3390/rs12050869 - 8 Mar 2020
Cited by 22 | Viewed by 5888
Abstract
Indoor visual positioning is a key technology in a variety of indoor location services and applications. The particular spatial structures and environments of indoor spaces make them a challenging scene for visual positioning. To address the existing problems of low positioning accuracy and low robustness, this paper proposes a precise single-image-based indoor visual positioning method for smartphones. The proposed method comprises three procedures: First, color sequence images of the indoor environment are collected in an experimental room, from which an indoor precise-positioning-feature database is produced using a classic speeded-up robust features (SURF) point matching strategy and multi-image spatial forward intersection. Then, the correspondences between SURF feature points in the smartphone positioning image and 3D object points are obtained by an efficient similarity-based feature description retrieval method, in which a more reliable and correct set of matching point pairs is obtained using a novel matching error elimination technique based on Hough transform voting. Finally, the efficient perspective-n-point (EPnP) and bundle adjustment (BA) methods are used to calculate the intrinsic and extrinsic parameters of the positioning image, from which the location of the smartphone is obtained. Compared with the ground truth, the experimental results indicate that the proposed approach can be used for indoor positioning with an accuracy of approximately 10 cm. In addition, experiments show that the proposed method is more robust and efficient than the baseline method in a real scene. Where sufficient indoor textures are present, it has the potential to become a low-cost, precise, and highly available indoor positioning technology. Full article
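The Hough-voting idea behind the mismatch elimination, letting every candidate match vote in a coarse parameter accumulator and keeping only the matches consistent with the dominant bin, can be illustrated with a deliberately simplified translation-only variant. This is not the paper's actual voting space (which, per Figure 5, is built from sinusoids in Hough space); the parameterization and bin size below are illustrative assumptions.

```python
import numpy as np

def hough_filter_matches(pts_a, pts_b, bin_size=10.0):
    """Reject mismatched point pairs by Hough-style voting.

    Each match (a, b) votes for the offset b - a in a coarse 2D
    accumulator; only matches falling into the most-voted bin survive.

    pts_a, pts_b -- (N, 2) arrays of matched point coordinates
    bin_size     -- accumulator cell width in pixels
    Returns a boolean inlier mask of length N.
    """
    offsets = np.asarray(pts_b, float) - np.asarray(pts_a, float)
    bins = np.floor(offsets / bin_size).astype(int)  # quantized votes
    _, inverse, counts = np.unique(bins, axis=0,
                                   return_inverse=True,
                                   return_counts=True)
    best = np.argmax(counts)  # dominant accumulator cell
    return inverse.ravel() == best
```

Outliers scatter their votes across many cells, while correct matches concentrate in one, so a single argmax over the accumulator separates them without iterative sampling.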
Show Figures

Graphical abstract
Figure 1. The workflow chart of the indoor visual positioning system proposed in this paper.
Figure 2. The workflow of the positioning feature database establishment.
Figure 3. Three-dimensional object point cloud from multi-image spatial forward intersection.
Figure 4. The workflow of single smartphone image positioning.
Figure 5. An instance of points in a 2D space being transformed into sinusoids in Hough space: (a) three points in 2D space and (b) three sinusoids in Hough space.
Figure 6. The schematic diagram of three-point collinearity in smartphone photography.
Figure 7. The schematic map of P3P.
Figure 8. The decorated experimental rooms in a building: (a) location of rooms, (b) undecorated room, and (c) decorated room.
Figure 9. The real conference scene room.
Figure 10. Experimental measurement equipment: (a) ring crosshair, (b) Leica TS60, and (c) demo app.
Figure 11. Positioning images: (a) images in Room 212 and (b) images in Room 214.
Figure 12. Matching optimization results of different algorithms: (a) RANSAC and (b) PROSAC.
Figure 13. The precision–recall curves of RANSAC and PROSAC.
Figure 14. Time–proportion of interior points: comparison between RANSAC and PROSAC.
Figure 15. Ten experimental smartphone positioning images.
Figure 16. Positioning result coordinate offset in Room 212: (a) Samsung and (b) Huawei smartphones.
Figure 17. Positioning result coordinate offset in Room 214: (a) Samsung and (b) Huawei smartphones.
Figure 18. Scatter plot of the MSEP distribution.
Figure 19. Four control group positioning images.
Figure 20. The location error distribution in different rooms: (a) Room 212 and (b) Room 214.