Remote Sens., Volume 15, Issue 6 (March-2 2023) – 247 articles

Cover Story: Azimuth multichannel (AMC) technology is one of the mainstream methods for achieving high-resolution wide-swath (HRWS) imaging. However, the inevitable imbalance between channels can seriously affect the spectrum reconstruction results and reduce the quality of SAR images. Based on the impact of mismatched reconstruction filters on the weighting matrix, this paper proposes a channel consistency correction method in the range-Doppler domain to solve this problem. The method first performs spectrum reconstruction on the multichannel echo signals with errors and then estimates the phase error between channels using a minimization of the sum of the sub-band norms (MSSBN) optimization model. Experimental results on simulated data and GF-3 measured data verify the proposed algorithm's high estimation accuracy and excellent computational efficiency.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 9177 KiB  
Article
Study on the Identification of Habitat Suitability Areas for the Dominant Locust Species Dasyhippus Barbipes in Inner Mongolia
by Xianwei Zhang, Wenjiang Huang, Huichun Ye and Longhui Lu
Remote Sens. 2023, 15(6), 1718; https://doi.org/10.3390/rs15061718 - 22 Mar 2023
Cited by 5 | Viewed by 2026
Abstract
Grassland locusts harm a large amount of grassland every year. They have caused devastating disasters across grassland resources and have greatly impacted the lives of herdsmen. Due to the impacts of climate change and human activity, the distribution of grassland locust habitats changes constantly. The monitoring and identification of locust habitats is of great significance for the production and utilization of grassland resources. In order to further understand the behavior of these grassland pests and carry out precise prevention and control strategies, researchers have often used survey points to map the distribution of habitat-suitability areas, or have used only high-density locust records (more than 15 locusts/m²) to identify the different risk levels of habitat-suitability areas for grassland locusts. However, the areas predicted by these two methods have often been too large, which is not conducive to the precise control of grassland locusts over large regions. Starting from the sample points of our locust investigation, we conducted a hierarchical prediction of locust density and used the probability of locust occurrence, as predicted by a maximum entropy model, to categorize the habitat-suitability areas according to the probability thresholds of suitable species growth. The results were in good agreement with the actual situation: there was little difference between the prediction results for densities greater than 15 locusts/m² in the middle- and high-density habitat-suitability areas and those for all survey points, while there was a large difference between the prediction results for the middle- and low-density habitat-suitability areas and those for all survey points. These results could provide a basis for the efficient and accurate control of grassland locusts and have practical significance for future guidance.
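A minimal sketch of the thresholding step described above, binning maximum entropy (MaxEnt) occurrence probabilities into suitability classes; the cut-off values here are illustrative assumptions, not the paper's calibrated thresholds.

```python
import numpy as np

def classify_suitability(prob, thresholds=(0.2, 0.5, 0.7)):
    """Bin MaxEnt occurrence probabilities into suitability classes.

    prob: array of predicted occurrence probabilities in [0, 1].
    thresholds: illustrative (low, medium, high) cut-offs; the paper
    derives its own thresholds from suitable-growth probabilities.
    Returns an integer map: 0 = unsuitable ... 3 = highly suitable.
    """
    low, mid, high = thresholds
    classes = np.zeros(prob.shape, dtype=np.int8)
    classes[prob >= low] = 1   # low suitability
    classes[prob >= mid] = 2   # medium suitability
    classes[prob >= high] = 3  # high suitability
    return classes

# Example: a toy 2x3 probability surface
p = np.array([[0.1, 0.35, 0.55], [0.65, 0.75, 0.9]])
print(classify_suitability(p))
```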
Figures:
Figure 1. The survey data points for the dominant locust species in Inner Mongolia.
Figure 2. A flowchart of the research approach.
Figure 3. The correlations between selected variables at different density levels: (a) the low-density correlation coefficient; (b) the medium–low-density correlation coefficient; (c) the medium–high-density correlation coefficient; (d) the high-density correlation coefficient; (e) the all-points correlation coefficient.
Figure 4. The average AUC values of the different density scenarios: (a) all survey points; (b) the low-density survey points; (c) the low–medium-density survey points; (d) the medium–high-density survey points; (e) the high-density survey points.
Figure 5. The relationships between the omission rate, occurrence probability and cumulative threshold at different density levels: (a) all survey points; (b) the low-density survey points; (c) the medium–low-density survey points; (d) the medium–high-density survey points; (e) the high-density survey points.
Figure 6. The distribution maps of the dominant species Dasyhippus barbipes, based on our maximum entropy model under three scenarios: (a) the extraction results using all survey points; (b) the results using the high-density (more than 15 locusts/m²) survey points; (c) the density stratification results.
Figure 7. The response curves of the main habitat factors predicted by our high-density grassland locust dataset model: (a) mean annual temperature; (b) vegetation coverage; (c) precipitation in the growth season; (d) grassland type. The response curves show the relationships between the probability of occurrence of locusts and the habitat variables. The displayed values are the average of 10 repeated runs; the blue areas show the ±SD of the 10 repetitions. For each panel, the X axis represents the variable and the Y axis represents the probability of occurrence. Note: the grassland types were classified as follows: 1 = non-grassland area; 3 = other types of grassland; 5 = typical temperate grassland; 6 = temperate desertification; 7 = temperate grassland; 8 = temperate desert; 9 = background value.
Figure 8. The response curves of the main habitat factors in our model prediction, showing the relationships between the probability of the occurrence of locusts and the habitat variables. The displayed values are the average of 10 repeated runs; the blue areas show the ±SD of the 10 repetitions. For each panel, the X axis represents the variable and the Y axis represents the probability of occurrence.
Figure 9. The results of the jackknife test of the importance of the environmental variables in the Maxent model at different density levels: (a) all survey points; (b) the low-density survey points; (c) the medium–low-density survey points; (d) the medium–high-density survey points; (e) the high-density survey points. The environmental variables in this figure have the same meanings as those shown in Table 1.
22 pages, 22379 KiB  
Article
A Partial Reconstruction Method for SAR Altimeter Coastal Waveforms Based on Adaptive Threshold Judgment
by Xiaonan Liu, Weiya Kong, Hanwei Sun and Yaobing Lu
Remote Sens. 2023, 15(6), 1717; https://doi.org/10.3390/rs15061717 - 22 Mar 2023
Cited by 1 | Viewed by 1678
Abstract
Due to land contamination and human activities, the sea surface height (SSH) data retrieved from altimeter coastal waveforms have poor precision and cannot provide effective information for various tasks. The along-track high-resolution characteristic of the new synthetic aperture radar (SAR) altimeter makes the retracking methods of traditional coastal waveforms difficult to apply. This study proposes a partial reconstruction method for SAR altimeter coastal waveforms. By making adaptive threshold judgments of model matching errors and repairing the contaminated waveforms based on the nearest linear prediction, the success rate of retracking and retrieval precision of SSH are significantly improved. The data from the coastal experimental areas of the Sentinel-3B satellite altimeter are processed. The results indicate that the mean proportion of waveform quality improvement brought by partial reconstruction is 80.30%, the mean retracking success rate of reconstructed waveforms is 85.60%, and the mean increasing percentage is 30.98%. The noise levels of SSH data retrieved by different methods are calculated to evaluate the processing precision. It is shown that the 20 Hz SSH precisions of the original and reconstructed coastal waveforms are 12.75 cm and 6.32 cm, respectively, and the corresponding 1 Hz SSH precisions are 2.85 cm and 1.41 cm, respectively. The results validate that the proposed partial reconstruction method has improved the SSH precision by a factor of two, and the comparison results with mean sea surface (MSS) model data further verify this conclusion.
(This article belongs to the Special Issue Radar Signal Processing and Imaging for Ocean Remote Sensing)
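A schematic illustration of the two ingredients named in the abstract: an adaptive threshold judgment on model-matching errors, followed by repair of the flagged gates from their nearest uncontaminated neighbors by linear prediction. The threshold rule and neighbor selection here are illustrative assumptions, not the authors' exact per-gate, per-month procedure.

```python
import numpy as np

def partial_reconstruction(waveform, model_fit, k=3.0):
    """Flag contaminated range gates and repair them by local linear prediction.

    waveform: 1D measured SAR altimeter waveform (power per gate).
    model_fit: fitted ocean echo model evaluated at the same gates.
    k: multiplier on the median absolute error used as an adaptive
       threshold (an illustrative choice; the paper derives thresholds
       from fitted error distributions).
    """
    err = np.abs(waveform - model_fit)
    thresh = k * np.median(err)            # adaptive threshold judgment
    bad = err > thresh
    repaired = waveform.copy()
    good_idx = np.flatnonzero(~bad)
    for i in np.flatnonzero(bad):
        # take the two nearest clean gates and predict linearly
        nearest = good_idx[np.argsort(np.abs(good_idx - i))[:2]]
        x0, x1 = np.sort(nearest)
        slope = (waveform[x1] - waveform[x0]) / (x1 - x0)
        repaired[i] = waveform[x0] + slope * (i - x0)
    return repaired, bad
```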
Figures:
Figure 1. (a) The locations of the two experimental areas used for coastal waveform processing and the corresponding subsatellite tracks. (b) The offshore distances of the subsatellite points in the two experimental areas. (c,d) The coastal details of experimental areas 1 and 2.
Figure 2. Normalized coastal waveforms of experimental area 1 in (a) June and (b) September. Normalized coastal waveforms of experimental area 2 in (c) May and (d) September.
Figure 3. Some typical coastal waveforms in the experimental area, with (a) single-peak, (b) double-peak, (c) multi-peak, (d) slight contamination and (e,f) heavy contamination.
Figure 4. (a) The epoch sequences obtained by processing the coastal waveforms of experimental area 1 in April after steps 3 (blue curve), 4 (red curve), and 5 (green curve). (b) The coastal waveforms before and after step 5 (dark and light blue curves), and the corresponding fitting models (red and green curves).
Figure 5. (a) The matching error matrix corresponding to the coastal waveforms of experimental area 1 in March. (b) The median errors at all gates in different months for experimental area 1. (c,d) Two typical error histograms fit by the Rayleigh distribution. (e,f) Two typical error histograms fit by the exponential distribution.
Figure 6. Adaptive thresholds of experimental area 1 for different months and different gates.
Figure 7. (a) The original contaminated coastal waveform (blue curve) and the corresponding reconstruction result (red dotted line). (b) The reference values (blue dots) used for linear fitting (red line) and the corresponding reconstructed value (green dot).
Figure 8. The reconstruction results corresponding to the waveforms in Figure 2b (a) and Figure 2d (b).
Figure 9. Processing flow of the SAR altimeter coastal waveforms.
Figure 10. Two typical results of the NPPR algorithm in sub-waveform extraction (a,b). The blue curves represent the total coastal waveforms, the red curves represent the sub-waveforms determined to be false, and the green curves represent the sub-waveforms determined to be correct.
Figure 11. Two typical retracking results of the reconstructed coastal waveforms (a,b). The blue curves represent the reconstructed coastal waveforms, and the red curves represent the final iteration results of the echo model.
Figure 12. (a) Epoch-retracking results for experimental area 1 in September. (b) Mean SSH data for experimental area 1. The green and red dotted lines represent the OCOG retracking results and the NPPTR results, and the blue and yellow solid lines represent the retracking results of the reconstructed and original waveforms.
Figure 13. Epoch-retracking results for experimental area 2 in (a) March and (b) August. The green and red dotted lines represent the OCOG retracking results and the NPPTR results, and the blue and yellow solid lines represent the retracking results of the reconstructed and original waveforms.
Figure 14. Retrieved SSH data for experimental area 2 for the year 2022.
Figure 15. Validation of retrieved SSH data using the MSS model. The yellow and blue lines represent the SSH retrieval results of the original and reconstructed waveforms, and the purple line represents the MSS model.
17 pages, 4534 KiB  
Article
Weakening the Flicker Noise in GPS Vertical Coordinate Time Series Using Hybrid Approaches
by Bing Yang, Zhiqiang Yang, Zhen Tian and Pei Liang
Remote Sens. 2023, 15(6), 1716; https://doi.org/10.3390/rs15061716 - 22 Mar 2023
Cited by 3 | Viewed by 1798
Abstract
Noise in GPS vertical coordinate time series, mainly white and flicker noise, has been proven to impair the accuracy and reliability of GPS products. Various methods have been adopted to weaken the white and flicker noise in GPS time series, such as complementary ensemble empirical mode decomposition (CEEMD), wavelet denoising (WD), and variational mode decomposition (VMD). However, a single method only works in a limited frequency band of the time series, and its denoising ability is insufficient, especially for flicker noise. Hence, in this study, we built two combined methods, CEEMD & WD and VMD & WD, to weaken the flicker noise in GPS positioning time series from the Crustal Movement Observation Network of China. First, we processed the original signal using CEEMD or VMD with appropriate parameters. Then, the processed signal was further denoised by WD. The results show that the average flicker noise in the time series was reduced from 19.90 mm/year^0.25 to 2.8 mm/year^0.25, a reduction of 86%, after applying the two methods to the GPS data; our solutions outperform CEEMD by 6.84% and VMD by 16.88% in weakening the flicker noise, respectively. The apparent decreases in flicker noise for the two combined methods are attributed to the differences in the frequencies covered by WD and the other two methods, which were verified by analyzing the power spectral density (PSD). With the help of WD, CEEMD & WD and VMD & WD can identify more flicker noise hidden in the low-frequency signals obtained by CEEMD and VMD. Finally, we found that the two combined methods have almost identical effects on removing the flicker noise in the time series of the 226 GPS stations in China, as verified by the Wilcoxon rank sum test.
(This article belongs to the Special Issue Remote Sensing in Space Geodesy and Cartography Methods)
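A minimal sketch of the hybrid idea (decompose first, then wavelet-denoise the remainder), assuming the third-party PyEMD (EMD-signal) and PyWavelets packages and using the CEEMDAN variant in place of CEEMD; the dropped-IMF choice, wavelet, and level are illustrative, not the paper's tuned parameters.

```python
import numpy as np
import pywt                      # PyWavelets
from PyEMD import CEEMDAN        # from the EMD-signal package (CEEMDAN variant)

def hybrid_denoise(signal, wavelet="db4", level=4):
    """Decompose-then-wavelet-denoise a daily GPS height series.

    Decompose the series into intrinsic mode functions, drop the
    highest-frequency IMF as noise-dominated (an illustrative choice),
    then soft-threshold the wavelet coefficients of the remainder.
    """
    imfs = CEEMDAN().ceemdan(signal)
    partial = signal - imfs[0]           # remove the noisiest IMF

    coeffs = pywt.wavedec(partial, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(partial)))  # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```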
Graphical abstract
Figure 1. The variation of flicker noise with different options for CEEMD (a), VMD (b) and WD (c,d).
Figure 2. Flowchart of the two proposed methods for denoising GPS coordinate time series.
Figure 3. Spatial locations and time spans of the selected 226 continuous GPS stations from CMONOC. The color indicates the observation days of each station.
Figure 4. The vertical time series before and after the pre-processing procedure for three typical GPS sites (BJFS: 115.89°E, 39.61°N; YNCX: 101.49°E, 25.05°N; XJRQ: 88.17°E, 39.02°N). Good measurements (black points) plus outliers (red points) represent the raw time series. Good measurements (black points) plus gaps (blue points) represent the input time series for our investigations.
Figure 5. The estimations of flicker noise in the GPS vertical coordinate time series with six strategies for the 226 CMONOC stations employed in this study.
Figure 6. The CRs of flicker noise for different methods. Mean and std represent the average and standard deviation of the CRs for each group, respectively (unit: %).
Figure 7. The differences in CRs between CEEMD & WD and VMD & WD.
Figure 8. The CRs (left) and their differences (right) of flicker noise for the single and hybrid methods.
Figure 9. The power spectral density (PSD) of the residual vertical coordinate time series after applying different methods for the BJSH station.
36 pages, 12517 KiB  
Article
Breach Progression Observation in Rockfill Dam Models Using Photogrammetry
by Geir Helge Kiplesund, Fjola Gudrun Sigtryggsdottir and Leif Lia
Remote Sens. 2023, 15(6), 1715; https://doi.org/10.3390/rs15061715 - 22 Mar 2023
Cited by 5 | Viewed by 2644
Abstract
Dam failures are examples of man-made disasters that have stimulated investigation into the processes related to the failure of different dam types. Embankment dam breaching during an overtopping event is one of the major modes of failure for this dam type, comprising both earthfill and rockfill dams. This paper presents the results of a series of laboratory tests on breach initiation and progression in rockfill dams. Specifically, eight breaching tests of 1 m-high, 1:10 scale embankment dams constructed of scaled well-graded rockfill were conducted. Tests were performed with and without an impervious core and under different inflow discharges. Controlling instrumentation included up to nine video cameras used for image analysis and photogrammetry. A previously little-used technique of dynamic 3D photogrammetry was applied to produce 3D models every 5 s throughout the breaching process, allowing us to track breach development in detail. These dynamic 3D models, along with pressure sensor data, flow data, and side-view video, are used to provide data on erosion rates throughout the breaching process. One important purpose of this research is to test methods of observing a rapidly changing morphology, such as an embankment dam breach, that can easily be scaled up to large-scale and prototype-scale tests. The resulting data sets are further intended for the verification of existing empirical and numerical models for slope stability and breach development, as well as the development of new models.
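As a worked example of the erosion-rate data mentioned above, the sketch below differences successive DEM snapshots from the 5 s dynamic 3D models to obtain a vertical erosion rate; the 5 s step follows the abstract, while the grids and units are assumptions for illustration.

```python
import numpy as np

def vertical_erosion_rate(dem_t0, dem_t1, dt=5.0):
    """Vertical erosion rate (mm/s) between two DEM snapshots.

    dem_t0, dem_t1: 2D elevation grids (mm) extracted from the dynamic
    3D models at consecutive 5 s epochs.
    Positive values indicate material removed (surface lowering).
    """
    return (dem_t0 - dem_t1) / dt

# Toy example: a 2x3 surface lowering by 10 mm over one 5 s step
z0 = np.full((2, 3), 100.0)
z1 = z0 - 10.0
print(vertical_erosion_rate(z0, z1))  # 2.0 mm/s everywhere
```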
Figures:
Figure 1. Hydraulic flume at NTNU (units in mm); flow from right to left.
Figure 2. Model setup including core and axis system (units in mm); Y-axis out of the paper plane.
Figure 3. Camera rig setups: (a) models U1–U3; (b) models U4–H3; camera locations are indicated by yellow circles, and the coordinate system definition is shown in (b) with red arrows and axis labels; (c) DSLR image of a GCP cube; (d) video frame of the same GCP cube.
Figure 4. Grain size distribution enveloping curves from the dam database and for the chosen shell material (scaled up 10:1 to prototype scale), previously presented in [26].
Figure 5. Reservoir curve for the model tests.
Figure 6. Example set of synchronised images from test U4, 3170 s from the start of the test.
Figure 7. Photogrammetry and image analysis workflow.
Figure 8. Side-view tracking of breach progression of model U4 (central core); direction of flow is right to left; the white line indicates the traced solid surface.
Figure 9. Side-view tracking of breach progression of model H1 (no core); direction of flow is right to left; the white line indicates the traced solid surface.
Figure 10. Vertical breach development of model U4 (central core). The bottom elevation on the vertical axis is the Z coordinate from the DEM.
Figure 11. Vertical breach erosion rate E_v at X = 0 m from the traced side-view profiles (mm/s).
Figure 12. Cross-section profiles for X = 0 m (centre line) extracted from the U4 (central core) dynamic 3D model; T = 0 s is immediately before the initially observed erosion at the downstream edge of the crest; the red circle indicates the breach edge as used in further analyses.
Figure 13. Longitudinal profiles of test U4 from the low-resolution dynamic DEM at Y = 0.9 m (blue) and side-view tracking (black) throughout the breaching process; direction of flow is left to right; T = 0 s is the last frame before the initially observed erosion at the downstream end of the pilot channel.
Figure 14. Breach width (W) and breach bottom elevation (H) for model U4 (central core) plotted against breach outflow (Q) and upstream water level (WL).
Figure 15. Breach width (W) and breach bottom elevation (H) for model H1 (no core) plotted against breach outflow (Q) and upstream water level (WL).
Figure 16. Lateral erosion rates (BER) over 5 s time steps (mm/s) at X = 0 m for tests U4 (central core) and H1 (no core).
Figure 17. Longitudinal profiles of test U1; high-resolution DEM (red) and low-resolution DEM (blue); nine different profiles, with 0.1 being close to the back wall and 0.9 being near the glass wall (with the pilot channel visible); the calculated RMSE is shown for each profile.
Figure 18. Longitudinal profiles of test U4; high-resolution DEM (red) and low-resolution DEM (blue); nine different profiles, with 0.1 being close to the back wall and 0.9 being near the glass wall (with the pilot channel visible); the calculated RMSE is shown for each profile.
Figures A1–A8. Cross-section profiles for X = 0 m (centre line) extracted from the U1, U2, U3, U4, U5, H1, H2 and H3 dynamic 3D models, respectively; T = 0 s is immediately before the initially observed erosion at the downstream edge of the crest; the red circle indicates the breach edge as used in further analyses.
27 pages, 7002 KiB  
Article
Passive Electro-Optical Tracking of Resident Space Objects for Distributed Satellite Systems Autonomous Navigation
by Khaja Faisal Hussain, Kathiravan Thangavel, Alessandro Gardi and Roberto Sabatini
Remote Sens. 2023, 15(6), 1714; https://doi.org/10.3390/rs15061714 - 22 Mar 2023
Cited by 16 | Viewed by 3418 | Correction
Abstract
Autonomous navigation (AN) and manoeuvring are increasingly important in distributed satellite systems (DSS) in order to avoid potential collisions with space debris and other resident space objects (RSO). In order to accomplish collision avoidance manoeuvres, tracking and characterization of RSO is crucial. At present, RSO are tracked and catalogued using ground-based observations, but space-based space surveillance (SBSS) represents a valid alternative (or complementary asset) due to its ability to offer enhanced performances in terms of sensor resolution, tracking accuracy, and weather independence. This paper proposes a particle swarm optimization (PSO) algorithm for DSS AN and manoeuvring, specifically addressing RSO tracking and collision avoidance requirements as an integral part of the overall system design. More specifically, a DSS architecture employing hyperspectral sensors for Earth observation is considered, and passive electro-optical sensors are used, in conjunction with suitable mathematical algorithms, to accomplish autonomous RSO tracking and classification. Simulation case studies are performed to investigate the tracking and system collision avoidance capabilities in both space-based and ground-based tracking scenarios. Results corroborate the effectiveness of the proposed AN technique and highlight its potential to supplement either conventional (ground-based) or SBSS tracking methods.
(This article belongs to the Special Issue Autonomous Space Navigation)
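The abstract names particle swarm optimization as the planning engine; below is a generic, minimal PSO loop in the standard global-best form, with an arbitrary test objective standing in for the manoeuvre-cost function (which the abstract does not spell out). All hyperparameters are illustrative.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy objective: sphere function standing in for a manoeuvre cost
best, cost = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(best, cost)
```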
Graphical abstract
Figure 1. (a) Space environment statistics by ESA; (b) space debris population estimation by ESA.
Figure 2. Various SBSS missions during their respective timelines.
Figure 3. Simplified conceptual illustration of the ground-based scenario.
Figure 4. (a) Multi-sensor RSO tracking for the space-based scenario; (b) multi-sensor RSO tracking for the ground-based scenario.
Figure 5. (a) Proposed DSS constellation with 4 orbital planes (not to scale); (b) simplified DSS system architecture for a single orbital plane.
Figure 6. Inter-satellite links between the DSS assets.
Figure 7. (a) HyperScout-2; (b) MAI-SS star tracker, Adcole Maryland Aerospace [89,90].
Figure 8. AN system architecture for SBSS.
Figure 9. Ground-based surveillance scenario.
Figure 10. AN system architecture for the ground-based tracking scenario.
Figure 11. Uncertainty volumes around the tracked RSO in the space-based scenario.
Figure 12. Uncertainty volumes around the tracked RSO in the ground-based scenario.
Figure 13. Change in thrust control angles over time (SBSS scenario).
Figure 14. Change in thrust control angles over time (ground-based scenario).
Figure 15. Change in semimajor axis from initial to final trajectory (SBSS).
Figure 16. Change in semimajor axis from initial to final trajectory (ground-based scenario).
27 pages, 1846 KiB  
Article
Nearest Neighboring Self-Supervised Learning for Hyperspectral Image Classification
by Yao Qin, Yuanxin Ye, Yue Zhao, Junzheng Wu, Han Zhang, Kenan Cheng and Kun Li
Remote Sens. 2023, 15(6), 1713; https://doi.org/10.3390/rs15061713 - 22 Mar 2023
Cited by 9 | Viewed by 2606
Abstract
Recently, state-of-the-art classification performance on natural images has been obtained by self-supervised learning (S2L), as it can generate latent features through learning between different views of the same images. However, the latent semantic information of similar images has hardly been exploited by these S2L-based methods. Consequently, to explore the potential of S2L between similar samples in hyperspectral image classification (HSIC), we propose the nearest neighboring self-supervised learning (N2SSL) method, which interacts between different augmentations of reliable nearest neighboring pairs (RN2Ps) of HSI samples in the framework of bootstrap your own latent (BYOL). Specifically, there are four main steps: pretraining of spectral-spatial residual network (SSRN)-based BYOL, generation of nearest neighboring pairs (N2Ps), training of BYOL based on RN2Ps, and final classification. Experimental results on three benchmark HSIs validated that S2L on similar samples can facilitate subsequent classification. Moreover, we found that BYOL trained on an unrelated HSI can be fine-tuned for the classification of other HSIs with less computational cost and higher accuracy than training from scratch. Beyond the methodology, we present a comprehensive review of HSI-related data augmentation (DA), which is meaningful for future research on S2L for HSIs.
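A minimal sketch of the BYOL machinery the method builds on: the regression loss between the online predictor output q_θ and the stop-gradient target projection, plus the exponential-moving-average target update. Network definitions are elided; this assumes PyTorch and is not the authors' SSRN-specific code.

```python
import torch
import torch.nn.functional as F

def byol_loss(q_online, z_target):
    """BYOL regression loss: MSE between L2-normalized vectors,
    equivalent to 2 - 2 * cosine_similarity. The target branch is
    detached (stop-gradient), matching detach(z'_ϑ) in the paper's
    Figure 2."""
    q = F.normalize(q_online, dim=-1)
    z = F.normalize(z_target.detach(), dim=-1)
    return (2 - 2 * (q * z).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.99):
    """Target parameters ϑ are an exponential moving average of the
    online parameters θ."""
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(tau).add_(po, alpha=1 - tau)
```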
Figures:
Figure 1. Comparison of typical S2L methods. Following the same procedure as in [35,36,37,38,39], these methods achieved 60.6%, 69.3%, 71.8%, 74.3% and 71.3% top-1 linear classification accuracy by self-supervised pretraining on the training set of the ImageNet ILSVRC-2012 dataset [40] with ResNet50 [41], respectively. It can be concluded that BYOL outperformed the other methods with the largest margin of 13.7%. Note that the encoder and the projector of BYOL were integrated as a "whole encoder" herein to draw a consistent comparison of network architectures.
Figure 2. Illustration of the BYOL framework. It minimizes the loss based on the mean square error (MSE) between q_θ and detach(z′_ϑ), where detach(·) means stop-gradient. The parameters ϑ of the target network are an exponential moving average of θ (the parameters of the online network). Once the training of BYOL is finished, only the online encoder, namely f_θ, is kept for further classification or other downstream tasks.
Figure 3. Illustration of the proposed N2SSL method (best viewed in color).
Figure 4. Illustration of (a) the SSRN-based encoder and (b) the DA used in the proposed N2SSL method.
Figure 5. Illustration of training BYOL based on RN2P (best viewed in color).
Figure 6. Qualitative GT of all three scenes: (a) UP, (b) IP and (c) KSC (best viewed in color).
Figure 7. Comparison of t-SNE feature visualization of IP samples (C5, C7, C10 and C15) obtained by (a) SSL_1 and (b) N2SSL_1 (best viewed in color).
Figure 8. Comparison of maps of classification accuracies for the UP scene: (a) SSRN, (b) SSL_1 and (c) N2SSL_1 (best viewed in color).
Figure 9. Comparison of maps of classification accuracies for the IP scene: (a) SSRN, (b) SSL_1 and (c) N2SSL_1 (best viewed in color).
Figure 10. Comparison of maps of classification accuracies for the KSC scene: (a) SSRN, (b) SSL_1 and (c) N2SSL_1 (best viewed in color).
Figure 11. Comparison of running time of N2SSL_2 and SOTA supervised methods on both cases with different numbers of labeled samples: (a) KSC→UP and (b) KSC→IP. The bars in red boxes represent the running time of N2SSL_2 under the same setting of labeled samples (best viewed in color).
Figure 12. Illustration of mean training loss and mean accuracy in the iterative process over 20 trials by SSL_2, N2SSL_2, SSRN and SSTN for the (a) KSC→UP and (b) KSC→IP cases. The number of labeled samples was set to 200 (best viewed in color).
Figure 13. Comparison of mean OAs for the UP scene achieved by SSL_1 using 5 labeled samples per class under different configurations of DA (best viewed in color).
Figure 14. The effects of learning between nearest neighboring samples under different values of parameter K for three scenes: (a) UP, (b) IP and (c) KSC. The bars in red rectangles give the mean OAs achieved by SSL_1 under the same setting of pretraining samples (best viewed in color).
Figure 15. Classification accuracies and running time of SSL_1 and N2SSL_1 with different percentages of samples for pretraining on three scenes: (a) UP, (b) IP and (c) KSC (best viewed in color).
23 pages, 1961 KiB  
Article
Visible Near-Infrared Spectroscopy and Pedotransfer Function Well Predict Soil Sorption Coefficient of Glyphosate
by Sonia Akter, Lis Wollesen de Jonge, Per Møldrup, Mogens Humlekrog Greve, Trine Nørgaard, Peter Lystbæk Weber, Cecilie Hermansen, Abdul Mounem Mouazen and Maria Knadel
Remote Sens. 2023, 15(6), 1712; https://doi.org/10.3390/rs15061712 - 22 Mar 2023
Cited by 3 | Viewed by 2183
Abstract
The soil sorption coefficient (Kd) of glyphosate mainly controls its transport and fate in the environment. Laboratory-based analysis of Kd is laborious and expensive. This study aimed to test the feasibility of visible near-infrared spectroscopy (vis–NIRS) as an alternative method for glyphosate Kd estimation at a country scale and compare its accuracy against pedotransfer function (PTF). A total of 439 soils with a wide range of Kd values (37–2409 L kg−1) were collected from Denmark (DK) and southwest Greenland (GR). Two modeling scenarios were considered to predict Kd: a combined model developed on DK and GR samples and individual models developed on either DK or GR samples. Partial least squares regression (PLSR) and artificial neural network (ANN) techniques were applied to develop vis–NIRS models. Results from the best technique were validated using a prediction set and compared with PTF for each scenario. The PTFs were built with soil texture, OC, pH, Feox, and Pox. The ratio of performance to interquartile distance (RPIQ) was 1.88, 1.70, and 1.50 for the combined (ANN), DK (ANN), and GR (PLSR) validation models, respectively. vis–NIRS obtained higher predictive ability for Kd than PTFs for the combined dataset, whereas PTF resulted in slightly better estimations of Kd on the DK and GR samples. However, the differences in prediction accuracy between vis–NIRS and PTF were statistically insignificant. Considering the multiple advantages of vis–NIRS, e.g., being rapid and non-destructive, it can provide a faster and easier alternative to PTF for estimating glyphosate Kd.
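To make the modeling pipeline concrete, here is a minimal PLSR-plus-RPIQ sketch using scikit-learn; the component count and the synthetic spectra are placeholders, not the paper's calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def rpiq(y_true, y_pred):
    """Ratio of performance to interquartile distance: IQR / RMSE."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    q75, q25 = np.percentile(y_true, [75, 25])
    return (q75 - q25) / rmse

# Placeholder spectra (X) and glyphosate Kd values (y)
rng = np.random.default_rng(0)
X = rng.random((439, 500))                 # 439 samples x 500 wavelengths
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.1, 439)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)   # 8 is illustrative
print("RPIQ:", rpiq(y_val, pls.predict(X_val).ravel()))
```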
Graphical abstract
Figure 1. Sampling sites of the (a) Danish (DK) and (b) southwest Greenlandic (GR) soils. The red squared area shows the location of the four GR fields. The red circle denotes the location of two adjacent fields represented by one location.
Figure 2. A graphical representation of the methodology for predicting the Kd of glyphosate. vis–NIRS, visible near-infrared diffuse reflectance spectroscopy; PTF, pedotransfer function; MLR, multiple linear regression; PLSR, partial least squares regression; ANN, artificial neural network; DK, Denmark; GR, Greenland.
Figure 3. Spectral response for (a) the calibration and validation subsets of the combined dataset and (b) the Danish and Greenlandic datasets; score plots for (c) the calibration and validation subsets of the combined dataset and (d) the Danish and Greenlandic datasets; and (e) loadings plot for all soil samples.
Figure 4. The best three single linear regression models to predict the Kd of glyphosate for the combined (a–c), Danish (d–f), and Greenlandic (g–i) datasets. TOC, total organic carbon; Kd, soil sorption coefficient of glyphosate; Feox and Pox, oxalate-extractable iron and phosphorus; R², coefficient of determination at significance levels of * 0.05, ** 0.01 and *** 0.001; RMSE, root mean square error. Open symbols represent samples from the calibration subset and closed symbols represent samples from the validation subset for all datasets; regression lines are dashed for the calibration subset and solid for the validation subset of each dataset.
Figure 5. Pedotransfer functions for the estimation of the Kd of glyphosate. Results show the calibration and validation subsets of the combined ((a,d), respectively), Danish ((b,e), respectively), and Greenlandic ((c,f), respectively) datasets. R², coefficient of determination; RMSE_C, root mean square error for calibration; RMSE_P, root mean square error for prediction; RPIQ, ratio of performance to interquartile range.
Figure 6. Results of the best vis–NIRS calibration and validation models with regression coefficient plots for the combined ((a,d,g), respectively), Danish ((b,e,h), respectively), and Greenlandic ((c,f,i), respectively) datasets. ANN, artificial neural network; PLSR, partial least squares regression; R², coefficient of determination; RMSE_CV, root mean square error for cross-validation; RMSE_P, root mean square error for prediction; RPIQ, ratio of performance to interquartile range.
Figure 7. Comparison of the prediction accuracy of the Kd of glyphosate between the combined validation subset and the joint individual validation subsets. The comparison was performed for the (a) PTFs and (b) vis–NIRS models. Com, results from the combined validation model; DK, results from the Danish validation model; GR, results from the Greenlandic validation model.
16 pages, 9790 KiB  
Article
A Data Assimilation Method Combined with Machine Learning and Its Application to Anthropogenic Emission Adjustment in CMAQ
by Congwu Huang, Tao Niu, Hao Wu, Yawei Qu, Tijian Wang, Mengmeng Li, Rong Li and Hongli Liu
Remote Sens. 2023, 15(6), 1711; https://doi.org/10.3390/rs15061711 - 22 Mar 2023
Cited by 4 | Viewed by 2222
Abstract
Anthropogenic emissions play an important role in air quality forecasting. To improve forecasting accuracy, nudging as the data assimilation method, combined with extremely randomized trees (ExRT) as the machine learning method, was developed and applied to adjust the anthropogenic emissions in the Community Multiscale Air Quality modeling system (CMAQ). This nudging–ExRT method can iterate with the forecast and is suitable for both linear and nonlinear emissions. As an example, an episode between 15 and 30 January 2019 was simulated for China's Beijing–Tianjin–Hebei (BTH) region. For PM2.5, the correlation coefficient of the site-averaged concentration (Ra) increased from 0.85 to 0.94, and the root mean square error (RMSEa) decreased from 24.41 to 9.97 µg/m³. For O3, Ra increased from 0.75 to 0.81, and RMSEa decreased from 13.91 to 12.07 µg/m³. These results show that nudging–ExRT can significantly improve forecasting skill and can be applied to routine air quality forecasting in the future.
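The ExRT component corresponds to scikit-learn's ExtraTreesRegressor; the sketch below shows one plausible shape of the emission-adjustment step (learning a mapping from concentration mismatch to an emission scaling factor). The feature set, target definition, and toy data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Placeholder training data: per-grid-cell features (simulated
# concentration, observed concentration, prior emission) and the
# emission scaling that would have reconciled them.
rng = np.random.default_rng(1)
features = rng.random((1000, 3))
scaling = 1.0 + 0.5 * (features[:, 1] - features[:, 0])  # toy target

model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(features, scaling)

# Adjust the prior emissions for new grid cells, then feed the
# adjusted inventory back into the next CMAQ forecast iteration.
new_cells = rng.random((5, 3))
adjusted_emission = new_cells[:, 2] * model.predict(new_cells)
print(adjusted_emission)
```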
Figures:
Figure 1. Simulation domains: (a) domains 1–3 and topography height; and (b) domain 3 (BTH) and the locations of the stations.
Figure 2. The framework of the ExRT.
Figure 3. The framework of the nudging–ExRT method in CMAQ.
Figure 4. The daily change comparison for the NODA, Nud, and NudEx experiments of the (a) PM2.5, (b) O3, and (c) NO2 concentrations, and the (d) PM2.5, (e) VOC, and (f) NOx emission inventories.
Figure 5. The PM2.5, VOC, and NOx differences between the NODA, Nud, and NudEx emissions in CMAQ as the hourly average of 15–30 January 2019: (a) Nud−NODA of PM2.5; (b) NudEx−NODA of PM2.5; (c) NudEx−Nud of PM2.5; (d) Nud−NODA of VOCs; (e) NudEx−NODA of VOCs; (f) NudEx−Nud of VOCs; (g) Nud−NODA of NOx; (h) NudEx−NODA of NOx; and (i) NudEx−Nud of NOx.
Figure 6. The PM2.5 and O3 concentration differences using the NODA, Nud, and NudEx emission inventories and observations as the average of 15–30 January 2019: (a) NODA−observation of PM2.5; (b) Nud−observation of PM2.5; (c) NudEx−observation of PM2.5; (d) NODA−observation of O3; (e) Nud−observation of O3; and (f) NudEx−observation of O3.
Figure 7. The PM2.5 and O3 comparison of the observations and simulations using the NODA, Nud, and NudEx emission inventories in BTH between 15 and 30 January 2019: (a) concentration of the spatially averaged PM2.5; (b) concentration of the spatially averaged O3; (c) Rs of PM2.5; (d) Rs of O3; (e) RMSEs of PM2.5; and (f) RMSEs of O3.
Figure 8. The Taylor diagram of the hourly averaged spatial (PM2.5_s and O3_s) and site-averaged (PM2.5_a and O3_a) comparison of the observations and simulations using the NODA, Nud, and NudEx emission inventories in BTH between 15 and 30 January 2019.
Figure A1. The PM2.5, VOC, and NOx emission intensity in the NODA, Nud, and NudEx experiments in CMAQ as the hourly average from 15 to 30 January 2019: (a) NODA PM2.5; (b) Nud PM2.5; (c) NudEx PM2.5; (d) NODA VOCs; (e) Nud VOCs; (f) NudEx VOCs; (g) NODA NOx; (h) Nud NOx; and (i) NudEx NOx.
Figure A2. The PM2.5 and O3 concentrations of the observations and using the NODA, Nud, and NudEx emission inventories averaged from 15 to 30 January 2019: (a) observation PM2.5; (b) NODA PM2.5; (c) Nud PM2.5; (d) NudEx PM2.5; (e) observation O3; (f) NODA O3; (g) Nud O3; and (h) NudEx O3.
Full article ">
17 pages, 16249 KiB  
Article
Evaluation and Error Decomposition of IMERG Product Based on Multiple Satellite Sensors
by Yunping Li, Ke Zhang, Andras Bardossy, Xiaoji Shen and Yujia Cheng
Remote Sens. 2023, 15(6), 1710; https://doi.org/10.3390/rs15061710 - 22 Mar 2023
Cited by 4 | Viewed by 1778
Abstract
The Integrated Multisatellite Retrievals for GPM (IMERG) is designed to derive precipitation by merging data from all the passive microwave (PMW) and infrared (IR) sensors. While the input source errors originating from the PMW and IR sensors are important, their structure and characteristics, and how the retrieval algorithms might be improved, remain unclear. Our study utilized a four-component error decomposition (4CED) method and a systematic and random error decomposition method to evaluate the detectability of the IMERG dataset and identify the precipitation errors attributable to the multiple sensors. The 30 min data from 30 precipitation stations in the Tunxi Watershed were used to evaluate the IMERG data from 2018 to 2020. The input sources include five types of PMW sensors and IR instruments. The results show that the sample ratio for IR (Morph, IR + Morph, and IR only) is much higher than that for PMW (AMSR2, SSMIS, GMI, MHS, and ATMS), with a ratio of 72.8% for IR sources and 27.2% for PMW sources. The high false ratio of the IR sensor leads to poor detectability performance in terms of the false alarm ratio (FAR, 0.5854), critical success index (CSI, 0.3014), and Brier score (BS, 0.1126). As for the 4CED, Morph and Morph + IR have a large magnitude of total bias (TB), hit overestimate bias (HOB), hit underestimate bias (HUB), false bias (FB), and miss bias (MB), which is related to the prediction ability and sample size. In addition, systematic error is the prominent component for AMSR2, SSMIS, GMI, and Morph + IR, indicating some inherent error (in the retrieval algorithm) that needs to be removed. These findings can support improving the retrieval algorithm and reducing errors in the IMERG dataset.
(This article belongs to the Topic Advanced Research in Precipitation Measurements)
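A compact sketch of the categorical scores and bias components named above, computed from paired satellite/gauge series with a rain/no-rain threshold. The 4CED split follows the standard hit-overestimate/hit-underestimate/false/miss definitions, which we assume match the paper's usage; the 0.1 mm threshold is illustrative.

```python
import numpy as np

def detection_scores(sat, obs, thresh=0.1):
    """POD, FAR, CSI from paired satellite and gauge series (mm)."""
    hit = np.sum((sat >= thresh) & (obs >= thresh))
    false = np.sum((sat >= thresh) & (obs < thresh))
    miss = np.sum((sat < thresh) & (obs >= thresh))
    pod = hit / (hit + miss)
    far = false / (hit + false)
    csi = hit / (hit + miss + false)
    return pod, far, csi

def four_component_bias(sat, obs, thresh=0.1):
    """TB = HOB + HUB + FB + MB (accumulated mm).

    HOB/HUB: over-/under-estimation within hits (HUB and MB are
    negative contributions).
    """
    hits = (sat >= thresh) & (obs >= thresh)
    diff = sat - obs
    hob = np.sum(np.where(hits & (diff > 0), diff, 0.0))
    hub = np.sum(np.where(hits & (diff < 0), diff, 0.0))
    fb = np.sum(np.where((sat >= thresh) & (obs < thresh), sat, 0.0))
    mb = -np.sum(np.where((sat < thresh) & (obs >= thresh), obs, 0.0))
    return hob + hub + fb + mb, hob, hub, fb, mb
```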
Figures:
Figure 1. Location of the Tunxi Watershed and the distribution of rain gauges.
Figure 2. The relationships between 3CED, 4CED, and 2CED.
Figure 3. Spatial distribution of (a) bias and (b) relative bias of annual precipitation and (c) correlation coefficient between ground stations and IMERG (point-to-pixel) on the 30 min time scale. Points with a black circle passed the significance test at the 0.05 level.
Figure 4. Hit, miss, false, and nonevent ratios for each input source in IMERG. The values at the right indicate the sample ratio.
Figure 5. The POD, FAR, and CSI for each input source used in IMERG.
Figure 6. The Brier score for each input source used in IMERG.
Figure 7. The total bias (TB), hit overestimate bias (HOB), hit underestimate bias (HUB), false bias (FB), and miss bias (MB) components for each input source used in IMERG over the Tunxi Watershed.
Figure 8. The systematic and random error for each input source used in IMERG.
Figure 9. The total bias (TB), hit bias (HB), false bias (FB), and miss bias (MB) components for each input source used in IMERG over the Tunxi Watershed.
Figure 10. The total bias (TB), overestimate bias (OB), and underestimate bias (UB) components for each input source used in IMERG over the Tunxi Watershed.
Full article ">
10 pages, 4671 KiB  
Communication
Improving Pre-Training and Fine-Tuning for Few-Shot SAR Automatic Target Recognition
by Chao Zhang, Hongbin Dong and Baosong Deng
Remote Sens. 2023, 15(6), 1709; https://doi.org/10.3390/rs15061709 - 22 Mar 2023
Cited by 8 | Viewed by 2146
Abstract
SAR-ATR (synthetic aperture radar automatic target recognition) is a hot topic in remote sensing. Since classic SAR ATR methods rely heavily on data, this work proposes a few-shot target recognition approach (FTL) based on the concept of transfer learning to accomplish accurate target recognition of SAR images in a few-shot scenario. The approach also introduces a model distillation method to further improve the model's performance. The method is composed of three parts. First, the data engine uses a style conversion model and optical image data to generate image data similar in style to SAR and realize cross-domain conversion; this can effectively solve the problem of insufficient training data for the SAR image classification model. Second, model training: SAR image data sets are used to pre-train the model. Here, we introduce the deep Brownian distance covariance (Deep BDC) pooling layer to optimize the image feature representation, so that the model can learn the image representation by measuring the discrepancy between the joint characteristic function of the embedded features and the product of the marginals. Third, model fine-tuning: the model structure is frozen, except for the classifier, and is fine-tuned using a small amount of novel data. A knowledge distillation approach is introduced at the same time to train the model repeatedly, sharpen the knowledge, and enhance model performance. According to experimental results on the MSTAR benchmark dataset, the proposed method demonstrably outperforms SOTA methods on the few-shot SAR ATR problem, with a recognition accuracy of about 80% in the 10-way 10-shot case.
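The fine-tuning stage uses knowledge distillation; below is the standard softened-softmax (Hinton-style) distillation loss as one concrete possibility, in PyTorch. The temperature and weighting are illustrative; the paper's exact distillation recipe is not given in the abstract.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.7):
    """Soft-target KL term (scaled by T^2) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```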
Figures:
Figure 1. Framework of the proposed FTL.
Figure 2. Optical and SAR pictures of the ten target types from the MSTAR collection. The corresponding targets are BMP2, BRDM2, BTR60, BTR70, D7, 2S1, T62, T72, ZIL131, and ZSU234.
Figure 3. A partial display of the results of the style transfer from the MSTAR dataset to the EMNIST dataset.
Full article ">
27 pages, 8792 KiB  
Article
Drought Disasters in China from 1991 to 2018: Analysis of Spatiotemporal Trends and Characteristics
by Xiaofeng Wang, Pingping Luo, Yue Zheng, Weili Duan, Shuangtao Wang, Wei Zhu, Yuzhu Zhang and Daniel Nover
Remote Sens. 2023, 15(6), 1708; https://doi.org/10.3390/rs15061708 - 22 Mar 2023
Cited by 28 | Viewed by 3569
Abstract
Droughts have emerged as a global problem in contemporary societies. China suffers from different degrees of drought almost every year, with increasing drought severity. Droughts in China are seasonal and can severely impact crops. This study applied spatiotemporal trend and characteristics analysis, together with the Mann–Kendall test and wavelet analysis, to provincial drought disaster data for China from 1991 to 2018. The drought disaster data included the crop damage area, the drought-affected area of the crops, and the crop failure area, corresponding to crop output reductions of more than 10%, 30%, and 80%, respectively, as well as the population and the number of domestic animals with reduced drinking water caused by drought, both counted in tens of thousands. The results show that the crop damage areas owing to drought disasters, the drought-affected areas of crops, and the crop failure areas in China were mainly distributed in the northern, eastern, northeastern, and southwestern regions. The numbers of people and domestic animals with reduced drinking water owing to drought in China were mainly concentrated in the northern and southwestern regions. These indicators showed a general increasing trend. Tibet, Fujian, Shandong, Jiangsu, Anhui, and Henan provinces and autonomous regions also showed a slightly increasing trend. In particular, the number of domestic animals with reduced drinking water caused by drought in the Inner Mongolia Autonomous Region showed a clear increasing trend, with a significant Z-value of 2.2629. The results of this research can provide scientific evidence for predicting future trends in drought and for best-practice management of drought prevention and resistance.
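The Z-values quoted above come from the Mann–Kendall trend test; here is a minimal implementation of the standard statistic (without tie correction), which shows how a value like Z = 2.2629 would be obtained from an annual series.

```python
import numpy as np

def mann_kendall_z(x):
    """Mann-Kendall trend test statistic Z (no tie correction).

    S = sum over i<j of sign(x_j - x_i); under the null hypothesis,
    Var(S) = n(n-1)(2n+5)/18, and Z applies a continuity correction.
    |Z| > 1.96 indicates a significant trend at the 0.05 level.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# Example: a 28-year series (1991-2018) with an upward drift
rng = np.random.default_rng(0)
series = np.arange(28) * 0.3 + rng.normal(0, 2, 28)
print(mann_kendall_z(series))
```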
Figures:
Figure 1. Administrative map of China.
Figure 2. Research framework of this study.
Figure 3. Drought distribution map of the provinces over the last 20 years.
Figure 4. National average provincial drought disaster conditions.
Figure 5. Heat map of the variation of drought indicators (1—blank control; 2—Beijing; 3—Tianjin; 4—Hebei; 5—Shanxi; 6—Inner Mongolia; 7—Liaoning; 8—Jilin; 9—Heilongjiang; 10—Shanghai; 11—Jiangsu; 12—Zhejiang; 13—Anhui; 14—Fujian; 15—Jiangxi; 16—Shandong; 17—Henan; 18—Hubei; 19—Hunan; 20—Guangdong; 21—Guangxi; 22—Hainan; 23—Chongqing; 24—Sichuan; 25—Guizhou; 26—Yunnan; 27—Tibet; 28—Shaanxi; 29—Gansu; 30—Qinghai; 31—Ningxia; 32—Xinjiang).
Figure 6. Z-value of the national disaster data over the past 12 years: (a) crop damage area; (b) crop area with drought disaster; (c) total crop failure area; (d) number of people with reduced drinking water owing to drought; (e) number of domestic animals with reduced drinking water caused by drought.
Figure 7. Wavelet square difference map and the real part of the contour map of the crop disaster area from 1991 to 2018.
Figure 8. Wavelet square difference map and the real part contour map of the wavelet coefficients of the crop drought-affected area from 1991 to 2018.
Figure 9. Wavelet square difference plot and the real part contour plot of the wavelet coefficients of the national crop failure area from 1991 to 2018.
Figure 10. Wavelet variogram and coefficient real part contour plot of the number of people with reduced drinking water owing to drought from 1991 to 2018.
Figure 11. Wavelet variogram and coefficient real contours of the number of livestock with reduced drinking water owing to drought in China, 1991–2018.
17 pages, 8071 KiB  
Article
Remote Seismoacoustic Monitoring of Tropical Cyclones in the Sea of Japan
by Grigory Dolgikh, Stanislav Dolgikh, Vladimir Chupin, Aleksandr Davydov and Aleksandr Mishakov
Remote Sens. 2023, 15(6), 1707; https://doi.org/10.3390/rs15061707 - 22 Mar 2023
Cited by 2 | Viewed by 1413
Abstract
In the course of processing and analysing data from a two-coordinate laser strainmeter, obtained during the propagation of Typhoon Hagupit over the Sea of Japan, we investigated the possibility of sensing the direction of tropical cyclones/typhoons and of tracking their movements. We tackled this set of problems by further developing the technology for sensing the direction of the generation zones of primary and secondary microseisms and of the "voice of the sea" microseisms, and by clarifying the connection between their formation zones and the movement of tropical cyclones. In our work, we identified the formation zones of primary and secondary microseisms registered by the two-coordinate laser strainmeter. We established that, from the registered microseisms, we could determine the main characteristics of the wind waves generated by a typhoon but could not identify its location. By processing the two-coordinate laser strainmeter data in the range of the "voice of the sea" microseisms, we established the possibility of sensing the direction of the "voice of the sea" microseisms' formation zones, which are associated with the zones of highest energy capacity of typhoons; this allowed us to track the direction of the typhoons' movement. Full article
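For readers unfamiliar with two-coordinate strainmeters, the directional sensitivity arises because a linear strain field projects onto a measuring arm roughly as the squared cosine of the angle between the arm and the propagation direction. The toy sketch below rests on strong simplifying assumptions (a single plane wave, purely longitudinal strain, orthogonal arms) and is not the authors' processing chain; the fourfold directional ambiguity must be resolved with independent information such as the known typhoon track.

```python
import numpy as np

def arrival_azimuth(eps_arm1, eps_arm2):
    """Estimate the angle theta (radians) between arm 1 and the wave
    propagation direction from band-filtered strain amplitudes on two
    orthogonal arms, assuming eps_arm = eps * cos^2(angle to the wave)."""
    ratio = np.abs(eps_arm2) / np.abs(eps_arm1)
    return np.arctan(np.sqrt(ratio))  # one of four symmetric solutions
```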
Figures:
Figure 1. Underground beam guide of the 52.5 m unequal-arm laser strainmeter (a) and central interference unit (b).
Figure 2. Scheme of the arrangement of the laser strainmeters. 1—laser strainmeter with a measuring arm length of 52.5 m; 2—laser strainmeter with a measuring arm length of 17.5 m; 3—laboratory building.
Figure 3. Trajectory of Typhoon Hagupit in the Pacific Ocean zone.
Figure 4. Spectrograms of fragments of the records measured by the 52.5 m laser strainmeter (a) and the 17.5 m laser strainmeter (b).
Figure 5. Spectra of synchronous fragments of the 52.5 m laser strainmeter (a) and 17.5 m laser strainmeter (b) records.
Figure 6. Wind waves and wind speeds as the typhoon moved on 6 August 2020 (UTC).
Figure 7. Wind waves and wind speeds as the typhoon moved on 6–7 August 2020 (UTC).
Figure 8. Diagram of the direction of the microseisms' generation zones. The red circle is the location of the laser strainmeters.
Figure 9. Integrated map of typhoon tracks, where: red asterisk—location of the laser strainmeters; ■—Typhoon Chan-Hom; ◆—Typhoon Matmo; ⬟—Typhoon Hagupit.
Figure 10. Areas of "voice of the sea" microseism generation; 1–4—successive (over time) generation areas. The red asterisk indicates the location of the two-coordinate laser strainmeter.
Figure 11. Areas of "voice of the sea" microseism generation; 2–7—successive (over time) generation areas. The red asterisk indicates the location of the two-coordinate laser strainmeter.
Figure 12. Areas of "voice of the sea" microseism generation; 2–6—successive (over time) generation areas. The red asterisk indicates the location of the two-coordinate laser strainmeter.
20 pages, 40396 KiB  
Article
Convolutional Neural Network-Driven Improvements in Global Cloud Detection for Landsat 8 and Transfer Learning on Sentinel-2 Imagery
by Shulin Pang, Lin Sun, Yanan Tian, Yutiao Ma and Jing Wei
Remote Sens. 2023, 15(6), 1706; https://doi.org/10.3390/rs15061706 - 22 Mar 2023
Cited by 13 | Viewed by 3169
Abstract
A stable and reliable cloud detection algorithm is an important step in optical satellite data preprocessing. Existing threshold methods are mostly based on classifying the spectral features of isolated individual pixels and do not incorporate spatial information. This often leads to misclassification of bright surfaces, such as human-made structures or snow/ice. Multi-temporal methods can alleviate this problem, but cloud-free images of the scene are difficult to obtain. To deal with this issue, we extended four deep-learning Convolutional Neural Network (CNN) models to improve global cloud detection accuracy for Landsat imagery. The inputs are simplified to all discrete spectral channels from visible to shortwave infrared wavelengths through radiometric calibration, and the United States Geological Survey (USGS) global Landsat 8 Biome cloud-cover assessment dataset is randomly divided for independent model training and validation. Experiments demonstrate that the cloud mask of the extended U-Net model (i.e., UNmask) yields the best performance among all the models in estimating cloud amounts (cloud amount difference, CAD = −0.35%) and capturing cloud distributions (overall accuracy = 94.9%) for Landsat 8 imagery compared with the reference validation masks; in particular, it runs fast, taking only about 41 ± 5.5 s per scene. Our model can also detect broken and thin clouds over both dark and bright surfaces (e.g., urban and barren). Lastly, the UNmask model trained on Landsat 8 imagery was successfully applied to cloud detection for Sentinel-2 imagery (overall accuracy = 90.1%) via transfer learning. These results demonstrate the great potential of our model in future applications such as remote sensing satellite data preprocessing. Full article
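Both headline metrics can be computed directly from a predicted and a reference cloud mask. A minimal sketch, in which the cloud amount difference (CAD) is the difference in scene cloud fraction and overall accuracy is per-pixel agreement:

```python
import numpy as np

def cloud_metrics(pred_mask, true_mask):
    """pred_mask, true_mask: boolean arrays of equal shape (True = cloud)."""
    pred = np.asarray(pred_mask, bool)
    true = np.asarray(true_mask, bool)
    cad = 100.0 * (pred.mean() - true.mean())         # cloud amount difference, %
    overall_accuracy = 100.0 * (pred == true).mean()  # per-pixel agreement, %
    return cad, overall_accuracy
```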
Figures:
Figure 1. Comparison of spectral response functions between the Landsat 8 and Sentinel-2A satellites.
Figure 2. Geolocations of global Landsat 8 Biome training (marked in red) and validation (marked in pink) images. The background map is the MODIS land use cover product in 2019.
Figure 3. The VGG-16 architecture.
Figure 4. Frameworks for (a) FCNmask, (b) UNmask, (c) SNmask, and (d) DLmask.
Figure 5. The optimization curves of accuracy for the different models.
Figure 6. The training and validation accuracy on the Landsat 8 and Sentinel-2 datasets for the UNmask model.
Figure 7. Examples of full-scene and zoom-in standard-false-color (RGB: 4-3-2) images and cloud detection results from (a–f) dark to bright surfaces for Landsat 8 imagery using UNmask, FCNmask, SNmask, and DLmask, respectively. The right-side annotations indicate the acquisition time (yyyymmdd).
Figure 8. Same as Figure 7, but for bright surfaces: (a) urban, (b–d) bare and desert, (e–f) ice and snow, where orange and red arrows point to cloudy and clear-sky surfaces.
Figure 9. Comparison of cloud amount between the Landsat 8 Biome validation dataset and the different CNN models: (a) UNmask, (b) FCNmask, (c) SNmask, and (d) DLmask.
Figure 10. Model performance in cloud detection for Landsat 8 imagery over different land-use types in terms of (a) Accuracy, (b) Recall, (c) Precision, and (d) F1-score.
Figure 11. Effects of varying thresholds on the different CNN models using the Landsat 8 Biome dataset in terms of (a) Recall and Precision, (b) F1-score, (c) Accuracy.
Figure 12. Examples of standard-false-colour (RGB: 8-4-3) images and cloud detection results from (a–f) dark to bright surfaces for Sentinel-2 imagery. The acquisition time is below each image (yyyymmdd).
30 pages, 24057 KiB  
Article
RadWet: An Improved and Transferable Mapping of Open Water and Inundated Vegetation Using Sentinel-1
by Gregory Oakes, Andy Hardy and Pete Bunting
Remote Sens. 2023, 15(6), 1705; https://doi.org/10.3390/rs15061705 - 22 Mar 2023
Cited by 6 | Viewed by 3075
Abstract
Mapping the spatial and temporal dynamics of tropical herbaceous wetlands is vital for a wide range of applications. Inundated vegetation can account for over three-quarters of the total inundated area, yet widely used EO mapping approaches are limited to the detection of open water bodies. This paper presents a new wetland mapping approach, RadWet, that automatically defines open water and inundated vegetation training data using a novel mixture of radar, terrain, and optical imagery. Training data samples are then used to classify serial Sentinel-1 radar imagery using an ensemble machine learning classification routine, providing information on the spatial and temporal dynamics of inundation every 12 days at a resolution of 30 m. The approach was evaluated over the period 2017–2022, covering a range of conditions (dry season to wet season) for two sites: (1) the Barotseland Floodplain, Zambia (31,172 km2) and (2) the Upper Rupununi Wetlands in Guyana (11,745 km2). Good agreement was found at both sites using random stratified accuracy assessment data (n = 28,223), with a median overall accuracy of 89% in Barotseland and 80% in the Upper Rupununi, outperforming existing approaches. The results revealed fine-scale hydrological processes driving inundation patterns, as well as temporal patterns in seasonal flood pulse timing and magnitude. Inundated vegetation dominated wet season wetland extent, accounting for a mean of 80% of total inundation. RadWet offers a new way in which tropical wetlands can be routinely monitored and characterised. This can provide significant benefits for a range of application areas, including flood hazard management, wetland inventories, monitoring natural greenhouse gas emissions and disease vector control. Full article
(This article belongs to the Special Issue Advances of Remote Sensing and GIS Technology in Surface Water Bodies)
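The core of RadWet's training-data definition is a split-based thresholding step: heterogeneous sub-tiles, those likely to straddle a water/land boundary, are selected, and a global backscatter threshold is derived from them alone. The sketch below illustrates the idea only; it substitutes Otsu's method (via scikit-image) for the paper's exact threshold estimator, and the coefficient-of-variation selection rule and tile size are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

def split_based_threshold(vv_db, tile=64, top_fraction=0.1):
    """Estimate a global VV threshold from the most heterogeneous sub-tiles.
    vv_db: 2-D array of VV backscatter in dB."""
    h, w = vv_db.shape
    tiles = [vv_db[i:i + tile, j:j + tile]
             for i in range(0, h - tile + 1, tile)
             for j in range(0, w - tile + 1, tile)]
    # coefficient of variation as a per-tile heterogeneity score
    cv = np.array([np.nanstd(t) / abs(np.nanmean(t)) for t in tiles])
    n_keep = max(1, int(top_fraction * len(tiles)))
    keep = np.argsort(cv)[-n_keep:]
    pooled = np.concatenate([tiles[k].ravel() for k in keep])
    return threshold_otsu(pooled[np.isfinite(pooled)])
```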
Figures:
Figure 1. Open TopoMap showing the location of the Barotseland Floodplain study site within western Zambia (highlighted with a red square and point), with the main Zambezi channel highlighted and the Leuna, Lui and Matabele Mulonga tributaries marked.
Figure 2. Open TopoMap showing the North Rupununi study area within Guyana (highlighted with a red square and point), with the Takutu, Ireng and Rupununi Rivers labelled. Areas of wetland are not currently depicted in Open TopoMap data.
Figure 3. Flow diagram summarising the RadWet Sentinel-1 image classification processes for wetland mapping.
Figure 4. Diagram-based description of the split-based thresholding routine showing: (a) input Sentinel-1 observation; (b) split-tile variability based on sub-tile cv and r Euclidean distance to the means of cv and r; (c) visual description of a homogeneous sub-tile showing cv and r, the boundary of 3 standard deviations from the means of cv and r, and a histogram of VV backscatter values; (d) candidate heterogeneous sub-tiles used for threshold calculation; (e) a candidate heterogeneous sub-tile covering a class boundary of open water and dry background land cover, with a histogram of VV backscatter values and the subsequent split-based threshold.
Figure 5. A pseudo-code description of the split-based thresholding approach used to generate global image thresholds.
Figure 6. NASA GRACE and GRACE-FO (2000–2020) monthly crutinized gravity anomaly expressed as water equivalent thickness: (a) over the Barotseland Floodplain, with expected periods of high and low inundation timings defined by [5]; (b) over the North Rupununi Wetlands, with expected periods of high and low inundation timings defined by [44,46].
Figure 7. Example of RadWet wet-season classified output over the Barotseland Floodplain, dated 19 April 2020 (a). Subset panels show the RadWet classified output at representative locations, together with the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image: (b1–b3) a section of the main Zambezi channel and surrounding river features; (c1–c3) dambo features atop the eastern escarpment; (d1–d3) the in-flow of the Leuna floodplain into the main Barotseland Floodplain.
Figure 8. Example of RadWet dry-season classified output over the Barotseland Floodplain, dated 28 August 2018 (a). Subset panels as in Figure 7.
Figure 9. Example of RadWet wet-season classified output over the North Rupununi, dated 2 October 2020 (a). Subset panels show the RadWet classified output at representative locations, together with the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image: (b1–b3) a section of Amoko Lake; (c1–c3) the Ireng River, including agricultural land and seasonally inundated savannah; (d1–d3) the confluence of the Takutu and Ireng Rivers.
Figure 10. Example of RadWet dry-season classified output over the North Rupununi, dated 28 March 2017 (a). Subset panels as in Figure 9.
Figure 11. (a) Landsat 8 composite (false colour: NIR, SWIR1, Red), (b) Sentinel-1 observation (false colour: VV, VH, NDPI) and (c) RadWet output product, dated 8 October 2017, showing a section of the Lui River within the Barotseland Floodplain where inundated vegetation can be observed in the Landsat 8 composite, but the typical red/orange double-bounce signature is not present in the Sentinel-1 observation.
Figure 12. (a) PlanetLabs composite (false colour: NIR, SWIR1, Red) [63], (b) Sentinel-1 observation (false colour: VV, VH, NDPI) and (c) RadWet output product, dated 29 February 2020, showing a section of the Ireng River within the Rupununi wetlands region in Guyana where the Sentinel-1 observation shows very low backscatter in the northeast, much lower than that of the Ireng River, but is shown not to be open water in the PlanetLabs composite.
Figure 13. Classification sensitivity analysis. (a) Classification accuracy for each class and the weighted average F1 score as classification replicate mode agreement increases; (b) classification accuracy for each class and the weighted average F1 score as the number of replicates increases.
Figure 14. (a) RadWet-detected total wetted area, inundated vegetation, and open water extent between January 2017 and December 2021 (142 observations) for the Barotseland Floodplain, Zambia. (b) Rate-of-change derivative of the total wetted area, with wet-season timings indicated.
Figure 15. Same as Figure 14, for the Rupununi Floodplain, Guyana.
Figure 16. (a) Median % wet-season inundation occurrence for the Barotseland Floodplain over the period 2017–2021 and subsequent % deviation in inundation occurrence annually: (b) 2017, (c) 2018, (d) 2019, (e) 2020, (f) 2021.
Figure 17. Same as Figure 16, for the Upper Rupununi wetlands.
Full article ">
28 pages, 31701 KiB  
Article
Present-Day Surface Deformation in North-East Italy Using InSAR and GNSS Data
by Giulia Areggi, Giuseppe Pezzo, John Peter Merryman Boncori, Letizia Anderlini, Giuliana Rossi, Enrico Serpelloni, David Zuliani and Lorenzo Bonini
Remote Sens. 2023, 15(6), 1704; https://doi.org/10.3390/rs15061704 - 22 Mar 2023
Cited by 5 | Viewed by 3078
Abstract
Geodetic data can detect and estimate deformation signals and rates due to natural and anthropogenic phenomena. In the present study, we focus on northeastern Italy, an area characterized by convergence rates of ~1.5–3 mm/yr due to the Adria–Eurasia plate collision and by active subsidence along the coasts. To define the rates and trends of the tectonic and subsidence signals, we use a Multi-Temporal InSAR (MT-InSAR) approach called the Stanford Method for Persistent Scatterers (StaMPS), which is based on the detection of coherent and temporally stable pixels in a stack of single-master differential interferograms. We use Sentinel-1 SAR images along ascending and descending orbits spanning the 2015–2019 interval as inputs for Persistent Scatterers InSAR (PSI) processing. We apply spatial-temporal filters and post-processing steps to reduce unrealistic results. Finally, we calibrate the InSAR measurements using GNSS velocities derived from permanent stations available in the study area. Our results consist of mean ground velocity maps showing the displacement rates along the radar Line-Of-Sight for each satellite track, from which we estimate the east–west and vertical velocity components. Our results provide a detailed and original view of active vertical and horizontal displacement rates over the whole region, allowing the detection of spatial velocity gradients, which are particularly relevant to a better understanding of the seismogenic potential of the area. As regards the subsidence along the coasts, our measurements confirm the correlation between subsidence and the geological setting of the study area, with rates of ~2–4 mm/yr between the Venezia and Marano lagoons, and lower than 1 mm/yr near Grado. Full article
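Once ascending and descending LOS velocities are calibrated to GNSS, the east–west and vertical components follow from a 2 × 2 linear system built from the LOS unit vectors (the north component is poorly constrained by near-polar orbits and is neglected here). A minimal sketch; the incidence and heading angles and the sign convention are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def decompose_los(v_asc, v_dsc, inc_asc=0.68, inc_dsc=0.61,
                  head_asc=-0.17, head_dsc=3.31):
    """Solve for (v_east, v_up) from ascending/descending LOS velocities.
    Angles in radians; LOS positive towards the satellite."""
    def los_unit(inc, head):
        # east and up components of the LOS unit vector
        # (one common sign convention; processors differ)
        return np.array([-np.sin(inc) * np.cos(head), np.cos(inc)])
    A = np.vstack([los_unit(inc_asc, head_asc),
                   los_unit(inc_dsc, head_dsc)])
    v_east, v_up = np.linalg.solve(A, np.array([v_asc, v_dsc]))
    return v_east, v_up
```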
Figures:
Graphical abstract.
Figure 1. Seismotectonic map of the study area. (A) Seismicity and tectonics of the region. The blue-purple circles represent the instrumental seismicity for the 2000–2017 time span, provided by OGS bulletins (URL: http://www.crs.inogs.it/bollettino/RSFVG (accessed on 31 August 2020)), with focal mechanisms of the most important historical events [15,48,49]. (B) Tectonics and paleogeography of the area (AR: Arba-Ragogna thrust; BL: Belluno thrust; BV: Bassano-Valdobbiadene thrust; CA: Cansiglio thrust; FS: Fella-Sava line; ID: Idrija fault; MD: Medea thrust; MT: Montello thrust; PAF: Periadriatic fault; PM: Polcenigo-Maniago thrust; PR: Predjama fault; PT: Pozzuolo thrust; RA: Raša fault; RV: Ravne fault; ST: Susans-Tricesimo thrust; TBC: Thiene-Bassano-Cornuda thrusts; VS: Valsugana thrust). (C) Simplified geological map with stratigraphic column (modified from [67]).
Figure 2. SAR and GNSS data. The map shows the area covered by the Sentinel-1 ascending (blue area) and descending (red area) orbit tracks. The arrows show the GNSS horizontal velocities in an Adria-fixed reference frame, considering the rotation pole from [15], whereas the colored points indicate the vertical velocity according to the color scale.
Figure 3. Workflow for PSI processing. The final products, highlighted in yellow, are the velocity maps showing the surface deformation along the Line-Of-Sight (LOS) and in the east–west and vertical directions.
Figure 4. LOS velocity maps. The LOS mean ground displacement maps before (A,B) and after calibration (C,D) for the ascending (A,C) and descending (B,D) tracks. The black points indicate the location of the GNSS stations used for the calibration. Positive and negative values indicate movements towards and away from the satellite, respectively.
Figure 5. East–west (A) and vertical (B) velocity maps. The black arrows represent the GNSS horizontal (east–west) and vertical velocity components. According to the scale, positive rates indicate eastward and upward ground motion.
Figure 6. Vertical velocity profile across the Alpine system (Dolomites). Top: the map shows the vertical velocity values for each PS, while the colored squares indicate the vertical velocity of the GNSS stations, according to the InSAR multicolor scale on the right (blue = uplift; red = subsidence). The blue dotted line represents the trace of the geological section reported below; the profile, with a buffer of 20 km, is 140 km long, and the white star indicates the starting point. Bottom: the plot shows the vertical SAR velocities (grey dots), whereas the blue line indicates the median value. The circles with relative uncertainties represent the vertical velocities of the GNSS stations; the grey ones were not used during the calibration. The geological section is modified from Fantoni and Franciosi [69].
Figure 7. Vertical velocity profile across the Alpine (Carnic and Julian Alps) and Dinaric system. Map and plot conventions as in Figure 6; the geological section is modified from Merlini et al. [114].
Figure 8. East–west velocity profile across the Alpine (Carnic and Julian Alps) and Dinaric system. Map and plot conventions as in Figure 6, but for the east–west component (red = westward; blue = eastward displacement); the geological section is modified from Merlini et al. [114].
Figure 9. Vertical velocity profile across the Dinaric system. Conventions as in Figure 6; the geological section is modified from Moulin et al. [56].
Figure 10. East–west velocity profile across the Dinaric system. Conventions as in Figure 8; the geological section is modified from Moulin et al. [56].
Figure 11. Ascending and descending LOS and GNSS time series (average of PSs within a 600 m radius) for the Grado (GRDO) and Portogruaro (PRTG) GNSS stations (black = GNSS; red = SAR).
16 pages, 5184 KiB  
Communication
Distinguishing Buildings from Vegetation in an Urban-Chaparral Mosaic Landscape with LiDAR-Informed Discriminant Analysis
by Thomas J. Yamashita, David B. Wester, Michael E. Tewes, John H. Young, Jr. and Jason V. Lombardi
Remote Sens. 2023, 15(6), 1703; https://doi.org/10.3390/rs15061703 - 22 Mar 2023
Cited by 3 | Viewed by 1662
Abstract
Identification of buildings from remotely sensed imagery in urban and suburban areas is a challenging task. Light Detection and Ranging (LiDAR) provides an opportunity to accurately identify buildings through the identification of planar surfaces. Dense vegetation can limit the number of light particles that reach the ground, potentially creating false planar surfaces within a vegetation stand. We present an application of discriminant analysis (a commonly used statistical tool in decision theory) to classify polygons (derived from LiDAR) as either buildings or non-building planar surfaces. We conducted our analysis in southern Texas, where thornscrub vegetation often prevents a LiDAR beam from fully penetrating the vegetation canopy in and around residential areas. Using discriminant analysis, we grouped potential building polygons into building and non-building classes using the point densities of ground, unclassified, and building points. Our technique was 95% accurate at distinguishing buildings from non-buildings. Therefore, we recommend its use in any locale where distinguishing buildings from the surrounding vegetation may be affected by the proximity of dense vegetation to buildings. Full article
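The classification step amounts to fitting discriminant functions on three per-polygon features, the proportions of ground, unclassified, and building returns, and assigning each new polygon to the nearer class. A minimal sketch with scikit-learn; the feature values and labels below are made up for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: one row per polygon -> [prop_ground, prop_unclassified, prop_building]
X_train = np.array([[0.05, 0.10, 0.85],
                    [0.40, 0.45, 0.15],
                    [0.02, 0.08, 0.90],
                    [0.35, 0.50, 0.15]])
y_train = np.array(["building", "non-building", "building", "non-building"])

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

X_new = np.array([[0.10, 0.15, 0.75]])
print(lda.predict(X_new), lda.predict_proba(X_new))  # class + posterior probs
```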
Figures:
Figure 1. Study area showing State Highway 100, Farm-to-Market (FM) 1847, and FM 106 in Cameron County, Texas (black lines) and the LiDAR tiles used to identify buildings (brown polygon).
Figure 2. Workflow for using a software-based building classifier and discriminant analysis to distinguish buildings from vegetation.
Figure 3. Posterior probability of a polygon being correctly (top) or incorrectly (bottom) classified as a building or non-building in the testing dataset. The Mahalanobis D² represents the distance in multivariate space from a set of ground, unclassified, and building proportions with unknown group assignment to the mean of known buildings (Yes) or known non-buildings (No); the difference is calculated as the distance to Yes minus the distance to No. The vertical axes are the posterior probabilities of being classified as non-building (left) and building (right). Blue circles are buildings and red triangles are non-buildings.
Figure 4. Building (blue) and non-building (red) polygons as categorized by discriminant analysis after polygon creation by LP360 point-cloud tasks at each of the test sites: (a) Laguna Atascosa National Wildlife Refuge (LANWR) headquarters (original testing site), (b) LANWR thornscrub, (c) Los Fresnos, (d) Laguna Vista, (e) thornscrub-prairie habitat, (f) rural property near Los Fresnos.
26 pages, 9597 KiB  
Article
Continuously Updated Digital Elevation Models (CUDEMs) to Support Coastal Inundation Modeling
by Christopher J. Amante, Matthew Love, Kelly Carignan, Michael G. Sutherland, Michael MacFerrin and Elliot Lim
Remote Sens. 2023, 15(6), 1702; https://doi.org/10.3390/rs15061702 - 22 Mar 2023
Cited by 20 | Viewed by 5163
Abstract
The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) generates digital elevation models (DEMs) that range from the local to global scale. Collectively, these DEMs are essential to determining the timing and extent of coastal inundation and improving community preparedness, event forecasting, and warning systems. We initiated a comprehensive framework at NCEI, the Continuously Updated DEM (CUDEM) Program, with seamless bare-earth, topographic-bathymetric and bathymetric DEMs for the entire United States (U.S.) Atlantic and Gulf of Mexico Coasts, Hawaii, American Territories, and portions of the U.S. Pacific Coast. The CUDEMs are currently the highest-resolution, seamless depiction of the entire U.S. Atlantic and Gulf Coasts in the public domain; coastal topographic-bathymetric DEMs have a spatial resolution of 1/9th arc-second (~3 m) and offshore bathymetric DEMs coarsen to 1/3rd arc-second (~10 m). We independently validate the land portions of the CUDEMs with NASA’s Advanced Topographic Laser Altimeter System (ATLAS) instrument on board the Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) observatory and calculate a corresponding vertical mean bias error of 0.12 m ± 0.75 m at one standard deviation, with an overall RMSE of 0.76 m. We generate the CUDEMs through a standardized process using free and open-source software (FOSS) and provide open-access to our code repository. The CUDEM framework consists of systematic tiled geographic extents, spatial resolutions, and horizontal and vertical datums to facilitate rapid updates of targeted areas with new data collections, especially post-storm and tsunami events. The CUDEM framework also enables the rapid incorporation of high-resolution data collections ingested into local-scale DEMs into NOAA NCEI’s suite of regional and global DEMs. Future research efforts will focus on the generation of additional data products, such as spatially explicit vertical error estimations and morphologic change calculations, to enhance the utility and scientific benefits of the CUDEM Program. Full article
(This article belongs to the Special Issue Remote Sensing in Marine-Coastal Environments)
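The validation statistics quoted above (mean bias error, its standard deviation, and RMSE) are straightforward to reproduce once DEM grid cells have been paired with ICESat-2-derived elevations; a minimal sketch:

```python
import numpy as np

def vertical_error_stats(dem_z, icesat2_z):
    """Mean bias error (DEM minus reference), its standard deviation, and RMSE."""
    diff = np.asarray(dem_z, float) - np.asarray(icesat2_z, float)
    diff = diff[np.isfinite(diff)]
    return diff.mean(), diff.std(ddof=1), np.sqrt(np.mean(diff ** 2))
```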
Figures:
Graphical abstract.
Figure 1. Differences in adjacent project-based DEM specifications and available source data at the time of generation can result in vertical offsets at the boundaries of DEMs. A topographic profile across the DEM boundary (red dashed line) indicates a nearly 1 m offset (black arrow) in important coastal areas between the NOAA NCEI 2014 Central FL DEM and the 2010 Palm Beach DEM.
Figure 2. NOAA NCEI-tiled CUDEM footprints as of October 2022. The CUDEMs cover the entire U.S. Atlantic and Gulf of Mexico Coasts, and the lettered inserts show the portions of (A) Alaska and (B) Washington (WA), Oregon (OR) and California (CA) Coasts, and the entirety of (C) Hawaii, (D) Guam and CNMI, (E) Puerto Rico and USVI, and (F) American Samoa.
Figure 3. The DEM development workflow at NOAA NCEI is an iterative process of gathering elevation and depth data from multiple sources, converting the data to common specifications, and editing the data prior to building the DEM. There are multiple, iterative quality-control (QC) measures in the workflow, and we document the DEM development process prior to the final DEM distribution.
Figure 4. An example of the CUDEM spatial metadata shapefile product: (A) CUDEM tile "ncei19_n27X50_w082X75_2020v1.tif" elevation values; (B) the highlighted data source "NOAA NOS hydrographic surveys" shows the locations of the individual measurements and indicates sparser measurements offshore; (C) the attribute table of the highlighted data source includes valuable information such as the data collection agency and the year of collection.
Figure 5. Independent validation results from ICESat-2 ground photons indicate a vertical mean bias error of 0.12 m ± 0.75 m at one standard deviation, with an overall RMSE of 0.76 m, for the land portions of the 1/9th arc-second CUDEMs. (A) Histogram of DEM grid-cell errors calculated as the differences from ICESat-2-derived elevations, including mean biases and standard deviations of errors; note that errors and biases result both from errors inherent in either the DEMs or ICESat-2 and from potential errors introduced by the vertical datum conversions. (B) DEM vs. ICESat-2 elevation scatterplot, with a dotted-gray "perfect fit" 1:1 line. (C) Histogram of the number of photons within the interdecile range used to compute the ICESat-2-derived mean elevation of each DEM grid cell. (D) Histogram of relative canopy cover in the validated grid cells, with the percentage computed as (# canopy photons)/(# canopy photons + # land photons) × 100.
Figure 6. Download requests for CUDEMs from January 2018 to October 2022 by (A) total counts and (B) email domains. Download statistics and figures were generated from the NOAA Digital Coast Data Access Viewer—Data Report [98].
Figure 7. An example of pre- and post-Hurricane Michael DEMs of Cape San Blas, Florida. Quantifying morphologic change in CUDEMs from pre- and post-hurricane DEMs, such as the Cape San Blas breach (arrows), is an avenue of future research.
21 pages, 14587 KiB  
Article
A Low-Cost Deep Learning System to Characterize Asphalt Surface Deterioration
by Diogo Inácio, Henrique Oliveira, Pedro Oliveira and Paulo Correia
Remote Sens. 2023, 15(6), 1701; https://doi.org/10.3390/rs15061701 - 22 Mar 2023
Cited by 1 | Viewed by 2122
Abstract
Every day millions of people travel on highways for work- or leisure-related purposes. Ensuring road safety is thus of paramount importance, and maintaining good-quality road pavements is essential, requiring an effective maintenance policy. The automation of some road pavement maintenance tasks can reduce the time and effort required from experts. This paper proposes a simple system to help speed up road pavement surface inspection and the analysis that informs maintenance decisions. A low-cost video camera mounted on a vehicle was used to capture pavement imagery, which was fed to an automatic crack detection and classification system based on deep neural networks. The system provides two types of output: (i) a cracking percentage per road segment, alerting experts to areas that require attention; (ii) a segmentation map highlighting which areas of the road pavement surface are affected by cracking. With these data, it becomes possible to select the maintenance or rehabilitation processes the road pavement requires. The system achieved promising results in the analysis of highway pavements, and being automated and having a low processing time, it is expected to be an effective aid for experts dealing with road pavement maintenance. Full article
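Output (i), the cracking percentage per road segment, is simply the share of crack-labelled pixels within each fixed-length section of the segmentation map. A minimal sketch, assuming the map has already been split into per-segment arrays; the alert threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def cracking_percentages(segment_masks, alert_threshold=5.0):
    """segment_masks: list of 2-D boolean arrays (True = crack pixel),
    one per road segment. Returns (percentage, needs_attention) pairs."""
    results = []
    for mask in segment_masks:
        pct = 100.0 * np.asarray(mask, bool).mean()
        results.append((pct, pct >= alert_threshold))
    return results
```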
Figures:
Figure 1. Illustration of the seed-growing strategy described in [10] for different seed-growing radii.
Figure 2. Illustration of the ConvNet architecture presented in [12].
Figure 3. Illustration of the CNN architecture proposed in [13].
Figure 4. Generic architecture of the proposed system.
Figure 5. Image acquisition system consisting of a conventional 2D camera mounted on the back of a vehicle.
Figure 6. Sample image acquired by the proposed system. The area of interest for analysis is delimited in red.
Figure 7. Images captured with a slant angle of 30° and a camera focal length of: (a) 16 mm; (b) 28 mm.
Figure 8. Selection of the region of interest (left) and the output of the perspective transformation (right).
Figure 9. Sample of a panoramic image.
Figure 10. Road section generated through the image concatenation method.
Figure 11. Road lane section processing during the segmentation step.
Figure 12. Illustration of the U-Net architecture proposed in [24].
Figure 13. Sample of a road lane section.
Figure 14. Road lane section segmented by the segmentation neural network.
Figure 15. The result of merging the road section with its segmentation.
Figure 16. Generic architecture of a multi-class classification neural network.
Figure 17. (a) Alligator crack; (b) longitudinal crack; (c) transverse crack; and (d) non-crack samples.
Figure 18. Images relating to the section [76,300; 76,400]: (a) longitudinal crack in the pavement and within the upper white lane marking (the system classified the image as non-crack); (b) longitudinal crack in the pavement (the system classified the image as non-crack).
Figure 19. Images relating to the section [76,700; 76,800]: (a) longitudinal crack within the lower white lane marking (the system classified the image as non-crack); (b) longitudinal crack in the pavement and within the lower white lane marking (the system classified the image as non-crack).
Figure 20. Correct classifications performed by the classification model: (a) image from section [76,200; 76,300] with a longitudinal crack in the pavement surface and within the white lane markings (the system classified the image as longitudinal crack); (b) image from section [76,900; 77,000] with a longitudinal crack in the pavement surface (the system classified the image as longitudinal crack).
17 pages, 6041 KiB  
Article
Overcoming Domain Shift in Neural Networks for Accurate Plant Counting in Aerial Images
by Javier Rodriguez-Vazquez, Miguel Fernandez-Cortizas, David Perez-Saura, Martin Molina and Pascual Campoy
Remote Sens. 2023, 15(6), 1700; https://doi.org/10.3390/rs15061700 - 22 Mar 2023
Cited by 4 | Viewed by 2564
Abstract
This paper presents a novel semi-supervised approach for the accurate counting and localization of tropical plants in aerial images that can work in new visual domains in which the available data are not labeled. Our approach uses deep learning and domain adaptation, designed to handle domain shifts between the training and test data, a common challenge in agricultural applications. The method uses a source dataset with annotated plants and a target dataset without annotations, and adapts a model trained on the source dataset to the target dataset using unsupervised domain alignment and pseudolabeling. The experimental results show the effectiveness of this approach for plant counting in aerial images of pineapples under significant domain shift, achieving a reduction of up to 97% in the counting error (1.42 in absolute count) compared to the supervised baseline (48.6 in absolute count). Full article
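Unsupervised domain alignment of this adversarial kind is typically built around a gradient reversal layer: the forward pass is the identity, while the backward pass flips and scales the gradient so the feature extractor learns to fool the domain discriminator. A minimal PyTorch sketch of the standard construction; the scaling factor lam is a training hyperparameter.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam
    in the backward pass, as used for adversarial domain alignment."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradientReversal.apply(x, lam)

# Usage sketch: features = encoder(images)
#               domain_logits = domain_discriminator(grad_reverse(features, 0.5))
```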
Figures:
Figure 1. Domain gap between different crop domains in the pineapple dataset. The images in each column belong to a different crop domain, characterized by different lighting conditions, plant growth stages, soil types, and other factors. The significant variations between domains pose a challenge for traditional fully supervised methods, which struggle to generalize across domains.
Figure 2. Selected Up-Net architecture [49] for the generator network. The network has four main parts: (1) the encoding path generates rich features to represent the input image, decreasing the resolution; (2) the bottleneck layer; (3) the decoding path increases the resolution of the generated features and produces the final output; (4) the skip connections provide high spatial resolution to the decoding path. Each convolutional block is composed of three convolutions (with kernel size 3, each followed by a Batch Normalization layer [55] and ReLU activation). Green arrows depict the upsampling layers, composed of a bicubic upsampling of the feature maps that doubles the resolution, followed by a convolutional layer that halves the number of channels, Batch Normalization and ReLU activation.
Figure 3. The baseline method uses two neural networks, G and D_image, trained together in an adversarial manner. G attempts to map input images to center maps, while D_image tries to distinguish between ground truth and generated outputs. The gradient reversal layer (GRL) allows both networks to be trained together, even though they have opposing objectives, by reversing the sign of the gradient and scaling it when it flows from D_image to G. This allows the networks to be trained in a single pass.
Figure 4. Multilevel discriminator architecture. This design aims to adapt features at various levels (f0–f3). The architecture consists of five main blocks, with the first four blocks taking as input the features at the current skip-connection level and the output of the previous block. The last block is used to determine whether the features come from a source or a target sample. A gradient reversal layer is used at each input. Each discriminator block includes a residual skip connection.
Figure 5. Sample of the General Domain dataset depicting 9 diverse domains with unequal representation.
23 pages, 19153 KiB  
Article
A Modified NLCS Algorithm for High-Speed Bistatic Forward-Looking SAR Focusing with Spaceborne Illuminator
by Yuzhou Liu, Yachao Li, Xuan Song and Xuanqi Wang
Remote Sens. 2023, 15(6), 1699; https://doi.org/10.3390/rs15061699 - 21 Mar 2023
Cited by 1 | Viewed by 1704
Abstract
The coupling and spatial variation of range and azimuth parameters are the biggest challenges for bistatic forward-looking SAR (BFSAR) imaging. In contrast to monostatic SAR and translationally invariant bistatic SAR (TI-BSAR), the range cell migration (RCM) and Doppler parameters of high-speed bistatic forward-looking SAR (HS-BFSAR) exhibit two-dimensional spatial variation, which makes it difficult to obtain SAR images with satisfactory global focusing. Firstly, based on the configuration of a spaceborne illuminator and a high-speed forward-looking receiving platform, this paper derives an accurate range-Doppler domain expression of the echo signal. Secondly, using this analytical expression, a range nonlinear chirp scaling (NLCS) is proposed to equalize the RCM and the equivalent range frequency modulation (FM) rate so that they can be processed uniformly in the two-dimensional frequency domain. Next, in the azimuth processing, the proposed method decomposes the Doppler contributions of the transmitter and the receiver. An azimuth NLCS is then used to eliminate the spatial variation of the azimuth FM rate. Finally, a range-dependent azimuth filter is constructed to achieve azimuth compression. Simulation results validate the efficiency and effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Breakthroughs in Passive Radar Technologies)
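The last step, azimuth compression with a range-dependent filter, reduces to correlating each range line with a reference chirp whose FM rate depends on that line's range. The toy NumPy sketch below uses FFT-based matched filtering with a caller-supplied FM-rate profile; the linear-FM replica is a simplifying assumption for demonstration, not the paper's derived phase law.

```python
import numpy as np

def azimuth_compress(data, t, ka_of_range):
    """data: 2-D array (range lines x azimuth samples) after range processing.
    t: slow-time axis (s), same length as each azimuth line.
    ka_of_range: azimuth FM rate (Hz/s) for each range line."""
    out = np.empty_like(data, dtype=complex)
    for i, ka in enumerate(ka_of_range):
        ref = np.exp(1j * np.pi * ka * t ** 2)  # azimuth chirp replica
        # matched filtering via multiplication in the frequency domain
        out[i] = np.fft.ifft(np.fft.fft(data[i]) * np.conj(np.fft.fft(ref)))
    return out
```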
Show Figures

Figure 1: Spaceborne HS-BFSAR imaging geometry.
Figure 2: The error of the given ranging model. (a) Ranging model error; (b) phase error.
Figure 3: The simulated echo data without and with range preprocessing. (a) No range preprocessing when recording the echo; (b) using range preprocessing when recording the echo.
Figure 4: Diagrams for the NLCS processing. (a) The quadratic phase of targets P−, P0, P+ in the time domain; (b) the quadratic phase of targets P−, P0, P+ after introducing the NLCS factor in the time domain; (c) the NLCS factor; (d) uniform phase filtering in the frequency domain.
Figure 5: The distribution of R_t0 and R_bf0. (a) Contour formed by R_t0 of each point in the scene; (b) contour formed by R_bf0 of each point in the scene.
Figure 6: Imaging geometry at the synthetic aperture center time.
Figure 7: The results of the range processing. (a) Range processing of the echo of target P− using the traditional NLCS algorithm; (b) range processing of the echo of target P− using the proposed method; (c) range processing of the echo of target P+ using the traditional NLCS algorithm; (d) range processing of the echo of target P+ using the proposed method.
Figure 8: The distribution of K_a and K_t. (a) Value of K_a; (b) value of K_t.
Figure 9: The distribution of K_at and K_ar. (a) Contour formed by K_at of each point in the scene; (b) contour formed by K_ar of each point in the scene.
Figure 10: Divisional imaging geometry at the synthetic aperture center time. (a) The geometry of the transmitter; (b) the geometry of the receiver.
Figure 11: The error of K_a_fit in the scene.
Figure 12: Block diagram of the proposed algorithm.
Figure 13: The total FLOPs of the proposed NLCS, the traditional NLCS [40], the frequency-domain NLCS [22], the extended NLCS [43] and the modified azimuth NLCS [53] algorithms.
Figure 14: Imaging results of the 49 simulated points using the proposed method.
Figure 15: Contour plot of target P1 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 16: Range profiles of target P1 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 17: Azimuth profiles of target P1 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 18: Contour plot of target P5 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 19: Range profiles of target P5 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 20: Azimuth profiles of target P5 processed by different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 21: Imaging results of the scene simulation using the proposed method.
Figure 22: Enlarged view of the area selected in Figure 21, processed by the different algorithms. (a) The conventional method; (b) the reference method; (c) the proposed method.
Figure 23: Profiles of the isolated edge point obtained by the different algorithms. (a) Range profiles; (b) azimuth profiles.
15 pages, 3040 KiB  
Technical Note
Going Back to Grassland? Assessing the Impact of Groundwater Decline on Irrigated Agriculture Using Remote Sensing Data
by Haoying Wang
Remote Sens. 2023, 15(6), 1698; https://doi.org/10.3390/rs15061698 - 21 Mar 2023
Cited by 5 | Viewed by 1865
Abstract
Climate change has increased agricultural drought risk in arid/semi-arid regions globally. One of the common adaptation strategies is shifting to more drought-tolerant crops or switching back to grassland permanently. In many drought-prone areas, groundwater dynamics play a critical role in agricultural production and drought management. This study aims to help understand how groundwater level decline affects the propensity of cropland switching back to grassland. Taking Union County of New Mexico (US) as a case study, field-scale groundwater level projections and high-resolution remote sensing data on crop choices are integrated to explore the impact of groundwater level decline in a regression analysis framework. The results show that cropland has been slowly but permanently switching back to grassland as the groundwater level in the Ogallala Aquifer continues to decline in the area. Specifically, for a one-standard-deviation decline in groundwater level (36.95 feet or 11.26 m), the average likelihood of switching back to grassland increases by 1.85% (the 95% confidence interval is [0.07%, 3.58%]). The findings account for the fact that farmers usually explore other options (such as more drought-tolerant crops, land idling, and rotation) before switching back to grassland permanently. The paper concludes by exploring relevant policy implications for land (soil) and water conservation in the long run. Full article
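The headline estimate, the change in switching probability for a one-standard-deviation groundwater decline, is the kind of average marginal effect a discrete-choice regression produces. A minimal logistic-regression sketch on synthetic data (the coefficients of the data-generating process are invented; the paper's actual panel specification and controls are not reproduced):

```python
import numpy as np
import statsmodels.api as sm

# Synthetic sketch (not the paper's data): regress a binary "switched back
# to grassland" outcome on groundwater-level decline and report the average
# marginal effect of a one-standard-deviation (36.95 ft) decline.
rng = np.random.default_rng(0)
n = 472                                          # irrigated fields in the study area
decline_ft = rng.gamma(shape=4.0, scale=10.0, size=n)      # decline in feet
latent = -3.0 + 0.03 * decline_ft + rng.logistic(size=n)   # toy data-generating process
switched = (latent > 0).astype(int)

fit = sm.Logit(switched, sm.add_constant(decline_ft)).fit(disp=False)
ame_per_ft = fit.get_margeff().margeff[0]        # marginal effect per foot
print(f"effect of a 36.95 ft (1-sd) decline: {36.95 * ame_per_ft:+.2%}")
```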
Show Figures

Figure 1: The High Plains (Ogallala) Aquifer and irrigated crop fields in Union County, New Mexico. Data source: US Geological Survey, US Census, and Google Maps. Note: (1) a total of 472 circular irrigated fields are illustrated on the map; (2) the aquifer map was published in 2010 by the US Geological Survey (see https://pubs.usgs.gov/ds/543/, accessed on 19 March 2023).
Figure 2: A standard 400 m radius irrigated field (field #2) in its transition into grassland (left panel), compared with the remotely sensed Crop Data Layer (right panel, 2019 data) of the same location. Data source: NASS, USDA; Google Maps. Note: the remote sensing data in the right panel indicate that corn (in dark green) was grown in fields #1, #3, and #4 in 2019. Later, in 2021 (corresponding to the time of the left-panel Google Maps imagery), field #3 was idle and fields #1 and #4 still had corn.
Figure A1: The relative geographic location of Union County, New Mexico, in the broader region. Note: the local landscape features mainly natural grassland and (mostly irrigated) crop agriculture. Data source: US Census.
Figure A2: Identified circular irrigated crop field boundaries overlapped with Crop Data Layer data in the central-eastern and southeastern parts of Union County, New Mexico. Note: light green in the background indicates grassland; other colors indicate different crop covers. Data source: NASS, USDA; Google Maps.
Figure A3: The USGS-monitored wells that recorded groundwater level data during the study period (2007–2019). Note: a total of 111 well locations are illustrated on the map. There is only one weather station in the county (Clayton Municipal Airpark (KCAO); Lat: 36.45°N; Lon: 103.15°W), located near Clayton. Data sources: US Geological Survey; US Census.
16 pages, 1050 KiB  
Article
Above- and Belowground Biomass Carbon Stock and Net Primary Productivity Maps for Tidal Herbaceous Marshes of the United States
by Victoria L. Woltz, Camille LaFosse Stagg, Kristin B. Byrd, Lisamarie Windham-Myers, Andre S. Rovai and Zhiliang Zhu
Remote Sens. 2023, 15(6), 1697; https://doi.org/10.3390/rs15061697 - 21 Mar 2023
Cited by 12 | Viewed by 4818
Abstract
Accurate assessments of greenhouse gas emissions and carbon sequestration in natural ecosystems are necessary to develop climate mitigation strategies. Regional and national-level assessments of carbon sequestration require high-resolution data to be available for large areas, increasing the need for remote sensing products that quantify carbon stocks and fluxes. The Intergovernmental Panel on Climate Change (IPCC) provides guidelines on how to quantify carbon flux using land cover land change and biomass carbon stock information. Net primary productivity (NPP), i.e., carbon uptake and storage in vegetation, can also be used to model net carbon sequestration and net carbon export from an ecosystem (net ecosystem carbon balance). While biomass and NPP map products for terrestrial ecosystems are available, there are currently no conterminous United States (CONUS) biomass carbon stock or NPP maps for tidal herbaceous marshes. In this study, we used peak soil adjusted vegetation index (SAVI) values, derived from Landsat 8 composites, and five other vegetation indices, plus a categorical variable for the CONUS region (Pacific Northwest, California, Northeast, Mid-Atlantic, South Atlantic-Gulf, or Everglades), to model spatially explicit aboveground peak biomass stocks in tidal marshes (i.e., tidal palustrine and estuarine herbaceous marshes) for the first time. Tidal marsh carbon conversion factors, root-to-shoot ratios, and vegetation turnover rates were compiled from the literature and used to convert peak aboveground biomass to peak total (above- and belowground) biomass and NPP. An extensive literature search for aboveground turnover rates produced sparse and variable values; therefore, we used an informed assumption of a turnover rate of one crop per year for all CONUS tidal marshes. Due to the lack of turnover rate data, the NPP map is identical to the peak biomass carbon stock map. In reality, it is probable that turnover rate varies by region, given seasonal length differences; however, the NPP map provides the best available information on spatially explicit CONUS tidal marsh NPP. This study identifies gaps in the scientific knowledge to support future studies in addressing this lack of turnover data. Across CONUS, average total peak biomass carbon stock in tidal marshes was 848 g C m−2 (871 g C m−2 in palustrine and 838 g C m−2 in estuarine marshes), and based on a median biomass turnover rate of 1, it is expected that the mean NPP annual flux for tidal marshes is similar (e.g., 848 g C m−2 y−1). Peak biomass carbon stocks in tidal marshes were lowest in the Florida Everglades region and highest in the California regions. These are the first fine-scale national maps of biomass carbon and NPP for tidal wetlands, spanning all of CONUS. These estimates of CONUS total peak biomass carbon stocks and NPP rates for tidal marshes can support regional- and national-scale assessments of greenhouse gas emissions, as well as natural resource management of coastal wetlands, as part of nature-based climate solution efforts. Full article
(This article belongs to the Special Issue Use of Remote Sensing in Valuation of Blue Carbon and Its Co-benefits)
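SAVI, the study's main spectral predictor, has a standard closed form. A minimal sketch, with the conventional soil-brightness factor L = 0.5 (the paper's Landsat 8 compositing and biomass regression are not reproduced here):

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index: (1 + L) * (NIR - Red) / (NIR + Red + L)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Illustrative surface-reflectance values for a vegetated marsh pixel.
print(round(float(savi(nir=0.42, red=0.08)), 3))   # 0.51
```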
Show Figures

Figure 1: Regions used to study wetland biomass and net primary productivity (NPP).
Figure 2: Peak total (above- and belowground) biomass carbon in (A) palustrine and (B) estuarine herbaceous tidal marshes by region. Box plot boundaries closest to zero represent the 25th percentile, the line within the boxes indicates the median, and boundaries farthest from zero represent the 75th percentile.
Figure 3: Peak total (above- and belowground) biomass carbon in palustrine and estuarine herbaceous wetlands by hydrologic unit.
18 pages, 8883 KiB  
Article
A Real-Time Detecting Method for Continuous Urban Flood Scenarios Based on Computer Vision on Block Scale
by Haocheng Huang, Xiaohui Lei, Weihong Liao, Haichen Li, Chao Wang and Hao Wang
Remote Sens. 2023, 15(6), 1696; https://doi.org/10.3390/rs15061696 - 21 Mar 2023
Cited by 3 | Viewed by 2326
Abstract
Due to the frequent and sudden occurrence of urban waterlogging, targeted and rapid risk monitoring is extremely important for urban management. To improve the efficiency and accuracy of urban waterlogging monitoring, a real-time determination method of urban waterlogging based on computer vision technology was proposed in this study. First, city images were collected and then identified using the ResNet algorithm to determine whether a waterlogging risk existed in the images. Subsequently, the recognition accuracy was improved by image augmentation and the introduction of an attention mechanism (SE-ResNet). The experimental results showed that the waterlogging recognition rate reached 99.50%. In addition, according to the actual water accumulation process, real-time images of the waterlogging area were obtained, and a threshold method using the inverse weight of the time interval (T-IWT) was proposed to determine the times of the waterlogging occurrences from the continuous images. The results showed that the time error of the waterlogging identification was within 30 s. This study provides an effective method for identifying urban waterlogging risks in real-time. Full article
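The attention mechanism credited with the accuracy gain is the standard squeeze-and-excitation block. A generic PyTorch sketch of that block (the reduction ratio of 16 is the common default, not necessarily the paper's setting):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018): a generic
    sketch of the mechanism added to ResNet here, not the authors' code."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial context
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight the feature maps

x = torch.randn(2, 64, 56, 56)
print(SEBlock(64)(x).shape)                      # torch.Size([2, 64, 56, 56])
```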
Show Figures

Figure 1: Processes of classifier design and decision-making.
Figure 2: Shortcut connections.
Figure 3: Expansion of the residual structure.
Figure 4: EfficientNet.
Figure 5: Standard 2D and 3D CNNs.
Figure 6: SE-ResNet.
Figure 7: Images of urban streets without floods.
Figure 8: Images of urban streets with floods.
Figure 9: Data augmentation.
Figure 10: The experimental setup.
Figure 11: Result of image recognition using SE-ResNet: 1 denotes a positive recognition result (a risk of waterlogging exists) and 0 denotes a negative recognition result (no risk of waterlogging exists).
Figure 12: Simulation results of each model. The number after the model name indicates the number of training datasets.
Figure 13: Number of epochs at which each model reached the target accuracy.
Figure 14: Waterlogging images in real scenes. P denotes a positive result and N denotes a negative result.
Figure 15: Errors corresponding to the FLI obtained by the different methods.
Figure 16: The FLI curve with the IIW.
Figure 17: The FLI curve with the IAW.
Figure 18: The FLI curve with the ITW.
Figure 19: Video classification using the 3D CNN.
17 pages, 5871 KiB  
Technical Note
Passive Location for 5G OFDM Radiation Sources Based on Virtual Synthetic Aperture
by Tong Zhang, Xin Zhang and Qiang Yang
Remote Sens. 2023, 15(6), 1695; https://doi.org/10.3390/rs15061695 - 21 Mar 2023
Cited by 6 | Viewed by 3134
Abstract
Passive location technology has developed rapidly because of its low power consumption, long detection distance, good concealment, and strong anti-interference ability. Orthogonal frequency-division multiplexing (OFDM) is an efficient multi-carrier transmission technology and an important signal form of 5G communication. Research on the passive location of OFDM signals enables the localization of base stations, which is of great military significance. Spaceborne passive location technology faces a trade-off between wide coverage and high precision. Therefore, a single-satellite passive location algorithm for OFDM radiation sources based on a virtual synthetic aperture is proposed. The algorithm introduces virtual synthetic aperture technology, using antenna movement to accumulate data coherently over a long time period and synthesize a long azimuth virtual aperture. In addition, it utilizes the fast Fourier transform (FFT) to extract phase information at a specific frequency, exploiting the multi-carrier modulation of the OFDM signal. The pilot technology of the communication system is used for phase compensation and noise reduction. Thus, an azimuth linear frequency modulation (LFM) signal containing the location information of the radiation source is obtained, and the radiation source location can be found by range searching and azimuth focusing. Simulation results verify the effectiveness of the algorithm and show that it can achieve a high-precision, wide-coverage location of OFDM radiation sources using a single antenna, replacing hardware structure with software processing to reduce the cost and complexity of the system. Full article
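The FFT step the abstract mentions, isolating one subcarrier of the OFDM symbol and reading off its phase after pilot compensation, reduces to a few lines. A toy sketch with invented parameters (64 subcarriers, a single pilot), not the paper's 5G waveform or geometry model:

```python
import numpy as np

# Sketch of per-subcarrier phase extraction: an OFDM symbol is built by an
# IFFT over the subcarriers; the receiver's FFT separates them again, and a
# known pilot removes the transmitted symbol so only the propagation phase
# (which carries the location information over slow time) remains.
rng = np.random.default_rng(1)
N = 64                                      # subcarriers (illustrative)
pilot_idx, pilot = 0, 1.0 + 0.0j            # known pilot on subcarrier 0
data = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # unit-modulus symbols
data[pilot_idx] = pilot

tx = np.fft.ifft(data) * np.sqrt(N)         # multi-carrier modulation
phase_geom = 0.7                            # phase imprinted by the geometry (rad)
rx = tx * np.exp(1j * phase_geom)           # propagation phase on this snapshot

sym = np.fft.fft(rx) / np.sqrt(N)           # FFT separates the subcarriers
est = np.angle(sym[pilot_idx] / pilot)      # pilot-compensated phase estimate
print(f"true {phase_geom:.3f} rad, estimated {est:.3f} rad")
```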
Show Figures

Graphical abstract
Figure 1: Geometric model of the virtual synthetic aperture radiation source location method.
Figure 2: The principle of generating the OFDM signal.
Figure 3: The unified time–frequency resource grid of the virtual synthetic aperture radiation source location method.
Figure 4: The principle of the virtual synthetic aperture radiation source location method.
Figure 5: The Doppler spectrum before and after phase compensation. (a) The Doppler spectrum before phase compensation; (b) the Doppler spectrum after phase compensation.
Figure 6: The passive location result. (a) The 2-D location result; (b) the azimuth location result; (c) the range location result.
Figure 7: The resolution curves. (a) The azimuth resolution; (b) the range resolution.
Figure 8: The location error curve as a function of the virtual aperture time.
Figure 9: The configuration of satellites for the TDOA method.
Figure 10: The effect of satellite vibration. (a) The effect on azimuth resolution; (b) the effect on range resolution; (c) the effect on location error.
24 pages, 13414 KiB  
Article
A Comparison of UAV-Derived Dense Point Clouds Using LiDAR and NIR Photogrammetry in an Australian Eucalypt Forest
by Megan Winsen and Grant Hamilton
Remote Sens. 2023, 15(6), 1694; https://doi.org/10.3390/rs15061694 - 21 Mar 2023
Cited by 6 | Viewed by 3122
Abstract
Light detection and ranging (LiDAR) has been a tool of choice for 3D dense point cloud reconstructions of forest canopy over the past two decades, but advances in computer vision techniques, such as structure from motion (SfM) photogrammetry, have transformed 2D digital aerial imagery into a powerful, inexpensive and highly available alternative. Canopy modelling is complex and affected by a wide range of inputs. While studies have found dense point cloud reconstructions to be accurate, there is no standard approach to comparing outputs or assessing accuracy. Modelling is particularly challenging in native eucalypt forests, where the canopy displays abrupt vertical changes and highly varied relief. This study first investigated whether a remotely sensed LiDAR dense point cloud reconstruction of a native eucalypt forest completely reproduced canopy cover and accurately predicted tree heights. A further comparison was made with a photogrammetric reconstruction based solely on near-infrared (NIR) imagery to gain some insight into the contribution of the NIR spectral band to the 3D SfM reconstruction of native dry eucalypt open forest. The reconstructions did not produce comparable canopy height models and neither reconstruction completely reproduced canopy cover nor accurately predicted tree heights. Nonetheless, the LiDAR product was more representative of the eucalypt canopy than SfM-NIR. The SfM-NIR results were strongly affected by an absence of data in many locations, which was related to low canopy penetration by the passive optical sensor and sub-optimal feature matching in the photogrammetric pre-processing pipeline. To further investigate the contribution of NIR, future studies could combine NIR imagery captured at multiple solar elevations. A variety of photogrammetric pre-processing settings should continue to be explored in an effort to optimise image feature matching. Full article
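The CHM grids being compared are, at heart, per-cell maxima of above-ground point heights. A generic rasterization sketch in which empty cells stay NaN, mirroring the no-data gaps discussed in the abstract (the 1 m cell size is illustrative, and the study's actual LiDAR and photogrammetric processing chains are not reproduced):

```python
import numpy as np

def canopy_height_model(xyz, cell=1.0):
    """Rasterize an above-ground point cloud to a CHM by keeping the highest
    point per cell; NaN cells mark locations the sensor never sampled."""
    x, y, z = xyz.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    chm = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for cx, cy, cz in zip(ix, iy, z):
        if np.isnan(chm[cy, cx]) or cz > chm[cy, cx]:
            chm[cy, cx] = cz
    return chm

# Synthetic points over a 50 m x 100 m plot with heights up to 30 m.
pts = np.random.default_rng(2).uniform([0, 0, 0], [50, 100, 30], size=(5000, 3))
print(canopy_height_model(pts).shape)
```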
Show Figures

Figure 1: Location and characteristics of the study site at the Samford Ecological Research Facility in Brisbane, Queensland. The inset at the top left shows the location of Brisbane relative to the whole of Australia. The embedded images at the bottom left show the appearance of the vegetation in the field plot, looking across the landscape and up into the canopy.
Figure 2: The extent of the point cloud generated using the UAV-LiDAR survey, depicted in yellow. The red polygon shows the location of the field plot within the point cloud, and the green line represents the flight path, which comprised a perpendicular grid with 30 m spacing.
Figure 3: The landscape surrounding the study site, with the UAV flight path used to collect digital imagery (comprising 16 survey lines with 19 m spacing) shown in blue. The red area shows the location of the field plot within this survey area.
Figure 4: The 50 m × 100 m field plot (orange outline) shown with a data strip extending 1 m on either side of four transects set 10 m apart (shown in yellow). Data were collected at 5 m intervals along each transect (orange asterisks).
Figure 5: The workflow for LiDAR and NIR data collection, point cloud processing and CHM reconstruction, followed by analysis comparing CHM metrics against each other and against field-derived measurements.
Figure 6: Point density values (per cell) for the (a) SfM-NIR CHM and (b) LiDAR CHM. (c) The difference in point density distributions, with the SfM-NIR CHM (dark to light grey) overlaid on the LiDAR CHM (red to green), highlighting not only the higher density of the LiDAR CHM but also the gaps in SfM-NIR data. White areas represent points at which no data were collected.
Figure 7: Above-ground height distribution of points in each CHM, showing SfM-NIR in green and LiDAR in red. The frequency percentage on the y-axis represents the number of points collected at each height as a proportion of the total number of points collected with each sensor.
Figure 8: Visual comparison of CHMs in CloudCompare showing the difference between the SfM-NIR and LiDAR height distributions in (a,b) plan view and (c,d) elevation view (scale bar in metres). The elevation view shows the colour gradation from blue (lowest) to red (highest) height.
Figure 9: Canopy heights and height classes for the (a) SfM-NIR CHM and (b) LiDAR CHM. (c) The difference in the distribution of height classes, with the SfM-NIR CHM (dark to light grey) overlaid on the LiDAR CHM (red to green).
Figure 10: (a) The measured CPC at each field data collection point, with values ranging from 0.15 (lightest yellow) to 0.70 (darkest blue). These values were compared with the point density per cell within a 1 m radius of each data collection point in the (b) buffered SfM-NIR CHM and (c) buffered LiDAR CHM.
Figure 11: (a) Map showing the field-measured heights in relation to the field plot and buffered transects. Heights and locations of measured trees in relation to the (b) SfM-NIR CHM and (c) LiDAR CHM height distributions.
Figure 12: Scatter plots for CPC values vs. mean point density within a 1 m radius of field data points, and field-measured heights vs. CHM heights at the same locations, for (a,b) field measurements vs. SfM-NIR CHM, (c,d) field measurements vs. LiDAR CHM and (e,f) LiDAR vs. SfM-NIR CHMs.
Figure A1: Raster analysis of the extent of cover in CHMs produced from (a) SfM-NIR and (b) LiDAR. Empty cells are shown in white.
15 pages, 4052 KiB  
Technical Note
Towards a General Monitoring System for Terrestrial Primary Production: A Test Spanning the European Drought of 2018
by Keith J. Bloomfield, Roel van Hoolst, Manuela Balzarolo, Ivan A. Janssens, Sara Vicca, Darren Ghent and I. Colin Prentice
Remote Sens. 2023, 15(6), 1693; https://doi.org/10.3390/rs15061693 - 21 Mar 2023
Cited by 3 | Viewed by 2412
Abstract
(1) Land surface models require inputs of temperature and moisture variables to generate predictions of gross primary production (GPP). Differences between leaf and air temperature vary temporally and spatially and may be especially pronounced under conditions of low soil moisture availability. The Sentinel-3 satellite mission offers estimates of the land surface temperature (LST), which for vegetated pixels can be adopted as the canopy temperature. Could remotely sensed estimates of LST offer a parsimonious input to models by combining information on leaf temperature and hydration? (2) Using a light use efficiency model that requires only a handful of input variables, we generated GPP simulations for comparison with eddy-covariance inferred estimates available from flux sites within the Integrated Carbon Observation System. Remotely sensed LST and greenness data were input from Sentinel-3. Gridded air temperature data were obtained from the European Centre for Medium-Range Weather Forecasts. We chose the years 2018–2019 to exploit the natural experiment of a pronounced European drought. (3) Simulated GPP showed good agreement with flux-derived estimates. During dry conditions, simulations forced with LST performed better than those with air temperature for shrubland, grassland and savanna sites. (4) This study advances the prospect for a global GPP monitoring system that will rely primarily on remotely sensed inputs. Full article
(This article belongs to the Special Issue Remote Sensing Applications for the Biosphere)
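The model class being tested is a light use efficiency formulation, GPP = LUE(T) × fAPAR × PPFD, in which swapping gridded air temperature for Sentinel-3 LST only changes the temperature input. A deliberately simplified sketch (the ramp-shaped temperature modifier and lue_max are placeholders, not the paper's calibrated model):

```python
import numpy as np

def gpp_lue(ppfd, fapar, t_celsius, lue_max=0.05):
    """Generic light-use-efficiency sketch: GPP = LUE(T) * fAPAR * PPFD.
    The linear ramp (0 at 0 degC, 1 at 20 degC) is an assumed modifier; the
    point is that only the temperature argument differs between the
    air-temperature and LST forcings."""
    t_mod = np.clip(t_celsius / 20.0, 0.0, 1.0)
    return lue_max * t_mod * fapar * ppfd

# Ten-day averages for one site, forced with Tair and then with a warmer LST.
print(gpp_lue(ppfd=30.0, fapar=0.6, t_celsius=18.0))
print(gpp_lue(ppfd=30.0, fapar=0.6, t_celsius=24.0))
```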
Show Figures

Figure 1: Scatterplots comparing ECMWF gridded air temperatures with Sentinel-3 daytime LST estimates, organised by vegetation class: evergreen needleleaf forest (ENF), deciduous broadleaf forest (DBF), mixed forest (MF), closed shrublands (CSH), open shrublands (OSH), savannahs (SAV), grasslands (GRA), permanent wetlands (WET) and croplands (CRO). The dashed red line shows the ideal fit. Each point is a ten-day average.
Figure 2: Goodness of fit: GPP simulations versus eddy-covariance estimates. Left panel: forcing temperature is Sentinel-3 LST; right panel: forcing temperature is ECMWF gridded air temperature. The intensity of the colours (heat map) indicates the density of points. Each point is a ten-day average. The dashed grey line shows the ideal fit.
Figure 3: Error estimation (GPP simulations versus eddy-covariance) by vegetation class, with the number of participating sites indicated. Periods identified as unusually dry (SPEI < −1.5) are shown in red and contrasted with the full time series (all, in blue). Bars relate to simulations driven by S3_LST and dots to ECMWF gridded Tair.
Figure 4: Seasonal variation in GPP estimates for selected sites. Inferred (eddy-covariance) values are shown as points; periods with a site-specific SPEI < −1.5 are indicated in red. GPP simulations are shown as lines: forced with ECMWF gridded air temperature (purple) and forced with Sentinel-3 daytime LST (cyan). Note that the ranges on the y-axes vary between plots.
Figure A1: Map of participating sites; locations are indicated with blue dots.
Figure A2: Model validation, GPP simulations versus eddy-covariance estimates; companion to Figure 2 (main text), but restricted to those sites deemed to show spatial homogeneity of vegetation cover. Forcing temperature is Sentinel-3 LST. The intensity of the colours (heat map) indicates the density of points. Each point is a ten-day average. The dashed grey line shows the ideal fit.
29 pages, 9750 KiB  
Article
Daily Sea Ice Concentration Product over Polar Regions Based on Brightness Temperature Data from the HY-2B SMR Sensor
by Suhui Wu, Lijian Shi, Bin Zou, Tao Zeng, Zhaoqing Dong and Dunwang Lu
Remote Sens. 2023, 15(6), 1692; https://doi.org/10.3390/rs15061692 - 21 Mar 2023
Cited by 2 | Viewed by 2048
Abstract
Polar sea ice profoundly affects atmospheric and oceanic circulation and plays a significant role in climate change. Sea ice concentration (SIC) is a key geophysical parameter used to quantify these changes. In this study, we determined SIC products for the Arctic and Antarctic from 2019 to 2021 using data from the Chinese marine satellite Haiyang 2B (HY-2B) with an improved bootstrap algorithm. Then the results were compared with similar operational SIC products and ship-based data. Our findings demonstrate the effectiveness of the improved algorithm for accurately determining SIC in polar regions. Additionally, the results of the study demonstrate that the SIC product obtained through the improved bootstrap algorithm has a high correlation with other similar SIC products. The daily average SIC of the different products showed similar inter-annual trends for both the Arctic and Antarctic regions. Comparison of the different SIC products showed that the Arctic BT-SMR SIC was slightly lower than the BT-SSMIS and BT-AMSR2 SIC products, while the difference between Antarctic SIC products was more pronounced. The lowest MAE was between the BT-SSMIS SIC and BT-SMR SIC in both regions, while the largest MAE was between the NT-SMR and BT-SMR in the Arctic, and between the NT-SSMIS and BT-SMR in the Antarctic. The SIE and SIA time series showed consistent trends, with a greater difference in SIA than SIC and a slight difference in SIA between the BT-AMSR2 and BT-SMR in the Arctic. Evaluation of the different SIC products using ship-based observation data showed a high correlation between the BT-SMR SIC and the ship-based SIC of approximately 0.85 in the Arctic and 0.88 in the Antarctic. The time series of dynamic tie-points better reflected the seasonal variation in sea ice radiation characteristics. This study lays the foundation for the release of long-term SIC product series from the Chinese autonomous HY-2B satellite, which will ensure the continuity of polar sea ice records over the past 40 years despite potential interruptions. Full article
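The bootstrap retrieval behind these products is geometric: in a two-channel brightness-temperature plane, extend the line from the open-water tie-point O through the observation B until it meets the 100%-ice line AD; the ratio |OB|/|OI| is the SIC. A minimal sketch with invented tie-point values (not the HY-2B SMR dynamic tie-points of the paper):

```python
import numpy as np

def bootstrap_sic(tb, water, ice_a, ice_d):
    """Bootstrap SIC sketch: solve o + s*(b - o) = a + u*(d - a) for the
    intersection I of line OB with the ice line AD; SIC = |OB|/|OI| = 1/s."""
    o, b = np.asarray(water, float), np.asarray(tb, float)
    a, d = np.asarray(ice_a, float), np.asarray(ice_d, float)
    m = np.column_stack([b - o, a - d])        # 2x2 system in (s, u)
    s, _ = np.linalg.solve(m, a - o)
    return float(np.clip(1.0 / s, 0.0, 1.0))

# Channels: illustrative (TB_18.7V, TB_36.5V) values in kelvin.
print(bootstrap_sic(tb=(230, 205), water=(180, 162),
                    ice_a=(252, 232), ice_d=(238, 208)))   # about 0.80
```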
Show Figures

Graphical abstract
Figure 1: Spatial distribution of ship-based observation data, with red denoting 2019, green denoting 2020, and yellow denoting 2021. The background image is the SIE map on 5 February 2019 (left, Arctic; right, Antarctic).
Figure 2: Schematic diagram of the bootstrap algorithm.
Figure 3: Scatter diagrams of the polarization mode and frequency mode on 1 January 2020. (a) Polarization mode; (b) frequency mode. The AD line corresponds to sea ice close to 100% SIC; the letter I indicates different types of ice; the letter B indicates the SIC value at an arbitrary point; the letter O indicates the open-water tie-point.
Figure 4: Selection of tie-points for the BT algorithm. (a) Polarization mode; (b) frequency mode. The black dashed lines AD and AO represent the lines formed by the initial points. The magenta dashed lines indicate the result of AD or AO ± 10 K. The red line indicates the result of the linear regression of AD or AO.
Figure 5: Daily HY-2B SMR bootstrap algorithm tie-points T_O for open water (orange line) and [−7, +7] day interval results (red line) during 2019–2021. (a) Arctic; (b) Antarctic.
Figure 6: The linear regression parameters for SIC retrieval. Panels (a–l) show the Arctic and panels (m–r) the Antarctic. The first column represents the RMSE of the respective linear regression equation, the second column the slope, and the third column the intercept.
Figure 7: Arctic daily average SIC (a), bias (b) and MAE (c) from January 2019 to December 2021 for the 25 km spatial resolution SIC datasets. Bias is the daily average of each SIC dataset minus BT-SMR. MAE refers to the mean absolute error.
Figure 8: Maps of the overall Arctic difference (a–e) between five SIC datasets and BT-SMR from January 2019 to December 2021, as well as for the months of March (f–j) and September (k–o). Grey indicates land, red indicates positive deviations, and blue indicates negative deviations.
Figure 9: Same as Figure 7 but for the Antarctic.
Figure 10: Same as Figure 8 but for the Antarctic.
Figure 11: Violin chart of SIC differences between the five different SIC products and BT-SMR SIC, for BT-SMR SICs of 15–30%, 30–70%, and 70–100%, from January 2019 to December 2021. The x-axis denotes the different SIC products; in (a–f) the y-axis denotes bias and in (g–l) the mean absolute error. The first and third rows depict the Arctic, and the second and fourth rows the Antarctic.
Figure 12: Scatterplots of co-located daily average SIC from visual ship-based observations (Ice Watch/ASSIST, x-axis) and the six satellite SIC algorithm products (y-axes) for the Arctic during 2019–2021. The blue solid lines denote the identity line. The red solid lines denote the linear regression of the respective value pairs. The linear regression equation, bias, and squared linear correlation coefficient (R²) are given at the top of each panel.
Figure 13: SIC differences between the PM SIC products and ship-based SIC. The number in each grid cell denotes the proportion of data pairs for the PM SIC products. The horizontal axes denote the six different PM SICs. The SIC differences are grouped at intervals of 20% from −100% to 100% (vertical axis). (a) Arctic; (b) Antarctic.
Figure 14: Same as Figure 12 but for the Antarctic.
Figure 15: Arctic SIE (a) and SIE difference (b), and SIA (c) and SIA difference (d) time series from the SIC datasets used.
Figure 16: Same as Figure 15 but for the Antarctic.
Figure 17: Monthly MIZ SIE and MIZ SIE fraction during 2019–2021, together with the differences in MIZ SIE and MIZ SIE fraction between the BT-SMR and BT-SSMIS (blue numbers) and between the BT-SMR and BT-AMSR2 (green numbers). The shading represents 2 STDs of the monthly MIZ SIE and MIZ SIE fraction.
15 pages, 9427 KiB  
Article
The Impacts of Quality-Oriented Dataset Labeling on Tree Cover Segmentation Using U-Net: A Case Study in WorldView-3 Imagery
by Tao Jiang, Maximilian Freudenberg, Christoph Kleinn, Alexander Ecker and Nils Nölke
Remote Sens. 2023, 15(6), 1691; https://doi.org/10.3390/rs15061691 - 21 Mar 2023
Cited by 1 | Viewed by 2630
Abstract
Deep learning has emerged as a prominent technique for extracting vegetation information from high-resolution satellite imagery. However, less attention has been paid to the quality of dataset labeling as compared to research into networks and models, despite data quality consistently having a high impact on final accuracies. In this work, we trained a U-Net model for tree cover segmentation in 30 cm WorldView-3 imagery and assessed the impact of training data quality on segmentation accuracy. We produced two reference tree cover masks of different qualities by labeling images accurately or roughly and trained the model on a combination of both, with varying proportions. Our results show that models trained with accurately delineated masks achieved higher accuracy (88.06%) than models trained on masks that were only roughly delineated (81.13%). When combining the accurately and roughly delineated masks at varying proportions, we found that the segmentation accuracy increased with the proportion of accurately delineated masks. Furthermore, we applied semisupervised active learning techniques to identify an efficient strategy for selecting images for labeling. This showed that semisupervised active learning saved nearly 50% of the labeling cost when applied to accurate masks, while maintaining high accuracy (88.07%). Our study suggests that accurate mask delineation and semisupervised active learning are essential for efficiently generating training datasets in the context of tree cover segmentation from high-resolution satellite imagery. Full article
(This article belongs to the Special Issue Image Analysis for Forest Environmental Monitoring)
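The semisupervised active-learning loop hinges on one scoring step: rank unlabeled images by mean pixelwise prediction entropy, send the most uncertain fraction to annotators, and keep the confident predictions as pseudo-labels. A generic sketch of that selection step (the 40% manual fraction is illustrative; the paper compares several split proportions):

```python
import numpy as np

def split_by_entropy(probs, manual_frac=0.4):
    """Rank images by mean binary prediction entropy and split them into a
    hand-labeling set (most uncertain) and a pseudo-label set (rest)."""
    probs = np.clip(np.asarray(probs, float), 1e-7, 1 - 1e-7)   # p(tree) per pixel
    entropy = -(probs * np.log(probs) + (1 - probs) * np.log(1 - probs))
    scores = entropy.reshape(len(probs), -1).mean(axis=1)       # one score per image
    order = np.argsort(scores)[::-1]                            # most uncertain first
    k = int(np.ceil(manual_frac * len(probs)))
    return order[:k], order[k:]            # (label by hand, accept as pseudo-label)

# Synthetic per-pixel tree probabilities for ten 64 x 64 unlabeled tiles.
probs = np.random.default_rng(3).uniform(0, 1, size=(10, 64, 64))
to_label, pseudo = split_by_entropy(probs)
print(len(to_label), "images for manual labeling,", len(pseudo), "pseudo-labeled")
```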
Show Figures

Graphical abstract
Figure 1: Location of the study area: a 50 km × 5 km transect in the northern part of Bengaluru, India. The transect is enlarged here as a WorldView-3 false color composite (i.e., near IR1, red and green).
Figure 2: Illustration of the two different qualities of delineating tree crowns in an urban (left) and a rural (right) environment. Yellow denotes rough delineations and green denotes accurate delineations of tree crowns.
Figure 3: The architecture of U-Net.
Figure 4: The overall flowchart of the semisupervised active learning strategy.
Figure 5: Accuracy (overall IoU) and labeling cost for different strategies on accurately and roughly delineated masks (i.e., random selection (RS), active learning (AL), and semisupervised active learning strategies (SSAL 60%, 40% and 100%)). For accurately delineated masks, the 60% strategy yielded the best results; for roughly delineated masks, the 40% strategy yielded the best results.
Figure 6: Classification results for two test images. For each test image, the subplots in the first row, from left to right, depict the false color image (near IR1, red and green), the ground truth, and the segmentation results of models trained on roughly (R) and accurately (A) delineated masks. The subplots in the second and third rows represent the classification results of the initial model, the models at intermediate iterations (Iter1 and Iter3), and the final iteration when applying semisupervised active learning (SSAL) to datasets with accurately and roughly delineated masks, respectively.
Figure 7: The relationship between entropy, accuracy and image mask selection based on semisupervised active learning with the 60% strategy. Each green point represents one image. Points covered by the shaded blue area are accepted model predictions. Points shaded red were selected to be labeled by hand.
Figure 8: Classification results on two test images. In each test image, the subplots, from left to right and top to bottom, represent the false color image (near IR1, red and green), the ground truth, and the classification results of models trained with different proportions of the two mask quality levels (100%R, 20%A, 40%A, 60%A, 80%A and 100%A).
23 pages, 10592 KiB  
Article
Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction
by Zhen Wang, Buhong Wang, Chuanlei Zhang and Yaohui Liu
Remote Sens. 2023, 15(6), 1690; https://doi.org/10.3390/rs15061690 - 21 Mar 2023
Cited by 3 | Viewed by 2889
Abstract
Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to safety-critical systems. Existing research mainly addresses digital attacks on aerial image semantic segmentation, but adversarial patches with physical attack attributes are more threatening than digital attacks. In this article, we systematically evaluate the threat of adversarial patches to the aerial image semantic segmentation task for the first time. To defend against adversarial patch attacks and obtain accurate semantic segmentation results, we construct a novel robust feature extraction network (RFENet). Based on the characteristics of aerial images and adversarial patches, RFENet introduces a limited receptive field mechanism (LRFM), a spatial semantic enhancement module (SSEM), a boundary feature perception module (BFPM) and a global correlation encoder module (GCEM) to counter adversarial patch attacks at the level of DL model architecture design. We find that the semantic features, shape features and global features contained in aerial images can significantly enhance the robustness of a DL model against patch attacks. Extensive experiments on three aerial image benchmark datasets demonstrate that the proposed RFENet offers strong resistance to adversarial patch attacks compared with existing state-of-the-art methods. Full article
(This article belongs to the Special Issue Adversarial Attacks and Defenses for Remote Sensing Data)
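The threat model differs from digital attacks in one essential way: a patch overwrites a contiguous image region rather than adding a small perturbation everywhere, which is what makes it physically realizable. A minimal sketch of patch application (generic; not the LaVAN, QR-Patch, IAP, Patch-Wise, DiAP or ImageNet-Patch generators evaluated in the paper):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste an adversarial patch into an image: the pasted region fully
    replaces the scene content, unlike an additive digital perturbation."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

rng = np.random.default_rng(4)
aerial = rng.uniform(0, 1, size=(256, 256, 3))   # stand-in for a UAV image
patch = rng.uniform(0, 1, size=(32, 32, 3))      # would be optimized by an attacker
print(apply_patch(aerial, patch, top=100, left=120).shape)
```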
Show Figures

Figure 1: Visualization results of different adversarial patch generation methods and their attack effect on the semantic segmentation model.
Figure 2: Illustration of the proposed RFENet. The limited receptive field mechanism is first adopted to extract robust local features. Then, the spatial semantic enhancement and boundary feature perception modules are used to obtain robust semantic and boundary features. The global correlation encoder module is used to build global dependency. Finally, the robust features are transmitted to the decoder part to obtain the semantic segmentation results.
Figure 3: The architecture of the proposed LRFM.
Figure 4: The architecture of the proposed SSEM.
Figure 5: The architecture of the proposed BFPM.
Figure 6: Interpretation of the proposed GCEM: (a) the pairwise relationship between feature vectors; (b) the structure of the global correlation encoder model.
Figure 7: Example images and corresponding ground truth from the UAVid [60], Semantic Drone [61], and AeroScapes [62] datasets.
Figure 8: Visualization results for different types of adversarial patches: (a) LaVAN; (b) QR-Patch; (c) IAP; (d) Patch-Wise; (e) DiAP; (f) Image-Patch.
Figure 9: Visualization results of different methods on the adversarial patch (LaVAN/QR-Patch) test set of the UAVid dataset, where the yellow curve circles the added adversarial patch region.
Figure 10: Visualization results of different methods on the adversarial patch (IAP/Patch-Wise) test set of the Semantic Drone dataset, where the yellow curve circles the added adversarial patch region.
Figure 11: Visualization results of different methods on the adversarial patch (DiAP/ImageNet-Patch) test set of the AeroScapes dataset, where the yellow curve circles the added adversarial patch region.
30 pages, 32630 KiB  
Article
Spatiotemporal Evolution and Hysteresis Analysis of Drought Based on Rainfed-Irrigated Arable Land
by Enyu Du, Fang Chen, Huicong Jia, Lei Wang and Aqiang Yang
Remote Sens. 2023, 15(6), 1689; https://doi.org/10.3390/rs15061689 - 21 Mar 2023
Cited by 10 | Viewed by 2612
Abstract
Drought poses a serious threat to agricultural production and food security in the context of global climate change. Few studies have explored the response mechanism and lag time of agricultural drought to meteorological drought from the perspective of cultivated land types. This paper analyzes the spatiotemporal evolution patterns and hysteresis relationship of meteorological and agricultural droughts in the middle and lower reaches of the Yangtze River in China. Here, the Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation index products and surface temperature products were selected to calculate the Temperature Vegetation Dryness Index (TVDI) from 2010 to 2015. Furthermore, we obtained the Standardized Precipitation Evapotranspiration Index (SPEI) and the Palmer Drought Severity Index (PDSI) for the same period. Based on these indices, we analyzed the correlation and the hysteresis relationship between agricultural and meteorological drought in rainfed and irrigated arable land. The results showed that, (1) compared with SPEI, the high spatial resolution PDSI data were deemed more suitable for the subsequent accurate and scientific analysis of the relationship between meteorological and agricultural droughts. (2) When meteorological drought occurs, irrigated arable land is the first to experience agricultural drought, and then alleviates when the drought is most severe in rainfed arable land, indicating that irrigated arable land is more sensitive to drought events when exposed to the same degree of drought risk. However, rainfed arable land is actually more susceptible to agricultural drought due to the intervention of irrigation measures. (3) According to the cross-wavelet transform analysis, agricultural droughts significantly lag behind meteorological droughts by about 33 days during the development process of drought events. (4) The spatial distribution of the correlation coefficient between the PDSI and TVDI shows that the area with negative correlations of rainfed croplands and the area with positive correlations of irrigated croplands account for 77.55% and 68.04% of cropland areas, respectively. This study clarifies and distinguishes the details of the meteorological-to-agricultural drought relationship in rainfed and irrigated arable land, noting that an accurate lag time can provide useful guidance for drought monitoring management and irrigation project planning in the middle and lower reaches of the Yangtze River. Full article
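TVDI, the agricultural-drought index used here, scales each pixel's LST between NDVI-dependent wet and dry edges: TVDI = (LST − LSTmin) / (LSTmax − LSTmin), with the edges fitted linearly from per-NDVI-bin extremes (see Figure 3 below). A minimal sketch on synthetic data (the binning choice is ours, not the paper's):

```python
import numpy as np

def tvdi(ndvi, lst, bins=20):
    """Fit dry and wet edges as linear functions of NDVI from per-bin
    max/min LST, then scale each pixel between them."""
    edges = np.linspace(ndvi.min(), ndvi.max(), bins + 1)
    mids, lo, hi = [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (ndvi >= a) & (ndvi < b)
        if m.any():
            mids.append((a + b) / 2)
            lo.append(lst[m].min())
            hi.append(lst[m].max())
    a_d, b_d = np.polyfit(mids, hi, 1)        # dry edge: LSTmax = a_d*NDVI + b_d
    a_w, b_w = np.polyfit(mids, lo, 1)        # wet edge: LSTmin = a_w*NDVI + b_w
    lst_max, lst_min = a_d * ndvi + b_d, a_w * ndvi + b_w
    return np.clip((lst - lst_min) / (lst_max - lst_min), 0, 1)

rng = np.random.default_rng(5)
ndvi = rng.uniform(0.1, 0.9, 5000)
lst = 320 - 25 * ndvi + rng.uniform(0, 10, 5000)   # synthetic NDVI/LST scatter (K)
print(float(tvdi(ndvi, lst).mean()))
```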
Show Figures

Graphical abstract
Figure 1: The cropland types and elevation of the middle and lower reaches of the Yangtze River in China. (a) Location of the study area in China. (b) Digital elevation model map of the study area. (c) Map of irrigated and rainfed croplands in the study area.
Figure 2: Flow chart of the methodology. SPEI: Standardized Precipitation Evapotranspiration Index. PDSI: Palmer Drought Severity Index. TVDI: Temperature Vegetation Drought Index. TVDI-rain/irr: TVDI of rainfed/irrigated arable land. MOD13Q1/MOD11A2: satellite data products. GFSAD1KCM: Global Food Security-support Analysis Data.
Figure 3: Definition of the Temperature Vegetation Drought Index (TVDI) and the geometric meaning of a given pixel P. Red and green circles: the maximum and minimum LST values (LSTmax, LSTmin) corresponding to each NDVI pixel. Red and green lines: dry edge and wet edge obtained from linear fitting of all LSTmax and LSTmin values.
Figure 4: Interannual variations of the mean SPEI on different timescales. (a) SPEI-1, (b) SPEI-3, (c) SPEI-6, and (d) SPEI-12.
Figure 5: Monthly-averaged PDSI values for the middle and lower reaches of the Yangtze River in China from 2010 to 2015.
Figure 6: Monthly spatial distributions of the PDSI in the middle and lower reaches of the Yangtze River in China in 2011 and 2013.
Figure 7: Spatial distributions of 3-month averages of the PDSI in the middle and lower reaches of the Yangtze River in China in 2011 and 2013.
Figure 8: Spatial distributions of annual averages of the PDSI in the middle and lower reaches of the Yangtze River in China from 2010 to 2015.
Figure 9: The average value of the TVDI in the middle and lower reaches of the Yangtze River from 2010 to 2015, with a temporal resolution of 16 days. Images without crop growth in January and December were removed. (a) The TVDI of rainfed arable land (TVDI-rain). (b) The TVDI of irrigated arable land (TVDI-irr).
Figure 10: Spatial distribution of the TVDI in the middle and lower reaches of the Yangtze River in China in 2011, with a temporal resolution of 16 days. Images without crop growth in January and December were removed. (a) TVDI-rain. (b) TVDI-irr.
Figure 11: Spatial distribution of the TVDI in the middle and lower reaches of the Yangtze River in China in 2013, with a temporal resolution of 16 days. Images without crop growth in January and December were removed. (a) TVDI-rain. (b) TVDI-irr.
Figure 12: Spatial distribution of the 3-month average TVDI in the middle and lower reaches of the Yangtze River in China in 2011 and 2013. Images without crop growth in January and December were removed. (a) TVDI-rain. (b) TVDI-irr.
Figure 13: Spatial distribution of the annual average TVDI in the middle and lower reaches of the Yangtze River in China from 2010 to 2015. Images without crop growth in January and December were removed. (a) TVDI-rain. (b) TVDI-irr.
Figure 14: Spatial patterns of the correlation coefficient between the PDSI and TVDI based on annual averages in the middle and lower reaches of the Yangtze River from 2010 to 2015 (significant at p < 0.05). (a) The PDSI and TVDI of rainfed arable land. (b) The PDSI and TVDI of irrigated arable land.
Figure 15: Correlation curves between the PDSI and TVDI of rainfed and irrigated arable land based on monthly averages in the middle and lower reaches of the Yangtze River from 2010 to 2015 (significant at p < 0.05). (a) Correlation curves for rainfed arable land. (b) Correlation curves for irrigated arable land.
Figure 16: The cross-wavelet transform (XWT) between the PDSI and TVDI time series in the middle and lower reaches of the Yangtze River from 2010 to 2015. The abscissa and ordinate represent the year and period (months), respectively, and the color bar on the right shows the wavelet energy. The arrows indicate the relative phase relationship. (a) The XWT of the PDSI and TVDI for rainfed arable land. (b) The XWT of the PDSI and TVDI for irrigated arable land.
Figure 17: The wavelet coherence (WTC) between the PDSI and TVDI time series in the middle and lower reaches of the Yangtze River from 2010 to 2015. The abscissa and ordinate represent the year and period (months), respectively, and the color bar on the right shows the wavelet energy. The arrows indicate the relative phase relationship. (a) The WTC of the PDSI and TVDI for rainfed arable land. (b) The WTC of the PDSI and TVDI for irrigated arable land.