
Remote Sens., Volume 15, Issue 23 (December-1 2023) – 193 articles

Cover Story: The Earth Climate Observatory (ECO) is a new space mission concept for the measurement of the Earth Energy Imbalance (EEI). The Incoming Solar Radiation (ISR), of the order of 340 W/m2, and the Total Outgoing terrestrial Radiation (TOR), of the order of 339 W/m2, are measured by identically designed, intercalibrated wide-field-of-view radiometers, such that a significant measurement of the EEI, of the order of 1 W/m2, can be made. Auxiliary visible and thermal multispectral cameras will be used to increase the spatial resolution of the radiometer observations and to separate the TOR spectrally into the Reflected Solar Radiation (RSR) and the Outgoing Longwave Radiation (OLR).
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
20 pages, 6638 KiB  
Article
Spatio-Temporal Dynamics of Total Suspended Sediments in the Belize Coastal Lagoon
by Chintan B. Maniyar, Megha Rudresh, Ileana A. Callejas, Katie Osborn, Christine M. Lee, Jennifer Jay, Myles Phillips, Nicole Auil Gomez, Emil A. Cherrington, Robert Griffin, Christine Evans, Andria Rosado, Samir Rosado, Stacey L. Felgate, Claire Evans, Vanesa Martín-Arias and Deepak R. Mishra
Remote Sens. 2023, 15(23), 5625; https://doi.org/10.3390/rs15235625 - 4 Dec 2023
Cited by 1 | Viewed by 2163
Abstract
Increased tourism in Belize over the last decade and the growth of the local population have led to coastal development and infrastructure expansion. Land use alteration and anthropogenic activity may change the sediment and nutrient loads in coastal systems, which can negatively affect ecosystems via mechanisms such as reducing photosynthetically active radiation fields, smothering sessile habitats, and stimulating eutrophication events. Accurate monitoring and prediction of water quality parameters such as Total Suspended Sediments (TSS) are essential in order to understand the influence of land-based changes, climate, and human activities on coastal systems and to devise strategies to mitigate negative impacts. This study implements machine learning algorithms such as Random Forests (RF), Extreme Gradient Boosting (XGB), and Deep Neural Networks (DNN) to estimate TSS using Sentinel-2 reflectance data in the Belize Coastal Lagoon (BCL) and validates the results using TSS data collected in situ. DNN performed the best and estimated TSS with a testing R² of 0.89. Time-series analysis was also performed on the BCL's TSS trends using Bayesian Changepoint Detection (BCD) methods to flag anomalously high TSS spatio-temporally, which may be caused by dredging events. Such a framework can ease the near-real-time monitoring of water quality in Belize, help track TSS dynamics for anomalies, and aid in meeting and maintaining sustainability goals for Belize.
(This article belongs to the Section Ocean Remote Sensing)
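As a rough illustration of the model comparison described in this abstract, the sketch below fits the three regressor families on Sentinel-2 band reflectances matched to in situ TSS and reports each testing R². It is a minimal sketch, not the authors' code: the CSV name, band columns, target column, and hyperparameters are assumptions, and a scikit-learn MLP stands in for the paper's DNN.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

df = pd.read_csv("bcl_matchups.csv")        # assumed matchup file: in situ TSS + reflectances
bands = ["B2", "B3", "B4", "B5", "B8"]      # assumed reflectance columns
X_train, X_test, y_train, y_test = train_test_split(
    df[bands], df["tss_mg_l"], test_size=0.2, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=500, random_state=0),
    "XGB": XGBRegressor(n_estimators=500, learning_rate=0.05),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),  # stand-in for the DNN
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "testing R2:", round(r2_score(y_test, model.predict(X_test)), 3))
```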
Figure 1. Location of sampling sites (inland and into the BBRS) depicted as red dots on a Landsat-8 image from 20 May 2019.
Figure 2. Prediction fit plots for actual and predicted TSS values for the (a) RF, (b) XGB, and (c) DNN models. The red line is the best-fit line of actual vs. predicted values, and the green line is the reference (1:1) line.
Figure 3. Box plot of training R² of all three models after cross-validation. The testing R² value for each model is marked with a star.
Figure 4. Spatio-temporal maps of TSS prepared using the DNN model for the BCL region across 3 years. The red box highlights the area where the Belize River opens into the BCL, a transition zone between fresh and lagoon waters and a potentially frequent dredging location; TSS levels within it are anomalously high in July 2021. Only water pixels are shown; all non-water pixels (land/cloud) are masked out.
Figure 5. (a) Sentinel-2 scene captured on 19 October 2019 with an opaque box highlighting the Central Belize region for which the average daily Kd(490) was calculated; insets of (b) the Haulover Creek region, (c) the Belize coastal region, and (d) the Belize River, each with TSS field sampling locations marked with star symbols.
Figure 6. TSS concentration trend, precipitation trend, and the BCD anomaly for each station in Haulover Creek for the 2015–2021 period. The first row contains the TSS concentration trends for stations NA14, NA15, NA16, NA17, and NA18, as well as precipitation; the rows below contain each station's BCD trend decomposition, which shows the magnitude of anomalies. Anomalous TSS concentrations coincident with anomalous precipitation events are highlighted in red boxes, each drawn at the timestamp of the station's BCD-derived TSS anomaly; the dominating anomaly for each flagged event is connected by a black dotted line.
Figure 7. As in Figure 6, but for stations NA01–NA08, NA10, and NA11 in the BCL. Anomalous events are highlighted in red and black boxes; the dominating anomaly for each flagged event is connected by a dashed black box, indicating that the probability flag lies in the temporal neighborhood of the anomalous TSS.
Figure 8. TSS concentration trend overlaid with the Kd trend and the BCD anomaly for each station in the BCL for the 2015–2021 period. Anomalous TSS concentrations that followed the dredging-induced anomalous Kd are highlighted by a red box; anomalously high Kd values in the trend are highlighted by a black box.
Figure 9. As in Figure 6, but for stations NA26, NA27, NA28, NA29, and NA50 in the Belize River. The dominating anomaly for each flagged event (red box) is connected by a black dotted line if it lines up with the anomalous TSS peak, or by a dashed black box if the temporal flag does not directly line up with the peak but is in its neighborhood.
22 pages, 8199 KiB  
Article
Weather Radar Parameter Estimation Based on Frequency Domain Processing: Technical Details and Performance Evaluation
by Shuai Zhang, Yubao Chen, Zhifeng Shu, Haifeng Yu, Hui Wang, Jianjun Chen and Lu Li
Remote Sens. 2023, 15(23), 5624; https://doi.org/10.3390/rs15235624 - 4 Dec 2023
Cited by 1 | Viewed by 1381
Abstract
Parameter estimation is important in weather radar signal processing. Time-domain processing (TDP) and frequency-domain processing (FDP) are two basic parameter estimation methods used in the weather radar field. TDP is widely used in operational weather radars because of its high efficiency and robustness; however, it must be assumed that the received signal has a symmetrical or Gaussian power spectrum, which limits its performance. FDP does not require assumptions about the power spectrum model and has a seamless connection to spectrum analysis; however, its application performance has not been fully validated to ensure its robustness in an operational environment. In this study, we introduce several technical details of FDP, including window function selection, aliasing correction, and noise correction. Additionally, we evaluate the performance of FDP and compare the performance of FDP and TDP based on simulated and measured weather in-phase/quadrature (I/Q) data. The results show that FDP has potential for operational applications; however, further improvements are required, e.g., windowing processing for signals mixed with severe clutter.
(This article belongs to the Special Issue Synergetic Remote Sensing of Clouds and Precipitation II)
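The processing chain discussed in the abstract (window function selection, noise correction, and aliasing handling before moment estimation) can be sketched as spectral-moment computation on one dwell of I/Q samples. This is a minimal illustration under stated assumptions (Hamming window, noise-floor subtraction, and aliasing handled by re-centering the spectrum on its peak), not the authors' implementation.

```python
import numpy as np

def fdp_moments(iq, prf, wavelength, noise=0.0):
    """Frequency-domain estimates of power, mean radial velocity, and
    spectrum width from one dwell of I/Q samples (illustrative sketch)."""
    M = len(iq)
    w = np.hamming(M)
    w = w / np.sqrt(np.mean(w ** 2))                 # unit-power window
    S = np.abs(np.fft.fftshift(np.fft.fft(iq * w))) ** 2 / M
    S = np.maximum(S - noise, 0.0)                   # noise-floor correction
    v_nyq = wavelength * prf / 4.0                   # Nyquist velocity
    dv = 2.0 * v_nyq / M                             # velocity resolution
    k_peak = int(np.argmax(S))
    S = np.roll(S, M // 2 - k_peak)                  # re-center spectrum on its peak
    v = (np.arange(M) - M // 2) * dv                 # velocity axis relative to the peak
    power = S.sum()                                  # zeroth moment
    v_rel = (v * S).sum() / power                    # first moment (relative)
    width = np.sqrt(((v - v_rel) ** 2 * S).sum() / power)  # second moment
    v_mean = (k_peak - M // 2) * dv + v_rel          # undo the re-centering
    v_mean = (v_mean + v_nyq) % (2 * v_nyq) - v_nyq  # wrap into [-v_nyq, v_nyq)
    return power, v_mean, width
```

By contrast, TDP's pulse-pair estimators would work on lag-1 autocorrelations of the same samples rather than on the spectrum.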
Graphical abstract
Figure 1. Schematic diagram of the weather radar principle presented in the form of signal/data flow.
Figure 2. Bias of FDP for convective precipitation for different M and window functions. (a) P̂_h; (b) v̂_r; (c) σ̂_v; (d) Ẑ_DR; (e) φ̂_DP; and (f) ρ̂_HV.
Figure 3. SD of FDP for convective precipitation for different M and window functions. Panels (a–f) as in Figure 2.
Figure 4. Bias of FDP for stratiform precipitation for different M and window functions. Panels (a–f) as in Figure 2.
Figure 5. SD of FDP for stratiform precipitation for different M and window functions. Panels (a–f) as in Figure 2.
Figure 6. Difference between the P_h estimates using the normalized Hamming window and the P_h estimates using the rectangular window. (a) For different σ_v; (b) for different M.
Figure 7. Schematic diagram of the power spectrum for a typical weather signal. (a) Response to the motion of weather targets; (b) distribution splitting during spectrum aliasing.
Figure 8. Difference between the Doppler estimates of the simulated I/Q data and the input of the simulation for different v_r. (a) For Δv_r and (b) for Δσ_v. The blue, orange, and yellow circles denote differences before correction, after correction using CS, and after correction using CP, respectively.
Figure 9. Bias of FDP for different SNR and noise processing methods. Panels (a–f) as in Figure 2.
Figure 10. SD of FDP for different SNR and noise processing methods. Panels (a–f) as in Figure 2.
Figure 11. P_h estimates under different noise processing methods when the SNR is 0 dB.
Figure 12. Bias and SD of Doppler estimates based on FDP and TDP for different σ_v, SNR, and M. (a–d) Bias and SD of v̂_r and σ̂_v for different σ_v; (e–h) the same for different SNR; (i–l) the same for different M.
Figure 13. Typical non-Gaussian power spectrum signals based on simulations. (a) Asymmetric power spectrum signal; (b) bimodal power spectrum signal.
Figure 14. Distribution of the Doppler estimates of TDP relative to those of FDP for the two non-Gaussian power spectrum signals. (a) Δv_r; (b) Δσ_v.
Figure 15. A severe storm observed by CSSR using the range-height indicator mode at 0728 UTC on 9 August 2023. (a) Z_H; (b) v_r; (c) σ_v; (d) Δv_r; and (e) Δσ_v. The red dotted lines highlight the region with large σ_v.
Figure 16. Observed non-Gaussian power spectrum signals. (a) Asymmetric power spectrum signal; (b) multi-peak power spectrum signal. The red dotted line is an auxiliary line indicating the tilt at the top of the power spectrum; three semi-ellipses with different colors assist in identifying the positions of the peaks.
18 pages, 4813 KiB  
Article
Grassland Chlorophyll Content Estimation from Drone Hyperspectral Images Combined with Fractional-Order Derivative
by Aiwu Zhang, Shengnan Yin, Juan Wang, Nianpeng He, Shatuo Chai and Haiyang Pang
Remote Sens. 2023, 15(23), 5623; https://doi.org/10.3390/rs15235623 - 4 Dec 2023
Cited by 5 | Viewed by 1846
Abstract
Chlorophyll plays a critical role in assessing the photosynthetic capacity and health of grasslands. However, existing studies on the hyperspectral inversion of chlorophyll have mainly focused on field crops, leading to limited accuracy when applied to natural grasslands due to their complex canopy structures and species diversity. This study addresses this challenge by extrapolating the measured leaf chlorophyll to the canopy level using the green vegetation coverage approach. Additionally, fractional-order derivative (FOD) methods are employed to enhance the sensitivity of hyperspectral data to chlorophyll. Several FOD spectral indices are developed to minimize interference from factors such as bare soil and hay, resulting in improved chlorophyll estimation accuracy. The study utilizes partial least squares regression (PLSR) and support vector machine regression (SVR) to construct inversion models based on full-band FOD, two-band FOD spectral indices, and their combination. Through comparative analysis, the optimal model for estimating grassland chlorophyll content is determined, yielding an R² of 0.808, an RMSE of 1.720, and an RPD of 2.347.
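The fractional-order derivative at the heart of this approach is commonly computed with the Grünwald–Letnikov approximation, whose weights follow a one-line recursion. Below is a minimal sketch assuming uniformly spaced bands; it illustrates the general FOD formula rather than the paper's exact preprocessing, and the 25-band spectrum is a placeholder.

```python
import numpy as np

def fractional_derivative(spectrum, alpha, h=1.0):
    """Grünwald–Letnikov fractional-order derivative of a 1-D spectrum.
    alpha: derivative order (0 returns the spectrum, 1 the first difference);
    h: band spacing, assumed uniform."""
    n = len(spectrum)
    w = np.ones(n)
    for k in range(1, n):                      # recursive GL weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.zeros(n)
    for i in range(n):
        # backward sum over bands i, i-1, ..., 0
        out[i] = np.dot(w[: i + 1], spectrum[i::-1]) / h ** alpha
    return out

# e.g., scan orders 0 to 2 in steps of 0.1, matching the ranges in the figures
refl = np.random.rand(25)                      # placeholder 25-band spectrum
fods = {round(a, 1): fractional_derivative(refl, a) for a in np.arange(0, 2.1, 0.1)}
```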
Graphical abstract
Figure 1. Sample distribution of the T5, T6, and T7 study areas.
Figure 2. Representative photographs of the study area samples ((a–c) correspond to T5, T6, and T7, respectively).
Figure 3. (a) Drone hyperspectral reflectance curves of grass in T5, T6, and T7; (b) correlation between the 25 bands and grass canopy chlorophyll content.
Figure 4. Correlation of FOD spectra with chlorophyll content: (a) orders 0–0.5, (b) orders 0.6–1, (c) orders 1.1–1.5, and (d) orders 1.6–2.
Figure 5. Distribution of correlation coefficients between two-band FOD spectral indices and chlorophyll content.
Figure 6. Coefficients of variation of full-band modeling R² of different orders for PLSR and SVR.
Figure 7. Scatter plot of the two-band vegetation index chlorophyll estimation model for PLSR and SVR. The green dots represent fitted values of measured against predicted values.
Figure 8. Scatter plot of the model for estimating chlorophyll in grasses combining spectral features with exponential features for PLSR and SVR. The green dots represent fitted values of measured against predicted values.
Figure 9. Chlorophyll inversion results in the T5, T6, and T7 sampling areas.
19 pages, 24180 KiB  
Article
Comparison of GPM IMERG Version 06 Final Run Products and Its Latest Version 07 Precipitation Products across Scales: Similarities, Differences and Improvements
by Yaji Wang, Zhi Li, Lei Gao, Yong Zhong and Xinhua Peng
Remote Sens. 2023, 15(23), 5622; https://doi.org/10.3390/rs15235622 - 4 Dec 2023
Cited by 4 | Viewed by 1918
Abstract
Precipitation is an essential element in earth system research, which greatly benefits from the emergence of Satellite Precipitation Products (SPPs). Therefore, assessment of the accuracy of SPPs is necessary both scientifically and practically. The Integrated Multi-Satellite Retrievals for GPM (IMERG) is one of the most widely used SPPs in the scientific community. However, there is a lack of comprehensive evaluation of the performance of the newly released IMERG Version 07, which is essential for determining its effectiveness and reliability in precipitation estimation. In this study, we compare the IMERG V07 Final Run (V07_FR) with its predecessor IMERG V06_FR across scales from January 2016 to December 2020 over the globe (cross-comparing their similarities and differences) and in a focused study over mainland China (validated against 2481 rain gauges). The results show that: (1) Globally, the annual mean precipitation of V07_FR increases by 2.2% compared to V06_FR over land but decreases by 5.8% over the ocean. The two SPPs further exhibit great differences, as indicated by the Critical Success Index (CSI = 0.64) and the Root Mean Squared Difference (RMSD = 3.42 mm/day) when comparing V06_FR with V07_FR. (2) Over mainland China, V06_FR and V07_FR detect comparable precipitation annually. However, the Probability of Detection (POD) improves by 5.0%, and the RMSD decreases by 3.7% when analyzed by grid cells. Further, the POD (+0% to +6.1%) and CSI (+0% to +8.8%) increase and the RMSD (−11.1% to 0%) decreases regardless of the sub-region. (3) Under extreme rainfall rates, V07_FR measures 4.5% lower extreme rainfall rates than V06_FR across mainland China, but it tends to detect extreme precipitation more accurately at both daily and event scales. These results can be of value for further SPP development, application in climatological and hydrological modeling, and risk analysis.
(This article belongs to the Section Environmental Remote Sensing)
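The scores quoted above (POD, CSI, RMSD, and companions) follow standard contingency-table and difference definitions; a minimal sketch, with the 1 mm/day rain/no-rain threshold being an assumption:

```python
import numpy as np

def compare_precip(ref, est, thresh=1.0):
    """Contingency and difference metrics for cross-comparing two daily
    precipitation series (mm/day); thresh is the rain/no-rain cutoff."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    hits   = np.sum((ref >= thresh) & (est >= thresh))
    misses = np.sum((ref >= thresh) & (est < thresh))
    false  = np.sum((ref < thresh) & (est >= thresh))
    pod = hits / (hits + misses)                        # Probability of Detection
    far = false / (hits + false)                        # False Alarm Ratio
    csi = hits / (hits + misses + false)                # Critical Success Index
    rb   = (est.sum() - ref.sum()) / ref.sum()          # Relative Bias
    mad  = np.mean(np.abs(est - ref))                   # Mean Absolute Difference
    rmsd = np.sqrt(np.mean((est - ref) ** 2))           # Root Mean Squared Difference
    return dict(POD=pod, FAR=far, CSI=csi, RB=rb, MAD=mad, RMSD=rmsd)
```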
Figure 1. The eight climatic sub-regions across China (a), the ground gauge distribution (b), and the southeast coastline area of mainland China (c). Partly cited from Shen et al. [32].
Figure 2. Flowchart of the IMERG intercomparison across scales.
Figure 3. Spatial distribution of global annual mean precipitation estimated by IMERG V06_FR (a), V07_FR (b), and their differences (c).
Figure 4. Maps of the statistical metrics of the differences between IMERG V06_FR and V07_FR: POD (a), FAR (b), CSI (c), RB (d), MAD (e), RMSD (f), normalized RMSD over land (g), and normalized RMSD over ocean (h).
Figure 5. Boxplots of the Mean Absolute Difference (MAD) between IMERG V06_FR and V07_FR under different ranges of daily precipitation rates (a) and regions of different wetness (b).
Figure 6. National evaluation of gauges (a), IMERG V06_FR (b), V07_FR (c), and the differences between V07_FR and V06_FR (d) in annual mean precipitation estimation.
Figure 7. Probability density function (PDF) of gauge, IMERG V06_FR, and IMERG V07_FR daily rainfall intensities from January 2016 to December 2020.
Figure 8. Spatial distribution of statistical metrics of IMERG V06_FR (a,d,g,j,m,p), IMERG V07_FR (b,e,h,k,n,q), and their differences (c,f,i,l,o,r).
Figure 9. Taylor diagrams consisting of the correlation coefficient, normalized standard deviation, and normalized RMSD for daily precipitation estimates from IMERG V06_FR and V07_FR during all periods (a) and distinct hydrological years (b–f) over mainland China.
Figure 10. Spatial distribution of statistical metrics for IMERG V06_FR (a,d,g,j,m,p), IMERG V07_FR (b,e,h,k,n,q), and their differences (c,f,i,l,o,r) under extreme precipitation rates.
23 pages, 3668 KiB  
Article
Evaluating Feature Selection Methods and Machine Learning Algorithms for Mapping Mangrove Forests Using Optical and Synthetic Aperture Radar Data
by Zhen Shen, Jing Miao, Junjie Wang, Demei Zhao, Aowei Tang and Jianing Zhen
Remote Sens. 2023, 15(23), 5621; https://doi.org/10.3390/rs15235621 - 4 Dec 2023
Cited by 3 | Viewed by 1944
Abstract
Mangrove forests, mostly found in the intertidal zone, are among the highest-productivity ecosystems and have great ecological and economic value. The accurate mapping of mangrove forests is essential for the scientific management and restoration of mangrove ecosystems. However, rapid and accurate mapping of mangrove forests remains challenging due to the complexity of the forests themselves and their environments. Utilizing multi-source remote sensing data is an effective approach to address this challenge. Feature extraction and selection, as well as the selection of classification models, are crucial for accurate mangrove mapping using multi-source remote sensing data. This study constructs multi-source feature sets based on optical (Sentinel-2) and SAR (synthetic aperture radar) (C-band: Sentinel-1; L-band: ALOS-2) remote sensing data, aiming to compare the impact of three feature selection methods (RFS, random forest; ERT, extremely randomized tree; MIC, maximal information coefficient) and four machine learning algorithms (DT, decision tree; RF, random forest; XGBoost, extreme gradient boosting; LightGBM, light gradient-boosting machine) on classification accuracy, identify sensitive feature variables that contribute to mangrove mapping, and formulate a classification framework for accurately recognizing mangrove forests. The experimental results demonstrated that the feature combination selected via the ERT method yielded higher accuracy with fewer features than the other methods. Among the feature combinations, the visible bands, shortwave infrared bands, and the vegetation indices constructed from these bands contributed the most to the classification accuracy. The classification performance of optical data was significantly better than that of SAR data. The combination of optical and SAR data improved the accuracy of mangrove mapping to a certain extent (0.33% to 4.67%), which is essential for mangrove mapping research over larger areas. The XGBoost classification model performed optimally, with the highest overall accuracy of 95.00% among all the classification models. The results show that combining optical and SAR remote sensing data with the ERT feature selection method and the XGBoost classification model has great potential for accurate mangrove mapping at a regional scale, which is important for mangrove restoration and protection and provides a reliable database for scientific mangrove management.
(This article belongs to the Special Issue GIS and Remote Sensing in Ocean and Coastal Ecology)
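The winning combination reported above (ERT feature selection feeding an XGBoost classifier) can be sketched with scikit-learn and xgboost: rank candidate features by extremely-randomized-tree importance, retain a top subset, and classify. The random data, the label count, and the choice to keep 15 features are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# X: stacked per-pixel features (optical bands, indices, SAR backscatter);
# y: integer land-cover labels (mangrove, terrestrial vegetation, ...).
X = np.random.rand(2000, 40)                  # placeholder feature matrix
y = np.random.randint(0, 8, 2000)             # placeholder labels (8 classes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features with an extremely randomized tree (ERT) ensemble; keep the top 15.
ert = ExtraTreesClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
top = np.argsort(ert.feature_importances_)[::-1][:15]

clf = XGBClassifier(n_estimators=400, learning_rate=0.1)
clf.fit(X_tr[:, top], y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te[:, top])))
```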
Graphical abstract
Figure 1. Workflow for mangrove extraction.
Figure 2. Location of the study area: (a) location of the study area in China; (b) location of the study area in Zhanjiang City, Guangdong Province; (c) spatial distribution of sample points and the Sentinel-2B image of the study area (R: band 4, G: band 3, B: band 2). (d–k) Close-ups of the eight categories of land use; the two subfigures in each of (d–k) show the same category in different regions of (c).
Figure 3. Ranking of the importance scores and mutual information values of multispectral features for the three feature selection methods: (a) importance scores of RFS, (b) importance scores of ERT, and (c) mutual information values of MIC.
Figure 4. Ranking of the importance scores and mutual information values of polarimetric SAR features for the three feature selection methods: (a,d) importance scores of RFS, (b,e) importance scores of ERT, and (c,f) mutual information values of MIC.
Figure 5. Overall accuracy for the different data sources in this study. The red dotted line indicates an acceptable accuracy of 85%. (a) S2 optical data, (b) S1 C-band SAR data, (c) A2 L-band SAR data, (d) S2 optical and S1 SAR data, and (e) S2 optical and A2 SAR data.
Figure 6. Ranking of the importance scores for combinations of multispectral and dual-polarized SAR features: (a) combination of S2 and S1 features; (b) combination of S2 and A2 features.
Figure 7. Heat maps of UA and PA for the combinations of multispectral and dual-polarized SAR data (MG: mangrove forest, TV: terrestrial vegetation, CL: cultivated land, BL: building land, BE: culture pond, WB: water body, TF: tidal flat). (a) S2 + S1 with the RFS method, (b) S2 + A2 with RFS, (c) S2 + S1 with ERT, (d) S2 + A2 with ERT, (e) S2 + S1 with MIC, and (f) S2 + A2 with MIC.
Figure 8. Classification results of the two schemes based on the four machine learning algorithms. (a) SC scheme with DT, (b) SC with RF, (c) SC with XGBoost, (d) SC with LightGBM, (e) SL with DT, (f) SL with RF, (g) SL with XGBoost, and (h) SL with LightGBM.
25 pages, 5631 KiB  
Article
Learn by Yourself: A Feature-Augmented Self-Distillation Convolutional Neural Network for Remote Sensing Scene Image Classification
by Cuiping Shi, Mengxiang Ding, Liguo Wang and Haizhu Pan
Remote Sens. 2023, 15(23), 5620; https://doi.org/10.3390/rs15235620 - 4 Dec 2023
Cited by 3 | Viewed by 1503
Abstract
In recent years, with the rapid development of deep learning technology, great progress has been made in remote sensing scene image classification. Compared with natural images, remote sensing scene images are usually more complex, with high inter-class similarity and large intra-class differences, which makes it difficult for commonly used networks to effectively learn their features. In addition, most existing methods adopt hard labels to supervise the network model, which makes the model prone to losing fine-grained information about ground objects. To solve these problems, a feature-augmented self-distilled convolutional neural network (FASDNet) is proposed. First, ResNet34 is adopted as the backbone network to extract multi-level features of images. Next, a feature augmentation pyramid module (FAPM) is designed to extract and fuse multi-level feature information. Then, auxiliary branches are constructed to provide additional supervision. Self-distillation is applied between the feature augmentation pyramid module and the backbone network, as well as between the backbone network and the auxiliary branches. Finally, the proposed model is jointly supervised using a feature distillation loss, a logits distillation loss, and a cross-entropy loss. Extensive experiments are conducted on four widely used remote sensing scene image datasets, and the results show that the proposed method is superior to some state-of-the-art classification methods.
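The joint supervision described in the abstract (cross-entropy plus logits distillation plus feature distillation) can be sketched as one PyTorch loss in which the deepest classifier teaches the auxiliary branches. The temperature and loss weights are illustrative assumptions, and the exact branch wiring of FASDNet may differ.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(logits_main, logits_aux, feat_main, feat_aux,
                           labels, T=3.0, alpha=0.3, beta=0.1):
    """Cross-entropy + logits distillation + feature distillation (sketch)."""
    # hard-label supervision on both the deepest and the auxiliary classifier
    ce = F.cross_entropy(logits_main, labels) + F.cross_entropy(logits_aux, labels)
    # soften both logits with temperature T; the deepest branch is the teacher
    kd = F.kl_div(F.log_softmax(logits_aux / T, dim=1),
                  F.softmax(logits_main.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    # align intermediate features with the (detached) teacher features
    fd = F.mse_loss(feat_aux, feat_main.detach())
    return ce + alpha * kd + beta * fd
```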
Figure 1. The remote sensing scene image above is manually given the semantic label "bridge". There are multiple different land covers besides the bridge, including "river", "forest", "car", and "residential". If only bridges are considered in the feature learning process, the content corresponding to the other semantics will reduce the discriminative degree of the learned features.
Figure 2. Histogram of the network's output soft labels.
Figure 3. The overall framework of the proposed FASDNet.
Figure 4. The overall architecture of the FAPM.
Figure 5. The bottleneck convolution structure of the auxiliary classifier.
Figure 6. Randomly selected sample images from the four datasets.
Figure 7. Confusion matrix on the UC-Merced dataset with an 80% training ratio.
Figure 8. Confusion matrix obtained under the 50% training ratio on the RSSCN7 dataset.
Figure 9. Confusion matrix obtained under the 50% training ratio on the AID.
Figure 10. Confusion matrix obtained under the 20% training ratio on the NWPU dataset.
Figure 11. Heat maps obtained using different methods: the first row, using the backbone network combined with the distillation method; the second row, using only the backbone network.
25 pages, 19930 KiB  
Article
Fusing Ascending and Descending Time-Series SAR Images with Dual-Polarized Pixel Attention UNet for Landslide Recognition
by Bin Pan and Xianjian Shi
Remote Sens. 2023, 15(23), 5619; https://doi.org/10.3390/rs15235619 - 4 Dec 2023
Cited by 2 | Viewed by 1894
Abstract
Conducting landslide recognition research holds notable practical significance for disaster management. In response to the challenges posed by noise, information redundancy, and geometric distortions in single-orbit SAR imagery during landslide recognition, this study proposes a dual-polarization SAR image landslide recognition approach that combines ascending and descending time-series information while considering polarization channel details to enhance the accuracy of landslide identification. The results demonstrate notable improvements in landslide recognition accuracy using the ascending and descending fusion strategy compared to single-orbit data, with F1 scores increasing by 5.19% and 8.82% in Hokkaido and Papua New Guinea, respectively. Additionally, utilizing time-series imagery in Group 2 as opposed to using only pre- and post-event images in Group 4 leads to F1 score improvements of 6.94% and 9.23% in Hokkaido and Papua New Guinea, respectively, confirming the effectiveness of time-series information in enhancing landslide recognition accuracy. Furthermore, employing dual-polarization strategies in Group 4 relative to single-polarization Groups 5 and 6 results in peak F1 score increases of 7.46% and 12.07% in Hokkaido and Papua New Guinea, respectively, demonstrating the feasibility of dual-polarization strategies. However, due to limitations in Sentinel-1 imagery resolution and terrain complexities, omissions and false alarms may arise near landslide edges. The improvements achieved in this study hold critical implications for landslide disaster assessment and provide valuable insights for further enhancing landslide recognition capabilities.
(This article belongs to the Topic Landslides and Natural Resources)
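A pixel attention gate of the generic kind named in the title can be sketched in a few lines of PyTorch: a 1×1 convolution plus a sigmoid yields per-pixel, per-channel weights that re-scale the fused ascending/descending, dual-polarized feature maps. This is a common pattern, not necessarily the paper's exact module.

```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Per-pixel attention gate (generic sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),  # per-pixel channel mixing
            nn.Sigmoid(),                                  # weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)  # re-weight each pixel/channel of the input

# e.g., ascending + descending VV/VH time-series stacks fused along channels
fused = torch.randn(1, 16, 128, 128)   # placeholder fused input
out = PixelAttention(16)(fused)
```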
Figure 1. Processing steps from SAR amplitude to backscattering coefficient.
Figure 2. Schematic illustration of sloping terrain imaging by an ascending-orbit SAR satellite: (a) orientation parameters of the SAR satellite and terrain aspect; (b) geometric relationship between the SAR satellite and the Earth's surface.
Figure 3. The general structure of DPPA-UNet.
Figure 4. Pixel attention mechanism.
Figure 5. Learning rate.
Figure 6. Overview of the study area: (a) geographic location of the study area; (b) location of the Mw 6.6 earthquake in Chuzen, Hokkaido, Japan; (c) location of the Mw 7.5 earthquake in Papua New Guinea.
Figure 7. Landslide identification results for the Hokkaido training region: (a) DEM; (b) Sentinel-2 optical imagery (cloud-removed and stitched from 12 September 2018 to 12 September 2019); (c) natural landslide; (d) Group 1; (e) Group 2; (f) Group 3.
Figure 8. Training score statistics for the Hokkaido events: (a) distribution of the evaluation metrics for the various combinations; (b) discrepancy among the group statistics.
Figure 9. Landslide identification results for the Hokkaido validation area, with panels as in Figure 7.
Figure 10. Landslide identification results for the training area in Papua New Guinea: (a) DEM; (b) Sentinel-2 optical imagery (cloud-removed and stitched from 12 September 2018 to 12 September 2019); (c) natural landslide; (d) Group 1; (e) Group 2; (f) Group 3.
Figure 11. Training score statistics for the Papua New Guinea events: (a) distribution of the evaluation metrics for the various combinations; (b) discrepancy among the groups.
Figure 12. Landslide identification results for the validation area in Papua New Guinea: (a) DEM; (b) Sentinel-2 optical imagery (cloud-removed and stitched from 26 February 2018 to 30 December 2018); (c) natural landslide; (d) Group 1; (e) Group 2; (f) Group 3.
Figure 13. (a–j) Comparison of the Hokkaido validation-area identification results with SAR images. Columns, left to right: natural landslide; SAR fusion image; Group 1 result; ascending SAR image; Group 2 identification; descending SAR image; Group 3 result. The short blue arrow represents the satellite line of sight; the long blue arrow represents the satellite flight direction.
Figure 14. (a–j) Comparison of the Papua New Guinea validation-area identification results with SAR imagery, with columns as in Figure 13.
17 pages, 10155 KiB  
Article
Delineation of Backfill Mining Influence Range Based on Coal Mining Subsidence Principle and Interferometric Synthetic Aperture Radar
by Yafei Yuan, Meinan Zheng, Huaizhan Li, Yu Chen, Guangli Guo, Zhe Su and Wenqi Huo
Remote Sens. 2023, 15(23), 5618; https://doi.org/10.3390/rs15235618 - 4 Dec 2023
Cited by 2 | Viewed by 1105
Abstract
The present study explores a three-dimensional deformation monitoring method for the better delineation of the surface subsidence range in coal mining by combining the mining subsidence law with the geometries of SAR imaging. The mining surface subsidence of the filling working face in Shandong, China, from March 2018 to June 2021, was obtained from 97 Sentinel-1A images using the small baseline subset (SBAS) technique and the proposed method, respectively. Comparison with the ground leveling of 46 observation stations shows that the average standard deviation of the SBAS monitoring results is 10.3 mm; with this deviation, it is difficult to satisfy the requirements for delimiting the mining impact area. Meanwhile, the average standard deviation of the vertical deformation obtained by the proposed method is 6.2 mm. Compared to the SBAS monitoring accuracy, the monitoring accuracy of the proposed method is increased by 39.8%; thus, it meets the requirements for the precise delineation of the surface subsidence range for backfill mining.
(This article belongs to the Section Environmental Remote Sensing)
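Each InSAR line-of-sight (LOS) measurement is the projection of the 3D surface displacement onto the satellite's unit look vector, so stacking ascending and descending geometries with one additional constraint allows a least-squares recovery of the components. The sketch below uses illustrative Sentinel-1 angles and pins the north component to zero for demonstration; sign conventions vary by convention, and the paper's actual constraint comes from the mining subsidence law rather than this placeholder.

```python
import numpy as np

def los_unit_vector(incidence_deg, heading_deg):
    """Unit ground-to-satellite vector (east, north, up) for a right-looking
    SAR with the given incidence angle and platform heading (a sketch;
    sign conventions differ between processing packages)."""
    inc, head = np.radians(incidence_deg), np.radians(heading_deg)
    return np.array([-np.sin(inc) * np.cos(head),
                      np.sin(inc) * np.sin(head),
                      np.cos(inc)])

# one ascending + one descending observation of the same pixel
A = np.vstack([los_unit_vector(39.2, -12.0),     # ascending (illustrative angles)
               los_unit_vector(39.2, -168.0)])   # descending
d_los = np.array([-0.018, -0.011])               # observed LOS displacements (m)

# two equations, three unknowns: add a third constraint (here, north = 0,
# a placeholder for the subsidence-model prior) and solve least squares
A = np.vstack([A, [0.0, 1.0, 0.0]])
d = np.append(d_los, 0.0)
d_enu, *_ = np.linalg.lstsq(A, d, rcond=None)
print("east, north, up (m):", d_enu)
```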
Figure 1. Time-series InSAR data processing flow.
Figure 2. Three-dimensional spatial decomposition model of satellite line-of-sight deformation.
Figure 3. Location of the study area.
Figure 4. SBAS monitoring results of surface subsidence in C9 and C10.
Figure 5. Comparison between InSAR monitoring results and leveling data.
Figure 6. Inversion results of the 3D deformation. The red rectangle represents the mined area; the black rectangle represents the area to be mined.
Figure 7. Comparison of vertical deformation and horizontal data after 3D decomposition.
Figure 8. Delineation of the surface influence area of C9 and C10 filling mining.
21 pages, 4327 KiB  
Article
Research on the Extraction Method Comparison and Spatial-Temporal Pattern Evolution for the Built-Up Area of Hefei Based on Multi-Source Data Fusion
by Jianwei Huang, Chaoqun Chu, Lu Wang, Zhaofu Wu, Chunju Zhang, Jun Geng, Yongchao Zhu and Min Yu
Remote Sens. 2023, 15(23), 5617; https://doi.org/10.3390/rs15235617 - 4 Dec 2023
Viewed by 2946
Abstract
With the development of urban built-up areas, accurately extracting the urban built-up area and its spatiotemporal pattern evolution trends is valuable for understanding urban sprawl and human activities. Considering the coarse spatial resolution of nighttime light (NTL) data and the inaccurate regional boundary reflection of point of interest (POI) data, land surface temperature (LST) data were introduced. A composite index method (LJ–POI–LST) based on the positive relationship among the NTL, POI, and LST data is proposed for extracting the boundary and reflecting the spatial-temporal evolution of urban built-up areas in Hefei from 1993 to 2018. This paper yielded the following results: (1) There was a spatial-temporal pattern evolution from north-east to south-west, with a primary quadrant orientation of IV, V, and VI, in the Hefei urban area from 1993 to 2018. The medium-speed expansion rate, with an average value of 14.3 km2/a, was much faster than the population growth rate; the elasticity expansion coefficient of urbanization of 1.93 indicates an incongruous growth rate between the urban area and the population, leading to an uncoordinated and unreasonable development trend in Hefei City. (2) The detailed extraction accuracy for urban-rural junctions, urban forest parks, and other error-prone areas was improved, and the landscape connectivity and fragmentation were optimized with the LJ–POI–LST composite index, based on a high-resolution remote sensing validation image of the internal spatial structure. (3) Compared to conventional NTL data and the LJ–POI index, the LJ–POI–LST composite index method achieved an extraction accuracy greater than 85%, with similar statistical and landscape pattern index results. This paper provides a suitable method, exploiting the positive relationship among the LST, NTL, and POI data, for accurately extracting the boundary and reflecting the spatial-temporal evolution of urban built-up areas from the fused data.
(This article belongs to the Special Issue Application of Photogrammetry and Remote Sensing in Urban Areas)
Figure 1. The geographical location and boundary of (a) the People’s Republic of China; (b) Anhui Province; (c) Hefei City; (d) the built-up area of Hefei. The image data are available on the National Platform for Common Geospatial Information Services at https://www.tianditu.gov.cn/?tdsourcetag=s_pcqq_aiomsg, accessed on 13 November 2023.
Figure 2. The process of the built-up area extraction method, integrating NTL, LST, and POI data.
Figure 3. The flowchart of this research.
Figure 4. Compactness and fractal dimension of the Hefei built-up area.
Figure 5. SDE distribution in the Hefei built-up area.
Figure 6. Center of gravity migration map of the Hefei built-up area (pink: 1993; blue: 1997; green: 2002; yellow: 2007 & 2012 (overlap); orange: 2018).
Figure 7. Results of kernel density analysis with different bandwidths: (a) 500 m; (b) 1000 m; (c) 1500 m; (d) 2000 m; (e) 3500 m; (f) 6000 m.
Figure 8. Built-up area images extracted by different methods: (a) Luojia 1-01 NTL images; (b) LJ–POI composite index; (c) LJ–POI–LST composite index.
Figure 9. Comparison of extraction results of built-up areas in Hefei City: (a) Luojia 1-01; (b) LJ–POI; (c) LJ–POI–LST.
Figure 10. Comparison of high-resolution image extraction results: (A) Shushan Forest Park, which does not belong to the urban built-up area; (B) the eco-agricultural tourism spot in Dawei; (C) the urban–rural fringe areas in Yaohai.
26 pages, 37177 KiB  
Article
An Integrated Approach for 3D Solar Potential Assessment at the City Scale
by Hassan Waqas, Yuhong Jiang, Jianga Shang, Iqra Munir and Fahad Ullah Khan
Remote Sens. 2023, 15(23), 5616; https://doi.org/10.3390/rs15235616 - 3 Dec 2023
Cited by 2 | Viewed by 2842
Abstract
The use of solar energy has shown the fastest global growth of all renewable energy sources. Careful evaluation is required to select optimal locations for the installation of photovoltaics (PV) because their effectiveness is strongly reliant on exposure to solar irradiation. Assessing the shadows cast by nearby buildings and vegetation is essential, especially at the city scale. Due to urban complexity, conventional methods using Digital Surface Models (DSM) overestimate solar irradiation in dense urban environments. To provide further insights into this dilemma, a new modeling technique was developed for integrated 3D city modeling and solar potential assessment on building roofs using light detection and ranging (LiDAR) data. The methodology used hotspot analysis to validate the workflow in both with-site and without-site contexts (e.g., trees that shade small buildings). Field testing was conducted, covering a total area of 4975 square miles and 10,489 existing buildings. The results demonstrate a considerable impact of large, dense trees on the solar irradiation received by smaller buildings. Considering the site context, a mean annual solar estimate of 99.97 kWh/m2/year was determined. Without considering the site context, this value increased by 9.3% (as a percentage of total rooftops) to 109.17 kWh/m2/year, with a peak in July and troughs in December and January. The study suggests that both factors have a substantial impact on solar potential estimations, emphasizing the importance of carefully considering the shadowing effect during PV panel installation. The research findings reveal that 1517 buildings in the downtown area of Austin have high estimated radiation ranging from 4.7 to 6.9 kWh/m2/day, providing valuable insights for the identification of optimal locations highly suitable for PV installation. Additionally, this methodology can be generalized to other cities, addressing the broader demand for renewable energy solutions. Full article
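The final suitability ranking reduces to binning per-rooftop irradiation statistics. A toy sketch follows, reusing the 4.7–6.9 kWh/m²/day "highly suitable" band reported for downtown Austin; the lower class boundary of 3.5 is an assumption for illustration only.

```python
import numpy as np

# Hypothetical per-rooftop mean daily solar irradiation (kWh/m^2/day),
# e.g., zonal statistics of a solar raster over building footprints.
daily_irradiation = np.array([2.1, 4.9, 5.6, 3.8, 6.7, 4.2])

# The 4.7-6.9 kWh/m^2/day range for "highly suitable" follows the paper;
# the "moderate" cut-off at 3.5 is an assumed illustration value.
labels = np.select(
    [daily_irradiation >= 4.7, daily_irradiation >= 3.5],
    ["highly suitable", "moderately suitable"],
    default="less suitable",
)
print(dict(zip(range(len(labels)), labels)))
```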
Graphical abstract
Figure 1. Location map of the study area.
Figure 2. Overview of the developed methodology.
Figure 3. (a) Raster DSM containing three different building segments. (b) General 3D building extraction process in ArcGIS Pro 3.2 (as indicated by the yellow shading, the building footprint has one segment). (c) Building footprint segmentation using Mask R-CNN (as indicated by the yellow shading, the building footprints have three segments). (d) Final 3D building representation in LOD2.
Figure 4. LiDAR vegetation class, vector to 3D volumetric trees: (a) LiDAR point cloud class, (b) 3D tree points and minimum bounding geometry, (c) 3D volumetric trees.
Figure 5. Viewshed analysis to solar raster map.
Figure 6. (a) Site without context: Digital Surface Model; (b) site with context: LOD2 buildings and 3D volumetric trees; (c) high-resolution solar raster mapping on rooftops with solar panels.
Figure 7. Whole-year solar irradiation estimation with site context.
Figure 8. Whole-year solar irradiation estimation without site context.
Figure 9. Annual solar irradiation estimation with and without site context.
Figure 10. Tree hotspots and coldspots for the study area.
Figure 11. Effect of site context/trees on GSI hotspots and coldspots.
Figure 12. Optimal locations for PV installation.
Figure 13. AC vents on rooftops in the downtown Austin area (the highlighted yellow box indicates vents that may affect the efficiency of solar panel installation); courtesy of Google Maps.
18 pages, 5922 KiB  
Article
Weed–Crop Segmentation in Drone Images with a Novel Encoder–Decoder Framework Enhanced via Attention Modules
by Sultan Daud Khan, Saleh Basalamah and Ahmed Lbath
Remote Sens. 2023, 15(23), 5615; https://doi.org/10.3390/rs15235615 - 3 Dec 2023
Cited by 6 | Viewed by 1797
Abstract
The rapid expansion of the world’s population has increased the demand for agricultural products, necessitating improved crop yields. To enhance crop yields, it is imperative to control weeds. Traditionally, weed control predominantly relied on herbicides; however, the indiscriminate application of herbicides presents potential hazards to both crop health and productivity. Fortunately, the advent of cutting-edge technologies such as unmanned aerial vehicles (UAVs) and computer vision has provided automated and efficient solutions for weed control. These approaches leverage drone images to detect and identify weeds with a certain level of accuracy. Nevertheless, the identification of weeds in drone images poses significant challenges attributed to factors like occlusion, variations in color and texture, and disparities in scale. The traditional image processing techniques and deep learning approaches commonly employed in existing methods have difficulty extracting features and addressing scale variations. To address these challenges, an innovative deep learning framework is introduced which is designed to classify every pixel in a drone image into categories such as weed, crop, and others. In general, our proposed network adopts an encoder–decoder structure. The encoder component of the network effectively combines the Dense-Inception network with the Atrous spatial pyramid pooling module, enabling the extraction of multi-scale features and capturing local and global contextual information seamlessly. The decoder component of the network incorporates deconvolution layers and attention units, namely, channel and spatial attention units (CnSAUs), which contribute to the restoration of spatial information and enhance the precise localization of weeds and crops in the images. The performance of the proposed framework is assessed using a publicly available benchmark dataset known for its complexity. The effectiveness of the proposed framework is demonstrated via comprehensive experiments, showcasing its superiority by achieving a 0.81 mean Intersection over Union (mIoU) on the challenging dataset. Full article
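For readers unfamiliar with the encoder's Atrous Spatial Pyramid Pooling component, a generic PyTorch sketch is given below; the channel counts and dilation rates (1, 6, 12, 18) are common defaults assumed here, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Generic atrous spatial pyramid pooling: parallel dilated convs
    capture multi-scale context, then a 1x1 conv fuses the branches."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Hypothetical usage on a feature map from the encoder backbone.
feats = torch.randn(1, 256, 32, 32)
out = ASPP(256, 128)(feats)   # -> (1, 128, 32, 32)
```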
Figure 1. Detailed architecture of the proposed framework for weed–crop segmentation.
Figure 2. Structure of the channel and spatial attention module.
Figure 3. Sample frames randomly selected from the dataset and their corresponding ground truth masks. Red pixels in the ground truth masks represent weeds, green pixels represent rice crops, and gray pixels represent the others.
Figure 4. Performance comparison of different methods using precision–recall curves: (a) the “rice crop” class; (b) the “weed” class; (c) the “others” class.
Figure 5. Comparison of predicted and ground truth segmentation masks. The first column showcases randomly selected sample frames from the dataset, the second column displays the ground truth segmentation masks, and the third column shows the predicted masks.
18 pages, 4806 KiB  
Article
Extracting Citrus in Southern China (Guangxi Region) Based on the Improved DeepLabV3+ Network
by Hao Li, Jia Zhang, Jia Wang, Zhongke Feng, Boyi Liang, Nina Xiong, Junping Zhang, Xiaoting Sun, Yibing Li and Shuqi Lin
Remote Sens. 2023, 15(23), 5614; https://doi.org/10.3390/rs15235614 - 3 Dec 2023
Cited by 3 | Viewed by 1829
Abstract
China is one of the countries with the largest citrus cultivation areas, and its citrus industry has received significant attention due to its substantial economic benefits. Traditional manual forestry surveys and remote sensing image classification tasks are labor-intensive and time-consuming, resulting in low efficiency. Remote sensing technology holds great potential for obtaining spatial information on citrus orchards on a large scale. This study proposes a lightweight model for citrus plantation extraction that combines the DeepLabV3+ model with the convolutional block attention module (CBAM) attention mechanism, with a focus on the phenological growth characteristics of citrus in the Guangxi region. The objective is to address issues such as inaccurate extraction of citrus edges in high-resolution images, misclassification and omissions caused by intra-class differences, as well as the large number of network parameters and long training time found in classical semantic segmentation models. To reduce parameter count and improve training speed, the MobileNetV2 lightweight network is used as a replacement for the Xception backbone network in DeepLabV3+. Additionally, the CBAM is introduced to extract citrus features more accurately and efficiently. Moreover, in consideration of the growth characteristics of citrus, this study augments the feature input with additional channels to better capture and utilize key phenological features of citrus, thereby enhancing the accuracy of citrus recognition. The results demonstrate that the improved DeepLabV3+ model exhibits high reliability in citrus recognition and extraction, achieving an overall accuracy (OA) of 96.23%, a mean pixel accuracy (mPA) of 83.79%, and a mean intersection over union (mIoU) of 85.40%. These metrics represent an improvement of 11.16%, 14.88%, and 14.98%, respectively, compared to the original DeepLabV3+ model. Furthermore, when compared to classical semantic segmentation models, such as UNet and PSPNet, the proposed model achieves higher recognition accuracy. Additionally, the improved DeepLabV3+ model demonstrates a significant reduction in both parameters and training time. Generalization experiments conducted in Nanning, Guangxi Province, further validate the model’s strong generalization capabilities. Overall, this study emphasizes extraction accuracy, reduction in parameter count, adherence to timeliness requirements, and facilitation of rapid and accurate extraction of citrus plantation areas, presenting promising application prospects. Full article
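The CBAM used here follows the standard two-step design of channel attention followed by spatial attention. A minimal PyTorch sketch of the canonical formulation (Woo et al.) is shown below as a reference point; the reduction ratio of 16 is the usual default, and details may differ from this paper's variant.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Standard CBAM: channel attention (shared MLP over avg/max-pooled
    descriptors), then spatial attention (conv over pooled channel maps)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)        # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))          # spatial attention
```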
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Figure 1. Study area: (a) geographic location of the study area; (b) main study area, i.e., Yangshuo County, Guangxi Province; (c,d) labeled areas of citrus samples (marked by yellow and green blocks). The images are GF-2 images shown as pseudo-color composites (R = near-infrared, G = red, B = green).
Figure 2. Structure of the improved DeepLabV3+ model.
Figure 3. Structure of CBAM: (a) channel attention module; (b) spatial attention module; (c) CBAM.
Figure 4. Comparison of the extraction accuracy of various models for citrus.
Figure 5. Citrus extraction results using four different models, where black areas are background, gray areas are labeled citrus samples, and white areas are citrus extracted by the models. Of the three selected plots, plot (a) contains roads and water, plot (b) contains complex and fragmented citrus planting areas, and plot (c) contains concentrated citrus planting areas.
Figure 6. Results of model testing in Nanning City.
21 pages, 56215 KiB  
Article
Landscape Pattern of Sloping Garden Erosion Based on CSLE and Multi-Source Satellite Imagery in Tropical Xishuangbanna, Southwest China
by Rui Tan, Guokun Chen, Bohui Tang, Yizhong Huang, Xianguang Ma, Zicheng Liu and Junxin Feng
Remote Sens. 2023, 15(23), 5613; https://doi.org/10.3390/rs15235613 - 3 Dec 2023
Cited by 1 | Viewed by 1549
Abstract
Inappropriate soil management accelerates soil erosion and thus poses a serious threat to food security and biodiversity. Due to poor data availability and fragmented terrain, the landscape pattern of garden erosion in tropical Xishuangbanna is not clear. In this study, by integrating multi-source satellite imagery, field investigation and visual interpretation, we realized high-resolution mapping of gardens and soil conservation measures at the landscape scale. The Chinese Soil Loss Equation (CSLE) model was then applied to estimate garden erosion rates and to identify critical erosion-prone areas; the landscape pattern of soil erosion was further discussed. Results showed the following: (1) Of the three major plantation types, tea gardens have the largest degree of fragmentation and orchards suffer the highest soil erosion rate, while rubber plantations show the largest patch area, aggregation degree and soil erosion ratio. (2) The average garden erosion rate is 1595.08 t·km⁻²·a⁻¹, resulting in an annual soil loss of 9.73 × 10⁶ t. Soil erosion is more susceptible to elevation and vegetation cover than to the slope gradient. Meanwhile, irreversible erosion rates occur only in gardens with fractional vegetation coverage (FVC) lower than 30%, and these gardens contribute 68.19% of the total soil loss from the smallest land portion, indicating that new plantations are suffering serious erosion problems. (3) Garden patches with high erosion intensity grades and aggregation indexes should be recognized as priorities for centralized treatment. For elevations near 1900 m and lowlands (<950 m), the decrease in the fractal dimension index of erosion-prone areas indicates that patches are more regular and aggregated, suggesting a more optimistic conservation situation. Full article
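The CSLE computes the soil loss modulus as a per-pixel product of factor rasters, commonly written A = R·K·L·S·B·E·T. The sketch below shows this multiplication under that common formulation; the factor names and the illustrative values are assumptions, not this study's calibrated inputs.

```python
import numpy as np

def csle_soil_loss(R, K, L, S, B, E, T):
    """Per-pixel CSLE soil loss modulus.

    Inputs are co-registered 2D factor rasters: R rainfall erosivity,
    K soil erodibility, L/S slope length and steepness, B vegetation
    cover (biological), E engineering and T tillage measure factors.
    """
    return R * K * L * S * B * E * T

# Hypothetical 2x2 factor rasters for illustration only.
shape = (2, 2)
A = csle_soil_loss(*(np.full(shape, v)
                     for v in (1200.0, 0.03, 1.4, 2.1, 0.3, 0.9, 1.0)))
mean_modulus = A.mean()
```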
(This article belongs to the Special Issue Remote Sensing of Soil Erosion in Forest Area)
Figure 1. Views of typical garden land (citrus orchards, plots 5–7) showing evidence of high soil erosion rates compared with bare soil (plot 1), cropland (plots 2 and 4) and shrub (plot 3) in our field experiments.
Figure 2. (a) Map of XSBN showing its location, elevation and counties; (b) high-resolution remote sensing images of XSBN, where S1–S4 are orchards, other gardens, tea gardens and rubber plantations; (c) fractional vegetation coverage of gardens in XSBN in 2020; (d) distribution of gardens in XSBN in 2020 and Primary Sample Units with measured erosion rates for validation in the National Soil Erosion Survey in China; (e–l) high-resolution satellite images from GF-7, Beijing-2 and Sentinel-2 with the corresponding retrieved garden polygons (in red).
Figure 3. Schematic flowchart of the methodology employed in this study.
Figure 4. Distribution of gardens across (a) slope gradient and (b) elevation zones in XSBN.
Figure 5. Spatial distribution maps of soil-erosion-affecting factors and the soil erosion modulus estimated using the CSLE model: (a) rainfall erosivity; (b) soil erodibility; (c) topographic factors; (d) vegetation cover factor; (e) engineering factor; (f) soil erosion modulus.
Figure 6. Typical soil conservation measures implemented for gardens at different slope gradient classes in XSBN.
Figure 7. (a) Validation of soil erosion rates estimated using the CSLE model; (b) the absolute error distribution frequency for the estimated erosion results.
Figure 8. Garden erosion modulus for different (a) slope gradients and (b) vegetation coverage classes; area distribution of garden erosion intensity grades for different (c) slope gradients and (d) vegetation coverage in XSBN.
Figure 9. Landscape metric curves of (a) D, (b) LPI, (c) PLAND and (d) AI for garden erosion intensity in XSBN.
Full article ">
23 pages, 2966 KiB  
Article
DVST: Deformable Voxel Set Transformer for 3D Object Detection from Point Clouds
by Yaqian Ning, Jie Cao, Chun Bao and Qun Hao
Remote Sens. 2023, 15(23), 5612; https://doi.org/10.3390/rs15235612 - 3 Dec 2023
Cited by 1 | Viewed by 2057
Abstract
The use of a transformer backbone in LiDAR point-cloud-based models for 3D object detection has recently gained significant interest. The larger receptive field of the transformer backbone improves its representation capability but also results in excessive attention being given to background regions. To solve this problem, we propose a novel approach called deformable voxel set attention, which we utilized to create a deformable voxel set transformer (DVST) backbone for 3D object detection from point clouds. The DVST aims to efficaciously integrate the flexible receptive field of the deformable mechanism and the powerful context modeling capability of the transformer. Specifically, we introduce the deformable mechanism into voxel-based set attention to selectively transfer candidate keys and values of foreground queries to important regions. An offset generation module was designed to learn the offsets of the foreground queries. Furthermore, a globally responsive convolutional feed-forward network with residual connection is presented to capture global feature interactions in hidden space. We verified the validity of the DVST on the KITTI and Waymo open datasets by constructing single-stage and two-stage models. The findings indicated that the DVST enhanced the average precision of the baseline model while preserving computational efficiency, achieving a performance comparable to state-of-the-art methods. Full article
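The offset generation idea, predicting where each foreground query's keys and values should be sampled, can be caricatured in a few lines. The sketch below is an assumed toy version (a small MLP predicting 3D offsets followed by nearest-voxel feature sampling), not the DVST implementation.

```python
import torch
import torch.nn as nn

class OffsetGeneration(nn.Module):
    """Toy offset generation: predict a 3D offset per foreground query,
    then sample the feature of the nearest voxel to each shifted point."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 3))

    def forward(self, query_feats, query_xyz, voxel_feats, voxel_xyz):
        deformed_xyz = query_xyz + self.mlp(query_feats)       # (N, 3)
        nn_idx = torch.cdist(deformed_xyz, voxel_xyz).argmin(dim=1)
        return voxel_feats[nn_idx]                             # deformed keys/values

# Hypothetical shapes: 8 foreground queries, 100 non-empty voxels, 32-dim features.
q_f, q_x = torch.randn(8, 32), torch.rand(8, 3)
v_f, v_x = torch.randn(100, 32), torch.rand(100, 3)
sampled = OffsetGeneration(32)(q_f, q_x, v_f, v_x)   # -> (8, 32)
```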
Graphical abstract
Figure 1. The overall architecture of DVST. The DVST is a transformer-based 3D backbone that can be utilized in various voxel-based 3D detection frameworks. It comprises a sequence of DVSA modules and MLPs. The DVSA is a deformable attention module explicitly designed for learning from 3D point clouds. It consists of the OGM, encoding cross-attention, GRCFFN, and decoding cross-attention. The OGM is employed to generate deformable offsets.
Figure 2. An illustration of DVSA, presenting its information flow. A group of foreground points is identified, and their offsets are learned from the queries through the OGM. Then, the deformed keys and values are projected from the deformable features. Subsequently, the deformable features are compressed into a latent space using a set of inducing points, and the features are refined using the GRCFFN. Finally, the output features are obtained through multi-head attention. For clarity of presentation, only two voxel grids and five foreground points are shown, although there are more points in practical implementation.
Figure 3. The network structure of GRCFFN.
Figure 4. Qualitative analysis results of 3D object detection on the KITTI dataset. Three scenes, denoted (a–c), are showcased. Each scene includes the ground truth box. The detection results for each scene using the VoxSeT and DVST models are depicted on the left and right sides, respectively. False detection targets are circled for clarity.
21 pages, 2913 KiB  
Article
Interactive Change-Aware Transformer Network for Remote Sensing Image Change Captioning
by Chen Cai, Yi Wang and Kim-Hui Yap
Remote Sens. 2023, 15(23), 5611; https://doi.org/10.3390/rs15235611 - 3 Dec 2023
Cited by 6 | Viewed by 1861
Abstract
Remote sensing image change captioning (RSICC) aims to automatically generate sentences describing the differences in content between remote sensing bitemporal images. Recent works extract the changes between bitemporal features and employ a hierarchical approach to fuse multiple changes of interest, yielding change captions. However, these methods directly aggregate all features, potentially incorporating non-change-focused information from each encoder layer into the change caption decoder, adversely affecting the performance of change captioning. To address this problem, we propose an Interactive Change-Aware Transformer Network (ICT-Net). ICT-Net is able to extract and incorporate the most critical changes of interest in each encoder layer to improve change description generation. It initially extracts bitemporal visual features from the CNN backbone and employs an Interactive Change-Aware Encoder (ICE) to capture the crucial differences between these features. Specifically, the ICE captures the most change-aware discriminative information between the paired bitemporal features interactively through difference and content attention encoding. A Multi-Layer Adaptive Fusion (MAF) module is proposed to adaptively aggregate the relevant change-aware features in the ICE layers while minimizing the impact of irrelevant visual features. Moreover, we extend the ICE to extract multi-scale changes and introduce a novel Cross Gated-Attention (CGA) module into the change caption decoder to select essential discriminative multi-scale features to improve the change captioning performance. We evaluate our method on two RSICC datasets (LEVIR-CC and LEVIRCCD), and the experimental results demonstrate that our method achieves state-of-the-art performance. Full article
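As a rough intuition for difference attention encoding, one can weight bitemporal features by the saliency of their per-position difference. The snippet below is an assumed stand-in for illustration (channel-pooled absolute difference, softmax over positions), not the exact ICE attention.

```python
import torch

def difference_attention(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    """Weight bitemporal features by the saliency of their difference.

    x1, x2: (B, C, H, W) bitemporal feature maps. Returns a change-aware
    feature map of the same shape.
    """
    diff = (x1 - x2).abs().mean(dim=1, keepdim=True)          # (B, 1, H, W)
    b, _, h, w = diff.shape
    attn = torch.softmax(diff.view(b, -1), dim=1).view(b, 1, h, w)
    return (x1 + x2) * attn * (h * w)   # rescale so the mean weight is ~1

x1, x2 = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
change_feat = difference_attention(x1, x2)
```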
(This article belongs to the Section Remote Sensing Image Processing)
Figure 1. A visualization of the existing method and our proposed method. (a) The existing method [3] uses a hierarchical approach that tends to integrate unchanged-focused information from each encoder layer, disrupting change feature learning in the decoder and generating inferior change descriptions. Our proposed method attentively aggregates the essential features for more informative caption generation. (b) Existing methods [2,3,4,9] overlook changes in objects at various scales, generating inferior change descriptions. Ours can extract discriminative information across various scales (e.g., a small scale) for change captioning. Blue indicates that the word “house” is attended to the particular region in the image, while reddish colors suggest a lower level of focus on it. The bluer the color, the higher the attention value.
Figure 2. Overview of the proposed ICT-Net. It consists of three components: a multi-scale feature extractor to extract visual features, an Interactive Change-Aware Encoder (ICE) with a Multi-Layer Adaptive Fusion (MAF) module to capture the semantic changes between bitemporal features, and a change caption decoder with a Cross Gated-Attention (CGA) module to generate change descriptions.
Figure 3. Structure of the Multi-Layer Adaptive Fusion module.
Figure 4. Structure of the Cross Gated-Attention module.
Figure 5. Comparison of attention maps generated using DAE and DAE + CAE. M_larger and M_small denote the attention maps for large and small changes captured between bitemporal image features, respectively. I_t0 and I_t1 denote the input RS images. Note that regions appearing more blue indicate higher levels of attention. We use the red dotted box to highlight the small change areas to ease visualization.
Figure 6. Visualization of the attention map generated by the caption decoder using the existing MBF [3] method and the proposed MAF. The word highlighted in red in the caption corresponds to the blue region in the generated attention map. Note that regions appearing more blue indicate higher levels of attention.
Figure 7. Visualization of captured multi-scale word and feature attention maps in the change caption decoder of the CGA module, where L_words and S_words denote the attention maps that capture large and small object changes, respectively, for each object word (highlighted in red) in the generated change caption. Red bounding boxes indicate the small-scale object change regions for image pairs (1), (2), and (3); pairs (4), (5), and (6) include middle- to large-scale changes. Regions appearing more blue indicate higher levels of attention.
Figure 8. Qualitative results on the LEVIR-CC dataset. The I_t0 image was captured “before”, and I_t1 was captured “after”. GT represents the ground truth caption. Red bounding boxes indicate the small-scale object change regions for image pairs (1) and (2); pairs (3) and (4) include middle- to large-scale changes. Green and blue words highlight the correctly predicted change objects for the existing method (a) and ours (b), respectively.
24 pages, 6085 KiB  
Article
SSCNet: A Spectrum-Space Collaborative Network for Semantic Segmentation of Remote Sensing Images
by Xin Li, Feng Xu, Xi Yong, Deqing Chen, Runliang Xia, Baoliu Ye, Hongmin Gao, Ziqi Chen and Xin Lyu
Remote Sens. 2023, 15(23), 5610; https://doi.org/10.3390/rs15235610 - 3 Dec 2023
Cited by 20 | Viewed by 1951
Abstract
Semantic segmentation plays a pivotal role in the intelligent interpretation of remote sensing images (RSIs). However, conventional methods predominantly focus on learning representations within the spatial domain, often resulting in suboptimal discriminative capabilities. Given the intrinsic spectral characteristics of RSIs, it becomes imperative to enhance the discriminative potential of these representations by integrating spectral context alongside spatial information. In this paper, we introduce the spectrum-space collaborative network (SSCNet), which is designed to capture both spectral and spatial dependencies, thereby elevating the quality of semantic segmentation in RSIs. Our innovative approach features a joint spectral–spatial attention module (JSSA) that concurrently employs spectral attention (SpeA) and spatial attention (SpaA). Instead of feature-level aggregation, we propose the fusion of attention maps to gather spectral and spatial contexts from their respective branches. Within SpeA, we calculate the position-wise spectral similarity using the complex spectral Euclidean distance (CSED) of the real and imaginary components of projected feature maps in the frequency domain. To comprehensively calculate both spectral and spatial losses, we introduce edge loss, Dice loss, and cross-entropy loss, subsequently merging them with appropriate weighting. Extensive experiments on the ISPRS Potsdam and LoveDA datasets underscore SSCNet’s superior performance compared with several state-of-the-art methods. Furthermore, an ablation study confirms the efficacy of SpeA. Full article
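The complex spectral Euclidean distance (CSED) compares the real and imaginary FFT components of projected features. A minimal numpy sketch of such a distance between two 2D maps follows; the projection and any normalization used inside SpeA are omitted, so this is an assumed simplified form.

```python
import numpy as np

def complex_spectral_euclidean_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """Euclidean distance between the real and imaginary FFT components
    of two 2D feature maps: sqrt(sum (Re1-Re2)^2 + (Im1-Im2)^2)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    d_real = (F1.real - F2.real) ** 2
    d_imag = (F1.imag - F2.imag) ** 2
    return float(np.sqrt(np.sum(d_real + d_imag)))

a, b = np.random.rand(8, 8), np.random.rand(8, 8)
print(complex_spectral_euclidean_distance(a, b))   # 0.0 when a == b
```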
(This article belongs to the Special Issue Multisource Remote Sensing Image Interpretation and Application)
Figure 1. Illustration of frequency-domain features: (a) raw image, (b) 2D FFT-transformed frequency image, (c) low-frequency components, and (d) high-frequency components.
Figure 2. Overall framework of SSCNet.
Figure 3. Details of JSSA.
Figure 4. Details of SpeA.
Figure 5. Details of SpaA.
Figure 6. Pipeline of AttnFusion.
Figure 7. Visualization of the ISPRS Potsdam dataset.
Figure 8. Visualization of the LoveDA dataset.
Figure 9. (a–c) Visual inspection of random samples from the test set of ISPRS Potsdam.
Figure 10. (a–c) Visual inspection of random samples from the test set of LoveDA.
Figure 11. Training loss for ISPRS Potsdam.
Figure 12. Training loss for LoveDA.
Figure 13. Visualization of samples from ISPRS Potsdam: (a) RGB images, (b) ground truth, (c) predictions by SSCNet, and (d) predictions by SSCNet w/o SpeA.
Figure 14. Visualization of samples from LoveDA: (a) RGB images, (b) ground truth, (c) predictions by SSCNet, and (d) predictions by SSCNet w/o SpeA.
21 pages, 15312 KiB  
Article
Enhancing Leaf Area Index Estimation with MODIS BRDF Data by Optimizing Directional Observations and Integrating PROSAIL and Ross–Li Models
by Hu Zhang, Xiaoning Zhang, Lei Cui, Yadong Dong, Yan Liu, Qianrui Xi, Hongtao Cao, Lei Chen and Yi Lian
Remote Sens. 2023, 15(23), 5609; https://doi.org/10.3390/rs15235609 - 2 Dec 2023
Cited by 2 | Viewed by 2600
Abstract
The Leaf Area Index (LAI) is a crucial vegetation parameter for climate and ecological models. Reflectance anisotropy contains valuable supplementary information for the retrieval of properties of an observed target surface. Previous studies have utilized multi-angular reflectance data and physically based Bidirectional Reflectance Distribution Function (BRDF) models with detailed vegetation structure descriptions for LAI estimation. However, the optimal selection of viewing angles for improved inversion results has received limited attention. By optimizing directional observations and integrating the PROSAIL and Ross–Li models, this study aims to enhance LAI estimation from MODIS BRDF data. A dataset of 20,000 vegetation parameter combinations was utilized to identify the directions in which the PROSAIL model exhibits higher sensitivity to LAI changes and better consistency with the Ross–Li BRDF model. The results reveal significant variations over the viewing hemisphere in the sensitivity of the PROSAIL model to LAI changes and in its consistency with the Ross–Li model. In the red band, directions with high sensitivity to LAI changes and strong model consistency are mainly found at smaller solar and viewing zenith angles. In the near-infrared band, these directions are distributed at larger solar and viewing zenith angles. Validation using field measurements and LAI maps demonstrates that the proposed method achieves accuracy comparable to that of an algorithm utilizing 397 viewing angles while using reflectance data from only 30 directions. Moreover, there is a significant improvement in computational efficiency. The accuracy of LAI estimation obtained from simulated multi-angle data is relatively high for LAI values below 3.5 when compared with the MODIS LAI product from two tiles. Additionally, there is also a slight improvement in the results when the LAI exceeds 4.5. Overall, our results highlight the potential of utilizing multi-angular reflectance in specific directions for vegetation parameter inversion, showcasing the promise of this method for large-scale LAI estimation. Full article
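The direction-selection logic (prefer viewing geometries where reflectance is sensitive to LAI, i.e., high σ, and where the PROSAIL and Ross–Li models agree, i.e., low RMSE) can be expressed as a simple ranking. The combined score below is an assumed heuristic for illustration, not the paper's exact selection rule.

```python
import numpy as np

def select_directions(sigma, rmse, n_select=30):
    """Rank candidate viewing directions by LAI sensitivity (sigma, higher
    is better) and PROSAIL/Ross-Li consistency (rmse, lower is better).

    sigma, rmse: 1D arrays over candidate (SZA, VZA, RAA) directions.
    Returns indices of the n_select best directions.
    """
    score = (sigma - sigma.min()) / (np.ptp(sigma) + 1e-12) \
          - (rmse - rmse.min()) / (np.ptp(rmse) + 1e-12)
    return np.argsort(score)[::-1][:n_select]

sigma = np.random.rand(397)   # sensitivity per candidate direction
rmse = np.random.rand(397)    # model inconsistency per direction
best30 = select_directions(sigma, rmse, n_select=30)
```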
Figure 1. Flowchart of LAI estimation by linking the PROSAIL and Ross–Li BRDF models using the MODIS BRDF product. Part A is the PROSAIL model’s sensitivity to LAI, part B is the coherence between the PROSAIL and RTLSR_C BRDF models, part C determines the optimal directions, and part D is the inversion and validation of LAI based on the MODIS BRDF.
Figure 2. Three-dimensional BRDF shapes in the red (a–c) and NIR (d–f) bands simulated based on the common values and the PROSAIL model under different LAI parameter conditions. Different colors represent the magnitude of the reflectance. In the bottom coordinate plane, the radius represents the zenith angle, while the polar angle represents the azimuth angle. The vertical axis plots the BRDF values.
Figure 3. (a,c) Directional reflectance variations with changing LAI values, simulated using the common values and the PROSAIL model at nadir (green line), 45° forward (45F, black line) and 45° backward (45B, red line) in the red and NIR bands under a solar zenith angle of 45°; the σ of directional reflectance with changing LAI is also shown. (b,d) Distribution of σ over the viewing hemisphere, where the radius represents the zenith angle, the polar angle represents the azimuth angle, and different colors represent the magnitude of σ.
Figure 4. Distribution of the average σ of the 20,000 sets of data over the viewing hemisphere at solar zenith angles (SZA) of 0° (a,f), 15° (b,g), 30° (c,h), 45° (d,i), and 60° (e,j) in the red (a–e) and NIR (f–j) bands.
Figure 5. Three-dimensional BRDF shapes based on the RTLSR_C model and the multi-angular reflectance simulated using the common values and the PROSAIL model in the red (a) and NIR (b) bands.
Figure 6. Comparison of reflectance in the NIR band between the PROSAIL and RTLSR_C models in the nadir direction (black points) and at a backscattering angle of 40° (red points) when the solar zenith angle is 30°.
Figure 7. Distribution of the RMSE between the reflectance from the PROSAIL and RTLSR_C models over the whole viewing hemisphere at solar zenith angles of 0° (a,f), 15° (b,g), 30° (c,h), 45° (d,i), and 60° (e,j) in the red (a–e) and NIR (f–j) bands.
Figure 8. Distribution of observations in the red (a–c) and NIR (d–f) bands for 30 (a,d), 60 (b,e), and 90 (c,f) selected directions. Solid circles represent the position of the sun, while hollow rectangles represent the observation locations. The colors red, green, blue, gold, and purple correspond to observations taken at solar zenith angles of 0°, 15°, 30°, 45°, and 60°, respectively. Rectangles of different sizes are used to indicate the positions of the observation data in order to avoid overlap.
Figure 9. Comparison of 180 LAI measurements at the 500 m plots with LAI estimated from the MODIS BRDF based on different numbers of observations: (a) the result based on 397 observations; (b) the result based on 30 directions; (c) the R² between the results obtained with different numbers of observations and the surface measurements. The blue line in (a,b) represents the fitted relationship between the two datasets.
Figure 10. Comparison of 30 m high-quality LAI maps with LAI retrieved from the MODIS BRDF at a scale of 1.5 km: (a) the result based on 397 observations, provided by Zhang et al., 2021 [10]; (b) the result based on 30 selected observations. The blue line represents the fitted relationship between the two datasets.
Figure 11. Comparison between LAI retrieved using the MODIS BRDF and the MODIS LAI product: (a,b) results for tile h26v04 during days 181–193 of 2020; (c,d) results for tile h12v04 during days 245–257 of 2020. (a,c) LAI retrieved from 397 directional reflectances per band; (b,d) LAI retrieved from 30 directional reflectances per band. Different colors represent the number of pixels in each category, with gray indicating fewer than 5 pixels.
24 pages, 13702 KiB  
Article
Dual-Channel Semi-Supervised Adversarial Network for Building Segmentation from UAV-Captured Images
by Wenzheng Zhang, Changyue Wu, Weidong Man and Mingyue Liu
Remote Sens. 2023, 15(23), 5608; https://doi.org/10.3390/rs15235608 - 2 Dec 2023
Viewed by 1524
Abstract
Accurate building extraction holds paramount importance in various applications such as urbanization rate calculations, urban planning, and resource allocation. In response to the escalating demand for precise low-altitude unmanned aerial vehicle (UAV) building segmentation in intricate scenarios, this study introduces a semi-supervised methodology to alleviate the labor-intensive process of procuring pixel-level annotations. Within the framework of adversarial networks, we employ a dual-channel parallel generator strategy that combines a morphology-driven optical flow estimation channel with an enhanced multilayer-sensing Deeplabv3+ module. This approach aims to comprehensively capture both the morphological attributes and textural intricacies of buildings while mitigating the dependency on annotated data. To further enhance the network’s capability to discern building features, we introduce an adaptive attention mechanism via a feature fusion module. Additionally, we implement a composite loss function to augment the model’s sensitivity to building structures. Across two distinct low-altitude UAV datasets for building segmentation, our proposed method achieves mean intersection-over-union (mIoU) scores of 82.69% and 79.37%, respectively, with unlabeled data constituting 70% of the overall dataset. These outcomes signify noteworthy advancements compared with contemporaneous networks, underscoring the robustness of our approach in tackling intricate building segmentation challenges in UAV-based architectural analysis. Full article
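A composite segmentation loss of the kind described can be assembled as a weighted sum of cross-entropy, Dice, and adversarial terms. The sketch below is a generic assumed combination (including the weights), not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Soft Dice loss for binary masks; pred holds probabilities in [0, 1]."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def composite_loss(logits, target, d_fake, w=(1.0, 1.0, 0.1)):
    """Weighted BCE + Dice + adversarial loss; the weights w are assumed."""
    prob = torch.sigmoid(logits)
    l_bce = F.binary_cross_entropy_with_logits(logits, target)
    l_dice = dice_loss(prob, target)
    l_adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    return w[0] * l_bce + w[1] * l_dice + w[2] * l_adv

logits = torch.randn(2, 1, 64, 64)             # generator output
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
d_fake = torch.rand(2, 1)                      # discriminator score on fake
loss = composite_loss(logits, target, d_fake)
```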
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
Graphical abstract
Figure 1. Total flow chart of the algorithm. The dotted lines represent the different modules in the network, and the arrows represent the order in which the network operates. The images and corresponding true labels are processed by the optical flow estimation channel and the improved Deeplabv3+ module. The features from both channels are fused, evaluated by the discriminator, and fine-tuned using the composite loss function for network convergence.
Figure 2. Schematic diagram of the shape-driven optical flow estimation channel. The RGB images and keyframe label maps pass through building form constraint algorithms before entering the optical flow estimation channel. This channel handles feature extraction and pseudo-label generation, including constrained feature matching, displacement calculation to establish the objective function, and the generation of optical flow characteristic values.
Figure 3. Improved Deeplabv3+ module structure chart. The images are initially passed through BA-ASPP, MFFM, and 1 × 1 conv using the features extracted by ResNet. The features input to BA-ASPP undergo processing by different conv layers and HCAM. They are then fused with features constrained by MFFM and subjected to 1 × 1 conv for sampling fusion. Lastly, the output feature map is upsampled.
Figure 4. Hierarchy channel attention module (HCAM) structure flow chart. Input features are extracted again via max pooling, average pooling, MCOB, and 1 × 1 conv. The max pooling and average pooling features are merged with MCOB-constrained features using 1 × 1 conv. The resulting features are further combined with those obtained from the initial convolution. Finally, these integrated features are passed to the fine-grained perception hierarchy for amplification.
Figure 5. Multilevel feature fusion module (MFFM) structure flow chart. Input features are sampled in three dimensions: low-dimensional features through 1 × 1 conv, batch normalization, and coordinate attention; medium-dimensional features via upsampling, 3 × 3 conv (dilation rate of 2), batch normalization, and coordinate attention; and high-dimensional features with 3 × 3 conv (dilation rate of 4), batch normalization, and coordinate attention. The final output is obtained by applying ReLU activation to these features after multilayer feature fusion.
Figure 6. Comparison of the results of each network on the drone building dataset.
Figure 7. Comparison of the results of each network on the UDD6 dataset.
Figure 8. Ablation experiments on the UDD6 dataset.
Figure 9. Morphology-driven optical flow estimation example.
24 pages, 21652 KiB  
Article
A Multi-Source-Data-Assisted AUV for Path Cruising: An Energy-Efficient DDPG Approach
by Tianyu Xing, Xiaohao Wang, Kaiyang Ding, Kai Ni and Qian Zhou
Remote Sens. 2023, 15(23), 5607; https://doi.org/10.3390/rs15235607 - 2 Dec 2023
Cited by 1 | Viewed by 1538
Abstract
As marine activities expand, deploying autonomous underwater vehicles (AUVs) becomes critical, and efficiently navigating these AUVs through intricate underwater terrain is vital. This paper proposes a motion-planning algorithm integrating deep reinforcement learning (DRL) with an improved artificial potential field (IAPF). The algorithm incorporates remote sensing information to overcome traditional APF challenges and combines the IAPF with the traveling salesman problem for optimal path cruising. Through a combination of DRL and multi-source data optimization, the approach ensures minimal energy consumption across all target points. Inertial sensors further refine the trajectory, ensuring smooth navigation and precise positioning. Comparative experiments confirm the method’s strengths in energy efficiency, trajectory refinement, and safety. Full article
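At the core of any APF-based planner is the sum of an attractive force toward the goal and repulsive forces from obstacles within an influence radius. The snippet below implements the textbook potential field gradients; the gains and influence distance are assumed values, and the paper's improvements (local-minimum escape, DRL coupling) are omitted.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Classic artificial potential field force at position `pos`.

    Attractive term: -k_att * (pos - goal). Repulsive term (per obstacle
    within influence distance d0): k_rep * (1/d - 1/d0) / d^2, directed
    away from the obstacle.
    """
    force = -k_att * (pos - goal)
    for obs in obstacles:
        delta = pos - obs
        d = np.linalg.norm(delta)
        if 0 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) * delta / d**3
    return force

pos = np.array([0.0, 0.0, -10.0])            # AUV position (x, y, depth)
goal = np.array([50.0, 20.0, -15.0])
obstacles = [np.array([10.0, 5.0, -12.0])]
step = apf_force(pos, goal, obstacles)       # steer along this vector
```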
(This article belongs to the Topic Artificial Intelligence in Navigation)
Figure 1. Schematic of the multi-source-data-assisted AUV for multiple target point cruising.
Figure 2. Kinematic and dynamic models for the AUV.
Figure 3. The improvement strategies for the local optimum problem: (a) the strategy for a single obstacle in the 2D environment; (b) the strategy for multiple obstacles in the 2D environment; (c) the strategy for a single obstacle in the 3D environment; (d) the strategy for multiple obstacles in the 3D environment.
Figure 4. Kalman filter process.
Figure 5. Multi-source-data-assisted AUV motion planning based on the DDPG algorithm.
Figure 6. Cruise sequence generated by TSP-IAPF.
Figure 7. Reward curves generated by the four DRL algorithms.
Figure 8. The planned 3D paths from the four algorithms: (a) top view; (b) 3D view.
Figure 9. Motion data generated by the four algorithms: (a) steering angle; (b) attack angle; (c) pitch angle; (d) nearest distance to the obstacle; (e) velocity; (f) acceleration.
Figure 10. Energy consumption, closest distance to the obstacle, path length, and navigation time for the four algorithms under Monte Carlo simulation: (a) simulation of energy consumption and closest distance to the obstacle; (b) simulation of path length and navigation time.
Figure 11. (a,b) Motion-planning paths generated by the four algorithms in a dynamic environment.
Figure 12. Trajectory-tracking curve for the AUV under remote sensing error: (a) top view; (b) three-dimensional view.
Figure 13. Energy consumption and closest distance to the obstacle for different remote sensing detection distances: (a) energy consumption; (b) nearest distance to the obstacle.
Figure 14. Trajectory-tracking error for the AUV.
19 pages, 8205 KiB  
Article
Investigation on the Utilization of Millimeter-Wave Radars for Ocean Wave Monitoring
by Xindi Liu, Yunhua Wang, Fushun Liu and Yuting Zhang
Remote Sens. 2023, 15(23), 5606; https://doi.org/10.3390/rs15235606 - 2 Dec 2023
Cited by 1 | Viewed by 1523
Abstract
The feasibility of using millimeter-wave radars for wave observations was investigated in this study. The radars used in this study operate at a center frequency of 77.572 GHz. To investigate the feasibility of wave observations and extract one-dimensional and two-dimensional wave spectra, arrays consisting of multiple radar units were deployed for observations in both laboratory and field environments. Based on the data measured with the millimeter-wave radars, one-dimensional wave spectra and two-dimensional wave directional spectra were evaluated using the periodogram method and the Bayesian directional spectrum estimation method (BDM), respectively. Meanwhile, wave parameters such as the significant wave height, wave period, and wave direction were also calculated. Via comparative experiments with a capacitive wave height meter in a wave tank and RADAC’s WG5-HT-CP radar in an offshore field, the viability of using millimeter-wave radars to observe water waves was validated. The results indicate that the one-dimensional wave spectra measured with the millimeter-wave radars were consistent with those measured with the mature commercial capacitive wave height meter and the WG5-HT-CP wave radar. Via wave direction measurement experiments conducted in a wave tank and offshore environment, it is evident that the wave directions retrieved with the millimeter-wave radars were in good alignment with the actual wave directions. Full article
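The one-dimensional spectrum step, a periodogram of the surface elevation time series from which the significant wave height follows as Hs ≈ 4√m0, can be sketched in a few lines of Python; the synthetic elevation signal, sampling rate, and window choice are assumptions.

```python
import numpy as np
from scipy.signal import periodogram

fs = 10.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 600, 1 / fs)               # 10 min of surface elevation
eta = 0.5 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * np.random.randn(t.size)

f, S = periodogram(eta, fs=fs, window="hann")   # 1D wave spectrum S(f)
m0 = np.trapz(S, f)                             # zeroth spectral moment
Hs = 4.0 * np.sqrt(m0)                          # significant wave height
fp = f[np.argmax(S)]                            # spectral peak frequency
print(f"Hs = {Hs:.2f} m, Tp = {1 / fp:.1f} s")
```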
Figure 1. The single-chirp linear-frequency-modulated continuous-wave signal and its parameters.
Figure 2. Millimeter-wave radar water surface echoes depicted in (a) a multiple-chirp one-dimensional range profile and (b) a single-chirp one-dimensional range profile.
Figure 3. When the radar is located on a metal platform: (a) the occurrence of multiple reflections in wave measurements and (b) the spectrum of the single-chirp signal.
Figure 4. The experimental measurement sites: (a) Rainbow Bridge, located in the Nan District of Qingdao City, and (b) the planar random-wave wave–current coupling pool located on the Laoshan Campus of the Ocean University of China.
Figure 5. (a) The capacitive wave height meter and (b) its high-speed data acquisition system.
Figure 6. Comparison of wave height time series between the millimeter-wave radars and the capacitive wave height meter: (a) regular waves and (b) irregular waves.
Figure 7. Wave spectrum comparisons between the millimeter-wave radars and the capacitive wave height meter in the planar random-wave wave–current coupling pool for (a) regular waves and (b) irregular waves.
Figure 8. In the case of regular waves: (a) the arrangement of the millimeter-wave radar array and (b) the wave direction at the field site.
Figure 9. In the case of irregular waves: (a) the arrangement of the millimeter-wave radar array and (b) the wave direction at the field site.
Figure 10. Under regular wave conditions, the wave direction spectrum measured by the millimeter-wave radar array in (a) polar form and (b) Cartesian form.
Figure 11. Under irregular wave conditions, the wave direction spectrum measured by the millimeter-wave radar array in (a) polar form and (b) Cartesian form.
Figure 12. Millimeter-wave radar (left 1) and WG5-HT-CP (left 2) field setup.
Figure 13. (a) The spectra calculated from 20 min of observation data from the millimeter-wave radar and the WG5-HT-CP radar, and (b) the spectral ratio between the two.
Figure 14. Scatter plots of wave parameters measured with the millimeter-wave radar and the WG5-HT-CP radar: (a) significant wave height, (b) spectral peak period, (c) mean wave period, and (d) mean zero-crossing wave period. Each scatter plot comprises 53 data points, calculated at 2 min intervals.
Figure 15. Arrangement of the millimeter-wave radar array for the marine field experiment.
Figure 16. (a) The millimeter-wave radar array arrangement, (b) the actual ocean waves, and the wave direction spectrum measured in the marine field experiment in (c) rectangular coordinates and (d) polar coordinates.
36 pages, 66724 KiB  
Review
Planetary Radar—State-of-the-Art Review
by Anne K. Virkki, Catherine D. Neish, Edgard G. Rivera-Valentín, Sriram S. Bhiravarasu, Dylan C. Hickson, Michael C. Nolan and Roberto Orosei
Remote Sens. 2023, 15(23), 5605; https://doi.org/10.3390/rs15235605 - 2 Dec 2023
Cited by 4 | Viewed by 8426
Abstract
Planetary radar observations have provided invaluable information on the solar system through both ground-based and space-based observations. In this overview article, we summarize how radar observations have contributed to planetary science, how radar technology as a remote-sensing method for planetary exploration and the methods used to interpret radar data have advanced over eight decades of increasing use, where the field stands in the early 2020s, and what the future prospects are for the ground-based facilities conducting planetary radar observations and for the planned spacecraft missions equipped with radar instruments. The focus of the paper is on radar as a remote-sensing technique using radar instruments in spacecraft orbiting planetary objects and in Earth-based radio telescopes, whereas ground-penetrating radar systems on landers are mentioned only briefly. The key scientific developments discussed are the search for water ice in the subsurface of the Moon, which could be an invaluable in situ resource for crewed missions; the dynamical and physical characterization of near-Earth asteroids, which is also crucial for effective planetary defense; and a better understanding of planetary geology. Full article
(This article belongs to the Special Issue Radar for Planetary Exploration)
Show Figures

Figure 1: The “left-looking” surface mosaic of Venus as mapped by Magellan. Data from NASA/USGS (see Data Availability Statement).
Figure 2: Two typical Doppler echo power spectra of asteroids 2020 BX12 (left) and (136795) 1997 BQ (right) obtained at Arecibo Observatory. The horizontal axis is centered at the Doppler shift expected from the ephemeris, and the offset shows that the ephemeris requires correcting. Binary asteroids such as 2020 BX12 often show the echo of the secondary as a narrow peak, here near −10.8 Hz, due to the differences in the respective sizes and spin periods of the two bodies orbiting each other. Data from NASA/NSF/Arecibo Observatory (see Data Availability Statement).
Figure 3: Two typical high-quality delay–Doppler images of asteroids 2017 YE5, obtained using the Arecibo S-band radar system (left), and 2014 HQ124, obtained using bistatic X-band radar observations with transmission from Goldstone and reception at Arecibo (right). Data from NASA/NSF/Arecibo Observatory/JPL (see Data Availability Statement).
Figure 4: Mini-RF bistatic S-band observation of Vallis Schröteri on the Moon, showing the four Stokes parameters and the derived CPR. The CPR overlaid on the S1 image is colorized from 0 (purple) to >1.0 (red), with intermediate values increasing from blue to cyan. North is to the right. Data from NASA (see Data Availability Statement).
Figure 5: A simplified cartoon of the typical patterns seen when plotting per-pixel backscatter coefficients, with three aspects of interest: (A) the OC backscatter coefficient intercept point, (B) a range of gradients (here, three different gradients are shown), and (C) the relative extent of the observed OC and SC backscatter coefficients, following [71] (see the text). The three gray dotted lines demonstrate a cone shape typically seen for a sample of data, e.g., the distribution of properly normalized backscatter coefficients in a radar image of the lunar surface, which can be attributed to varying distributions of shapes, sizes, and dielectric properties of wavelength-scale particles in each pixel.
Figure 6: The m−χ decomposition RGB map centered on the lunar crater Gardner (33.8°N, 17.7°E), resulting from Mini-RF S-band monostatic observations. The magenta boxes annotated A and B correspond to the areas where the m−χ decomposition components were sampled for Figure 7. The resolution of the image is 0.3 km/pxl. Data from NASA (see Data Availability Statement).
Figure 7: Histogram of the m−χ decomposition components, in terms of probability, from the A and B units noted in Figure 6 (A: the crater floor and walls; B: a smoother area nearby). Color coding follows the red, blue, green of the m−χ decomposition method.
Figure 8: Mare Crisium on the Moon as seen in (A) optical, (B) S-band, and (C) P-band radar. The radar images are maps of SC backscatter on a dB scale. Color variation for the radar maps ranges from blue to yellow, between 1.5 times the interquartile range below the lower and above the upper quartile values. The magenta box marks a buried volcanic feature that appears in the P-band image but not in the optical or S-band images. Data from NASA (see Data Availability Statement).
Figure 9: Total-power radar backscatter delay–Doppler image of Mercury’s north polar bright features as observed on 19 July 2019. Values are on a dB scale and the resolution is 1.5 km/pxl. Data were collected in [31].
Figure 10: Cassini RADAR provided the first high-resolution views of Titan’s polar regions, revealing the presence of large lakes and seas of liquid methane. This image shows Kraken Mare, which has a larger surface area than the Red Sea. Data from JPL/USGS (see Data Availability Statement).
Figure 11: The Doppler echo power spectrum of Comet 73P fragment C observed using the Arecibo S-band radar system on 15 May 2006, with fitted models (dashed curves) for each circular polarization sense (solid black and dashed blue curves for the OC polarization; solid gray and dashed cyan curves for the SC polarization). The data were reported in [204].
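The circular polarization ratio (CPR) and the m−χ decomposition appearing in Figures 4, 6, and 7 above are both derived from the Stokes vector (S1–S4) of the received echo. The sketch below is a minimal illustration under one common sign convention, loosely following Raney's hybrid-polarimetry formulation; conventions differ between instruments, so treat the exact expressions as assumptions rather than the Mini-RF processing chain.

```python
import numpy as np

def cpr(S1, S4):
    # Circular polarization ratio SC/OC from Stokes parameters, under the
    # convention OC power = (S1 + S4)/2 and SC power = (S1 - S4)/2 (assumed).
    return (S1 - S4) / (S1 + S4)

def m_chi_rgb(S1, S2, S3, S4):
    # m-chi decomposition: m is the degree of polarization, chi the
    # ellipticity angle of the polarized part of the echo (assumed forms).
    m = np.sqrt(S2**2 + S3**2 + S4**2) / S1
    sin2chi = -S4 / (m * S1)
    r = np.sqrt(m * S1 * (1 + sin2chi) / 2)   # same-sense (SC-like) channel
    g = np.sqrt(S1 * (1 - m))                 # unpolarized (volume-like) channel
    b = np.sqrt(m * S1 * (1 - sin2chi) / 2)   # opposite-sense (OC-like) channel
    return np.stack([r, g, b], axis=-1)

# Example: a negative S4 gives CPR > 1, often read as rough or icy terrain.
print(f"CPR = {cpr(1.0, -0.4):.2f}")   # ~2.33 under this convention
```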
22 pages, 16268 KiB  
Article
Satellite and High-Spatio-Temporal Resolution Data Collected by Southern Elephant Seals Allow an Unprecedented 3D View of the Argentine Continental Shelf
by Melina M. Martinez, Laura A. Ruiz-Etcheverry, Martin Saraceno, Anatole Gros-Martial, Julieta Campagna, Baptiste Picard and Christophe Guinet
Remote Sens. 2023, 15(23), 5604; https://doi.org/10.3390/rs15235604 - 2 Dec 2023
Viewed by 3014
Abstract
High spatial and temporal resolution hydrographic data collected by Southern Elephant Seals (Mirounga leonina, SESs) and satellite remote sensing data allow a detailed oceanographic description of the Argentine Continental Shelf (ACS). In-situ data were obtained from the CTD (Conductivity, Temperature, and Depth), accelerometer, and hydrophone sensors attached to five SESs that crossed the ACS between the 17th and 31st of October 2019. The analysis of the temperature (T) and salinity (S) along the trajectories allowed us to identify two different regions: north and south of 42°S. Satellite Sea Surface Temperature (SST) data suggests that north of 42°S, warm waters are coming from the San Matias Gulf (SMG). The high spatio-temporal resolution of the in-situ data shows regions with intense gradients along the T and S sections that were associated with a seasonal front that develops north of Península Valdés in winter due to the entrance of cold and fresh water to the SMG. The speed of the SESs is correlated with tidal currents in the coastal portion of the northern region, which is in good agreement with the macrotidal regime observed. A large number of Prey Catch Attempts (PCA), a measure obtained from the accelerometer sensor, indicates that SESs also feed in this region, contradicting suggestions from previous works. The analysis of wind intensity estimated from acoustic sensors allowed us to rule out the local wind as the cause of fast thermocline breakups observed along the SESs trajectories. Finally, we show that the maximum depth reached by the elephant seals can be used to detect errors in the bathymetry charts. Full article
(This article belongs to the Special Issue Oceans from Space V)
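The bathymetry check mentioned at the end of the abstract amounts to comparing the deepest point a seal reaches with the charted depth at the same position. A minimal sketch, assuming a 5 m threshold as in Figures 12 and 13 and a placeholder chart interpolator standing in for the GEBCO grids (not the authors' code):

```python
import numpy as np

def flag_bathymetry_errors(lon, lat, max_dive_depth, chart_interp, threshold=5.0):
    """lon/lat/max_dive_depth: 1-D arrays of dive positions and depths (m, positive down).
    chart_interp: callable returning charted depth (m, positive down) at (lon, lat)."""
    charted = chart_interp(lon, lat)
    diff = max_dive_depth - charted          # > 0: the seal dived deeper than the chart
    return diff > threshold, diff

# Example with a trivial "chart" (a constant 80 m seafloor):
lon = np.array([-62.1, -62.0]); lat = np.array([-42.5, -42.6])
depth = np.array([92.3, 78.0])               # deepest dives at each position
flags, diff = flag_bathymetry_errors(lon, lat, depth, lambda x, y: np.full_like(x, 80.0))
print(flags, diff)   # first position suggests the chart is too shallow by ~12 m
```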
Show Figures

Graphical abstract

Figure 1: Thick colored lines represent the trajectories of the five SESs that crossed the Argentine Continental Shelf (ACS) in October 2019. The background colors represent the GEBCO 2021 bathymetry. Thin black lines represent the 75 and 200 m isobaths. SMG: San Matias Gulf. PV: Península Valdés.
Figure 2: A female SES with three glued biologging devices: a head-mounted DTAG-4, a back-mounted CTD-SRDL, and a neck-mounted SPOT. The main characteristics of the devices are listed in Table 2.
Figure 3: Amplitude (cm) of the M2 constituent as obtained with the TPXO tide model. Thick lines represent the trajectories of the five SESs in October 2019. Colors along the trajectories correspond to the different days, as detailed in Figure 4. Thin gray lines represent the 50, 75, 90, 100, 150, and 200 m isobaths.
Figure 4: The zonal component of the SES swimming velocity (black) and the tidal current velocity derived from TPXO (magenta) along the five trajectories. The mean value of the in-situ velocity along each trajectory (blue line) was added to the tidal currents. A threshold of two standard deviations was used to eliminate outliers from the in-situ velocity data. The colors on the x-axis match the colors along the trajectories in Figure 3.
Figure 5: (a) Scatter plot of the in-situ temperature of SESs and SST-MUR extracted along the trajectories (black dots). The linear regression and 95% confidence intervals are indicated with a magenta line and a shaded gray region. (b) Temperature bias (SST MUR − T at 15 dbar) along the trajectories. Black lines represent the 50, 75, 90, 100, 150, and 200 m isobaths.
Figure 6: Temperature (°C) (left column) and salinity (right column) measured by sensors placed on five elephant seals during October 2019. The variables are presented at three pressure levels: 20 dbar (upper panel), 40 dbar (middle panel), and 60 dbar (bottom panel). Black lines represent the 50, 75, 90, 100, 150, and 200 m isobaths.
Figure 7: (a) Trajectories representing the northern (875) and southern (051) regions. The corresponding TS diagrams are presented in panels (b,c). The colors in panels (a–c) correspond to the segment of trajectory traveled per day by each elephant seal. The black lines represent isolines of σ in kg/m³.
Figure 8: Temperature (top), salinity (middle), and density (bottom) profiles for SES 875 after departing north of PV. The horizontal axis is the distance in km from PV. The red, gray, and black lines show the seafloor according to the GEBCO-SHN 2019, GEBCO 2020, and GEBCO 2021 bathymetries, respectively. The top colorbar indicates the day and month of 2019; the colors are the same as in Figure 7. Magenta dots are Prey Catch Attempts.
Figure 9: (a) Sea surface temperature (SST) derived from the Multi-scale Ultra-high Resolution L4 (MUR) satellite product for 18 October 2019. The stars mark kilometers from the coast (Figure 8). Black lines represent the 50, 75, and 100 m isobaths. (b) Average magnitude of the SST gradient (°C/km) derived from the MUR product for all months of October from 2002 to 2019. The yellow lines represent the SES trajectories. The trajectory of SES 875 is marked with the black–magenta line on both panels; the magenta color corresponds to the segment of trajectory traveled on 18 October by the SES.
Figure 10: Temperature (top), salinity (middle), and density (bottom) profiles along the southern trajectory 051. The horizontal axis is the distance in km from PV. The red, gray, and black lines show the seafloor according to the GEBCO-SHN 2019, GEBCO 2020, and GEBCO 2021 bathymetries, respectively. The top colorbar indicates the day and month of 2019; the colors are the same as in Figure 7.
Figure 11: (a) Trajectories of SESs 905 (magenta) and 051 (red). (b) In-situ temperature (magenta–red), salinity (blue), and wind speed from ERA5 (gray) and DTAG-4 (black) along the two trajectories. DTAG-4 values were low-pass filtered with a 20 min cut-off period using a Loess filter. The black dots in panel (a) and the black stars in panel (b) indicate the cooling event in both trajectories.
Figure 12: Bathymetry differences greater than 5 m between the depth reached by SESs and the depth reported by GEBCO 2019 + SHN (upper panel), GEBCO 2020 (middle panel), and GEBCO 2021 (bottom panel). The black lines represent the five SES trajectories, and the gray contours represent the 25, 50, 75, 100, 150, and 200 m isobaths.
Figure 13: Histogram of bathymetry differences greater than 5 m between the depths reached by SESs and the bathymetric charts: GEBCO 2019 + SHN (dark blue), GEBCO 2020 (red), and GEBCO 2021 (dark yellow).
23 pages, 7675 KiB  
Article
Satellite-Based Localization of IoT Devices Using Joint Doppler and Angle-of-Arrival Estimation
by Iza S. Mohamad Hashim and Akram Al-Hourani
Remote Sens. 2023, 15(23), 5603; https://doi.org/10.3390/rs15235603 - 2 Dec 2023
Cited by 4 | Viewed by 1944
Abstract
While global navigation satellite system (GNSS) technologies have long been the go-to solution for localization problems, they may not be the best choice for some Internet-of-Things (IoT) applications due to the power consumption and cost they incur. In this paper, we present an alternative satellite-based localization method that exploits the signature of Doppler shifts and angle-of-arrival measurements as seen by a low-Earth-orbit (LEO) satellite. We first derive the joint likelihood function of the measurements, which is represented as a combination of three Gaussian distributions. Then, we show that the maximum likelihood problem reduces to a more efficient mean squared error minimization in the Gaussian case, as inferred from real measurements we collected from a low-Earth-orbit satellite using a tracking ground station. Thus, we propose utilizing a stochastic optimizer to search for the global minimum of the mean squared error, which represents the location of the ground IoT device as estimated by the satellite platform. The emulated results show that, under such a realistic model, the IoT device can be localized with sufficient accuracy for IoT applications. Full article
(This article belongs to the Section Engineering Remote Sensing)
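A minimal sketch of the estimation idea summarized above: with Gaussian errors, maximizing the joint Doppler/AoA likelihood is equivalent to minimizing a weighted mean squared error, which a stochastic optimizer searches over candidate device positions. SciPy's differential evolution stands in for the paper's optimizer, and the flat-Earth ENU geometry, carrier frequency, and noise-free synthetic pass are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

c, fc = 3e8, 401e6                       # speed of light; carrier frequency (assumed)

def predict(dev_xy, sat_pos, sat_vel):
    # Doppler shift and azimuth/elevation of a ground device at dev_xy (z = 0)
    # as seen from satellite states sat_pos/sat_vel (N x 3 arrays, meters).
    dev = np.array([dev_xy[0], dev_xy[1], 0.0])
    los = dev - sat_pos                               # satellite -> device
    r = np.linalg.norm(los, axis=1)
    fd = fc / c * np.sum(sat_vel * los, axis=1) / r   # fd = -fc/c * dr/dt
    az = np.arctan2(los[:, 1], los[:, 0])
    el = np.arcsin(-los[:, 2] / r)                    # depression-style angle
    return fd, az, el

def mse(dev_xy, sat_pos, sat_vel, fd_m, az_m, el_m, sd=10.0, sa=np.deg2rad(1)):
    # Weighted MSE between measured and predicted Doppler/AoA values.
    fd, az, el = predict(dev_xy, sat_pos, sat_vel)
    return (np.mean((fd - fd_m) ** 2) / sd**2
            + np.mean((az - az_m) ** 2) / sa**2
            + np.mean((el - el_m) ** 2) / sa**2)

# Synthetic pass: satellite flying north at 600 km altitude; device at (20, 5) km.
t = np.linspace(-60, 60, 25)
sat_pos = np.stack([np.zeros_like(t), 7.5e3 * t, np.full_like(t, 6e5)], axis=1)
sat_vel = np.tile([0.0, 7.5e3, 0.0], (t.size, 1))
fd_m, az_m, el_m = predict([2e4, 5e3], sat_pos, sat_vel)

res = differential_evolution(mse, bounds=[(-1e5, 1e5), (-1e5, 1e5)],
                             args=(sat_pos, sat_vel, fd_m, az_m, el_m), seed=1)
print(res.x)   # should recover approximately (20000, 5000)
```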
Show Figures

Figure 1: Overview of the non-terrestrial network (NTN): spaceborne and airborne.
Figure 2: Example of satellite constellation orbits using a Walker-star pattern with 12 orbital planes and 24 satellites distributed equally on a single plane.
Figure 3: Comparison between the classical and relativistic Doppler shift frequency measured from a polar-orbiting LEO satellite at an altitude of 833 km. The highest relative error is at the inflection point (around 0.62 Hz).
Figure 4: Doppler shift frequency at different LEO polar orbit altitudes (ranging from 200 to 1000 km).
Figure 5: Illustration of the ground IoT device’s angle of arrival measured at the satellite in the north–east–down (NED) frame. The antenna boresight is assumed to be oriented towards the nadir.
Figure 6: Block diagram of the measurement setup for Doppler shift frequency measurements.
Figure 7: Satellite tracking antenna with a digital control interface and an elevation–azimuth controller connected to the antenna’s rotator.
Figure 8: The utilized VHF antenna and the corresponding motors that control the antenna’s tilt angle in the elevation and azimuth planes.
Figure 9: Spectrogram of the actual measurements from the NOAA-15 satellite. The orange dotted line is the estimated Doppler shift frequency.
Figure 10: The connection between the software-defined radio (National Instruments (NI) Universal Software Radio Peripheral (USRP) 2950) and the GPS-disciplined oscillator.
Figure 11: Comparison of the measured Doppler error distributions using free-running (FRO) and GPS-disciplined (GPSDO) oscillators, respectively.
Figure 12: Doppler measurement error distribution for five different satellite pass examples collected over a few days. All measurements passed the Kolmogorov–Smirnov test at the 5% significance level with the corresponding p-values. These results are for the case when the SDR is phase-locked to a GPS-disciplined oscillator (GPSDO).
Figure 13: An example of empirical and theoretical Doppler error cumulative distribution functions (CDFs) for pass index 1.
Figure 14: Block diagram of Doppler measurements and the sources of error.
Figure 15: An example snapshot of the contributing satellites above the 15° elevation threshold with respect to a ground IoT device. The collected measurements from these satellites are used in the localization. Non-contributing satellites below the threshold are marked by red triangles.
Figure 16: Satellite orbits above the elevation threshold and the corresponding locations at the beginning and end of the segment used for signal measurements in the localization algorithm.
Figure 17: An example of the joint AoA and Doppler log-likelihood function, showing the ground-truth point.
Figure 18: The median localization error with an increasing number of satellites for the stochastic optimizer. Different curves represent Doppler standard deviations σ_d = 5, 10, 25, and 50 Hz, with σ_Φ, σ_Θ = 1°; the results are averaged over 300 runs.
Figure 19: The median localization error for a varying number of measurements taken at 5 s intervals. Different curves represent the number of satellites, with Doppler standard deviation σ_d = 10 Hz and AoA standard deviations σ_Φ, σ_Θ = 1°; the results are for 300 runs.
Figure 20: The median localization error for varying Doppler standard deviation σ_d and azimuth (σ_Φ) and elevation (σ_Θ) AoA deviations, using six satellites and 75 s of measurements; the results are deduced from 300 runs.
Figure 21: The localization error using Doppler, AoA, and joint Doppler–AoA measurements for Doppler standard deviation σ_d = 5 Hz and azimuth σ_Φ = 0.01° and elevation σ_Θ = 0.01° deviations, using six satellites and 75 s of measurements; the results are based on 1600 runs.
43 pages, 18503 KiB  
Article
Suitability of Satellite Imagery for Surveillance of Maize Ear Damage by Cotton Bollworm (Helicoverpa armigera) Larvae
by Fruzsina Enikő Sári-Barnácz, Mihály Zalai, Stefan Toepfer, Gábor Milics, Dóra Iványi, Mariann Tóthné Kun, János Mészáros, Mátyás Árvai and József Kiss
Remote Sens. 2023, 15(23), 5602; https://doi.org/10.3390/rs15235602 - 1 Dec 2023
Cited by 2 | Viewed by 1562
Abstract
The cotton bollworm (Helicoverpa armigera, Lepidoptera: Noctuidae) poses significant risks to maize. Changes in the maize plant, such as its phenology, influence the short-distance movement and oviposition of cotton bollworm adults and, thus, the distribution of the subsequent larval damage. We aim to provide an overview of future approaches to the surveillance of maize ear damage by cotton bollworm larvae based on remote sensing. We focus on finding a near-optimal combination of Landsat 8 or Sentinel-2 spectral bands, vegetation indices, and maize phenology to achieve the best predictions. The study areas were 21 sweet and grain maize fields in Hungary in 2017, 2020, and 2021. Correlations between the percentage of damage and the time series of satellite images were explored. Based on our results, Sentinel-2 satellite imagery is suggested for damage surveillance, as 82% of all the extremes of the correlation coefficients were stronger, and this satellite provided 20–64% more cloud-free images. We identified that the maturity groups of maize are an essential factor in cotton bollworm surveillance. No correlations were found before canopy closure (BBCH 18). Visible bands were the most suitable for damage surveillance in mid–early grain maize (|rmedian| = 0.49–0.51), while the SWIR bands, NDWI, NDVI, and PSRI were suitable in mid–late grain maize fields (|rmedian| = 0.25–0.49) and sweet maize fields (|rmedian| = 0.24–0.41). Our findings aim to support prediction tools for cotton bollworm damage, providing information for the pest management decisions of advisors and farmers. Full article
(This article belongs to the Special Issue Spectral Imaging Technology for Crop Disease Detection)
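The core statistic behind the reported |rmedian| values is a per-date Pearson correlation between zone-level damage percentages and zone-mean reflectance or vegetation-index values. A minimal sketch with made-up stand-in numbers (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Percentage of damaged ears per sampling zone (field survey, illustrative).
damage_pct = np.array([2.0, 5.5, 8.1, 3.2, 12.4, 9.8])
# Zone-mean SWIR (Sentinel-2 B11) reflectance for one acquisition date (illustrative).
b11 = np.array([0.21, 0.19, 0.16, 0.20, 0.13, 0.15])

r, p = pearsonr(damage_pct, b11)
print(f"r = {r:.2f}, p = {p:.3f}")   # one point in the per-week correlation series

# A vegetation index such as NDVI is computed from zone-mean red (B04) and
# near-infrared (B08) reflectance before correlating in the same way:
b04, b08 = np.array([0.05, 0.06, 0.04]), np.array([0.45, 0.40, 0.50])
ndvi = (b08 - b04) / (b08 + b04)
```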
Show Figures

Figure 1: Location of all 21 study fields (purple) in Hungary for each farm and each year, with a Sentinel-2 true-color satellite image (10 m resolution) in the background.
Figure 2: Method of (a) selecting sampling zones by field NDVI: dividing the NDVI range into an equal number of intervals and designating the central point of the selected sampling zones; and (b) selecting sample plants in the sampling zones. (c) The ears of the sample plants were searched for consumed kernels and typical CBW excrement.
Figure 3: Correlation analysis of the percentage of damaged ears in the sampling zones and satellite spectral bands/vegetation indices aligned with agronomic factors (data structure).
Figure 4: Workflow of the processing steps of the cotton bollworm surveillance analysis, maize field characteristics, and reflectance used to select the most important parameters.
Figure 5: Weekly average catches of male cotton bollworm adults across all observed sex pheromone traps per year. Vegetative development = BBCH 05–BBCH 52; silk/tassel (emergence) = BBCH 54–BBCH 64; grain filling, ripening = BBCH 65–BBCH 98; harvest = BBCH 99–.
Figure 6: Overall distribution of Pearson correlation coefficients between spectral bands or vegetation indices and ear damage by cotton bollworm larvae, distinguished by the multispectral sensors of the Sentinel-2 and Landsat 8 satellites (a) and by spectral bands and VIs (b). The width of the violin plots around the boxplots represents the density of correlation coefficients. The mean values of the different groups are shown as grey dots. Groups based on Tukey’s post hoc test are denoted by letters above the boxplots (spectral bands and vegetation indices were analyzed separately).
Figure 7: Number of Landsat 8 and Sentinel-2 satellite images where the maximal cloud cover of the whole image was below 60%, the area of each observed field was free of clouds (0%), and the date of recording was within the maize vegetation period (from 15 April to 10 September). (N/A = no field data recorded.)
Figure 8: Distribution, amount (light grey dots), and quartiles of Pearson correlation coefficients of all Sentinel-2-derived spectral bands and vegetation indices with larval ear damage by cotton bollworm, considering all available and suitable images, grouped by maize type and year. The width of the violin plots around the boxplots represents the density of correlation values. Groups based on Tukey’s post hoc test are denoted by letters above the boxplots.
Figure 9: K-means clustering analysis of the Pearson correlation coefficients between ear damage by cotton bollworm larvae and the Sentinel-2 bands and vegetation indices of grain maize fields. The following factors were considered: FAO number of the maize variety, date, spectral bands (B02, B03, B04, B05, B06, B07, B8A, B11, and B12), and vegetation indices (ARI*1000, EVI, NDMI, NDVI, NDWI, NPCRI, PSRI, SAVI, and CRI*1000). (a) The optimal number of clusters was identified with the elbow method, using the within-cluster sum of squares (WCSS) between points in a cluster and the cluster centroid. (b) The clusters shown as jittered points, divided by FAO number type and year. (c) Coefficients between larval ear damage and Sentinel-2 band surface reflectance or VIs (facets), grouped by maize FAO type (colors). * Maize groups = maturity group of the observed commercial grain maize hybrids; FAO_300 = mid–early hybrids with FAO numbers from FAO 290 to FAO 380; FAO_400 = mid–late hybrids with FAO numbers from FAO 390 to FAO 490.
Figure 10: Absolute value of Pearson correlation coefficients between larval ear damage of cotton bollworm and all Sentinel-2 spectral bands and vegetation indices per week and phenological stage, grouped by maize cultivation purpose, maturity group, and year.
Figure 11: Histogram of Pearson correlation coefficients between ear damage by cotton bollworm larvae and surface reflectance measured by Sentinel-2 spectral bands (a) and vegetation indices (b), including data from the Digital Evaluation Period of Cotton Bollworm (DEPC) of all years. * Maize groups: the FAO 300 group comprises grain maize hybrids from FAO 290 to FAO 389, the FAO 400 group comprises grain maize hybrids from FAO 390 to FAO 489, and sweet maize comprises the sweet maize varieties.
Figure 12: Linear model of Pearson correlation coefficients between larval ear damage of cotton bollworm and Sentinel-2 spectral bands (a) or vegetation indices (b) as dependent on the week, for each year and each maize cultivation purpose (sweet and grain maize) and maturity group (FAO 300 and FAO 400).
Figure A1: Minimum, maximum, and average daily temperature and daily precipitation sum for the maize-growing season, grouped by year type (humid and arid), year, and farm.
Figure A2: Surface reflectance of the sampling zones of maize fields measured by each Sentinel-2 spectral band in each field in 2017. Different line types denote the different grain maize types.
Figure A3: Surface reflectance of the sampling zones of maize fields measured by each Sentinel-2 spectral band in each field in 2020. Different line types denote the different grain maize types.
Figure A4: Surface reflectance of the sampling zones of maize fields measured by each Sentinel-2 spectral band in each field in 2021. Different line types denote the different grain maize types.
29 pages, 14690 KiB  
Article
Polarimetric Synthetic Aperture Radar Ship Potential Area Extraction Based on Neighborhood Semantic Differences of the Latent Dirichlet Allocation Bag-of-Words Topic Model
by Weixing Qiu and Zongxu Pan
Remote Sens. 2023, 15(23), 5601; https://doi.org/10.3390/rs15235601 - 1 Dec 2023
Viewed by 1179
Abstract
Recently, deep learning methods have been widely studied in the field of polarimetric synthetic aperture radar (PolSAR) ship detection. However, extracting polarimetric and spatial features over the whole PolSAR image results in high computational complexity. In addition, in massive-data ship detection tasks, the images to be detected contain large invalid areas, such as land and seawater without ships. Therefore, using coarse detection methods to quickly locate the potential areas of ships, i.e., ship potential area extraction, is an important prerequisite for PolSAR ship detection. Existing unsupervised PolSAR ship detection methods based on pixel-level features often rely on fine sea–land segmentation pre-processing and generalize poorly to images with complex backgrounds. To address this issue, this paper proposes a PolSAR ship potential area extraction method based on the neighborhood semantic differences of an LDA bag-of-words topic model. Specifically, a polarimetric feature suited to the scattering diversity condition is selected, and a polarimetric feature map is constructed; the superpixel segmentation method is used to generate the bag of words on the feature map, and latent high-level semantic features are extracted and classified with the improved LDA bag-of-words topic model to obtain the PolSAR ship potential area extraction result, i.e., the PolSAR ship coarse detection result. Experimental results on a self-established PolSAR dataset validate the effectiveness and demonstrate the superiority of our method. Full article
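A minimal sketch of the bag-of-words pipeline named in the abstract: each superpixel becomes a "document" whose "words" are quantized feature values, and LDA yields a per-superpixel topic mixture that can be split into ship-potential and background areas. The codebook size, topic count, and random inputs are placeholders, and the paper's neighborhood-semantic refinement is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Stand-in for per-superpixel polarimetric feature values (two sea-like, one ship-like).
pixels_per_superpixel = [rng.normal(loc=m, size=(200, 1)) for m in (0.1, 0.12, 0.9)]

# Build the visual vocabulary by quantizing pixel feature values.
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.vstack(pixels_per_superpixel))

# Word-count histogram ("document") per superpixel.
counts = np.stack([np.bincount(codebook.predict(sp), minlength=16)
                   for sp in pixels_per_superpixel])

# Infer latent topic mixtures and split superpixels by dominant topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)          # per-superpixel topic mixtures
ship_like = theta[:, 1] > 0.5              # crude two-topic split
print(theta.round(2), ship_like)
```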
Show Figures

Graphical abstract

Figure 1: Complex backgrounds in the PolSAR ship potential area extraction (coarse detection) task: (a) overall view; (b) the green rectangle represents a defocusing false alarm and the red rectangle a real ship; (c) island false alarms; (d) azimuth-ambiguity false alarms.
Figure 2: Flowchart of the proposed method.
Figure 3: Nearshore polarimetric rotation domain feature map: (a) the HV channel of the original PolSAR image; (b) the constructed polarimetric feature map.
Figure 4: Distant-ocean polarimetric rotation domain feature map: (a) the HV channel of the original PolSAR image; (b) the constructed polarimetric feature map.
Figure 5: Comparison of superpixel segmentation results for the nearshore feature map: (a–c) overall; (d–f) land area; (g–i) ship area; (a,d,g) watershed method; (b,e,h) SLIC method; (c,f,i) our method.
Figure 6: Comparison of superpixel segmentation results for the distant-ocean feature map: (a–c) overall; (d–f) ship area; (a,d) watershed method; (b,e) SLIC method; (c,f) our method.
Figure 7: Sketch map of the LDA topic model.
Figure 8: Sketch map of the neighborhood structure.
Figure 9: Comparison of classification results for the nearshore feature map: (a,b) overall; (c,d) land area; (e,f) ship area; (a,c,e) using original semantic features; (b,d,f) using neighborhood semantic features.
Figure 10: Comparison of classification results for the distant-ocean feature map: (a,b) overall; (c,d) ship area; (a,c) using original semantic features; (b,d) using neighborhood semantic features.
Figure 11: The ground-truth data; yellow rectangles represent ships, blue rectangles defocusing, white rectangles azimuth ambiguity, and purple rectangles islands: (a,b) the nearshore images; (c,d) the distant-ocean images; (a,c) the HV channel of the original PolSAR images; (b,d) the ground-truth maps.
Figure 12: Comparison of ship detection results for the nearshore image; green rectangles represent true positives and red rectangles false alarms: (a) PWF-TS-CFAR method; (b) SP method; (c) SPCE method; (d) our method.
Figure 13: Comparison of ship detection results for the distant-ocean image; green rectangles represent true positives and red rectangles false alarms: (a) PWF-TS-CFAR method; (b) SP method; (c) SPCE method; (d) our method.
25 pages, 9398 KiB  
Article
Variations and Depth of Formation of Submesoscale Eddy Structures in Satellite Ocean Color Data in the Southwestern Region of the Peter the Great Bay
by Nadezhda A. Lipinskaya, Pavel A. Salyuk and Irina A. Golik
Remote Sens. 2023, 15(23), 5600; https://doi.org/10.3390/rs15235600 - 1 Dec 2023
Cited by 1 | Viewed by 1352
Abstract
The aim of this study was to develop methods for determining the most significant contrasts in satellite ocean color data arising in the presence of a submesoscale eddy structure, as well as to determine the corresponding depths of the upper layer of the sea where these contrasts are formed. The research was carried out using the example of a chain of submesoscale eddies identified in the Tumen River water transport area in the Japan/East Sea. MODIS Aqua/Terra satellite data of the remotely sensed reflectance (Rrs) and Rrs band ratios at various wavelengths, chlorophyll-a concentration, and, for comparison, sea surface temperature (sst) were analyzed. Additionally, the results of ship surveys in September 2009 were used to study the influence of the eddies’ vertical structure on the obtained remote characteristics. The best characteristic for detecting the studied eddies in satellite ocean color data was the standard MODIS chlor_a product, an estimate of chlorophyll-a concentration obtained by combining the three-band reflectance difference algorithm (CI) for low concentrations and the band-ratio algorithm (OCx) for high concentrations. The weakest contrasts were in the sst data, owing to similar water heating inside and outside the eddies. The best eddy contrast-to-noise ratio in the Rrs spectra is achieved at 547 nm, in the spectral region of maximum seawater transparency and low relative measurement errors. The Rrs at 678 nm and associated products may be a significant characteristic for eddy detection if phytoplankton are abundant in the eddy waters. The maximum depth of the remotely sensed contrast formation of the considered eddy vertical structure was ~6 m, significantly less than the maximum spectral penetration depth of solar radiation for remote sensing, which was in the 14–17 m range. The results obtained can be used to determine the characteristics that provide the best contrast for detecting eddy structures in remotely sensed reflectance data and to improve the interpretation of remote spectral ocean color data in areas of eddy activity. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Ocean Observation (Second Edition))
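A minimal sketch of a contrast-to-noise ratio of the kind used here to rank eddy signatures: the mean signal inside the eddy is compared against the mean and variability of the surrounding background, with a Sobel filter marking the boundary as in Figure 4. The exact CNR definition in the paper may differ; this common form and the synthetic scene are assumptions.

```python
import numpy as np
from scipy.ndimage import sobel

def cnr(field, inside_mask, outside_mask):
    # Contrast between eddy interior and background, normalized by
    # background variability (one common CNR form, assumed here).
    mu_in = np.nanmean(field[inside_mask])
    mu_out = np.nanmean(field[outside_mask])
    return (mu_in - mu_out) / np.nanstd(field[outside_mask])

# Synthetic chlor_a scene: elevated concentration inside a circular "eddy".
y, x = np.mgrid[0:100, 0:100]
eddy = (x - 50) ** 2 + (y - 50) ** 2 < 15 ** 2
chlor_a = 0.5 + 0.05 * np.random.randn(100, 100) + 0.3 * eddy

# Gradient magnitude (Sobel) highlights the eddy boundary, as in Figure 4.
edges = np.hypot(sobel(chlor_a, axis=0), sobel(chlor_a, axis=1))

print(f"CNR = {cnr(chlor_a, eddy, ~eddy):.1f}")
```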
Show Figures

Graphical abstract

Figure 1: Study area. Large dots indicate ship measurements made on 4 September 2009. The numbers near the dots correspond to the station numbers used later in the text. The green polygon indicates the southern area of the Far Eastern Marine Reserve.
Figure 2: MODIS-Aqua/Terra satellite images of the southwestern part of the Peter the Great Bay from 31 August 2009 to 4 September 2009. Top row (a–d): sst data; bottom row (e–h): chlor_a_sat data. The identifiers denote the eddies under consideration.
Figure 3: Scheme of the oceanographic stations along the eddy structure ID04.02 on 4 September 2009, where remote sensing data and in situ ship measurements were collected. The ship route followed a direction from northeast to southwest. The numbers correspond to the ship’s station numbers.
Figure 4: Scheme illustrating the selection of regions inside and outside the eddy for calculating the contrast-to-noise ratio (CNR), using eddy ID03.02 as an example: (a) initial chlor_a_sat concentration data; (b) the result after applying the Sobel operator.
Figure 5: Flowchart of the algorithm for determining the maximum depth of remotely sensed contrast formation of an eddy structure (Z_rsE). a_ph: light absorption coefficient of phytoplankton; a_CDOM: of CDOM; a_nap: of non-algal particles; b_bp: light backscattering coefficient of particles.
Figure 6: Example simulation of layer-by-layer removal of the eddy structure in the Chl_insitu(z) concentration profile. Panel (a) illustrates the initial structure; panels (b–f) show structures where values from the “background” stations (st. 1, 6, 8; red dashed lines) in the sea surface layer are propagated to all “signal” stations (st. 2–5, 7; blue dashed lines) for the following layer-thickness ranges: (b) 0–5 m; (c) 0–10 m; (d) 0–15 m; (e) 0–20 m; (f) 0–25 m. Black dashed lines mark the boundary of the processed upper layer. The numbers correspond to the ship’s station numbers (st. №).
Figure 7: Number of wins for which the CNR was the highest among all MODIS radiometer wavelengths: (a) for Rrs_sat(λ); (b) for BR_sat(λ).
Figure 8: Mean of the maximum CNR for each eddy (max_mean), median of the maximum CNR for each eddy (max_median), and absolute maximum CNR over all analyzed eddies (max): (a) for Rrs_sat(λ); (b) for BR_sat(λ) at MODIS radiometer wavelengths.
Figure 9: Comparison of the maximum absolute CNR values for the satellite estimates chlor_a_sat, sst, Rrs_sat(547), and BR_sat(678) for the individual eddies shown in Figure 2.
Figure 10: MODIS/Aqua satellite images (a,b) for 4 September 2009 in the area of the two submesoscale eddies ID04.01 and ID04.02, and the corresponding plots of the spatial variability of the analyzed parameters along the ship transect (c,d). Panels (a,c): sst and T_flow measurements; panels (b,d): chlor_a_sat and Chl_flow measurements. The red and green rectangles in (c,d) correspond to the eddy boundaries of ID04.01 and ID04.02.
Figure 11: In situ vertical profiles of temperature T_insitu (a) and salinity S_insitu (b) obtained at the stations located at the intersection of submesoscale eddy ID04.02 on 4 September 2009 (c,d). The colors of the dots in (a,b) correspond to the colors of the dots on the map in Figure 1.
Figure 12: Vertical transect of in situ measurements of chl-a (Chl_insitu) (a) and CDOM (CDOM_insitu) (b) concentrations obtained at the stations located at the intersection of submesoscale eddy ID04.02 on 4 September 2009.
Figure 13: Variability in the Rrs_model(443), Rrs_model(547), and BR_model(443) values along a transect through submesoscale eddy ID04.02.
Figure 14: Contrast-to-noise ratio values, CNR_i, occurring between a set of the i-th number of remotely sensed characteristics modeled along a transect through submesoscale eddy ID04.02. The blue rectangle marks the contrasts characterizing the presence of the eddy periphery; the red rectangle marks the contrasts indicating the eddy core manifestation.
Figure 15: Variations in the maximum absolute value of CNR_i among the remotely sensed characteristics, simulated for the case of layer-by-layer structure removal from the bio-optical characteristic profiles, according to the methodology presented in Figures 5 and 6. Different line colors correspond to different pairs of analyzed stations, according to the legend presented.
Figure 16: (a) Wavelength dependencies of Z90(λ) calculated for all stations through eddy structure ID04.02. (b) Comparison of the maximum depth of remotely sensed contrast formation (Z_rsE) and the maximum depth of sunlight penetration for remote sensing (Z90_max) for the considered vertical structures in eddy ID04.02.
17 pages, 26835 KiB  
Technical Note
The Impact of Side-Scan Sonar Resolution and Acoustic Shadow Phenomenon on the Quality of Sonar Imagery and Data Interpretation Capabilities
by Artur Grządziel
Remote Sens. 2023, 15(23), 5599; https://doi.org/10.3390/rs15235599 - 1 Dec 2023
Cited by 3 | Viewed by 3347
Abstract
Side-scan sonar is designed and used for a variety of survey work, in both military and civilian fields. These systems provide acoustic imageries that play a significant role in a variety of marine and inland applications, so it is extremely important that the recorded sonar image be characterized by high resolution, detail, and sharpness. This article mainly aims to demonstrate the impact of side-scan sonar resolution on imaging quality. It also presents the importance of the acoustic shadow in the process of analyzing sonar data and identifying underwater objects. The real measurements were carried out using two independent survey systems: a hull-mounted sonar and a towed side-scan sonar. Six different shipwrecks lying in the Baltic Sea were selected as the objects of research. The results presented in the article also constitute evidence of how sonar technology has changed over time. The survey findings show that, by maintaining the appropriate operational conditions and meeting several requirements, it is possible to obtain photographic-quality sonar images, which may be crucial in the process of data interpretation and shipwreck identification. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)
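The theoretical resolution curves in Figures 9 and 10 follow from two textbook relations: across-track (range) resolution is set by the pulse length, and along-track resolution by the horizontal beamwidth times the slant range. A minimal sketch with illustrative pulse lengths and beamwidths, not the published Acson-100 or DF-1000 specifications:

```python
import numpy as np

C = 1500.0                      # nominal sound speed in seawater, m/s (assumed)

def across_track_resolution(pulse_length_s):
    # Range resolution of an unmodulated pulse: c * tau / 2.
    return C * pulse_length_s / 2.0

def along_track_resolution(slant_range_m, beamwidth_deg):
    # Footprint width of the horizontal beam at a given slant range.
    return slant_range_m * np.deg2rad(beamwidth_deg)

ranges = np.linspace(10, 150, 5)                    # slant ranges, m
print(across_track_resolution(0.1e-3))              # 0.075 m for a 0.1 ms pulse
print(along_track_resolution(ranges, 0.5))          # grows linearly with range
```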
Show Figures

Graphical abstract

Figure 1: The location of all underwater objects selected for the side-scan sonar investigation.
Figure 2: Sonar equipment used for field operation and data acquisition: (a) console of the Acson-100 side-scan sonar; (b) transducer of the Acson-100 mounted in the hull of the hydrographic ship Arctowski; (c) DF-1000 towfish on deck, ready for deployment; (d) CODA DA-25 data acquisition system.
Figure 3: Input data for towfish position estimation. S_l: side-scan sonar layback; a_l: distance between the GPS antenna and the A-frame; h_a: height of the cable block on the A-frame above the waterline; S_d: depth of the side-scan sonar; W_d: water depth; S_a: side-scan sonar altitude.
Figure 4: Comparison of the quality of the sonar imageries of the shipwrecks: (a) sonogram of the Franken acquired with hull-mounted sonar, f = 100 kHz; (b) sonogram of the Franken acquired with towed SSS, f = 500 kHz; (c) sonogram of the Delfin acquired with hull-mounted sonar, f = 100 kHz; (d) sonogram of the Delfin acquired with towed SSS, f = 100 kHz; (e) sonogram of the Delfin acquired with towed SSS, f = 500 kHz; (f) sonogram of the Delfin acquired with hull-mounted sonar, f = 100 kHz; (g) sonogram of the Delfin acquired with towed SSS, f = 100 kHz; (h) sonogram of the Delfin acquired with towed SSS, f = 500 kHz.
Figure 5: Comparison of the quality of the sonar imageries of the Wilhelm Gustloff shipwreck: (a) sonogram recorded with hull-mounted sonar, f = 100 kHz; (b) sonogram recorded with towed sonar, f = 500 kHz; (c) sonogram recorded with towed sonar, f = 500 kHz, in a different color palette.
Figure 6: Acoustic shadow formation, using the Goya shipwreck as an example: (a) sonogram recorded with hull-mounted sonar, f = 100 kHz; (b) sonogram recorded with towed sonar, f = 500 kHz; (c) acoustic shadow extracted from the sonar imagery; (d) ship silhouette of the Goya intentionally filled with black for comparison purposes.
Figure 7: Sonar images of the shipwreck of the fishing boat: (a) sonogram acquired with hull-mounted sonar, f = 100 kHz; (b) sonogram acquired with towed side-scan sonar, f = 500 kHz; (c) sonogram acquired with towed side-scan sonar, f = 500 kHz; (d) silhouette of a fishing boat.
Figure 8: Acoustic shadow formation, using the SS Steuben shipwreck as an example: (a) sonogram recorded with hull-mounted sonar; (b) sonogram recorded with towed side-scan sonar; (c) hydroacoustic shadow extracted from the sonogram acquired with towed side-scan sonar, f = 500 kHz; (d) ship silhouette of the Steuben intentionally filled with black for comparison purposes.
Figure 9: Theoretical across-track resolution of the Acson-100 and EdgeTech DF-1000 side-scan sonars.
Figure 10: Theoretical along-track resolution of the Acson-100 and EdgeTech DF-1000 side-scan sonars.
14 pages, 5158 KiB  
Article
Difference between WMO Climate Normal and Climatology: Insights from a Satellite-Based Global Cloud and Radiation Climate Data Record
by Abhay Devasthale, Karl-Göran Karlsson, Sandra Andersson and Erik Engström
Remote Sens. 2023, 15(23), 5598; https://doi.org/10.3390/rs15235598 - 1 Dec 2023
Cited by 4 | Viewed by 1717
Abstract
The World Meteorological Organization (WMO) recommends that the most recent 30-year period, i.e., 1991–2020, be used to compute the climate normals of geophysical variables. A unique aspect of this recent 30-year period is that satellite-based observations of many different essential climate variables are available throughout it, opening up new possibilities to provide a robust, global basis for the 30-year reference period for climate-monitoring and climate change studies. Here, using the satellite-based climate data record of cloud and radiation properties, CLARA-A3, for the month of January between 1981 and 2020, we illustrate the difference between the climate normal, as defined by the WMO guidelines on the calculation of 30-year climate normals, and climatology. It is shown that this difference depends strongly on the climate variable in question. We discuss the impacts of the nature and availability of satellite observations, variable definition, retrieval algorithm, and programmatic configuration. It is shown that satellite-based climate data records show enormous promise in providing a climate normal for the recent 30-year period (1991–2020) globally. We finally argue that holistic perspectives from the global satellite community should be increasingly considered when formulating future WMO guidelines on computing climate normals. Full article
(This article belongs to the Section Environmental Remote Sensing)
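The abstract's core distinction, a WMO climate normal computed under completeness rules for the 1991–2020 reference period versus a climatology averaged over all available years, can be made concrete with a short sketch. The following is a minimal illustration assuming a placeholder completeness rule (no more than five missing years in the reference period) that merely stands in for the paper's D5/D11 conditions; the function names and the synthetic January series are illustrative, not from the paper.

```python
import numpy as np

def climate_normal(values, years, ref_start=1991, ref_end=2020, max_missing=5):
    """Compute a WMO-style 30-year climate normal for one grid cell.

    values: 1-D array of monthly means (e.g., January cloud fraction), NaN = missing.
    years:  1-D array of the corresponding years.

    Returns NaN when the completeness rule fails. The max_missing=5 cutoff is a
    placeholder standing in for the paper's D5/D11 conditions, not their exact form.
    """
    values = np.asarray(values, dtype=float)
    years = np.asarray(years)
    in_ref = (years >= ref_start) & (years <= ref_end)
    ref_vals = values[in_ref]
    n_expected = ref_end - ref_start + 1
    n_missing = np.count_nonzero(np.isnan(ref_vals)) + (n_expected - ref_vals.size)
    if n_missing > max_missing:
        return np.nan  # grid cell fails the completeness condition
    return np.nanmean(ref_vals)

def climatology(values):
    """Plain climatology: mean over all available years, with no completeness rule."""
    return np.nanmean(np.asarray(values, dtype=float))

# Example: 40 Januaries (1981-2020) with a gap in the early satellite record.
years = np.arange(1981, 2021)
vals = 60 + 5 * np.sin(years / 3.0)          # synthetic cloud fraction, %
vals[0:4] = np.nan                           # missing early years
cn_wmo = climate_normal(vals, years)         # 1991-2020 normal (CN_WMO)
clim40 = climatology(vals)                   # all-years climatology (Clim40-like)
print(f"CN_WMO = {cn_wmo:.2f} %, Clim40 = {clim40:.2f} %, diff = {cn_wmo - clim40:.2f} %")
```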
Show Figures
Figure 1. The absolute values of CN_WMO for the total cloud fraction (in %) together with the differences (also in %) between CN_WMO and the monthly-based climate normal (CN_MB) and the two climatologies (Clim30 and Clim40). The numbers in the subplot titles show the global mean differences. The spatial resolution of the equal-angle lat–lon grid is 0.25 degrees.
Figure 2. Latitude–time histograms showing the number of longitude grids failing either the D5 or the D11 condition when computing climate normals of total cloud fraction. The maximum number of longitude grids is 1440, since the spatial resolution is 0.25 degrees. The resolution on the Y-axis (latitude) is also 0.25 degrees.
Figure 3. Same as in Figure 1, but for the low-level clouds.
Figure 4. Same as in Figure 1, but for the daytime cloud fraction.
Figure 5. Same as in Figure 1, but for the nighttime cloud fraction.
Figure 6. The number of years, out of a total of 30, used to compute CN_WMO and Clim30 in the case of daytime cloud fraction, and the difference thereof.
Figure 7. Latitude–time histograms showing the number of longitude grids failing either the D5 or the D11 condition when computing climate normals of daytime cloud fraction. The maximum number of longitude grids is 1440, since the spatial resolution is 0.25 degrees. The resolution on the Y-axis (latitude) is also 0.25 degrees.
Figure 8. Same as in Figure 1, but for the cloud top pressure (in hPa).
Figure 9. The number of years, out of a total of 30, used to compute CN_WMO and Clim30 in the case of cloud top pressure, and the difference thereof.
Figure 10. Latitude–time histograms showing the number of longitude grids failing either the D5 or the D11 condition when computing climate normals of cloud top pressure. The maximum number of longitude grids is 1440, since the spatial resolution is 0.25 degrees. The resolution on the Y-axis (latitude) is also 0.25 degrees.
Figure 11. Same as in Figure 1, but for the incoming solar radiation at the surface (in W/m²).
21 pages, 6170 KiB  
Article
Satellite Imagery-Based Cloud Classification Using Deep Learning
by Rukhsar Yousaf, Hafiz Zia Ur Rehman, Khurram Khan, Zeashan Hameed Khan, Adnan Fazil, Zahid Mahmood, Saeed Mian Qaisar and Abdul Jabbar Siddiqui
Remote Sens. 2023, 15(23), 5597; https://doi.org/10.3390/rs15235597 - 1 Dec 2023
Cited by 3 | Viewed by 3423
Abstract
A significant amount of satellite imaging data is now easily available due to the continued development of remote sensing (RS) technology. Enabling the successful application of RS in real-world settings requires efficient and scalable solutions that extend its use in multidisciplinary areas. The goal of quick analysis and precise classification in Remote Sensing Imaging (RSI) is often accomplished by utilizing approaches based on deep Convolutional Neural Networks (CNNs). This research offers a unique snapshot-based residual network (SnapResNet) that consists of fully connected layers (FC-1024), batch normalization (BN), L2 regularization, dropout layers, a dense layer, and data augmentation. The architectural changes overcome the inter-class similarity problem, while data augmentation resolves the problem of imbalanced classes. Moreover, the snapshot ensemble technique is utilized to prevent over-fitting, thereby further improving the network’s performance. The proposed SnapResNet152 model is evaluated on the challenging Large-Scale Cloud Images Dataset for Meteorology Research (LSCIDMR), which contains thousands of high-resolution images across 10 classes, classifying each image into its respective class. The developed model outperforms existing deep learning-based algorithms (e.g., AlexNet, VGG-19, ResNet101, and EfficientNet) and achieves an overall accuracy of 97.25%. Full article
(This article belongs to the Section AI Remote Sensing)
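The snapshot ensemble technique named in the abstract (in the spirit of Huang et al.'s "Snapshot Ensembles") cycles the learning rate with cosine annealing so that a single training run converges into several local minima; a model snapshot is saved at the end of each cycle, and the snapshots' softmax outputs are averaged at test time. A minimal sketch of the schedule and the averaging step follows; the 100-epoch cycle length mirrors the two snapshots described in the paper's figures, but every other value is a dummy placeholder rather than the paper's configuration.

```python
import numpy as np

def cyclic_cosine_lr(iteration, total_iters, n_cycles, lr_max):
    """Cyclic cosine annealing: the learning rate restarts at lr_max at the start
    of each cycle and decays toward zero, where a model snapshot would be saved."""
    cycle_len = int(np.ceil(total_iters / n_cycles))
    t = iteration % cycle_len
    return lr_max / 2.0 * (np.cos(np.pi * t / cycle_len) + 1.0)

def snapshot_ensemble_predict(snapshot_probs):
    """Average the class-probability outputs of the saved snapshots
    (shape: n_snapshots x n_samples x n_classes) and take the argmax."""
    mean_probs = np.mean(snapshot_probs, axis=0)
    return np.argmax(mean_probs, axis=1)

# Schedule illustration: 200 epochs, 2 cycles -> snapshots near epochs 100 and 200.
lrs = [cyclic_cosine_lr(e, total_iters=200, n_cycles=2, lr_max=0.1) for e in range(200)]
print(f"LR at epoch 0: {lrs[0]:.3f}, epoch 99: {lrs[99]:.5f}, epoch 100 (restart): {lrs[100]:.3f}")

# Ensembling illustration with dummy softmax outputs: 2 snapshots, 3 samples, 10 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(2, 3))
print("ensemble labels:", snapshot_ensemble_predict(probs))
```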
Show Figures
Graphical abstract
Figure 1. Inter-class similarity: (a) tropical cyclone, (b) extra-tropical cyclone, (c) high ice cloud; images (a–c) all have blue hues, while (d) frontal surface and (e) westerly jet show a bluish cyclonic curve with elongated cloud belts.
Figure 2. ResNet50 architecture, used as a baseline model for the proposed scheme [31].
Figure 3. The original ResNet50 architecture for the 10 classes of the LSCIDMR dataset.
Figure 4. Learning rate schedules: (a) step decay; (b) cosine decay.
Figure 5. Modified design after the addition of a dropout layer (in red) to the original ResNet design to reduce the overfitting observed in the model.
Figure 6. Proposed FC-1024 ResNet50 architecture; an FC-1024 layer is added for classifying images, and weight decay regularization (additional red block) helps the model generalize better, improving performance on unseen or test datasets.
Figure 7. Proposed training regime, highlighting the steps taken to further improve the proposed model.
Figure 8. Proposed SnapResNet architecture. Snapshot ensembling with cyclic cosine annealing is applied to the FC-1024 ResNet152 model to obtain snapshots where the model accuracy is higher; the resulting model, SnapResNet152, is used to classify images.
Figure 9. Images from LSCIDMR-S: examples from the ten classes, which contain 40,625 images in total. (a) Desert, (b) extratropical cyclone, (c) frontal surface, (d) high ice cloud, (e) low water cloud, (f) ocean, (g) snow, (h) tropical cyclone, (i) vegetation, (j) westerly jet.
Figure 10. A statistical analysis of the proportion of images of each weather system in different seasons; the seasons are divided according to the northern hemisphere.
Figure 11. Snapshot ensembling: each snapshot is saved after 100 epochs. (a) Training and validation (val) accuracy across 2 snapshots; (b) training and validation (val) losses across 2 snapshots.
Figure 12. A blue dot shows a single snapshot after 100 epochs, and an orange dot shows combined snapshots after 200 epochs.
Figure 13. Confusion matrix for detailed analysis of the proposed scheme’s performance.
Figure 14. True and false predictions from SnapResNet152: (a) cases where the true and predicted labels differ (false predictions); (b) cases where they match (true predictions).
25 pages, 47432 KiB  
Article
Research on Deformation Evolution of a Large Toppling Based on Comprehensive Remote Sensing Interpretation and Real-Time Monitoring
by Shenghua Cui, Hui Wang, Xiangjun Pei, Luguang Luo, Bin Zeng and Tao Jiang
Remote Sens. 2023, 15(23), 5596; https://doi.org/10.3390/rs15235596 - 1 Dec 2023
Cited by 2 | Viewed by 1441
Abstract
Deep-seated unstable slopes are widely developed in mountainous areas, especially in the Minjiang River Basin, Sichuan Province, China. In this study, to reveal their deformation evolution characteristics for stability evaluation and disaster prevention, multi-period optical remote sensing images (2010–2019), SBAS-InSAR data (January 2018–December 2019), and on-site real-time monitoring (December 2017–September 2020) were utilized to monitor the deformation of a large deep-seated toppling, named the Tizicao (TZC) Toppling. The results obtained by the different techniques were cross-validated and synthesized to characterize the spatial and temporal behavior of the toppling. It was found that the displacements on the north side of the toppling are much larger than those on the south side, and the leading edge exhibits a composite damage pattern of “collapse failure” and “bulging cracking”. The development process of the toppling, from the formation of a tensile crack at the northern leading edge to the gradual pulling apart of the rear edge, was revealed over a time span of up to ten years. In addition, the correlation between rainfall, earthquakes, and the GNSS time series showed that the deformation of the toppling is sensitive to rainfall but shows no measurable response to earthquakes. The surface-displacement-monitoring approach in this study can provide a reference for the evolution analysis of unstable slopes with a large span of deformation. Full article
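Cross-validating SBAS-InSAR against GNSS, as this study does, requires projecting the 3-D GNSS displacement onto the radar line of sight (LOS) before comparing it with the InSAR measurement. A minimal sketch using one common geometric convention is given below; the incidence angle, heading, and displacement values are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np

def gnss_to_los(d_enu, incidence_deg, heading_deg, right_looking=True):
    """Project a GNSS displacement vector (east, north, up) onto the satellite
    line of sight, with positive values meaning motion toward the satellite.

    Uses a common convention: look azimuth = heading +/- 90 degrees for a
    right-/left-looking SAR, and the ground-to-satellite unit vector is
    (-sin(theta)sin(a), -sin(theta)cos(a), cos(theta)) for incidence angle
    theta and look azimuth a (clockwise from north).
    """
    theta = np.radians(incidence_deg)
    a = np.radians(heading_deg + (90.0 if right_looking else -90.0))
    unit_los = np.array([-np.sin(theta) * np.sin(a),
                         -np.sin(theta) * np.cos(a),
                          np.cos(theta)])
    return float(np.dot(d_enu, unit_los))

# Illustration with an assumed Sentinel-1-like descending geometry
# (incidence ~39 deg, heading ~190 deg); the numbers are placeholders.
d_enu = np.array([-12.0, 4.0, -20.0])  # mm: east, north, up
d_los = gnss_to_los(d_enu, incidence_deg=39.0, heading_deg=190.0)
print(f"GNSS displacement projected onto LOS: {d_los:.1f} mm")
```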
Show Figures
Figure 1. Regional geological structures: (a) geological plan showing the Longmenshan (LMS) Fault system and the location of the study area; (b) geological plan of the study area.
Figure 2. Multi-year monthly rainfall in Shidaguan Township.
Figure 3. (a) Geological plan showing the Longmenshan (LMS) Fault system and the location of the study area; (b) geological plan of the study area; (c) the rock avalanche.
Figure 4. The longitudinal section of the TZC Toppling (profile line I–I′ is shown in Figure 3a).
Figure 5. (a–i) Pole plots of the equatorial projection of bedding planes and joints (upper hemisphere; equal angle).
Figure 6. Flowchart showing the available data and methods in the study.
Figure 7. Plane map of the toppling slope and work arrangement.
Figure 8. Schematic diagram of satellite monitoring of slope surface deformation.
Figure 9. Image data time and location connection diagram.
Figure 10. Diagram of the spatial relationship between satellite LOS monitoring and GNSS monitoring.
Figure 11. Surface advective displacement: (a) north side of the toppling body; (b) south side of the toppling body.
Figure 12. Surface prograde displacement at monitoring point T18 of the TZC Toppling.
Figure 13. Surface prograde displacement at monitoring point T10 of the TZC Toppling.
Figure 14. Surface prograde displacement at monitoring point T1 of the TZC Toppling.
Figure 15. GNSS cumulative horizontal displacement and cumulative settlement.
Figure 16. Results of deep displacement monitoring of the Shidaguan toppling deformation bodies: (a) D1; (b) D2; (c) D3; (d) D4; (e) D5; (f) D6; (g) D7.
Figure 17. (a) The 2018–2019 annual average velocity and an overview of the deformation monitoring point layout; (b) the positions of the deformation monitoring points.
Figure 18. Cumulative deformation of the monitoring points from 2 January 2018 to 11 December 2019: (a) A–A′; (b) B–B′.
Figure 19. SBAS-InSAR and GNSS monitoring LOS-direction cumulative deformation sequence diagram: (a) G1; (b) G2; (c) G3; (d) G4; (e) G5.
Figure 20. Historical multi-phase optical images: (a) WorldView-2 image on 9 January 2010; (b) GeoEye-1 image on 17 March 2011; (c) WorldView-2 image on 9 January 2016; (d) unmanned aerial imagery on 29 July 2017; (e) Pleiades-A image on 23 October 2019; (f) unmanned aerial imagery on 10 May 2020 (red lines represent cracks and red circles represent the rock avalanche).
Figure 21. Deformation map of road surface deformation from 9 January 2010 to 23 October 2019.
Figure 22. Time sequence diagram of cumulative deformation of the TZC Toppling.
Figure 23. Interpolated cloud map for surface displacement monitoring.
Figure 24. The basis for discriminating the evolutionary process of topplings.
Figure 25. Slickensides at the bedrock–cover interfaces (the red lines represent slickensides).
Figure 26. (a) GNSS cumulative horizontal displacement and rainfall; (b) GNSS cumulative settlement and rainfall.
Figure 27. (a) GNSS cumulative horizontal displacement and earthquakes; (b) GNSS cumulative settlement and earthquakes.
Figure 28. The geological model of (a) the north side of the TZC Toppling; (b) the south side of the TZC Toppling.