Remote Sens., Volume 16, Issue 16 (August-2 2024) – 242 articles

Cover Story (view full-size image): Soil moisture (SM) data with both a fine spatial scale and a short repeat period would benefit many hydrologic and climatic applications. In this paper, we describe the creation and validation of a new 3 km SM dataset. We downscaled 9 km brightness temperatures from Soil Moisture Active Passive (SMAP) by merging them with L-band reflectivity data from the Cyclone Global Navigation Satellite System (CYGNSS). We then calculated 3 km SMAP/CYGNSS SM using the SMAP single-channel vertically polarized SM algorithm. To remedy the sparse daily coverage of CYGNSS data at a 3 km spatial resolution, we used spatially interpolated CYGNSS data. The 3 km interpolated SMAP/CYGNSS SM matches the SMAP repeat period of ~2–3 days and performs similarly to 9 km SMAP SM at the SMAP 9 km core validation sites within ±38° latitude. View this paper
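As a rough illustration of the gap-filling step mentioned in the cover story (densifying sparse daily samples onto a regular 3 km grid), here is a minimal sketch using SciPy's griddata. The synthetic sample points, grid extents, and two-stage linear/nearest scheme are illustrative assumptions, not the authors' interpolation method.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical sparse daily CYGNSS-like samples over a 300 km x 300 km tile.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 300, (400, 2))                  # sample locations (km)
vals = np.sin(pts[:, 0] / 50.0) + 0.1 * rng.normal(size=400)

# Target 3 km grid: interpolate linearly inside the convex hull of the
# samples, then patch the remaining gaps with nearest-neighbor values.
gx, gy = np.mgrid[0:300:3, 0:300:3]
filled = griddata(pts, vals, (gx, gy), method="linear")
nearest = griddata(pts, vals, (gx, gy), method="nearest")
filled[np.isnan(filled)] = nearest[np.isnan(filled)]
```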
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
32 pages, 7438 KiB  
Article
Monitoring of Spatio-Temporal Variations of Oil Slicks via the Collocation of Multi-Source Satellite Images
by Tran Vu La, Ramona-Maria Pelich, Yu Li, Patrick Matgen and Marco Chini
Remote Sens. 2024, 16(16), 3110; https://doi.org/10.3390/rs16163110 - 22 Aug 2024
Viewed by 915
Abstract
Monitoring oil drift by integrating multi-source satellite imagery has been a relatively underexplored practice due to the limited time-sampling of datasets. However, this limitation has been mitigated by the emergence of new satellite constellations equipped with both Synthetic Aperture Radar (SAR) and optical sensors. In this manuscript, we take advantage of multi-temporal and multi-source satellite imagery, incorporating SAR (Sentinel-1 and ICEYE-X) and optical data (Sentinel-2/3 and Landsat-8/9), to provide insights into the spatio-temporal variations of oil spills. We also analyze the impact of met–ocean conditions on oil drift, focusing on two specific scenarios: marine floating oil slicks off the coast of Qatar and oil spills resulting from a shipwreck off the coast of Mauritius. By overlaying oils detected from various sources, we observe their short-term and long-term evolution. Our analysis highlights the finding that changes in oil structure and size are influenced by strong surface winds, while surface currents predominantly affect the spread of oil spills. Moreover, to detect oil slicks across different datasets, we propose an innovative unsupervised algorithm that combines a Bayesian approach used to detect oil and look-alike objects with an oil contours approach distinguishing oil from look-alikes. This algorithm can be applied to both SAR and optical data, and the results demonstrate its ability to accurately identify oil slicks, even in the presence of oil look-alikes and under varying met–ocean conditions. Full article
(This article belongs to the Special Issue Marine Ecology and Biodiversity by Remote Sensing Technology)
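The detection step in this paper hinges on separating a dark oil mode from the brighter sea mode in the backscatter histogram (cf. the bimodal histograms in Figure 6 below). The sketch that follows is a toy tile-wise bimodal thresholding with a two-component Gaussian mixture; it stands in for, but is not, the paper's HSBA/Bayesian detector, and the tile size and separation cutoff are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def dark_object_mask(nrcs_db, tile=128, min_sep=2.0):
    """Toy dark-object (oil candidate) detector for a SAR NRCS image in dB.
    Fits a 2-component Gaussian mixture per tile, keeps clearly bimodal
    tiles, and thresholds at the midpoint between the two modes."""
    thresholds = []
    h, w = nrcs_db.shape
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            patch = nrcs_db[i:i + tile, j:j + tile].reshape(-1, 1)
            gm = GaussianMixture(n_components=2, random_state=0).fit(patch)
            mu = np.sort(gm.means_.ravel())
            sd = np.sqrt(gm.covariances_.ravel())
            # Ashman's D: separation of the two modes in pooled-sigma units.
            d = (mu[1] - mu[0]) / np.sqrt(0.5 * (sd[0] ** 2 + sd[1] ** 2))
            if d > min_sep:                       # clearly bimodal tile
                thresholds.append(mu.mean())      # midpoint between modes
    if not thresholds:
        return np.zeros_like(nrcs_db, dtype=bool)
    return nrcs_db < np.median(thresholds)        # dark (low-NRCS) pixels
```

In the paper, a second, contour-based stage then separates true oil from look-alikes; that stage is omitted here.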
Show Figures

Figure 1: Footprints of multi-sensor and multi-temporal images for observing oil spills. (a) Offshore Qatar, as covered by Sentinel-1 (28 March 2021, 14:33:06), Sentinel-2 (27 March 2021, 06:56:21), and Sentinel-3 (28 March 2021, 06:34:43). (b) Mauritius Island, as covered by Sentinel-1 IW (10 August 2020, 01:37:55), Sentinel-1 EW (10 August 2020, 14:36:16), and ICEYE-X (11 August 2020, 11:12:41).
Figure 2: Oil slicks with low NRCS (dark objects) observed on the extracted scenes of (a) a Sentinel-1 image, 28 March 2021, 14:33:06; and (b) an ICEYE-X image, 6 August 2020, 18:33:23.
Figure 3: Oil slick observed on the extracted scene of a Sentinel-2 image, offshore Qatar, 3 September 2021, 06:56:21: (a) RGB image; (b) oil index (in dB) calculated from the averages of RGB bands.
Figure 4: Oil slick observed on the Sentinel-3 image, offshore Qatar, 28 March 2021, 06:34:43: (a) Sentinel-3 OLCI tristimulus image (Sentinel-3 User Handbook); (b) T865 variable (in dB) from Sentinel-3 Level-2 data.
Figure 5: Flowchart of oil-slick detection from Sentinel-1/ICEYE-X SAR, Sentinel-2/Landsat-8 optical, and Sentinel-3 visible/near-infrared data.
Figure 6: An example of the HSBA algorithm's results for a Sentinel-1 SAR image, 5 July 2021, 02:23:25. One can find a detailed description of the HSBA algorithm in [33]. The purple box displays the histogram of backscattering values for the complete scene, in which the bimodality is less noticeable, making it difficult to identify T and BG. The red box presents the backscattering value histogram for the areas selected by HSBA, where oil slick is present, clearly highlighting a bimodal behavior. The green box is a histogram of the sea's backscattering value.
Figure 7: Floating-oil-slick detection for Case #Q1, 27–28 March 2021. (a–c) Extracted scenes from Sentinel-2 (27 March, 06:56:21), Sentinel-3 (28 March, 06:34:43), and Sentinel-1 (28 March, 14:33:06) images, respectively. (Left) Sentinel-2 RGB, Sentinel-3 OLCI tristimulus, and Sentinel-1 NRCS images, respectively. (Right) Oil slicks detected from the Sentinel-2/3/1 images (left), respectively.
Figure 8: Collocation of Sentinel-2/3/1 images (Case #Q1, 27–28 March 2021) for observations of oil-slick evolution in periods of about (a) 24 h (between Sentinel-2/3), (b) 8 h (between Sentinel-3/1), and (c) 32 h (between Sentinel-2/1).
Figure 9: Surface wind and current speed and direction (indicated by the arrows), corresponding to the extracted Sentinel-2/3/1 scenes (Figure 7a–c), respectively. (i) ERA-5 wind vectors for (a) 06:00, 27 March 2021; (b) 06:00, 28 March; and (c) 14:00, 28 March. (ii) CMEMS current vectors for (d) 06:30, 27 March; (e) 06:30, 28 March; and (f) 14:30, 28 March. (iii) Mean values of wind and current fields from 06:00, 27 March, to 14:00, 28 March.
Figure 10: Floating-oil-slick detection for Case #Q2, 5–6 July 2021. (a–c) Extracted scenes from Sentinel-1 (5 July, 02:23:25), Sentinel-2 (5 July, 06:56:21), and Landsat-8 (6 July, 06:58:26), respectively. (Left) Sentinel-1 NRCS, Sentinel-2 RGB, and Landsat-8 RGB, respectively. (Right) Oil slicks detected from the Sentinel-1/2 and Landsat-8 images (left), respectively.
Figure 11: Collocation of Sentinel-1/2 and Landsat-8 images (Case #Q2, 5–6 July 2021) for observations of oil-slick evolution after about (a) 4 h (between Sentinel-1/2), (b) 24 h (between Sentinel-2 and Landsat-8), and (c) 28 h (between Sentinel-1 and Landsat-8).
Figure 12: Surface wind and current speed and direction (indicated by the arrows), corresponding to the extracted Sentinel-1/2 and Landsat-8 scenes (Figure 10a–c), respectively. (i) ERA-5 wind vectors for (a) 02:00, 5 July 2021; (b) 07:00, 5 July; and (c) 07:00, 6 July. (ii) CMEMS current vectors for (d) 02:30, 5 July; (e) 07:30, 5 July; and (f) 07:30, 6 July. (iii) Mean values of wind and current fields from 02:00, 5 July, to 09:00, 6 July.
Figure 13: Floating-oil-slick detection for Case #Q3, 3 September 2021. (a,c) Three extracted scenes (#S1–3, left–right) from the Sentinel-1 NRCS (3 September, 02:23:54) and Sentinel-2 RGB (3 September, 06:56:21) images, respectively. (b,d) Oil slicks detected from the Sentinel-1/2 scenes #S1–3, respectively.
Figure 14: Collocation of Sentinel-1/2 images (Case #Q3, 3 September 2021) for observations of oil-slick evolution after about 4 h. (a–c) Oil slicks detected from the extracted Sentinel-1/2 scenes #S1–3 (Figure 13), respectively.
Figure 15: (Left–Right) Surface wind and current speed and direction (indicated by the arrows), corresponding to the extracted Sentinel-1/2 scenes #S1–3, respectively. (a) ERA-5 wind vectors for 02:00, 3 September 2021. (b) CMEMS current vectors for 02:30, 3 September 2021. (c) Mean values of wind and current fields from 02:00 to 07:00, 3 September.
Figure 16: Oil-slick observation on Sentinel-1 IW/EW and ICEYE-X images, offshore Mauritius, 10–11 August 2020. (a–c) Extracted scenes corresponding to the MV Wakashio oil spill, from Sentinel-1 IW (10 August, 01:37:55); Sentinel-1 EW (10 August, 14:36:16); and ICEYE-X (11 August, 11:12:41), respectively. (d–f) Oil spill, as detected by HSBA from the extracted scenes (a–c).
Figure 17: Surface wind and current speed and direction (indicated by the arrows), corresponding to the extracted Sentinel-1 IW, EW, and ICEYE-X scenes (Figure 16a–c), respectively. (i) ERA-5 wind vectors for (a) 01:00, 10 August 2020, and (b) 14:00 and (c) 11:00, 11 August. (ii) CMEMS current vectors for (d) 01:30, 10 August, and (e) 14:30 and (f) 11:30, 11 August. (iii) Mean values of wind and current fields from 01:00, 10 August, to 11:00, 11 August.
Figure 18: (i,ii) Comparison between the oil slicks identified from the Sentinel-2 (scene S#3, Figure 13c) and Sentinel-1 (scene S#2, Figure 13a) images, respectively, for Case Q#3, 3 September 2021, by only HSBA (Figure 18i–a, ii–d), those identified from HSBA plus oil contour (Figure 18i–b, ii–e), and those manually segmented (Figure 18i–c, ii–f).
Figure 19: Comparison between the oil slicks (a) detected by HSBA plus oil contour and (b) those manually segmented, from the Sentinel-2 image, 5 July 2021, 06:56:21. (c) Difference between the detected pixels (a) and ground truth (b).
Figure 20: Comparison between the oil slicks (a) detected by HSBA plus oil contour and (b) those manually segmented, from the Landsat-8 image, 6 July 2021, 06:58:26. (c) Difference between the detected pixels (a) and ground truth (b).
27 pages, 6859 KiB  
Article
AOHDL: Adversarial Optimized Hybrid Deep Learning Design for Preventing Attack in Radar Target Detection
by Muhammad Moin Akhtar, Yong Li, Wei Cheng, Limeng Dong, Yumei Tan and Langhuan Geng
Remote Sens. 2024, 16(16), 3109; https://doi.org/10.3390/rs16163109 - 22 Aug 2024
Viewed by 990
Abstract
In autonomous driving, Frequency-Modulated Continuous-Wave (FMCW) radar has gained widespread acceptance for target detection due to its resilience and dependability under diverse weather and illumination circumstances. Although deep learning radar target identification models have seen fast improvement, there is a lack of research on their susceptibility to adversarial attacks. Various spoofing attack techniques have been suggested to target radar sensors by deliberately sending certain signals through specialized devices. In this paper, we propose a new adversarial deep learning network for spoofing attacks in radar target detection (RTD). Multi-level adversarial attack prevention using deep learning is designed for the coherence pulse deep feature map from DAALnet and the Range-Doppler (RD) map from TDDLnet. After discrimination of the attack, optimization of hybrid deep learning (OHDL) integrated with enhanced PSO is used to predict the range and velocity of the target. Simulations are performed to evaluate the sensitivity of AOHDL for different radar environment configurations. The RMSE of AOHDL is almost the same as that of OHDL under no-attack conditions, and it outperforms earlier RTD implementations. Full article
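For context on the optimization stage, below is a minimal textbook particle swarm optimizer. It is a generic PSO sketch, not the paper's enhanced PSO (EPSO) or its hybrid deep learning objective; the toy range/velocity target in the usage example is made up.

```python
import numpy as np

def pso(loss, dim, n_particles=30, iters=100, bounds=(0.0, 200.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `loss` over a box with standard inertia-weight PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: recover a hypothetical (range [m], velocity [m/s]) pair.
target = np.array([120.0, 33.0])
best, err = pso(lambda p: float(((p - target) ** 2).sum()), dim=2)
```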
Show Figures

Figure 1: FMCW chirp sequence.
Figure 2: Range FFT from sampling.
Figure 3: Range Doppler FFT from chirp sequence.
Figure 4: Proposed workflow of the RTD.
Figure 5: DAALnet architecture.
Figure 6: FELLLnet architecture.
Figure 7: TDDLnet layout.
Figure 8: EPSO flowchart.
Figure 9: Adversarial attack in radar system: (a) normal scenario; (b) adversarial attack scenario.
Figure 10: RGAN architecture.
Figure 11: Radar echo data cube database (row 1: bike target, row 2: car target, row 3: synthetic target).
Figure 12: (a) Generated images of DAALnet; (b) DAALnet adversarial network learning progress, epoch: 50, iteration: 150, elapsed: 00:18:06.
Figure 13: (a) Generated images of TDDLnet; (b) TDDLnet adversarial network learning progress, epoch: 50, iteration: 150, elapsed: 00:10:54.
Figure 14: RMSE comparison for range estimation.
Figure 15: RMSE comparison for velocity estimation.
Figure 16: Adversarial attack performance in range prediction.
Figure 17: Adversarial attack performance in velocity prediction.
Figure 18: The time complexity of the system.
Figure 19: Prediction accuracy of the system.
Figure 20: Impact of different interference in detection of radar target.
Figure 21: Impact of dynamic environment on RMSE evaluation.
25 pages, 29302 KiB  
Article
Spatiotemporal Variations in Near-Surface Soil Water Content across Agroecological Regions of Mainland India: 1979–2022 (44 Years)
by Alka Rani, Nishant K. Sinha, Bikram Jyoti, Jitendra Kumar, Dhiraj Kumar, Rahul Mishra, Pragya Singh, Monoranjan Mohanty, Somasundaram Jayaraman, Ranjeet Singh Chaudhary, Narendra Kumar Lenka, Nikul Kumari and Ankur Srivastava
Remote Sens. 2024, 16(16), 3108; https://doi.org/10.3390/rs16163108 - 22 Aug 2024
Viewed by 1687
Abstract
This study was undertaken to address how near-surface soil water content (SWC) patterns have varied across diverse agroecological regions (AERs) of mainland India from 1979 to 2022 (44 years) and how these variations relate to environmental factors. Grid-wise trend analysis using the Mann–Kendall (MK) trend test and Sen's slope was conducted to determine the trends and their magnitudes. Additionally, we used Spearman's rank correlation (ρ) to explore the relationships of ESA CCI's near-surface SWC data with key environmental variables, including rainfall, temperature, actual evapotranspiration, and the normalized difference vegetation index (NDVI). The results revealed significant variations in SWC patterns and trends across different AERs and months. The MK trend test indicated that 17.96% of the area exhibited a significantly increasing trend (p < 0.1), while 7.6% showed a significantly decreasing trend, with an average annual Sen's slope of 0.9 × 10−4 m3 m−3 year−1 for mainland India. Areas with the highest decreasing trends were AER-16 (warm per-humid with brown and red hill soils), AER-15 (hot subhumid to humid with alluvium-derived soils), and AER-17 (warm per-humid with red and lateritic soils). In contrast, increasing trends were the most prominent in AER-5 (hot semi-arid with medium and deep black soils), AER-6 (hot semi-arid with shallow and medium black soils), and AER-19 (hot humid per-humid with red, lateritic, and alluvium-derived soils). Significant increasing trends were more prevalent during monsoon and post-monsoon months, while decreasing trends were noted in pre-monsoon months. Correlation analysis showed strong positive correlations of SWC with rainfall (ρ = 0.70), actual evapotranspiration (ρ = 0.74), and NDVI (ρ = 0.65), but weak or negative correlations with temperature (ρ = 0.12). This study provides valuable insights for policymakers to delineate areas based on soil moisture availability patterns across seasons, aiding in agricultural and water resource planning under changing climatic conditions. Full article
(This article belongs to the Section Remote Sensing and Geo-Spatial Science)
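For readers unfamiliar with the trend statistics used in this paper, here is a minimal Mann–Kendall test (normal approximation, no tie or autocorrelation corrections) with Sen's slope for a single series; the study applies these grid-wise, so treat this as an illustrative simplification.

```python
import numpy as np
from scipy import stats

def mann_kendall_sen(y):
    """Mann-Kendall trend test and Sen's slope for a 1-D series sampled
    at unit time steps. Returns (z statistic, two-sided p-value, slope)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    pairs = [(i, j) for i in range(n - 1) for j in range(i + 1, n)]
    s = sum(np.sign(y[j] - y[i]) for i, j in pairs)     # MK S statistic
    var_s = n * (n - 1) * (2 * n + 5) / 18.0            # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * stats.norm.sf(abs(z))
    slope = np.median([(y[j] - y[i]) / (j - i) for i, j in pairs])
    return z, p, slope

# Example: a weak upward trend over 44 "years" should give p < 0.1
# and a positive Sen's slope.
rng = np.random.default_rng(0)
t = np.arange(44)
z, p, slope = mann_kendall_sen(1e-3 * t + rng.normal(0, 1e-2, t.size))
```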
Show Figures

Graphical abstract

Figure 1: Map of agroecological regions of India (Source: National Bureau of Soil Survey and Land Use Planning, India).
Figure 2: Spatial pattern of the (a) average annual near-surface SWC and (b) the coefficient of variation under different AERs of mainland India from 1979 to 2022 (44 years).
Figure 3: Boxplot of near-surface SWC across different AERs of mainland India. The red dot represents the mean value.
Figure 4: Monthly mean of ESA CCI near-surface SWC for the period of 44 years (1979 to 2022) under different AERs of mainland India.
Figure 5: Interannual variations in the mean SWC of monsoon season (June–September) across different AERs and mainland India from 1979 to 2022 (44 years).
Figure 6: Spatial patterns of the magnitude of temporal trend (indicated by Sen's slope (m3 m−3 year−1)) and direction of the significant (p < 0.1) and insignificant (p > 0.1) temporal trend (indicated by MK trend test) in the near-surface SWC for each month from 1979 to 2022.
Figure 7: Spatial patterns of the magnitude of temporal trend (indicated by Sen's slope (m3 m−3 year−1)) and direction of the significant (p < 0.1) and insignificant (p > 0.1) temporal trend (indicated by MK trend test) in the annual near-surface SWC from 1979 to 2022.
Figure 8: Spatial pattern of the significant Spearman's rank correlation coefficient (ρ) (p < 0.1) of near-surface SWC with (a) rainfall, (b) temperature, (c) actual evapotranspiration, and (d) NDVI.
35 pages, 14791 KiB  
Article
Earth Observation Multi-Spectral Image Fusion with Transformers for Sentinel-2 and Sentinel-3 Using Synthetic Training Data
by Pierre-Laurent Cristille, Emmanuel Bernhard, Nick L. J. Cox, Jeronimo Bernard-Salas and Antoine Mangin
Remote Sens. 2024, 16(16), 3107; https://doi.org/10.3390/rs16163107 - 22 Aug 2024
Viewed by 757
Abstract
With the increasing number of ongoing space missions for Earth Observation (EO), there is a need to enhance data products by combining observations from various remote sensing instruments. We introduce a new Transformer-based approach for data fusion, achieving up to a 10- to 30-fold increase in the spatial resolution of our hyperspectral data. We trained the network on a synthetic set of Sentinel-2 (S2) and Sentinel-3 (S3) images, simulated from the hyperspectral mission EnMAP (30 m resolution), leading to a fused product of 21 bands at a 30 m ground resolution. The performances were calculated by fusing original S2 (12 bands, 10, 20, and 60 m resolutions) and S3 (21 bands, 300 m resolution) images. To go beyond EnMAP's ground resolution, the network was also trained using a generic set of non-EO images from the CAVE dataset. However, we found that training the network on contextually relevant data is crucial. The EO-trained network significantly outperformed the non-EO-trained one. Finally, we observed that the original network, trained at 30 m ground resolution, performed well when fed images at 10 m ground resolution, likely due to the flexibility of Transformer-based networks. Full article
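The synthetic training pairs rest on integrating EnMAP spectra against the Sentinel-2 spectral response functions (SRFs), as illustrated in Figure 6 below. The following is a minimal sketch of that band-synthesis step; the array names, shapes, and the assumption of an ascending wavelength grid are illustrative, not the authors' code.

```python
import numpy as np

def synthesize_bands(hsi, hsi_wl, srf_wl, srf):
    """Simulate multispectral bands from a hyperspectral cube by weighting
    each pixel spectrum with per-band spectral response functions.
    hsi: (H, W, C) reflectance cube; hsi_wl: (C,) band centers (nm);
    srf_wl: (K,) wavelength grid of the SRFs; srf: (B, K), one row per band."""
    # Resample each SRF onto the hyperspectral wavelength grid.
    srf_on_hsi = np.stack([np.interp(hsi_wl, srf_wl, r) for r in srf])  # (B, C)
    # Normalize so each synthetic band is a weighted average of HSI bands.
    weights = srf_on_hsi / srf_on_hsi.sum(axis=1, keepdims=True)
    return np.tensordot(hsi, weights, axes=([2], [1]))                  # (H, W, B)
```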
Show Figures

Graphical abstract

Figure 1: Illustration of multispectral optical image data fusion. The combination of a high spatial and medium spectral definition image (left) with a low spatial and high spectral definition image (middle) produces a fused product (right) embedding the two strengths (high spatial and spectral).
Figure 2: The relationship between the LrHSI (X_h), the HrMSI (X_m), and the corresponding HrHSI (Y) Earth Observation images.
Figure 3: CAVE example image with the corresponding mean spectrum.
Figure 4: Earth coverage with the requested EnMAP data.
Figure 5: SRF curves for each S2 MSI band (B01 to B12) together with a (normalized) EnMAP spectrum.
Figure 6: Sentinel-2 synthetic MSI generation. Each matrix depicted in the figure represents the integration product of all the EnMAP image pixels with a single SRF coming from the MSI instrument, giving us a 12-band output tensor with simulated ortho-rectified reflectance values.
Figure 7: Top left: the synthetic Sentinel-2 (S2) composite generated from EnMAP integration. Top right: the true Sentinel-2 (S2) image composite. Bottom: the average spectra from the two S2 images alongside the EnMAP mean spectrum used for integration.
Figure 8: Illustration of the data augmentation (random rotations and vertical/horizontal flips) preparation pipeline to create the input (S2, S3, GT corresponding to the HrMSI, LrHSI, HrHSI) for the neural network. The arrows show the data preparation process from the full simulated multi-spectral image to the network input data.
Figure 9: Fusformer base architecture (adapted from [52]).
Figure 10: MSE training and validation losses (top panels) along with the performance metrics (PSNR, RMSE, ERGAS, SAM; bottom panels).
Figure 11: AI fusions over Angola. All images are natural color composites. The S2 composite is 10 m GSD, the EO-trained fusion is 30 m GSD, and the CAVE-trained fusion is 20 m GSD. The bottom panel shows the mean spectra, with their standard deviation, for each of the images.
Figure 12: A closer look at the Angola prediction. The S2 composite (left) and the AI-fused composite (right).
Figure 13: AI fusions over Amazonia (CAVE GSD 20 and EO GSD 30).
Figure 14: Close-up view of the Amazonia prediction (cf. white square in Figure 13). The S2 composite (left) and the AI-fused composite (right).
Figure 15: AI fusion for an urban scene (Los Angeles). Composite images of the input (S3 at 300 m, S2 at 10 m) and output (AI fused at 10 m with EO and CAVE training, respectively) are shown in the top row. The bottom panel shows the average spectra for each input and output image. The vertical lines at each band indicate the standard deviation. The white square in the second top panel refers to the close-up shown in Figure 16.
Figure 16: Close-up view of the AI fusion of the urban scene (Los Angeles) presented in Figure 15 (cf. white square in second top panel). Here we show only the Sentinel-2 composite (left) and the AI-fused composite (right).
Figure 17: Top left: Sentinel-2 image (665 nm) of the urban scene (Los Angeles) and its corresponding DFT image. Bottom left: AI-fused image (665 nm) of the same scene with its DFT image. Right: difference between Sentinel-2 and AI-fused DFT images.
Figure 18: AI fusions over an agricultural/forestry scene (France). Composite images of the input (S3 at 300 m, S2 at 10 m) and output (AI fused at 10 m with EO and CAVE training, respectively) are shown in the top row. The bottom panel shows the average spectra for each input and output image. The vertical lines at each band indicate the standard deviation. The white squares in the right-most top panel refer to the close-ups shown in Figure 19.
Figure 19: Close-up view of the AI-fused (CAVE training) agricultural/forestry scene (France) presented in Figure 18 (cf. white squares in right-most top panel). The CAVE network hallucinations stand out clearly with their rainbow colors. Panels 1 and 2 correspond to the numbered squares in Figure 18.
Figure 20: Angola prediction close-up look at GSD 10 in the same area as Figure 12. The Sentinel-2 composite (left) and the AI-fused composite (right).
Figure 21: Twenty-kilometer-wide GSD 10 m and 300 m fused product inside the Algeria CEOS zone (top panels). The Sentinel-3 and GSD 300 AI-fused image mean spectra are also displayed in the bottom panel. The corresponding metrics for this inference are in Table 6.
Figure 22: Twenty-kilometer-wide GSD 10 and 300 fused products inside the Mauritania CEOS zone. The Sentinel-3 and GSD 300 AI-fused image mean spectra are also displayed in the second row. Metrics for this inference are displayed in Table 7.
Figure 23: Ten-kilometer-wide GSD 10 m and 300 m fused product above Los Angeles (top panels). The Sentinel-3 and GSD 300 AI-fused image mean spectra are also displayed in the bottom panel.
Figure 24: From left to right: (i) natural colors composite above the Baltimore agglomeration area (left), (ii) EO-trained network classification derived from its NDVI, (iii) the Sentinel-2 classification NDVI, and (iv) the ground truth coming from the Chesapeake dataset [60].
Figure A1: The EO dataset (EuroSat) used to train the classifier; image from [67].
Figure A2: Natural-color composite above the Baltimore agglomeration area, EO-trained network classification derived from its NDVI, the Sentinel-2 classification NDVI, and the ground truth coming from the Chesapeake dataset. Jaccard score for the AI-fused product: 0.414; Jaccard score for Sentinel-2: 0.381.
Figure A3: AI fusions over Greece (GSD 10); metrics in Table A2.
Figure A4: AI fusions over Algeria (GSD 10); metrics in Table A3.
Figure A5: AI fusions over Amazonia (GSD 10); metrics in Table A4.
Figure A6: AI fusions over Angola (GSD 10); metrics in Table A5.
Figure A7: AI fusions over Australia (GSD 10); metrics in Table A6.
Figure A8: AI fusions over Botswana (GSD 10); metrics in Table A7.
Figure A9: AI fusions over Peru (GSD 10); metrics in Table A8.
7 pages, 154 KiB  
Editorial
Remote Sensing of Target Object Detection and Identification II
by Paolo Tripicchio
Remote Sens. 2024, 16(16), 3106; https://doi.org/10.3390/rs16163106 - 22 Aug 2024
Viewed by 535
Abstract
The ability to detect and identify target objects from remote images and acquisitions is paramount in remote sensing systems for the proper analysis of territories [...] Full article
(This article belongs to the Special Issue Remote Sensing of Target Object Detection and Identification II)
24 pages, 13789 KiB  
Article
A Study of the Effect of DEM Spatial Resolution on Flood Simulation in Distributed Hydrological Modeling
by Hengkang Zhu and Yangbo Chen
Remote Sens. 2024, 16(16), 3105; https://doi.org/10.3390/rs16163105 - 22 Aug 2024
Viewed by 673
Abstract
Watershed hydrological modeling methods are currently the predominant approach for flood forecasting. Digital elevation model (DEM) data, a critical input variable, significantly influence the accuracy of flood simulations, primarily due to their resolution. However, there is a paucity of research exploring the relationship between DEM resolution and flood simulation accuracy. This study aims to investigate this relationship by examining three watersheds of varying scales in southern Jiangxi Province, China. Utilizing the Liuxihe model, a new-generation physically based distributed hydrological model (PBDHM), we collected and collated data, including DEM, land use, soil type, and hourly flow and rainfall data from monitoring stations, covering 22 flood events over the last decade, to conduct model calibration and flood simulation. DEM data were processed into seven resolutions, ranging from 30 m to 500 m, to analyze the impact of DEM resolution on flood simulation accuracy. The results are as follows. (1) The Nash–Sutcliffe efficiency coefficients for the entire set of flood events were above 0.75, demonstrating the Liuxihe model's strong applicability in this region. (2) In the Anhe and Dutou watersheds, simulation accuracy fell by an average of 7.9% and 0.8%, respectively, as the DEM was coarsened from 30 m to 200 m, with further losses of 37.9% and 10.7% from 200 m to 300 m. Similarly, the Mazhou watershed showed an average accuracy loss of 8.4% from 30 m to 400 m and of 20.4% from 400 m to 500 m. These results suggest a threshold beyond which accuracy declines sharply as the DEM becomes coarser, and this threshold rises with watershed scale. (3) Parameter optimization in the Liuxihe model significantly enhanced flood simulation accuracy, effectively compensating for the reduction in accuracy caused by coarser DEM resolution. (4) The optimal parameters for flood simulation varied with DEM resolution, with significant changes observed in riverbed slope and river roughness, which are highly sensitive to DEM resolution. (5) Changes in DEM resolution did not significantly impact surface flow production. However, the extraction of the water system and the reduction in slope were major factors contributing to the decline in flood simulation accuracy. Overall, this study elucidates that there is a threshold range of DEM resolution that balances data acquisition efficiency and computational speed while satisfying the basic requirements for flood simulation accuracy. This finding provides crucial decision-making support for selecting appropriate DEM resolutions in hydrological forecasting. Full article
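For reference, the Nash–Sutcliffe efficiency used to score these simulations compares squared model errors against the variance of the observed hydrograph; a minimal sketch (1 is a perfect fit, 0 is no better than predicting the observed mean):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```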
Show Figures

Figure 1: Location map of the three watersheds.
Figure 2: DEM at different resolutions in Mazhou.
Figure 3: Land use percentage at different DEM resolutions in the Mazhou watershed.
Figure 4: Soil types at different DEM resolutions in the Mazhou watershed.
Figure 5: Framework of the Liuxihe model.
Figure 6: Stream classification results based on the 90 m resolution DEM: (a) Anhe, (b) Dutou, (c) Mazhou.
Figure 7: Results of parameter optimization of the Liuxihe model with the particle swarm optimization (PSO) algorithm: (a) changing curve of the objective function in 9 floods; (b) parameter evolution process for 20130721 in the Anhe watershed.
Figure 8: Simulation process for 8 floods in the Anhe watershed.
Figure 9: Simulation process for 8 floods in the Dutou watershed.
Figure 10: Simulation process for 6 floods in the Mazhou watershed.
Figure 11: Changes in flood modeling accuracy with different resolutions for three watersheds.
Figure 12: Improvement in the Nash–Sutcliffe efficiency coefficient after parameter optimization.
Figure 13: Comparison of optimal parameter results at different resolutions for three floods in the Mazhou watershed.
Figure 14: Surface flow results in (a) Anhe, (b) Dutou, and (c) Mazhou. (d) Percentage of flux distance level produced by DEM at different resolutions (%).
33 pages, 31036 KiB  
Article
Enhancing Extreme Precipitation Forecasts through Machine Learning Quality Control of Precipitable Water Data from Satellite FengYun-2E: A Comparative Study of Minimum Covariance Determinant and Isolation Forest Methods
by Wenqi Shen, Siqi Chen, Jianjun Xu, Yu Zhang, Xudong Liang and Yong Zhang
Remote Sens. 2024, 16(16), 3104; https://doi.org/10.3390/rs16163104 - 22 Aug 2024
Viewed by 1368
Abstract
Variational data assimilation theoretically assumes Gaussian-distributed observational errors, yet actual data often deviate from this assumption. Traditional quality control methods have limitations when dealing with nonlinear and non-Gaussian-distributed data. To address this issue, our study innovatively applies two advanced machine learning (ML)-based quality control (QC) methods, Minimum Covariance Determinant (MCD) and Isolation Forest, to process precipitable water (PW) data derived from satellite FengYun-2E (FY2E). We assimilated the ML QC-processed TPW data using the Gridpoint Statistical Interpolation (GSI) system and evaluated its impact on heavy precipitation forecasts with the Weather Research and Forecasting (WRF) v4.2 model. Both methods notably enhanced data quality, leading to more Gaussian-like distributions and marked improvements in the model’s simulation of precipitation intensity, spatial distribution, and large-scale circulation structures. During key precipitation phases, the Fraction Skill Score (FSS) for moderate to heavy rainfall generally increased to above 0.4. Quantitative analysis showed that both methods substantially reduced Root Mean Square Error (RMSE) and bias in precipitation forecasting, with the MCD method achieving RMSE reductions of up to 58% in early forecast hours. Notably, the MCD method improved forecasts of heavy and extremely heavy rainfall, whereas the Isolation Forest method demonstrated a superior performance in predicting moderate to heavy rainfall intensities. This research not only provides a basis for method selection in forecasting various precipitation intensities but also offers an innovative solution for enhancing the accuracy of extreme weather event predictions. Full article
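Both QC methods are available off the shelf in scikit-learn. The sketch below flags outliers in a synthetic 1-D innovation series (observation minus background); the synthetic data, chi-square cutoff, and contamination rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical TPW innovations (mm): mostly Gaussian, plus gross outliers.
innov = np.concatenate([rng.normal(0.0, 2.0, 950), rng.uniform(-30, 30, 50)])
X = innov[:, None]

# MCD: keep points whose robust squared Mahalanobis distance falls below
# the chi-square (1 dof) 99% quantile (~6.63).
mcd = MinCovDet(random_state=0).fit(X)
keep_mcd = mcd.mahalanobis(X) < 6.63

# Isolation Forest: keep points the forest labels as inliers (+1).
iso = IsolationForest(contamination=0.05, random_state=0).fit(X)
keep_iso = iso.predict(X) == 1
```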
Show Figures

Figure 1: The 500 hPa geopotential height (blue contours; gpm), 850 hPa water vapor flux (shading; kg m−1 s−1), and 850 hPa wind vectors (arrows; m s−1) from 06:00 UTC 7 July 2013 to 06:00 UTC 9 July 2013, based on ERA5 reanalysis data (outer domain, d01); the purple line indicates the boundary of the Tibetan Plateau, and the black bold line indicates the boundary of Sichuan Province. The study domain and WRF model domains are shown in Figure 2.
Figure 2: Simulation domains and model nesting. The purple line indicates the boundary of the Tibetan Plateau, and the black bold line indicates the boundary of Sichuan Province. Red dots represent major cities (Chengdu, Mianyang, Ya'an, and Chongqing). The blue and red rectangles denote the outer (d01) and inner (d02) model domains, respectively. Background shading shows terrain elevation.
Figure 3: Distribution of PW data from the FY2E satellite (left) and CMA ground stations (right) at 12:00 UTC on 8 July 2013. The different colors represent different PW value ranges.
Figure 4: Distribution of FY2E TPW data innovation at three representative time points: 06:00 UTC 8 July (top row), 06:00 UTC 9 July (middle row), and 06:00 UTC 10 July (bottom row). Each row shows the data distribution before QC (left column), after applying the MCD method (middle column), and after applying the Isolation Forest method (right column). Green histograms represent the data distribution, red dashed lines indicate fitted Gaussian distribution curves, and blue dashed lines mark zero innovation.
Figure 5: Box plots of FY-2E TPW data innovation (in mm) at 9 assimilation times before and after QC. The different colors represent different time points.
Figure 6: (A) Spatial distribution of reject points of FY-2E TPW data at 9 assimilation times after QC using the MCD method. (B) Spatial distribution of pass points of FY-2E TPW data at 9 assimilation times after QC using the MCD method.
Figure 7: (A) Spatial distribution of reject points of FY-2E TPW data at 9 assimilation times after QC using the Isolation Forest method. (B) Spatial distribution of pass points of FY-2E TPW data at 9 assimilation times after QC using the Isolation Forest method.
Figure 8: (A) 500 hPa geopotential height (blue contours; gpm), vertically integrated water vapor flux from 925 to 750 hPa (shading; 10−2 g cm−1 hPa−1 s−1), and 850 hPa wind vectors (arrows; m s−1) from 06:00 UTC 8 July 2013 to 06:00 UTC 10 July 2013, based on the CTRL experiment simulation (outer domain, d01); the purple line indicates the boundary of the Tibetan Plateau, and the black bold line indicates the boundary of Sichuan Province. (B) Similar to (A) but for the EXPR1 experimental simulation. (C) Similar to (A) but for the EXPR2 (MCD) experimental simulation. (D) Similar to (A) but for the EXPR3 (Isolation Forest) experimental simulation.
Figure 9: Distribution of observed 6 h accumulated precipitation (inner domain, d02) from 06:00 UTC 8 July 2013 to 06:00 UTC 10 July 2013, provided by the CMA; unit: mm. Dot size is proportional to precipitation intensity, with larger dots indicating higher precipitation amounts.
Figure 10: Distribution of 6 h accumulated precipitation (inner domain, d02) from 06:00 UTC 8 July 2013 to 06:00 UTC 10 July 2013; unit: mm. (A) CTRL experiment. (B) EXPR1 experiment. (C) EXPR2 experiment (QC by the MCD method). (D) EXPR3 experiment (QC by the Isolation Forest method).
Figure 11: Bar graph of FSSs for the four experimental groups at nine assimilation times.
Figure 12: Bar graph of the mean FSSs for the four groups of experiments.
17 pages, 16284 KiB  
Article
NRCS Recalibration and Wind Speed Retrieval for SWOT KaRIn Radar Data
by Lin Ren, Xiao Dong, Limin Cui, Jingsong Yang, Yi Zhang, Peng Chen, Gang Zheng and Lizhang Zhou
Remote Sens. 2024, 16(16), 3103; https://doi.org/10.3390/rs16163103 - 22 Aug 2024
Viewed by 550
Abstract
In this study, wind speed sensitivity and calibration bias were first determined for Surface Water and Ocean Topography (SWOT) satellite Ka-band Radar Interferometer (KaRIn) Normalized Radar Backscatter Cross Section (NRCS) data at VV and HH polarizations. Here, the calibration bias was estimated by comparing the KaRIn NRCS with collocated simulations from a model developed using Global Precipitation Measurement (GPM) satellite Dual-frequency Precipitation Radar (DPR) data. To recalibrate the bias, the correlation coefficient between the KaRIn data and the simulations was estimated, and the data with the corresponding top 10% correlation coefficients were used to estimate the recalibration coefficients. After recalibration, a Ka-band NRCS model was developed from the KaRIn data to retrieve ocean surface wind speeds. Finally, wind speed retrievals were evaluated using the collocated European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis winds, Haiyang-2C scatterometer (HY2C-SCAT) winds and National Data Buoy Center (NDBC) and Tropical Atmosphere Ocean (TAO) buoy winds. Evaluation results show that the Root Mean Square Error (RMSE) at both polarizations is less than 1.52 m/s, 1.34 m/s and 1.57 m/s, respectively, when compared to ECMWF, HY2C-SCAT and buoy collocated winds. Moreover, both the bias and RMSE were constant with the incidence angles and polarizations. This indicates that the winds from the SWOT KaRIn data are capable of correcting the sea state bias for sea surface height products. Full article
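Once a geophysical model function (GMF) linking wind speed to NRCS has been fitted, retrieval amounts to inverting it. Below is a minimal lookup-table sketch; the monotone near-nadir Ka-band-like toy GMF is a made-up stand-in for the paper's model, which also depends on incidence angle and polarization.

```python
import numpy as np

def retrieve_wind(nrcs_obs, gmf, wind_grid=None):
    """Invert a GMF by lookup: return the wind speed whose modeled NRCS
    is closest to the observed NRCS (both in dB)."""
    if wind_grid is None:
        wind_grid = np.linspace(0.5, 25.0, 200)
    table = np.array([gmf(u) for u in wind_grid])
    return wind_grid[np.abs(table - nrcs_obs).argmin()]

# Toy GMF: NRCS decreases with wind speed near nadir.
toy_gmf = lambda u: 15.0 - 6.0 * np.log10(u + 1.0)
u_hat = retrieve_wind(nrcs_obs=10.2, gmf=toy_gmf)   # ~5 m/s for this toy
```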
Show Figures
Figure 1. Location map for SWOT KaRIn data and collocated HY2C-SCAT, NDBC buoy, and TAO buoy wind data. Red points indicate collocations of KaRIn and HY2C-SCAT data; green plus signs indicate NDBC buoy positions; blue multiplication signs indicate TAO buoy positions. The KaRIn data period is from 6 September 2023 to 21 November 2023.
Figure 2. Distribution of the ECMWF data for (a) wind speed and (b) sea surface temperature.
Figure 3. KaRIn NRCS trends with the wind speeds from ECMWF at (a) VV polarization and (b) HH polarization. The gold line is the fit to the KaRIn NRCS observations; the red line is the model line. The incidence angle is 2.5° and the collocated sea surface temperature is 15 °C.
Figure 4. KaRIn NRCS trends with the wind speeds from ECMWF at sea surface temperatures of (a) 8 °C, (b) 15 °C, (c) 23 °C, and (d) 30 °C. The incidence angle is 2.5°.
Figure 5. Correlation coefficient trends with sea surface temperature (a,b), incidence angle (c,d), and wind speed (e,f). The left column is for HH polarization; the right column is for VV polarization.
Figure 6. KaRIn recalibration coefficient trends with incidence angle at HH and VV polarizations.
Figure 7. NRCS comparisons between the KaRIn data and the model simulations (a) before recalibration and (b) after recalibration.
Figure 8. Recalibrated KaRIn NRCS trends with the wind speeds from ECMWF at incidence angles of (a) 0.5°, (b) 1.5°, (c) 2.5°, and (d) 3.5°. The collocated sea surface temperature is 15 °C.
Figure 9. GMF models developed from the recalibrated KaRIn NRCS data at (a) HH polarization and (b) VV polarization.
Figure 10. Wind speed comparisons between KaRIn retrievals and ECMWF collocations at (a) HH polarization and (b) VV polarization.
Figure 11. Bias, RMSE, and R trends with incidence angle from comparing KaRIn retrievals with ECMWF wind speeds. (a,c,e) HH polarization; (b,d,f) VV polarization.
Figure 12. Wind speed comparisons between KaRIn retrievals and HY2C-SCAT collocations at (a) HH polarization and (b) VV polarization.
Figure 13. Bias, RMSE, and R trends with incidence angle from comparing KaRIn retrievals with HY2C-SCAT wind speeds. (a,c,e) HH polarization; (b,d,f) VV polarization.
Figure 14. Wind speed comparisons between KaRIn retrievals and NDBC buoy collocations at (a) HH polarization and (b) VV polarization.
18 pages, 8352 KiB  
Technical Note
Study of the Impact of Landforms on the Groundwater Level Based on the Integration of Airborne Laser Scanning and Hydrological Data
by Wioleta Blaszczak-Bak and Monika Birylo
Remote Sens. 2024, 16(16), 3102; https://doi.org/10.3390/rs16163102 - 22 Aug 2024
Viewed by 520
Abstract
This article presents a methodology for examining the impact of terrain on the level of groundwater in a well with an unconfined table aquifer. For this purpose, data from the groundwater observation and research network of the National Hydrogeological Service; airborne laser scanning [...] Read more.
This article presents a methodology for examining the impact of terrain on the groundwater level in a well within an unconfined water table aquifer. For this purpose, data from the groundwater observation and research network of the National Hydrogeological Service; airborne laser scanning technology; an SRTM height raster; orthophoto maps; and a WMTS raster were used and integrated for specific parcels of Warmia and Mazury County. Groundwater is the largest and most important source of fresh drinking water. Apart from the influence of precipitation on the groundwater level, the terrain is also important and is often omitted from comprehensive assessments. The research undertaken in this study provides new insights and a new methodology for interpreting hydrological data by taking the terrain into account, and it can be expanded with new data and an increased research area or resolution. The research showed that the groundwater level strongly influences a parcel's attractiveness for construction development and excavation. Full article
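The flood-hazard simulations for given water levels (see Figures 12-15 below) amount to thresholding a DTM. Here is a minimal sketch, assuming a GeoTIFF DTM and the rasterio library; the file name and the simple "at or below water level" criterion are illustrative assumptions rather than the authors' toolchain.

```python
import numpy as np
import rasterio

def flood_mask(dtm_path: str, water_level_m: float) -> np.ndarray:
    """Boolean raster: True where the terrain lies at or below the
    simulated water level (metres above sea level)."""
    with rasterio.open(dtm_path) as src:
        dtm = src.read(1).astype(float)
        if src.nodata is not None:
            dtm[dtm == src.nodata] = np.nan
    return dtm <= water_level_m  # NaN cells compare False and stay "dry"

# e.g. the 154 m a.s.l. scenario for parcel_1 (hypothetical file name):
# mask = flood_mask("kobulty_dtm.tif", 154.0)
# print(f"flooded share of cells: {mask.mean():.1%}")
```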
Show Figures
Figure 1. Schematic of the subsurface layer distribution that forms the unconfined groundwater table.
Figure 2. Location of the research area and of the selected wells: 1, Kobulty; 2, Radostowo; 3, Groszkowo; 4, Barcikowo; 5, Tomaryny.
Figure 3. Scheme of the methodology.
Figure 4. Groundwater and detail points, including the well measurement, the distinction between saturated and unsaturated sources, and the connection with surface water in the context of the landform and the direction of the terrain slope.
Figure 5. parcel_1 classification based on the ALS data: (a) ground, class no. 2; (b) vegetation, classes no. 3-4; (c) high vegetation, class no. 5. Illustrative figure of the parcel; not to scale.
Figure 6. parcel_2 classification based on the ALS data: (a) ground, class no. 2; (b) vegetation, classes no. 3-4; (c) high vegetation, class no. 5. Illustrative figure of the parcel; not to scale.
Figure 7. parcel_3 classification based on the ALS data: (a) ground, class no. 2; (b) vegetation, classes no. 3-4; (c) high vegetation, class no. 5. Illustrative figure of the parcel; not to scale.
Figure 8. parcel_4 classification based on the ALS data: (a) ground, class no. 2; (b) vegetation, classes no. 3-4; (c) high vegetation, class no. 5. Illustrative figure of the parcel; not to scale.
Figure 9. Integrated datasets for each parcel, including the raster map, orthophoto map, ground level from ALS, and high vegetation level from ALS.
Figure 10. Month-by-month groundwater level (cm) in the Kobulty (parcel_1), Radostowo (parcel_2), Groszkowo (parcel_3), and Barcikowo (parcel_4) wells.
Figure 11. Illustrative drawing of the location of each plot and measuring well relative to the slope and topography of the land, with distances, for parcel_1, parcel_2, parcel_3, and parcel_4.
Figure 12. Kobulty area, parcel contour in red: (a) DTM with terrain slopes representing parcel_1; (b) contour map; (c) flood hazard simulation for 154 m a.s.l.; (d) flood hazard simulation for 170 m a.s.l.
Figure 13. Radostowo area, parcel contour in red: (a) DTM with terrain slopes representing parcel_2; (b) contour map; (c) flood hazard simulation for 121 m a.s.l.; (d) flood hazard simulation for 146 m a.s.l.
Figure 14. Groszkowo area, parcel contour in red: (a) DTM with terrain slopes representing parcel_3; (b) contour map; (c) flood hazard simulation for 145 m a.s.l.; (d) flood hazard simulation for 150 m a.s.l.
Figure 15. Barcikowo area, parcel contour in red: (a) DTM with terrain slopes representing parcel_4; (b) contour map; (c) flood hazard simulation for 104 m a.s.l.; (d) flood hazard simulation for 121 m a.s.l.
Figure A1. parcel_1 and the Kobulty measurement well: (a) parcel boundaries and location; (b) top view; (c) front view.
Figure A2. parcel_2 and the Radostowo measurement well: (a) parcel boundaries and location; (b) top view; (c) front view.
Figure A3. parcel_3 and the Groszkowo measurement well: (a) parcel boundaries and location; (b) top view; (c) front view.
Figure A4. parcel_4 and the Barcikowo measurement well: (a) parcel boundaries and location; (b) top view; (c) front view.
27 pages, 7948 KiB  
Article
LTSCD-YOLO: A Lightweight Algorithm for Detecting Typical Satellite Components Based on Improved YOLOv8
by Zixuan Tang, Wei Zhang, Junlin Li, Ran Liu, Yansong Xu, Siyu Chen, Zhiyue Fang and Fuchenglong Zhao
Remote Sens. 2024, 16(16), 3101; https://doi.org/10.3390/rs16163101 - 22 Aug 2024
Cited by 1 | Viewed by 1422
Abstract
Typical satellite component detection is an application-valuable and challenging research field. Currently, there are many algorithms for detecting typical satellite components, but due to the limited storage space and computational resources in the space environment, these algorithms generally have the problem of excessive [...] Read more.
Typical satellite component detection is an application-valuable and challenging research field. Currently, there are many algorithms for detecting typical satellite components, but due to the limited storage space and computational resources in the space environment, these algorithms generally have the problem of excessive parameter count and computational load, which hinders their effective application in space environments. Furthermore, the scale of datasets used by these algorithms is not large enough to train the algorithm models well. To address the above issues, this paper first applies YOLOv8 to the detection of typical satellite components and proposes a Lightweight Typical Satellite Components Detection algorithm based on improved YOLOv8 (LTSCD-YOLO). Firstly, it adopts the lightweight network EfficientNet-B0 as the backbone network to reduce the model’s parameter count and computational load; secondly, it uses a Cross-Scale Feature-Fusion Module (CCFM) at the Neck to enhance the model’s adaptability to scale changes; then, it integrates Partial Convolution (PConv) into the C2f (Faster Implementation of CSP Bottleneck with two convolutions) module and Re-parameterized Convolution (RepConv) into the detection head to further achieve model lightweighting; finally, the Focal-Efficient Intersection over Union (Focal-EIoU) is used as the loss function to enhance the model’s detection accuracy and detection speed. Additionally, a larger-scale Typical Satellite Components Dataset (TSC-Dataset) is also constructed. Our experimental results show that LTSCD-YOLO can maintain high detection accuracy with minimal parameter count and computational load. Compared to YOLOv8s, LTSCD-YOLO improved the mean average precision (mAP50) by 1.50% on the TSC-Dataset, reaching 94.5%. Meanwhile, the model’s parameter count decreased by 78.46%, the computational load decreased by 65.97%, and the detection speed increased by 17.66%. This algorithm achieves a balance between accuracy and light weight, and its generalization ability has been validated on real images, making it effectively applicable to detection tasks of typical satellite components in space environments. Full article
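Partial Convolution, one of the lightweighting ingredients named above, convolves only a subset of the input channels and passes the rest through unchanged. Below is a minimal PyTorch sketch, assuming a FasterNet-style split ratio of 1/4; the layer sizes are illustrative and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution: convolve the first 1/n_div of the channels,
    concatenate the remaining channels untouched."""
    def __init__(self, channels: int, n_div: int = 4, kernel_size: int = 3):
        super().__init__()
        self.conv_ch = channels // n_div  # channels actually convolved
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.conv_ch, x.shape[1] - self.conv_ch], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

x = torch.randn(1, 64, 80, 80)
print(PConv(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```

Because most channels skip the convolution, the parameter count and FLOPs of the block drop roughly by the square of the split ratio, which is the trade-off the abstract describes.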
Show Figures
Graphical abstract
Figure 1. On-orbit service schematic: (a) rendezvous and docking; (b) on-orbit servicing; (c) target capture; (d) fuel resupply.
Figure 2. The network architecture of LTSCD-YOLO.
Figure 3. MBConv structure. Conv stands for convolution, BN for batch normalization, Swish for the Swish activation function, Depwise for depthwise convolution, AvgPooling for average pooling, Sigmoid for the Sigmoid activation function, SE for squeeze and excitation, and Dropout for the random dropout layer; k indicates the size of the convolution kernel.
Figure 4. The working principles of (a) the Conv and (b) the PConv.
Figure 5. F-C2f structure.
Figure 6. RepConv structure: (a) structure during training and (b) structure during validation.
Figure 7. RepHead structure.
Figure 8. Partial satellite images from the TSC-Dataset: (a) sourced from the Satellite-Dataset; (b) sourced from STK software; (c) obtained via web scraping; (d) sourced from NASA.
Figure 9. Comparison between the real image (a) and the simulated image (b).
Figure 10. Schematic diagram of satellite solar panel, main body, and antenna.
Figure 11. Typical satellite components: (a) solar panel and (b) antenna.
Figure 12. Example of data augmentation: (a) original image and (b) images after data augmentation.
Figure 13. Visualization of dataset annotation files: (a) dataset category histogram; (b) length and width distribution of annotation boxes; (c) histograms of the variables x and y; (d) histograms of the width and height variables.
Figure 14. Comparative detection results: (a) YOLOv8s and (b) LTSCD-YOLO.
Figure 15. Comparative confusion matrices: (a) YOLOv8s and (b) LTSCD-YOLO.
Figure 16. Detection results: (a) Ground Truth; (b) YOLOv8s algorithm; (c) ours.
Figure 17. Comparative detection results: (a) YOLOv8s and (b) LTSCD-YOLO.
Figure 18. Detection on a real image: (a) Ground Truth; (b) YOLOv8s algorithm; (c) ours.
Figure 19. Comparative detection results: (a) YOLOv8s and (b) LTSCD-YOLO.
15 pages, 12817 KiB  
Article
Aeolian Desertification Dynamics from 1995 to 2020 in Northern China: Classification Using a Random Forest Machine Learning Algorithm Based on Google Earth Engine
by Caixia Zhang, Ningjing Tan and Jinchang Li
Remote Sens. 2024, 16(16), 3100; https://doi.org/10.3390/rs16163100 - 22 Aug 2024
Viewed by 798
Abstract
Machine learning methods have improved in recent years and provide increasingly powerful tools for understanding landscape evolution. In this study, we used the random forest method based on Google Earth Engine to evaluate the desertification dynamics in northern China from 1995 to 2020. [...] Read more.
Machine learning methods have improved in recent years and provide increasingly powerful tools for understanding landscape evolution. In this study, we used the random forest method based on Google Earth Engine to evaluate the desertification dynamics in northern China from 1995 to 2020. We selected Landsat series image bands, remote sensing inversion data, climate baseline data, land use data, and soil type data as variables for majority voting in the random forest method. The method's average classification accuracy was 91.6% ± 5.8% (mean ± SD), and the average kappa coefficient was 0.68 ± 0.09, indicating good classification results. The random forest classifier results were consistent with those of visual interpretation for the spatial distribution of the different levels of desertification. From 1995 to 2000, the area of aeolian desertification increased at an average rate of 9977 km2 yr−1; over 2000–2005, 2005–2010, 2010–2015, and 2015–2020, it decreased at average rates of 2535, 3462, 1487, and 4537 km2 yr−1, respectively. Full article
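The classification step can be outlined with the Earth Engine Python API. Below is a hedged sketch, assuming a pre-assembled predictor image stack and labeled training points; the asset IDs, the "level" property, and the tree count are placeholders, not the authors' configuration.

```python
import ee
ee.Initialize()

# Placeholder assets: a multi-band predictor stack (bands + indices + climate
# + land use/soil layers) and training points labeled with a 'level' property.
predictors = ee.Image("users/example/predictor_stack_2020")
samples = ee.FeatureCollection("users/example/training_points")

training = predictors.sampleRegions(
    collection=samples, properties=["level"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=training, classProperty="level",
    inputProperties=predictors.bandNames())

desertification_2020 = predictors.classify(classifier)
# desertification_2020 can then be exported, e.g. with
# ee.batch.Export.image.toDrive(...), and compared against visual interpretation.
```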
Show Figures
Figure 1. The random forest system variables for the desertification classification. Image bands L5, L7, and L8 represent Landsat-5 TM and Landsat-7 ETM bands 1 to 5 and 7 and Landsat-8 OLI bands 2 to 7. Abbreviations: BSI, bare soil index; MNDWI, modified normalized difference water index; MSAVI, modified soil-adjusted vegetation index; NDBI, normalized difference built-up index; NDVI, normalized difference vegetation index; TGSI, topsoil grain size index.
Figure 2. The distribution of the desert types in the 16 regions of interest in the random forest study of northern China.
Figure 3. The spatial distribution of aeolian desertification in 1995.
Figure 4. The spatial distribution of aeolian desertification in 2000.
Figure 5. The spatial distribution of aeolian desertification in 2005.
Figure 6. The spatial distribution of aeolian desertification in 2010.
Figure 7. The spatial distribution of aeolian desertification in 2015.
Figure 8. The spatial distribution of aeolian desertification in 2020.
Figure 9. Spatial distribution of desertification in the Xinjiang region based on (a) visual interpretation and (b) predictions by the random forest model; (c) the visual interpretation results are spatially continuous and aggregated, whereas (d) the random forest results are pixel-based, with relatively small image patches.
Figure 10. Spatial distribution of desertification severity in the central part of northern China in 2000 based on (a) visual interpretation and (b) predictions by the random forest model.
Figure 11. Spatial distribution of desertification in the northeastern part of northern China in 2000 based on (a) visual interpretation and (b) predictions by the random forest model.
Figure 12. Changes in the area of desertification from 1995 to 2020 in northern China.
21 pages, 16631 KiB  
Article
An Effective LiDAR-Inertial SLAM-Based Map Construction Method for Outdoor Environments
by Yanjie Liu, Chao Wang, Heng Wu and Yanlong Wei
Remote Sens. 2024, 16(16), 3099; https://doi.org/10.3390/rs16163099 - 22 Aug 2024
Viewed by 871
Abstract
SLAM (simultaneous localization and mapping) is essential for accurate positioning and reasonable path planning in outdoor mobile robots. LiDAR SLAM is currently the dominant method for creating outdoor environment maps. However, the mainstream LiDAR SLAM algorithms have a single point cloud feature extraction [...] Read more.
SLAM (simultaneous localization and mapping) is essential for accurate positioning and reasonable path planning in outdoor mobile robots. LiDAR SLAM is currently the dominant method for creating outdoor environment maps. However, mainstream LiDAR SLAM algorithms rely on a simple, single-step point cloud feature extraction process at the front end, and most of the loop closure detection at the back end is based on RNN (radius nearest neighbor). This results in low mapping accuracy and poor real-time performance. To solve this problem, we integrated point cloud segmentation and Scan Context loop closure detection into the advanced LiDAR-inertial SLAM algorithm LIO-SAM. First, we employed range images to extract ground points from raw LiDAR data, followed by the BFS (breadth-first search) algorithm to cluster non-ground points and downsample outliers. Then, we calculated the curvature to extract planar points from ground points and corner points from the clustered, segmented non-ground points. Finally, we used the Scan Context method for loop closure detection to improve back-end mapping speed and reduce odometry drift. Experimental validation on the KITTI dataset verified the advantages of the proposed method, and further tests on the Walking, Park, and other datasets confirmed its good accuracy and real-time performance. Full article
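The Scan Context descriptor at the heart of the improved loop closure can be summarized in a few lines: points are binned by ring (radius) and sector (azimuth), keeping the maximum height per bin. Below is a NumPy sketch, assuming the ring/sector counts and 80 m range of the original Scan Context paper; this is background for the method, not the authors' code.

```python
import numpy as np

def scan_context(points: np.ndarray, n_rings=20, n_sectors=60, max_range=80.0):
    """Build a Scan Context matrix from an (N, 3) array of x, y, z points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int)
    sector = ((np.arctan2(y[keep], x[keep]) + np.pi)
              / (2 * np.pi) * n_sectors).astype(int)
    sector = np.clip(sector, 0, n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors))
    np.maximum.at(desc, (ring, sector), z[keep])  # max height per bin; empty bins stay 0
    return desc
```

Loop candidates are then found by comparing the current descriptor with stored ones (column-shifted to absorb viewpoint rotation, as Figure 13 below illustrates), which is far cheaper than radius-nearest-neighbor search over poses.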
Show Figures
Figure 1. Traditional point cloud feature extraction and loop closure detection. (a) Feature extraction of the point cloud directly based on curvature, with coarse and redundant results. (b) Odometry drift and loop closure failure when using the RNN algorithm for loop closure detection.
Figure 2. The overall framework of the proposed algorithm. The light blue and dark blue modules are the original basic modules of the LIO-SAM algorithm, including IMU pre-integration, odometry publication, and map position optimization. The green modules are the improvements proposed in this paper, including point cloud clustering segmentation, feature extraction, and back-end loop closure detection based on the Scan Context algorithm.
Figure 3. Proposed framework for point cloud clustering segmentation. In the output image, green marks the corner points and purple marks the planar points.
Figure 4. The current frame point cloud transformed to a range image.
Figure 5. The ground point extraction process.
Figure 6. Schematic diagram of ground point extraction: (a) the original LiDAR point cloud; (b) the extracted ground points.
Figure 7. Center and neighborhood points in a range image: (a) the spatial relationship between the center point and neighboring points; (b) a schematic diagram of the angles calculated when clustering the center and neighborhood points.
Figure 8. Non-ground point cluster segmentation: (a) the LiDAR point cloud after extracting the ground points, consisting of outlier points (blue) and successfully clustered points (green) after downsampling; (b) the successfully clustered point cloud.
Figure 9. Feature point extraction results: (a) the original LiDAR point cloud, colored yellow and orange-red by point cloud intensity; (b) the extracted edge features (corner points) and planar features (planar points), where green marks the planar points extracted from the ground points and purple marks the corner points extracted from the clustered, segmented non-ground points.
Figure 10. The overall framework of the Scan Context algorithm.
Figure 11. Splitting the point cloud into sub-areas by radius and azimuth [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G.
Figure 12. One frame of the point cloud converted to a Scan Context matrix [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G. Blue areas indicate sub-areas with no point cloud data or that are not observable by the LiDAR due to occlusion.
Figure 13. Changes in the Scan Context column vector sequence caused by changes in the LiDAR viewing angle [30]. Reprinted/adapted with permission from Ref. [30]. 2018, Kim, G. (a) The change in viewpoint when the LiDAR returns to the same place shifts the Scan Context column vectors. (b) The transformed Scan Context matrices contain shapes similar to those of the history frames.
Figure 14. Trajectory and ground truth comparison results of the four algorithms on KITTI sequences (a) 0042, (b) 0034, (c) 0016, and (d) 0027.
Figure 15. Localized enlargement of the trajectory comparisons for (a) sequence 0034 and (b) sequence 0027.
Figure 16. ATE errors of the four algorithms on KITTI sequences (a) 0042, (b) 0034, (c) 0016, and (d) 0027. Panels (a-d) show, from left to right, the ATE errors of A-LOAM, LeGO-LOAM, LIO-SAM, and our method.
Figure 17. Loop closure detection comparison: (a) results of the RNN algorithm, with the point cloud colored by intensity; (b) results of our algorithm, with the point cloud colored by coordinate axis.
Figure 18. Mapping results of our algorithm on the (a) Walking and (b) Park datasets, with the point cloud colored by intensity.
26 pages, 29874 KiB  
Article
Estimation of Spatial–Temporal Dynamic Evolution of Potential Afforestation Land and Its Carbon Sequestration Capacity in China
by Zhipeng Zhang, Zong Wang, Xiaoyuan Zhang and Shijie Yang
Remote Sens. 2024, 16(16), 3098; https://doi.org/10.3390/rs16163098 - 22 Aug 2024
Cited by 2 | Viewed by 789
Abstract
Afforestation is an important way to effectively reduce carbon emissions from human activities and increase carbon sinks in forest ecosystems. It also plays an important role in climate change mitigation. Currently, few studies have examined the spatiotemporal dynamics of future afforestation areas, which [...] Read more.
Afforestation is an important way to effectively reduce carbon emissions from human activities and increase carbon sinks in forest ecosystems. It also plays an important role in climate change mitigation. Currently, few studies have examined the spatiotemporal dynamics of future afforestation areas, which are crucial for assessing future carbon sequestration in forest ecosystems. To obtain the dynamic distribution of potential afforestation land over time under future climate change scenarios in China, we utilized the random forest method in this study to calculate weights for the selected factors influencing potential afforestation land, such as natural vegetation attributes and environmental factors. The "weight hierarchy approach" was then used to calculate the afforestation quality index of different regions in successive 5-year intervals from 2021 to 2060 and to extract high-quality potential afforestation land in each period. By dynamically analyzing the distribution and quality of potential afforestation land from 2021 to 2060, we can identify optimal afforestation sites for each period and formulate a progressive afforestation plan. This approach allows a more accurate application of the FCS model to evaluate the dynamic changes in the carbon sequestration capacity of newly afforested land from 2021 to 2060. The results indicate that the average potential afforestation land area will reach 75 Mha from 2021 to 2060. In the northern region, afforestation areas are mainly distributed on both sides of the "Hu Line", while in the southern region they are primarily distributed in the Yunnan–Guizhou Plateau and some coastal provinces. By 2060, the calculated cumulative carbon storage of newly afforested lands is projected to reach 11.68 Pg C, with the carbon sequestration rate peaking at 0.166 Pg C per year during 2056–2060. Incorporating information on the spatiotemporal dynamics of vegetation succession, climate production potential, and vegetation resilience, while quantifying the weight of each influencing factor, can enhance the accuracy of predictions of potential afforestation land. The conclusions of this study can serve as a reference for formulating future afforestation plans and assessing their carbon sequestration capacity. Full article
(This article belongs to the Topic Forest Carbon Sequestration and Climate Change Mitigation)
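The "weight hierarchy approach" can be read as a weighted sum of graded factors. Below is an illustrative sketch under that reading, with normalized factor grades and made-up weights standing in for the random-forest-derived ones; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical factor weights (the paper derives these from random forest
# importance rankings; these values are placeholders for illustration).
weights = {"climate": 0.30, "succession": 0.25, "resilience": 0.20,
           "topography": 0.15, "transport": 0.10}

def quality_index(grades: dict) -> np.ndarray:
    """grades: factor name -> raster of grades normalized to [0, 1].
    Returns the afforestation quality index as a weighted sum."""
    return sum(w * grades[name] for name, w in weights.items())

grades = {k: np.random.rand(4, 4) for k in weights}  # stand-in rasters
high_quality = quality_index(grades) > 0.6           # candidate afforestation cells
print(high_quality.sum(), "high-quality cells in this toy raster")
```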
Show Figures
Figure 1. Technology roadmap.
Figure 2. (a) Climatic conditions grading map. (b) Vegetation succession grading map. (c) Vegetation resilience grading map. (d) Topographic conditions grading map. (e) Transportation grading map. (f) Distribution of land sources derived from LUC data and timberline data (taking 2020 as an example).
Figure 3. Characteristics of the national average climate productivity distribution from 2021 to 2060.
Figure 4. Distribution of the dynamics of climate productivity change by 5-year period from 2021 to 2060.
Figure 5. (a) Distribution of potential afforestation sites with different levels of confidence from Xu et al. [22]. (b) Distribution map of the 10,000 selected sampling points. (c) Histogram of the weights of the impact factors.
Figure 6. Dynamic distribution map of potential afforestation lands over 40 years in different eco-geographical zones.
Figure 7. Projections of the vegetation biomass dynamics from afforestation from 2021 to 2060.
Figure 8. Assessment of afforestation carbon sequestration capacity from 2021 to 2060: (a) histogram of carbon stocks in afforested vegetation per year; (b) cumulative histogram of carbon stocks in afforested vegetation; (c) carbon stocks in afforested vegetation per five-year period.
Figure 9. Distribution maps of potential forestation land derived from (a) Xu [18] and (b) Xu et al. [22].
Figure A1. Spatial distribution of top vegetation in the current baseline.
Figure A2. Spatial distribution of top vegetation under future climates.
Figure A3. Distribution of potential afforestation lands by time period and selection of priority sites for afforestation.
Figure A4. Overlapping and non-overlapping points between the total potential afforestation sites from 2021 to 2060 obtained in this study and existing forests in the forest cover dataset [32].
Figure A5. Overlapping and non-overlapping points between the total potential afforestation sites from 2021 to 2060 obtained in this study and existing forests in the GlobeLand30 dataset [30].
Figure A6. Overlapping and non-overlapping points between the total potential afforestation sites from 2021 to 2060 obtained in this study and existing forests in the GLC_FCS30 dataset [31].
30 pages, 6354 KiB  
Article
Continuous Wavelet Transform Peak-Seeking Attention Mechanism Convolutional Neural Network: A Lightweight Feature Extraction Network with an Attention Mechanism Based on the Continuous Wavelet Transform Peak-Seeking Method for Aero-Engine Hot Jet Fourier Transform Infrared Classification
by Shuhan Du, Wei Han, Zhenping Kang, Xiangning Lu, Yurong Liao and Zhaoming Li
Remote Sens. 2024, 16(16), 3097; https://doi.org/10.3390/rs16163097 - 22 Aug 2024
Viewed by 786
Abstract
Focusing on the problem of identifying and classifying aero-engine models, this paper measures the infrared spectrum data of aero-engine hot jets using a telemetry Fourier transform infrared spectrometer. Simultaneously, infrared spectral data sets with the six different types of aero-engines were created. For [...] Read more.
Focusing on the problem of identifying and classifying aero-engine models, this paper measures the infrared spectra of aero-engine hot jets using a telemetry Fourier transform infrared spectrometer, and infrared spectral data sets covering six different types of aero-engines were created. To classify and identify the infrared spectral data, a CNN architecture based on a continuous wavelet transform peak-seeking attention mechanism (CWT-AM-CNN) is proposed. This method detects the peaks of the mid-wave band by the continuous wavelet transform, and the peak data are extracted from the statistics of the most frequently occurring wavenumber locations. An attention mechanism computed from the peak data is applied as a weighting to the feature map of the feature extraction block. The infrared spectral data sets were divided into training, validation, and prediction sets in the ratio 8:1:1. On three different data sets, the proposed CWT-AM-CNN was compared with a classical classifier based on CO2 feature vectors and with the popular AE, RNN, and LSTM spectral processing networks. The prediction accuracy of the proposed algorithm on the three data sets was as high as 97%, and the lightweight network design not only guarantees high precision but also runs fast, enabling rapid and high-precision classification of the infrared spectra of aero-engine hot jets. Full article
(This article belongs to the Special Issue Advances in Remote Sensing, Radar Techniques, and Their Applications)
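The CWT peak-seeking step has a direct counterpart in SciPy. Below is a minimal sketch, assuming synthetic spectra and an illustrative wavelet width range; the 50% occurrence threshold for "frequently occurring" peak locations is an assumption, not the paper's setting.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

rng = np.random.default_rng(0)
wavenumber = np.linspace(500, 6000, 2048)            # cm^-1 axis (illustrative)
spectra = rng.normal(0, 0.01, (100, wavenumber.size))
spectra[:, 800:810] += 1.0                           # a shared emission feature

counts = np.zeros(wavenumber.size)
for s in spectra:
    idx = find_peaks_cwt(s, widths=np.arange(3, 30))  # CWT ridge-line peaks
    counts[idx] += 1

# Keep wavenumbers that peak in many spectra; these locations would drive
# the attention weighting over the feature map.
common_peaks = wavenumber[counts > 0.5 * len(spectra)]
print(common_peaks[:5])
```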
Show Figures
Figure 1. CWT-AM-CNN classification network for the aero-engine hot jet infrared spectrum.
Figure 2. Feature extraction block.
Figure 3. Composition of the peak-seeking attention mechanism module: (a) peak-seeking algorithm block; (b) attention mechanism block.
Figure 4. Attention mechanism operation diagram.
Figure 5. Site layout of the outfield measurement experiment for the infrared spectrum of an aero-engine hot jet.
Figure 6. Experimental measurement of the BTSs of aero-engine hot jets: the red box marks CO2, the gray box marks water vapor, and the blue box marks CO.
Figure 7. CWT-AM-CNN training and validation loss and accuracy curves: the blue curve is the training set and the orange curve the validation set; (a-c) are the results for data sets A, B, and C, respectively.
Figure 8. Four characteristic peak positions in the infrared spectrum of an aero-engine hot jet.
Figure 9. Experimental results of CWT peak detection and high-frequency peak statistics.
Figure 10. CNN training and validation loss and accuracy curves: the blue curve is the training set and the orange curve the validation set; (a-c) are the results for data sets A, B, and C, respectively.
Figure 11. Training and validation loss and accuracy curves of the CNN without the BN layer: the blue curve is the training set and the orange curve the validation set; (a-c) are the results for data sets A, B, and C, respectively.
Figure 12. Training and validation loss and accuracy curves of CNNs with different numbers of layers: blue, one layer; orange, two layers; green, three layers; red, four layers; purple, five layers; brown, six layers; (a-c) are the results for data sets A, B, and C, respectively.
Figure 13. Training and validation loss and accuracy curves of different optimizers on data set C: blue, SGD; orange, SGDM; green, Adagrad; red, RMSProp; purple, Adam.
Figure 14. Training and validation loss and accuracy curves of the Adam optimizer with different learning rates on data set C: blue, 0.01; orange, 0.001; green, 0.0001; red, 0.00001; purple, 0.000001.
22 pages, 6774 KiB  
Article
Path Planning of UAV Formations Based on Semantic Maps
by Tianye Sun, Wei Sun, Changhao Sun and Ruofei He
Remote Sens. 2024, 16(16), 3096; https://doi.org/10.3390/rs16163096 - 22 Aug 2024
Viewed by 723
Abstract
This paper primarily studies the path planning problem for UAV formations guided by semantic map information. Our aim is to integrate prior information from semantic maps to provide initial information on task points for UAV formations, thereby planning formation paths that meet practical [...] Read more.
This paper primarily studies the path planning problem for UAV formations guided by semantic map information. Our aim is to integrate prior information from semantic maps to provide initial information on task points for UAV formations, thereby planning formation paths that meet practical requirements. Firstly, a semantic segmentation network model based on multi-scale feature extraction and fusion is employed to obtain UAV aerial semantic maps containing environmental information. Secondly, based on the semantic maps, a three-point optimization model for the optimal UAV trajectory is established, and a general formula for calculating the heading angle is proposed to approximately decouple the triangular equation of the optimal trajectory. For large-scale formations and task points, an improved fuzzy clustering algorithm is proposed to classify task points that meet distance constraints by clusters, thereby reducing the computational scale of single samples without changing the sample size and improving the allocation efficiency of the UAV formation path planning model. Experimental data show that the UAV cluster path planning method using angle-optimized fuzzy clustering achieves an 8.6% improvement in total flight range compared to other algorithms and a 17.4% reduction in the number of large-angle turns. Full article
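The fuzzy clustering that the improved algorithm builds on is standard fuzzy c-means (FCM). Below is a bare-bones NumPy sketch for background; the distance-constraint membership reset described in the abstract is deliberately omitted, so this is not the paper's improved algorithm.

```python
import numpy as np

def fcm(points, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)        # memberships, rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))       # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

pts = np.random.rand(200, 2) * 20            # task points in a 20 km x 20 km area
centers, u = fcm(pts, c=4)
labels = u.argmax(axis=1)                    # cluster (per-UAV) assignment
```

The paper's improvement then reassigns task points whose cluster violates the distance constraint by resetting rows of the membership matrix, as Figure 9 below depicts.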
Show Figures
Figure 1. Semantic segmentation network model with multi-scale feature extraction and fusion.
Figure 2. Multi-scale feature extraction and fusion module.
Figure 3. UAV aerial semantic map.
Figure 4. UAV aerial region map and processed semantic segmentation map: (a) UAV aerial map and (b) UAV aerial semantic map.
Figure 5. UAV aerial semantic information and task point selection: (a) UAV aerial semantic map and (b) map of task point coordinates.
Figure 6. Diagram of the optimal path.
Figure 7. Diagram of the near-optimal path.
Figure 8. Initial task point classification.
Figure 9. Resetting of the membership matrix.
Figure 10. Path planning results of different algorithms in a 1 × 1 km² area: (a) ETSP model algorithm; (b) AA algorithm; (c) optimized heading angle algorithm; (d) 2-opt algorithm.
Figure 11. Classification and path planning of task points using different algorithms in a 20 km × 20 km area: (a) classification results of the original FCM algorithm; (b) path planning with the original FCM algorithm; (c) classification results of the improved FCM algorithm; (d) path planning with the improved FCM algorithm; (e) classification results of the MTSP model; (f) path planning with the MTSP model.
Figure 12. Classification and path planning of task points using different algorithms in a 10 km × 10 km area: (a) classification results of the original FCM algorithm; (b) path planning with the original FCM algorithm; (c) classification results of the improved FCM algorithm; (d) path planning with the improved FCM algorithm; (e) classification results of the MTSP model; (f) path planning with the MTSP model.
Figure 13. Path planning for the UAV formation using different algorithms: (a) path planned with the ETSP model; (b) path planned with the AA algorithm; (c) path planned with the optimized heading angle model.
21 pages, 4390 KiB  
Article
Mapping Shrub Biomass at 10 m Resolution by Integrating Field Measurements, Unmanned Aerial Vehicles, and Multi-Source Satellite Observations
by Wenchao Liu, Jie Wang, Yang Hu, Taiyong Ma, Munkhdulam Otgonbayar, Chunbo Li, You Li and Jilin Yang
Remote Sens. 2024, 16(16), 3095; https://doi.org/10.3390/rs16163095 - 22 Aug 2024
Viewed by 1020
Abstract
Accurately estimating shrub biomass in arid and semi-arid regions is critical for understanding ecosystem productivity and carbon stocks at both local and global scales. Due to the short and sparse features of shrubs, capturing the shrub biomass accurately by satellite observations is challenging. [...] Read more.
Accurately estimating shrub biomass in arid and semi-arid regions is critical for understanding ecosystem productivity and carbon stocks at both local and global scales. Because shrubs are short and sparsely distributed, capturing shrub biomass accurately from satellite observations is challenging. Previous studies mostly estimated shrub biomass by establishing a direct connection between ground samples and satellite observations, which was often hindered by the limited number of ground samples and the spatial-scale mismatch between samples and observations. Unmanned aerial vehicles (UAVs) provide opportunities to obtain, at low cost, more samples that match the spatial scale of satellite observations for accurate regional-scale shrub biomass estimation. However, few studies have exploited this air-space-ground connection assisted by UAVs. Here we developed a framework for estimating 10 m shrub biomass at a regional scale by integrating ground measurements, UAV, Landsat, and Sentinel-1/2 observations. First, a spatial distribution map of shrublands and non-shrublands was generated for 2023 in the Helan Mountains of Ningxia province, China; this map had an F1 score of 0.92. Subsequently, the UAV-based shrub biomass map was estimated using an empirical model between biomass and shrub crown area, aggregated on a 10 m × 10 m grid to match the spatial resolution of the Sentinel-1/2 images. Then, a regional-scale shrub biomass model was developed with a random forest regression (RFR) approach driven by ground biomass measurements, UAV-based biomass, and the optimal satellite metrics. Finally, the developed model was used to produce the biomass map of shrublands over the study area in 2023, and the uncertainty of the resultant map was characterized by the pixel-level standard deviation (SD) using the leave-one-out cross-validation (LOOCV) method. The results suggest that integrating multi-scale observations from the ground, UAVs, and satellites is a promising approach to obtaining regional shrub biomass accurately. Our developed model, which integrates satellite spectral bands and vegetation indices (R2 = 0.62), outperformed models driven solely by spectral bands (R2 = 0.33) or vegetation indices (R2 = 0.55). In addition, our estimated biomass has an average uncertainty of less than 4%, with the lowest values (<2%) occurring in regions with high shrub coverage (>30%) and biomass production (>300 g/m2). This study provides a methodology for accurately monitoring shrub biomass from satellite images assisted by near-ground UAV observations and ground measurements. Full article
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing II)
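The RFR-plus-LOOCV recipe can be illustrated compactly. Below is a hedged sketch with synthetic stand-ins for the satellite metrics and biomass samples; note that the paper derives per-pixel SD maps from the ensemble of leave-one-out models, whereas this sketch only shows the loop and the resulting LOOCV error.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.random((60, 8))                         # Sentinel-1/2 + Landsat metrics (synthetic)
y = 300 * X[:, 0] + 50 * rng.random(60)         # shrub biomass in g/m2 (synthetic)

preds = []
for train_idx, test_idx in LeaveOneOut().split(X):
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[train_idx], y[train_idx])
    preds.append(rf.predict(X[test_idx])[0])    # each model also yields a map;
                                                # their per-pixel spread gives the SD
preds = np.asarray(preds)
rmse = np.sqrt(np.mean((preds - y) ** 2))
print(f"LOOCV RMSE: {rmse:.1f} g/m2")
```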
Show Figures
Figure 1. (a,b) Locations of the Helan Mountains within China and Ningxia province, and (c) the distribution of ground truth samples from field measurements, UAV, and visual interpretation.
Figure 2. Workflow for estimating shrubland biomass.
Figure 3. (a) The original unmanned aerial vehicle (UAV) image. (b) The classified map of shrublands. (c) The fishnet constructed based on the UAV imagery. (d-g) Zoomed-in views of four sample points in (b).
Figure 4. (a) Shrublands and other land cover types of the Helan Mountains, China, in 2023. (b-i) Zoomed-in views of four example regions in the resultant map and the Google Earth images.
Figure 5. Accuracy comparison of the three models. The x-axis shows the models driven by the basic spectral bands (SB), the vegetation indices (VI), and their combination (SBVI); performance is evaluated using R2 and RMSE.
Figure 6. (a) The distribution of R2 and EOPC within different ranges of shrub coverage. (b) The distribution of R2 and EOUB within different ranges of shrub biomass. (c) The sensitivity of the biomass model to each variable, examined by R2 and RMSE. These analyses were conducted on the ground samples. EOPC denotes the error per one percent of shrub coverage, calculated as RMSE/mean shrub coverage; EOUB denotes the error per one unit of biomass, calculated as RMSE/mean shrub biomass.
Figure 7. (a) The estimated distribution map of shrub biomass in the Helan Mountains. (b) The corresponding map of standard deviation (SD). (c) The distribution of EOPC within different ranges of shrub coverage. (d) The distribution of EOUB within different ranges of shrub biomass. These analyses were conducted on the estimated biomass map and the corresponding SD map; here, EOPC is calculated as SD/mean shrub coverage and EOUB as SD/mean shrub biomass.
Figure 8. The distribution of shrub biomass along (a) precipitation gradients, (b) temperature gradients, (c) aridity index ranges, and (d) elevation gradients.
13 pages, 5370 KiB  
Communication
Predicting Abiotic Soil Characteristics Using Sentinel-2 at Nature-Management-Relevant Spatial Scales and Extents
by Jesper Erenskjold Moeslund and Christian Frølund Damgaard
Remote Sens. 2024, 16(16), 3094; https://doi.org/10.3390/rs16163094 - 22 Aug 2024
Viewed by 642
Abstract
Knowledge of local plant community characteristics is imperative for practical nature planning and management, and for understanding plant diversity and distribution drivers. Today, retrieving such data is only possible by fieldwork and is hence costly both in time and money. Here, we used [...] Read more.
Knowledge of local plant community characteristics is imperative for practical nature planning and management, and for understanding the drivers of plant diversity and distribution. Today, retrieving such data is only possible through fieldwork and is hence costly in both time and money. Here, we used nine bands from multispectral high-to-medium-resolution (10–60 m) satellite data (Sentinel-2) and machine learning to predict local vegetation plot characteristics over a broad area (approx. 30,000 km2) in terms of plants' preferences for soil moisture, soil fertility, and pH, mirroring the levels of the corresponding actual soil factors. These factors are believed to be among the most important for local plant community composition. Our results showed clear links between the Sentinel-2 data and plants' abiotic soil preferences: using satellite data alone, we achieved predictive powers between 26 and 59%, improving to around 70% when habitat information was included as a predictor. This shows that plants' abiotic soil preferences can be detected quite well from space, but also that retrieving soil characteristics using satellites is complicated, and perfect detection of soil conditions using remote sensing, if at all possible, needs further methodological and data development. Full article
(This article belongs to the Special Issue Local-Scale Remote Sensing for Biodiversity, Ecology and Conservation)
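The habitat-as-predictor result suggests a simple model matrix: nine band columns plus one-hot habitat columns. Below is an illustrative scikit-learn sketch with synthetic data; the habitat classes, band values, and response are placeholders, and the random forest stands in generically for the study's machine learning step.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
bands = rng.random((500, 9))                        # nine Sentinel-2 bands per plot
habitat = rng.integers(0, 6, size=(500, 1))         # habitat class per plot
eiv_f = 4 + 5 * bands[:, 3] + 0.3 * habitat[:, 0]   # synthetic plot-mean Ellenberg F

hab_onehot = OneHotEncoder().fit_transform(habitat).toarray()
X = np.hstack([bands, hab_onehot])                  # bands + habitat indicators

rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=1)
rf.fit(X, eiv_f)
print(f"OOB R2 with habitat included: {rf.oob_score_:.2f}")
```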
Show Figures
Figure 1. Overview of vegetation plots (black dots), showing their distribution in Denmark. Because satellite data were lacking on the date selected for this study, islands including Zealand and Bornholm (the right-most parts of the country) are missing from the plot data. Green dots and numbers mark the locations of the examples shown in Figure 3.
Figure 2. Predicted vs. actual values (purple circles) for the average Ellenberg indicator values (EIVs [21]) from the validation plots (n = 28,017). The dotted line shows where perfect predictions would lie. F, N, and R: EIVs for plants' preferences for soil moisture, fertility, and pH, respectively.
Figure 3. Actual (colored dots) mean Ellenberg indicator values (EIVs) for soil moisture (F), fertility (N), pH (R), and the nutrient ratio (N/R). Blue error bars show the absolute prediction error for each plot for the model including both satellite data and habitat type as predictors; the red lines connect the error bars to their respective dots. The locations of the examples are numbered in Figure 1. Example 1 (first column of panels) is from Tversted in northern Jutland, example 2 from Fuglbæk in western Jutland, and example 3 from Otterup on Funen. The scales are 1:10,000, 1:5000, and 1:2800, respectively (when viewed or printed at the original figure size).
23 pages, 39394 KiB  
Article
Fine-Scale Mangrove Species Classification Based on UAV Multispectral and Hyperspectral Remote Sensing Using Machine Learning
by Yuanzheng Yang, Zhouju Meng, Jiaxing Zu, Wenhua Cai, Jiali Wang, Hongxin Su and Jian Yang
Remote Sens. 2024, 16(16), 3093; https://doi.org/10.3390/rs16163093 - 22 Aug 2024
Viewed by 1545
Abstract
Mangrove ecosystems play an irreplaceable role in coastal environments by providing essential ecosystem services. Diverse mangrove species have different functions due to their morphological and physiological characteristics. A precise spatial distribution map of mangrove species is therefore crucial for biodiversity maintenance and environmental [...] Read more.
Mangrove ecosystems play an irreplaceable role in coastal environments by providing essential ecosystem services. Diverse mangrove species have different functions due to their morphological and physiological characteristics. A precise spatial distribution map of mangrove species is therefore crucial for biodiversity maintenance and environmental conservation of coastal ecosystems. Traditional satellite data are of limited use for fine-scale mangrove species classification due to their low spatial resolution and limited spectral information. This study employed unmanned aerial vehicle (UAV) technology to acquire high-resolution multispectral and hyperspectral imagery of mangrove forests in Guangxi, China. We leveraged advanced algorithms, including RFE-RF for feature selection and machine learning models (Adaptive Boosting (AdaBoost), eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM)), to achieve mangrove species mapping with high classification accuracy. The study assessed the classification performance of these four machine learning models for two types of image data (UAV multispectral and hyperspectral imagery). The results demonstrated that hyperspectral imagery was superior to multispectral data, offering enhanced noise reduction and classification performance. Hyperspectral imagery produced mangrove species classification with overall accuracy (OA) higher than 91% across the four machine learning models. LightGBM achieved the highest OA of 97.15% and kappa coefficient (Kappa) of 0.97 based on hyperspectral imagery. Dimensionality reduction and feature extraction techniques were effectively applied to the UAV data, with vegetation indices proving particularly valuable for species classification. The present research underscores the effectiveness of UAV hyperspectral images combined with machine learning models for fine-scale mangrove species classification. This approach has the potential to significantly improve ecological management and conservation strategies, providing a robust framework for monitoring and safeguarding these essential coastal habitats. Full article
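The RFE-RF feature selection followed by a boosted classifier can be sketched with scikit-learn and LightGBM. In the sketch below, the synthetic feature matrix, the number of retained features, and the class count are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
X = rng.random((600, 40))                      # 40 spectral/index features (synthetic)
y = rng.integers(0, 5, 600)                    # 5 mangrove species labels (synthetic)

# Recursive feature elimination with a random forest ranker (RFE-RF).
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=15).fit(X, y)
X_sel = rfe.transform(X)                       # keep the 15 top-ranked features

clf = LGBMClassifier(n_estimators=200).fit(X_sel, y)
print(f"training accuracy: {clf.score(X_sel, y):.2f}")
```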
Show Figures

Figure 1
<p>Study area and UAV-based visible image ((<b>A</b>): Yingluo Bay, (<b>B</b>): Pearl Bay).</p>
Full article ">Figure 2
<p>Workflow diagram illustrating the methodology of this study.</p>
Full article ">Figure 3
<p>Comparison of user’s and producer’s accuracies for mangrove species classification obtained by the four learning models based on multi- and hyper-spectral images in Yingluo Bay.</p>
Full article ">Figure 4
<p>Comparison of user’s and producer’s accuracies for mangrove species classification obtained by the LightGBM learning model based on the multi- and hyper-spectral image in Pearl Bay.</p>
Full article ">Figure 5
<p>The mangrove species classification maps using four learning models (LightGBM, RF, XGBoost, and AdaBoost) based on UAV multispectral image (<b>a</b>–<b>d</b>) and hyperspectral image (<b>e</b>–<b>h</b>), respectively, in Yingluo Bay.</p>
Full article ">Figure 6
<p>The UAV visual image covering Yingluo Bay and three subsets (<b>A</b>–<b>C</b>) of the UAV multispectral and hyperspectral image classification results based on the LightGBM learning model.</p>
Full article ">Figure 7
<p>The mangrove species classification maps using the LightGBM learning model based on UAV multispectral image (<b>a</b>) and hyperspectral image (<b>b</b>) in Pearl Bay.</p>
Full article ">Figure 8
<p>The UAV visual image covering Pearl Bay and three subsets (<b>A</b>–<b>C</b>) of the UAV multispectral and hyperspectral image classification results using LightGBM learning model.</p>
Full article ">Figure A1
<p>Normalized confusion matrices of mangrove species classification using four learning models (AdaBoost, XGBoost, RF, and LightGBM) based on UAV multi- and hyper-spectral images in Yingluo Bay.</p>
Full article ">
11 pages, 3246 KiB  
Technical Note
Wavelength Cut-Off Error of Spectral Density from MTF3 of SWIM Instrument Onboard CFOSAT: An Investigation from Buoy Data
by Yuexin Luo, Ying Xu, Hao Qin and Haoyu Jiang
Remote Sens. 2024, 16(16), 3092; https://doi.org/10.3390/rs16163092 - 22 Aug 2024
Viewed by 552
Abstract
The Surface Waves Investigation and Monitoring instrument (SWIM) provides the directional wave spectrum within the wavelength range of 23–500 m, corresponding to a frequency range of 0.056–0.26 Hz in deep water. This frequency range is narrower than the 0.02–0.485 Hz frequency range of [...] Read more.
The Surface Waves Investigation and Monitoring instrument (SWIM) provides the directional wave spectrum within the wavelength range of 23–500 m, corresponding to a frequency range of 0.056–0.26 Hz in deep water. This frequency range is narrower than the 0.02–0.485 Hz range of the buoys used to validate the SWIM nadir Significant Wave Height (SWH). The modulation transfer function used in the current version of the SWIM data product normalizes the energy of the wave spectrum using the nadir SWH. A discrepancy in the cut-off frequency/wavelength ranges between the nadir and off-nadir beams can lead to an overestimation of off-nadir cut-off SWHs and, consequently, of the spectral densities of SWIM wave spectra. This study uses buoy data to investigate the SWH errors caused by this wavelength cut-off effect. Results show that the wavelength cut-off error of SWH is generally small thanks to the high-frequency extension of the resolved frequency range. The high-frequency cut-off errors are systematic and amenable to statistical correction, whereas the low-frequency cut-off error can be significant under swell-dominated conditions. By leveraging these properties, we successfully corrected the high-frequency cut-off SWH error using an artificial neural network and mitigated the low-frequency cut-off SWH error with the help of a numerical wave hindcast. These corrections significantly reduced the error in the estimated cut-off SWH, improving the bias, root-mean-square error, and correlation coefficient from 0.086 m, 0.111 m, and 0.9976 to 0 m, 0.039 m, and 0.9994, respectively. Full article
(This article belongs to the Special Issue Satellite Remote Sensing for Ocean and Coastal Environment Monitoring)
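The cut-off effect the abstract describes follows from how SWH is computed from a spectrum: Hs = 4√m0, where the zeroth moment m0 is the integral of the spectral density over frequency. The sketch below uses a hypothetical two-peak spectrum to show how restricting the integration band (here, the SWIM frequency range) lowers the estimated SWH when swell energy lies below the cut-off; the spectrum itself is illustrative, not buoy data.

```python
import numpy as np

def swh(freq, spec, f_min, f_max):
    """Significant wave height Hs = 4*sqrt(m0), with the zeroth spectral
    moment m0 integrated over [f_min, f_max] (trapezoidal rule)."""
    mask = (freq >= f_min) & (freq <= f_max)
    m0 = np.trapz(spec[mask], freq[mask])
    return 4.0 * np.sqrt(m0)

# Hypothetical buoy spectrum: a low-frequency swell peak plus wind sea.
f = np.linspace(0.02, 0.485, 400)
E = 8.0 * np.exp(-((f - 0.05) / 0.01) ** 2) + 1.5 * np.exp(-((f - 0.15) / 0.04) ** 2)

print("full-band SWH (0.02-0.485 Hz):", swh(f, E, 0.02, 0.485))
print("SWIM-band SWH (0.056-0.26 Hz):", swh(f, E, 0.056, 0.26))
```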
Show Figures

Figure 1
<p>The location of the NDBC buoys used in this study.</p>
Full article ">Figure 2
<p>The scatter plot between the SWHs integrated from full buoy wave spectra (0.02–0.485 Hz) and from cut-off buoy wave spectra. The integral frequency range for the cut-off SWH is (<b>a</b>) 0.056–0.26 Hz, (<b>b</b>) 0.02–0.26 Hz (high-frequency cut-off SWH), and (<b>c</b>) 0.056–0.485 Hz.</p>
Full article ">Figure 3
<p>Wavelength cut-off error of SWH as functions of total SWH and wind speed. (<b>a</b>,<b>b</b>) Low-frequency cut-off errors as a function of SWH and wind speed, respectively. (<b>c</b>,<b>d</b>) The same as subplots a and b but for high-frequency cut-off errors.</p>
Full article ">Figure 4
<p>(<b>a</b>) Comparison between the high-frequency cut-off SWHs (0.02–0.26 Hz) from ANN correction and from the original buoy data. (<b>b</b>) The same as (<b>a</b>) but for low-frequency cut-off SWHs (0.056–0.485 Hz).</p>
Full article ">Figure 5
<p>(<b>a</b>) The scatter plot between the low-frequency cut-off errors of SWH derived from buoys and those from IOWAGA. (<b>b</b>) The scatter plot between the low-frequency cut-off SWHs (0.056–0.485 Hz) after correction using the numerical wave model and the original buoy data. (<b>c</b>) The scatter plot between the cut-off SWHs (0.056–0.26 Hz) from direct buoy observations and those from the correction method presented in this study.</p>
Full article ">Figure 6
<p>(<b>a</b>) Mean buoy spectra for the entire sample (blue) and for different SWH ranges (other colors), with vertical dashed lines indicating the cut-off frequencies. (<b>b</b>) Buoy spectra for cases whose cut-off errors exceed 0.5 m, where the red curve indicates their mean spectrum. (<b>c</b>) The same as <a href="#remotesensing-16-03092-f002" class="html-fig">Figure 2</a>a, but using the nominal cut-off range of SWIM, 70 to 500 m.</p>
Full article ">
21 pages, 14155 KiB  
Article
Statistical Characteristics of Remote Sensing Extreme Temperature Anomaly Events in the Taiwan Strait
by Ze-Feng Jin and Wen-Zhou Zhang
Remote Sens. 2024, 16(16), 3091; https://doi.org/10.3390/rs16163091 - 22 Aug 2024
Viewed by 819
Abstract
With global warming, the global ocean is experiencing more and stronger marine heatwaves (MHWs) and fewer and weaker marine cold spells (MCSs). On the regional scale, the complex circulation structure means that the changes in sea surface temperature (SST) and extreme temperature anomaly [...] Read more.
With global warming, the global ocean is experiencing more and stronger marine heatwaves (MHWs) and fewer and weaker marine cold spells (MCSs). On the regional scale, the complex circulation structure means that the changes in sea surface temperature (SST) and extreme temperature anomaly events in the Taiwan Strait (TWS) exhibit unique regional characteristics. In summer (autumn), the SST in most regions of the TWS has a significant increasing trend, with a regionally averaged rate of 0.22 °C (0.19 °C) per decade during the period 1982–2021. In winter and spring, the SST in the western strait shows a significant decreasing trend with a maximum decreasing rate of −0.48 °C per decade, while it shows an increasing trend in the eastern strait. The annual mean results show that the TWS is experiencing more MHWs and MCSs over time. The frequency of MHWs in the eastern strait is increasing faster than that in the western strait. In the western region controlled by the Zhe-Min Coastal Current, the MCSs have an increasing trend, while in the other areas they have a decreasing trend. MHWs occur in most areas of the TWS in summer and autumn, but MCSs are mainly concentrated in the west of the TWS in spring and winter. The cooling effect of summer upwelling tends to inhibit the occurrence of MHWs and enhance MCSs. The rising background SST is a dominant driver of the increasing trend of summer MHWs. By contrast, both the decreasing SST trend and internal variability contribute to the increasing trend of winter MCSs in the strait. Full article
(This article belongs to the Section Ocean Remote Sensing)
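The abstract counts MHW and MCS days against percentile thresholds; the usual convention in this literature (following Hobday et al.) is an event of at least five consecutive days beyond a day-of-year 90th- or 10th-percentile climatology. A minimal sketch under that assumed definition, with hypothetical inputs rather than the study's SST fields:

```python
import numpy as np

def extreme_event_days(sst, thresh, above=True, min_len=5):
    """Flag days belonging to events where SST stays above (MHW) or below
    (MCS) the day-matched percentile threshold for >= min_len days."""
    exceed = sst > thresh if above else sst < thresh
    flags = np.zeros(sst.size, dtype=bool)
    start = None
    for i, e in enumerate(np.append(exceed, False)):  # sentinel ends last run
        if e and start is None:
            start = i
        elif not e and start is not None:
            if i - start >= min_len:
                flags[start:i] = True
            start = None
    return flags

# Hypothetical daily SST series and a flat 90th-percentile threshold.
rng = np.random.default_rng(3)
sst = rng.normal(25.0, 1.0, 365)
sst[120:130] += 4.0                      # implant a 10-day warm event
mhw = extreme_event_days(sst, np.full(365, 26.5))
print("MHW days:", int(mhw.sum()))
```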
Show Figures

Figure 1
<p>(<b>a</b>) The location of the TWS and its adjacent seas. The current patterns in the TWS in (<b>b</b>) summer and (<b>c</b>) winter. The shading shows the bathymetry in meters (m). In each panel, PT, ZY, PH, TB, DS, YCC, SCSWC, KBC, and ZCC denote Pingtan Island, Zhangyun Ridge, the Penghu Islands, Taiwan Bank, Dongshan Island, the Yuedong Coastal Current, the South China Sea Warm Current, the Kuroshio Branch Current, and the Zhe-Min Coastal Current, respectively.</p>
Full article ">Figure 2
<p>Seasonal mean SST (°C, (<b>a</b>–<b>d</b>)), seasonal 90th percentile SST (<b>e</b>–<b>h</b>), seasonal 10th percentile SST (<b>i</b>–<b>l</b>) in spring (March–May, first column), summer (June–August, second column), autumn (September–November, third column), and winter (December–February, fourth column) obtained from the period 1982–2011.</p>
Full article ">Figure 3
<p>SST long-term trends (°C decade<sup>−1</sup>) in spring (<b>a</b>), summer (<b>b</b>), autumn (<b>c</b>), and winter (<b>d</b>) during 1982–2021. Hatching indicates the trend is significant above the 95% significance level (p &lt; 0.05).</p>
Full article ">Figure 4
<p>The annual mean metrics of MHWs during 1982–2021: (<b>a</b>) total days, (<b>b</b>) frequency, (<b>c</b>) duration, (<b>d</b>) mean intensity, (<b>e</b>) cumulative intensity, (<b>f</b>) maximum intensity; the linear trends of the metrics of MHWs during 1982–2021 recorded per decade: (<b>g</b>) total days, (<b>h</b>) frequency, (<b>i</b>) duration, (<b>j</b>) mean intensity, (<b>k</b>) cumulative intensity, (<b>l</b>) maximum intensity. Hatching indicates the trend is significant above the 95% significance level (p &lt; 0.05).</p>
Full article ">Figure 5
<p>The annual mean metrics of MCSs during 1982–2021: (<b>a</b>) total days, (<b>b</b>) frequency, (<b>c</b>) duration, (<b>d</b>) mean intensity, (<b>e</b>) cumulative intensity, (<b>f</b>) maximum intensity; the linear trends of the metrics of MCSs during 1982–2021 recorded per decade: (<b>g</b>) total days, (<b>h</b>) frequency, (<b>i</b>) duration, (<b>j</b>) mean intensity, (<b>k</b>) cumulative intensity, (<b>l</b>) maximum intensity. Hatching indicates the trend is significant above the 95% significance level (p &lt; 0.05).</p>
Full article ">Figure 6
<p>The spatial distribution of seasonal mean MHWs metrics in TWS during 1982–2021: (<b>a</b>–<b>d</b>) total days, (<b>e</b>–<b>h</b>) frequency, (<b>i</b>–<b>l</b>) duration, (<b>m</b>–<b>p</b>) mean intensity, (<b>q</b>–<b>t</b>) cumulative intensity. The first to the fourth columns are the results for spring, summer, autumn, and winter, respectively.</p>
Full article ">Figure 7
<p>The linear trends of the corresponding MHWs metrics in <a href="#remotesensing-16-03091-f006" class="html-fig">Figure 6</a> from 1982–2021: (<b>a</b>–<b>d</b>) total days, (<b>e</b>–<b>h</b>) frequency, (<b>i</b>–<b>l</b>) duration, (<b>m</b>–<b>p</b>) mean intensity, (<b>q</b>–<b>t</b>) cumulative intensity. The first to the fourth columns are the results for spring, summer, autumn, and winter, respectively.</p>
Full article ">Figure 8
<p>The spatial distribution of the seasonal mean MCSs metrics in TWS during 1982–2021: (<b>a</b>–<b>d</b>) total days, (<b>e</b>–<b>h</b>) frequency, (<b>i</b>–<b>l</b>) duration, (<b>m</b>–<b>p</b>) mean intensity, (<b>q</b>–<b>t</b>) cumulative intensity. The first to the fourth columns are the results for spring, summer, autumn, and winter, respectively.</p>
Full article ">Figure 9
<p>The linear trends of the corresponding MCSs metrics in <a href="#remotesensing-16-03091-f008" class="html-fig">Figure 8</a> from 1982–2021: (<b>a</b>–<b>d</b>) total days, (<b>e</b>–<b>h</b>) frequency, (<b>i</b>–<b>l</b>) duration, (<b>m</b>–<b>p</b>) mean intensity, (<b>q</b>–<b>t</b>) cumulative intensity. The first to the fourth columns are the results for spring, summer, autumn, and winter, respectively.</p>
Full article ">Figure 10
<p>Empirical orthogonal function (EOF) analysis of total intensity for (<b>a</b>,<b>b</b>) summer MHWs and (<b>c</b>,<b>d</b>) winter MCSs in TWS during 1982–2021: (<b>a</b>,<b>c</b>) spatial patterns of the first EOF modes for MHWs and MCSs, and (<b>b</b>,<b>d</b>) their corresponding principal components.</p>
Full article ">Figure 11
<p>Temporal variations of (<b>a</b>) MHWs and (<b>b</b>) MCSs seasonal total intensity (°C) during 1982–2021. The red curve in (<b>a</b>) and blue curve in (<b>b</b>) are calculated from original SST time series (original results) and black curves in (<b>a</b>,<b>b</b>) are from detrended SST time series (detrended results). The green lines are the differences between the original results and detrended results (the former minus the latter). Dashed lines represent the linear trends.</p>
Full article ">Figure 12
<p>Probability distribution function for the regionally averaged SST in (<b>a</b>,<b>b</b>) summer and (<b>c</b>,<b>d</b>) winter. In each panel, the SST density distributions for the first decade (1982–1991) and last decade (2012–2021) are shown as grey and black curves, respectively. The first column is obtained from the original SST data and the second column is from the detrended SST data. Red shades in (<b>a</b>,<b>b</b>) and blue shades in (<b>c</b>,<b>d</b>) represent the SST range for MHWs and MCSs, respectively.</p>
Full article ">
18 pages, 14973 KiB  
Article
Developing a Generalizable Spectral Classifier for Rhodamine Detection in Aquatic Environments
by Ámbar Pérez-García, Alba Martín Lorenzo, Emma Hernández, Adrián Rodríguez-Molina, Tim H. M. van Emmerik and José F. López
Remote Sens. 2024, 16(16), 3090; https://doi.org/10.3390/rs16163090 - 22 Aug 2024
Cited by 2 | Viewed by 739
Abstract
In environmental studies, rhodamine dyes are commonly used to trace water movements and pollutant dispersion. Remote sensing techniques offer a promising approach to detecting rhodamine and estimating its concentration, enhancing our understanding of water dynamics. However, research is needed to address more complex [...] Read more.
In environmental studies, rhodamine dyes are commonly used to trace water movements and pollutant dispersion. Remote sensing techniques offer a promising approach to detecting rhodamine and estimating its concentration, enhancing our understanding of water dynamics. However, research is needed to address more complex environments, particularly optically shallow waters, where bottom reflectance can significantly influence the spectral response of the rhodamine. Therefore, this study proposes a novel approach: transferring pre-trained classifiers to develop a method that generalizes across different environmental conditions without the need for in situ calibration. Various samples combining distilled water and seawater with light and dark backgrounds were analyzed. Spectral analysis identified critical detection regions (400–500 nm and 550–650 nm) for estimating rhodamine concentration. Significant spectral variations were observed between light and dark backgrounds, highlighting the necessity for precise background characterization in shallow waters. Enhanced by the Sequential Feature Selector, classification models achieved robust accuracy (&gt;90%) in distinguishing rhodamine concentrations, proving particularly effective under controlled laboratory conditions. While band transfer was successful (&gt;80%), the transfer of pre-trained models posed a challenge. Strategies such as combining diverse sample sets and applying the first derivative prevented overfitting and improved model generalizability, surpassing 85% accuracy in three of the four scenarios. The methodology therefore provides a generalizable classifier that can be used across various scenarios without requiring recalibration. Future research aims to expand dataset variability and enhance model applicability across diverse environmental conditions, thereby advancing remote sensing capabilities in water dynamics, environmental monitoring and pollution control. Full article
(This article belongs to the Special Issue Coastal and Littoral Observation Using Remote Sensing)
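The band selection credited to the Sequential Feature Selector can be outlined with scikit-learn's SequentialFeatureSelector wrapped around one of the classifiers named in the abstract; the sketch below uses Random Forest on a hypothetical pixel-by-band matrix, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical hyperspectral pixels: rows = samples, columns = bands.
rng = np.random.default_rng(1)
X = rng.random((500, 120))              # 120 spectral bands
y = rng.integers(0, 4, 500)             # classes: 0, 1, 15, 30 mg/L rhodamine

rf = RandomForestClassifier(n_estimators=100, random_state=1)
sfs = SequentialFeatureSelector(rf, n_features_to_select=2,
                                direction="forward", cv=3)
sfs.fit(X, y)                           # greedy forward band selection
print("selected band indices:", np.flatnonzero(sfs.get_support()))
```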
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Rhodamine samples for distilled and seawater at different concentrations. (<b>a</b>) Distilled 0 mg/L; (<b>b</b>) Distilled 1 mg/L; (<b>c</b>) Distilled 15 mg/L; (<b>d</b>) Distilled 30 mg/L; (<b>e</b>) Seawater 0 mg/L; (<b>f</b>) Seawater 1 mg/L; (<b>g</b>) Seawater 15 mg/L; (<b>h</b>) Seawater 30 mg/L.</p>
Full article ">Figure 2
<p>A 3D model of the acquisition system (adapted with permission from [<a href="#B27-remotesensing-16-03090" class="html-bibr">27</a>], under a Creative Commons Attribution (CC BY) 4.0 license. Copyright 2022).</p>
Full article ">Figure 3
<p>Methodology for transferring results and obtaining a generalizable classifier.</p>
Full article ">Figure 4
<p>Band selection method (adapted with permission from [<a href="#B30-remotesensing-16-03090" class="html-bibr">30</a>], Copyright 2024, IEEE).</p>
Full article ">Figure 5
<p>Mean spectral signature and standard deviation (shaded in the corresponding colour) of the backgrounds with and without the beaker.</p>
Full article ">Figure 6
<p>Mean spectra with standard deviation (shaded in the corresponding colour) for each concentration and sample. (<b>a</b>) Distilled water with a dark background; (<b>b</b>) distilled water with a light background; (<b>c</b>) sea water with a dark background; (<b>d</b>) sea water with a light background.</p>
Full article ">Figure 7
<p>Spectral difference between distilled water and seawater for the two backgrounds: (<b>a</b>) 0 mg/L; (<b>b</b>) 1 mg/L; (<b>c</b>) 15 mg/L; (<b>d</b>) 30 mg/L.</p>
Full article ">Figure 8
<p>Spectral difference between backgrounds: (<b>a</b>) 0 mg/L; (<b>b</b>) 1 mg/L; (<b>c</b>) 15 mg/L; (<b>d</b>) 30 mg/L.</p>
Full article ">Figure 9
<p>OAC of all model combinations for dark (solid line) and light (dashed line) backgrounds. (<b>a</b>) Distilled water; (<b>b</b>) seawater.</p>
Full article ">Figure 10
<p>Spectral areas of interest, identified by grouping the two most significant bands for each combination of SFS with RF, LR, and SVM.</p>
Full article ">Figure 11
<p>Accuracy obtained by transferring bands of interest from one sample to another. The colours indicate performance: green tones for accuracies above 80%, yellowish for 60–80%, orange for 40–60%, and red for accuracy below 40%.</p>
Full article ">Figure 12
<p>Accuracy obtained by transferring trained classifiers from one sample to another. The colours indicate performance: green tones for accuracies above 80%, yellowish for 60–80%, orange for 40–60%, and red for accuracy below 40%.</p>
Full article ">Figure 13
<p>Mean spectra and standard deviation (shaded in the corresponding colour). The two best bands are indicated with black vertical lines. (<b>a</b>) Combined samples (580 and 610 nm); (<b>b</b>) the first derivative of the combined samples (591 and 607 nm).</p>
Full article ">Figure 14
<p>Confusion matrices for training with CS and CD for the best and worst scenarios in <a href="#remotesensing-16-03090-f015" class="html-fig">Figure 15</a>. (<b>a</b>) CS validating on distilled light; (<b>b</b>) CS validating on seawater light; (<b>c</b>) CD validating on seawater light.</p>
Full article ">Figure 15
<p>Accuracy obtained by transferring trained classifiers from the combined sample. The colours indicate performance: green tones for accuracies above 80%, yellowish for 60–80%, orange for 40–60%, and red for accuracy below 40%.</p>
Full article ">
18 pages, 2209 KiB  
Article
Extraction of Spatiotemporal Information of Rainfall-Induced Landslides from Remote Sensing
by Tongxiao Zeng, Jun Zhang, Yulin Chen and Shaonan Zhu
Remote Sens. 2024, 16(16), 3089; https://doi.org/10.3390/rs16163089 - 22 Aug 2024
Viewed by 757
Abstract
With global climate change and increased human activities, landslides increasingly threaten human safety and property. Precisely extracting large-scale spatiotemporal information on landslides is crucial for risk management. However, existing methods are either locally based or have coarse temporal resolution, which is insufficient for [...] Read more.
With global climate change and increased human activities, landslides increasingly threaten human safety and property. Precisely extracting large-scale spatiotemporal information on landslides is crucial for risk management. However, existing methods are either locally based or have coarse temporal resolution, which is insufficient for regional analysis. In this study, spatiotemporal information on landslides was extracted using multiple remote sensing datasets from Emilia, Italy. An automated algorithm for extracting the spatial information of landslides was developed using NDVI datasets. We then established a landslide prediction model based on a hydrometeorological threshold of three-day soil moisture and three-day accumulated rainfall. Based on this model, the locations and dates of rainfall-induced landslides were identified. These identified locations were then matched with the landslides extracted from the remote sensing data to determine the occurrence times. The approach was validated against recorded landslide events in Emilia. Despite some temporal clustering, the overall trend matched the historical records, accurately reflecting the dynamic impacts of rainfall and soil moisture on landslides. The temporal bias for 87.3% of the identified landslides was within seven days. Furthermore, higher rainfall magnitudes were associated with better temporal accuracy, validating the effectiveness of the model and the reliability of rainfall as a landslide predictor. Full article
(This article belongs to the Special Issue Study on Hydrological Hazards Based on Multi-source Remote Sensing)
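The prediction model described above flags a day when both three-day indicators exceed their thresholds. A minimal sketch of that joint test follows, with purely illustrative threshold values and input series; the study's calibrated thresholds are not reproduced here.

```python
import numpy as np

def flag_landslide_days(rain, sm, rain_thr, sm_thr, window=3):
    """Flag days on which the trailing 3-day accumulated rainfall AND the
    trailing 3-day mean soil moisture both exceed their thresholds."""
    k = np.ones(window)
    rain3 = np.convolve(rain, k, mode="full")[: rain.size]      # 3-day sums
    sm3 = np.convolve(sm, k / window, mode="full")[: sm.size]   # 3-day means
    return (rain3 >= rain_thr) & (sm3 >= sm_thr)

# Illustrative series: a wet spell on days 5-7 over already moist soil.
rain = np.array([0, 2, 0, 1, 0, 30, 25, 20, 3, 0], dtype=float)    # mm/day
sm = np.array([0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.6, 0.5, 0.4])  # saturation
print(np.flatnonzero(flag_landslide_days(rain, sm, rain_thr=50.0, sm_thr=0.45)))
```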
Show Figures

Figure 1
<p>Overview of the study area and the distribution of recorded landslide data: (<b>a</b>) Location of the study area within Italy; (<b>b</b>) Distribution of recorded landslides along with the latitude and longitude grid within the study area; (<b>c</b>) Landslide susceptibility map of the study area, derived from the global landslide hazard dataset [<a href="#B40-remotesensing-16-03089" class="html-bibr">40</a>].</p>
Full article ">Figure 2
<p>Schematic of the hydrometeorological threshold model based on antecedent soil moisture and recent rainfall.</p>
Full article ">Figure 3
<p>Monthly distribution of landslide numbers and average rainfall.</p>
Full article ">Figure 4
<p>Spatial distribution map of extracted rainfall-induced landslide events in Emilia (2017).</p>
Full article ">Figure 5
<p>Distribution of landslide events in the Emilia region: (<b>a</b>) Slope distribution; (<b>b</b>) Land use type distribution.</p>
Full article ">Figure 6
<p>The evaluation results of landslide prediction models under the thresholds of different combinations of soil saturation and rainfall indices.</p>
Full article ">Figure 7
<p>The dates and corresponding regions meeting the hydrometeorological thresholds: (<b>a</b>) The blue areas are the regions corresponding to the dates that meet the rainfall threshold conditions; (<b>b</b>) The green areas are the regions corresponding to the dates that meet the soil moisture threshold conditions.</p>
Full article ">Figure 7 Cont.
<p>The dates and corresponding regions meeting the hydrometeorological thresholds: (<b>a</b>) The blue areas are the regions corresponding to the dates that meet the rainfall threshold conditions; (<b>b</b>) The green areas are the regions corresponding to the dates that meet the soil moisture threshold conditions.</p>
Full article ">Figure 8
<p>Time series of landslide identification and its relationship with daily rainfall (February to March 2018).</p>
Full article ">Figure 9
<p>Bias of rainfall-induced landslide temporal identification.</p>
Full article ">Figure 10
<p>The relationship between the temporal bias of the landslide event extraction results and rainfall from February to March 2018. A positive temporal bias indicates that the predicted time is earlier than the recorded time, while a negative value indicates the opposite.</p>
Full article ">
23 pages, 29625 KiB  
Article
HA-Net for Bare Soil Extraction Using Optical Remote Sensing Images
by Junqi Zhao, Dongsheng Du, Lifu Chen, Xiujuan Liang, Haoda Chen and Yuchen Jin
Remote Sens. 2024, 16(16), 3088; https://doi.org/10.3390/rs16163088 - 21 Aug 2024
Viewed by 730
Abstract
Bare soil will cause soil erosion and contribute to air pollution through the generation of dust, making the timely and effective monitoring of bare soil an urgent requirement for environmental management. Although there has been some research on bare soil extraction using high-resolution [...] Read more.
Bare soil will cause soil erosion and contribute to air pollution through the generation of dust, making the timely and effective monitoring of bare soil an urgent requirement for environmental management. Although there has been some research on bare soil extraction using high-resolution remote sensing images, great challenges still need to be solved, such as complex background interference and small-scale problems. In this regard, the Hybrid Attention Network (HA-Net), which comprises an encoder and a decoder, is proposed for the automatic extraction of bare soil from high-resolution remote sensing images. In the encoder, HA-Net initially utilizes BoTNet for primary feature extraction, producing four levels of features. The extracted highest-level features are then input into the constructed Spatial Information Perception Module (SIPM) and the Channel Information Enhancement Module (CIEM) to adequately emphasize the spatial and channel dimensions of the bare soil information. To improve the detection rate of small-scale bare soil areas, the Semantic Restructuring-based Upsampling Module (SRUM) is proposed for the decoding stage; it utilizes the semantic information of the input features and compensates for the loss of detailed information during downsampling in the encoder. An experiment is performed using high-resolution remote sensing images from the China–Brazil Resources Satellite 04A. The results show that HA-Net clearly outperforms several excellent semantic segmentation networks in bare soil extraction. The average precision and IoU of HA-Net in the two scenes reach 90.9% and 80.9%, respectively, demonstrating its excellent performance and its strong ability to suppress interference from complex backgrounds and to handle multiscale issues. Furthermore, it may also be applied to segmentation tasks for other targets in remote sensing images. Full article
(This article belongs to the Special Issue AI-Driven Satellite Data for Global Environment Monitoring)
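This listing does not give the internals of the Channel Information Enhancement Module. A common way to realize channel-wise emphasis of this kind is a squeeze-and-excitation block, sketched below in PyTorch purely as an illustration; the class name, reduction ratio, and structure are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: global average pooling
    ("squeeze") followed by a bottleneck MLP that rescales each channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # emphasize informative channels, damp the rest

# Smoke test on a feature map shaped like an encoder output.
feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```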
Show Figures

Figure 1
<p>Study Regions in Hunan Province.</p>
Full article ">Figure 2
<p>Examples of samples from the bare soil dataset. The red color represents areas of bare soil, while the black color represents background areas.</p>
Full article ">Figure 3
<p>Overall architecture of the HA-Net.</p>
Full article ">Figure 4
<p>Overall structure of BoTNet.</p>
Full article ">Figure 5
<p>The structure of the SIPM.</p>
Full article ">Figure 6
<p>The structure of the CIEM.</p>
Full article ">Figure 7
<p>The structure of the SRUM.</p>
Full article ">Figure 8
<p>The stitching strategy: (<b>a</b>) sliding window stitching strategy; (<b>b</b>) final stitching strategy.</p>
Full article ">Figure 9
<p>Typical bare soil scene images.</p>
Full article ">Figure 10
<p>Different network extraction results for typical regions in test Scene I. (<b>a</b>–<b>c</b>) correspond to the three regions shown in Scene I of <a href="#remotesensing-16-03088-f009" class="html-fig">Figure 9</a>. The orange-red regions indicate areas identified as bare soil by different networks. The yellow and light blue boxes mainly demonstrate missed detections and false alarms.</p>
Full article ">Figure 11
<p>Different network extraction results for typical regions in test Scene II. (<b>a</b>,<b>b</b>) correspond to the two regions shown in Scene II of <a href="#remotesensing-16-03088-f009" class="html-fig">Figure 9</a>. The orange-red regions indicate areas identified as bare soil by different networks. The yellow and light blue boxes mainly demonstrate missed detections and false alarms.</p>
Full article ">Figure 11 Cont.
<p>Different network extraction results for typical regions in test Scene II. (<b>a</b>,<b>b</b>) correspond to the two regions shown in Scene II of <a href="#remotesensing-16-03088-f009" class="html-fig">Figure 9</a>. The orange-red regions indicate areas identified as bare soil by different networks. The yellow and light blue boxes mainly demonstrate missed detections and false alarms.</p>
Full article ">Figure 12
<p>Different network extraction results for typical regions in Scene III: (<b>a</b>) Scene III; (<b>b</b>) ground truth for typical regions; (<b>c</b>–<b>h</b>) are the extraction results of bare soil by DeepLabV3+, DA-Net, BuildFormer, YOSO, HA-Net-B50, and HA-Net-B101.</p>
Full article ">
23 pages, 3244 KiB  
Article
Assessment of Hygroscopic Behavior of Arctic Aerosol by Contemporary Lidar and Radiosonde Observations
by Nele Eggers, Sandra Graßl and Christoph Ritter
Remote Sens. 2024, 16(16), 3087; https://doi.org/10.3390/rs16163087 - 21 Aug 2024
Viewed by 631
Abstract
This study presents the hygroscopic properties of aerosols from the Arctic free troposphere by means of contemporary lidar and radiosonde observations only. It investigates the period from the Arctic Haze in spring towards the summer season in 2021. Therefore, a one-parameter growth curve [...] Read more.
This study presents the hygroscopic properties of aerosols from the Arctic free troposphere by means of contemporary lidar and radiosonde observations only. It investigates the period from the Arctic Haze in spring towards the summer season in 2021. Therefore, a one-parameter growth curve model is applied to lidar data from the Koldewey Aerosol Raman Lidar (AWIPEV in Ny-Ålesund, Svalbard) and simultaneous radiosonde measurements. Hygroscopic growth depends on different factors like aerosol diameter and chemical composition. To disentangle this dependency, three trends in hygroscopicity are additionally investigated by classifying the aerosol first by its dry color ratio, and then by its season and altitude. Generally, we found a complex altitude dependence with the least hygroscopic particles in the middle of the troposphere. The most hygroscopic aerosol is located in the upper free troposphere. A hypothesis based on prior lifting of the particles is given. The expected trend with aerosol diameter is not observed, which draws attention to the complex dependence of hygroscopic growth on geographical region and altitude, and to the development of backscatter with the aerosol size itself. In a seasonal overview, two different modes of stronger or weaker hygroscopic particles are additionally observed. Furthermore, two special days are discussed using the Mie theory. They show, on the one hand, the complexity of analyzing hygroscopic growth by means of lidar data, but on the other hand, they demonstrate that it is in fact measurable with this approach. For these two case studies, we calculated that the aerosol effective radius increased from 0.16 μm (dry) to 0.18 μm (wet) in the first case and from 0.28 μm to 0.32 μm in the second case. Full article
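One-parameter growth curves of this kind are typically the Hänel-type parameterization f(RH) = ((1 − RH)/(1 − RH_ref))^(−γ). Assuming that form (the listing does not spell it out), fitting γ to normalized backscatter reduces to a one-parameter curve fit, as sketched below with synthetic data; the figure captions of this article report γ = 0.23 for the 41–67% RH fit.

```python
import numpy as np
from scipy.optimize import curve_fit

RH_REF = 0.41  # reference RH for normalization; the paper fits 41-67% RH

def growth(rh, gamma):
    """Hanel-type one-parameter growth curve for normalized backscatter."""
    return ((1.0 - rh) / (1.0 - RH_REF)) ** (-gamma)

# Hypothetical normalized median backscatter versus RH (fractions, not %).
rng = np.random.default_rng(2)
rh = np.linspace(0.41, 0.67, 27)
beta_norm = growth(rh, 0.23) * (1.0 + 0.05 * rng.standard_normal(rh.size))

gamma_fit, _ = curve_fit(growth, rh, beta_norm, p0=[0.2])
print("fitted gamma:", round(float(gamma_fit[0]), 3))  # ~0.23 by construction
```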
Show Figures

Figure 1
<p>The daily median of the backscatter (<b>a</b>), the color ratio (<b>b</b>) and the aerosol depolarization (<b>c</b>) was calculated within altitudes between 0.7 km and 10.0 km, and for the lidar ratio (<b>d</b>) from 0.7 km to 2.5 km. After a decrease in April, the backscatter takes its maximum in May. An unusual second increase in July is observed. The lidar ratio is enhanced throughout the whole season and is maximal in May and June. The color ratio continuously increases, and the depolarization decreases. Three estimated seasons are indicated by dotted lines.</p>
Full article ">Figure 2
<p>The daily median of the backscatter (<b>a</b>) and color ratio (<b>b</b>) are illustrated for four different height intervals: 0.7–2.5 km, 2.5–4.5 km, 4.5–6.5 km and 6.5–10.0 km. Overall, the backscatter is the highest in the lowest height interval. However, the seasonal development, i.e., the transition from spring to summer, is most pronounced between 2.5 km and 6.5 km. The color ratio increases towards summer. The strongest gradient in time is visible below 2.5 km.</p>
Full article ">Figure 3
<p>The seasonal development of the relative humidity is illustrated in (<b>a</b>). The median was determined between 0.7 km and 10.0 km. Dotted lines indicate the three seasons—Haze, summer season and forest fire-impacted season. Figure (<b>b</b>) shows the vertical distribution of data points from the whole season between 0.7 km and 10.0 km that provided a relative humidity smaller than 40%. On average, the relative humidity decreases with altitude.</p>
Full article ">Figure 4
<p>The vertical distribution of the color ratio (<b>a</b>) and relative humidity (<b>b</b>) over the troposphere are shown. Values between 40% and 60%, as well as the smallest color ratio values, occur most often between 2 km and 4 km. Note, as no direct comparison of radiosonde and lidar data was performed here, not only the simultaneous data are illustrated, which enhances the data basis.</p>
Full article ">Figure 5
<p>The backscatter development between April and July 2021 with regard to the relative humidity over water is shown in (<b>a</b>). The median backscatter of each percentage of relative humidity is additionally illustrated. In general, the aerosol demonstrates a hygroscopic growth between 40% and 67% relative humidity. Beginning at 67% relative humidity, a more irregular behavior dominates. The growth curve is fitted onto the normalized median backscatter between 41% and 67% relative humidity in (<b>b</b>). The fitting parameter γ amounts to 0.23 with an R<sup>2</sup> of 0.43.</p>
Full article ">Figure 6
<p>The median backscatter of the subdivided dataset is illustrated in scatter plots (<b>a</b>–<b>c</b>) along with relative humidity. The intervals of the subdivision were the following: (<b>a</b>) CR<sub>dry</sub>(355 nm, 532 nm) &lt; 2 and 2 &lt; CR<sub>dry</sub>(532 nm, 1064 nm) &lt; 3; (<b>b</b>) CR<sub>dry</sub>(355 nm, 532 nm) ≥ 3.0 or CR<sub>dry</sub>(532 nm, 1064 nm) ≤ 1.2; (<b>c</b>) 2 ≤ CR<sub>dry</sub>(355 nm, 532 nm) &lt; 3 and 1.2 &lt; CR<sub>dry</sub>(532 nm, 1064 nm) ≤ 1.7. It resulted in an increasing aerosol diameter. The growth curve was calculated for each dataset. Hygroscopic growth was the strongest for (<b>b</b>) and the weakest for (<b>a</b>).</p>
Full article ">Figure 7
<p>The growth curve was fitted onto the median backscatter above 40% relative humidity over water. The data were taken from the seasonally classified dataset. It was subdivided into Arctic Haze (<b>a</b>), summer (<b>b</b>) and the season with forest fire impacts (<b>c</b>). Modes of higher and one of lower hygroscopicity were visible during Arctic Haze and summer. The high modes almost coincided, whereas the lower mode of the summer season was still stronger than during Arctic Haze and the forest fire-impacted season.</p>
Full article ">Figure 8
<p>The growth curve was fitted onto the data of the different height intervals 0.7–2.5 km (<b>a</b>), 2.5–4.5 km (<b>b</b>), 4.5–6.5 km (<b>c</b>) and 6.5–10.0 km (<b>d</b>). Except for the uppermost height interval, no clear growth trend was observed. Especially within 2.5–6.5 km, random trends seemed to dominate.</p>
Full article ">Figure 9
<p>The development of the color ratio (<b>a</b>) and of the depolarization and lidar ratio (<b>b</b>) with relative humidity on 23 May between 2.28 km and 3.28 km is illustrated. CR<sub>532</sub><sup>355</sup> and CR<sub>1064</sub><sup>532</sup> developed contrarily. While the lidar ratio at 355 nm was constantly low, it had a maximum at 30% relative humidity for 532 nm. In <a href="#sec5dot1-remotesensing-16-03087" class="html-sec">Section 5.1</a> the aerosol radius of this case study is shown to have increased from 0.16 μm to 0.18 μm.</p>
Full article ">Figure 10
<p>The backscatter profiles at 10:52:31 and the relative humidity profiles at 11:00:00 between 1550–1900 m (<b>a</b>), 3000–3800 m (<b>b</b>) and 6350–6650 m (<b>c</b>) on 29 April are illustrated. These cases demonstrate the difficulties when analyzing hygroscopic growth with combined radiosonde and lidar data.</p>
Full article ">Figure 11
<p>The color ratio development of CR<sub>532</sub><sup>355</sup> and CR<sub>1064</sub><sup>532</sup> with regard to relative humidity is displayed between 3000–3800 m (<b>a</b>) and 6350–6650 m (<b>b</b>). The development of the two color ratios seems chaotic. No hygroscopic growth is visible.</p>
Full article ">Figure 12
<p>The profiles of color ratio CR<sub>1064</sub><sup>532</sup> and relative humidity for the lowest layer, 1550–1900 m, are illustrated (<b>a</b>). No strict correlation is seen. In addition, the development of CR<sub>532</sub><sup>355</sup> and CR<sub>1064</sub><sup>532</sup> with relative humidity is shown (<b>b</b>). In particular, CR<sub>532</sub><sup>355</sup> stays almost constant.</p>
Full article ">Figure 13
<p>Dependence of the color ratios on the effective radius of aerosol according to Mie theory for a log-normal distribution of geometric standard deviation (σ) of 1.1 and a complex index of refraction of n = 1.5 + 0.01i.</p>
Full article ">Figure 14
<p>Dependence of the aerosol backscatter at the three colors of 355 nm, 532 nm and 1064 nm as a function of the effective radius of the aerosol according to Mie theory for a log-normal distribution of geometric standard deviation (σ) of 1.1 and a complex index of refraction of n = 1.5 + 0.01i. The values on the y-axis are in arbitrary units as the concentration of aerosol is different from case to case.</p>
Full article ">Figure A1
<p>The median extinction over all available data points (without cloud influence) was calculated and is displayed for each height step. The standard deviation for each height step is illustrated as a filled area. It demonstrates the strongly increasing noise in extinction and thus the lidar ratio with altitude. For the previous analysis in this paper, the height intervals 0.7–2.5 km, 2.5–4.5 km, 4.5–6.5 km and 6.5–10.0 km were often utilized. In particular, the lidar ratio was only used within the lowest height interval due to noise. To emphasize this strengthening in noise, the mean standard deviation within this height interval is denoted in the figure. It rose by a magnitude of about 2.</p>
Full article ">Figure A2
<p>The vertical distribution of the relative humidity over ice is shown. Significant supersaturation (&gt;110%) occurs only above a 4 km altitude.</p>
Full article ">Figure A3
<p>The backscatter and radiosonde data from April to July 2021 were subdivided according to the profiles’ median dry color ratios. The intervals were the following: (<b>a</b>) CR<sub>dry</sub>(355 nm, 532 nm) &lt; 2 and 2 &lt; CR<sub>dry</sub>(532 nm, 1064 nm) &lt; 3; (<b>b</b>) CR<sub>dry</sub>(355 nm, 532 nm) ≥ 3.0 or CR<sub>dry</sub>(532 nm, 1064 nm) ≤ 1.2; (<b>c</b>) 2 ≤ CR<sub>dry</sub>(355 nm, 532 nm) &lt; 3 and 1.2 &lt; CR<sub>dry</sub>(532 nm, 1064 nm) ≤ 1.7. The median backscatter was calculated for each percentage of relative humidity. Overall, the backscatter still rises with humidity, as expected.</p>
Full article ">Figure A4
<p>The lidar and radiosonde data were subdivided into the three seasons, Arctic Haze (<b>a</b>), summer (<b>b</b>) and season with forest fire impacts (<b>c</b>). This separation was based on <a href="#sec3-remotesensing-16-03087" class="html-sec">Section 3</a>. To visualize an average growth behavior of the season, the median backscatter was calculated for each percentage of relative humidity. Note that backscatter developments of individual time steps may have a great impact on the overall trend.</p>
Full article ">Figure A5
<p>The dataset was subdivided by altitude. The backscatter and median backscatter between 0.7–2.5 km (<b>a</b>), 2.5–4.5 km (<b>b</b>), 4.5–6.5 km (<b>c</b>) and 6.5–10.0 km (<b>d</b>) are illustrated.</p>
Full article ">Figure A6
<p>The tropospheric profiles from backscatter, relative humidity (<b>a</b>) and temperature (<b>b</b>) are illustrated. A strong gradient in relative humidity is visible from 2.28 km to 3.28 km. A focused analysis of this interval was performed in <a href="#sec4dot4-remotesensing-16-03087" class="html-sec">Section 4.4</a>.</p>
Full article ">
19 pages, 4892 KiB  
Article
Comparative Analysis of Machine Learning Techniques and Data Sources for Dead Tree Detection: What Is the Best Way to Go?
by Júlia Matejčíková, Dana Vébrová and Peter Surový
Remote Sens. 2024, 16(16), 3086; https://doi.org/10.3390/rs16163086 - 21 Aug 2024
Viewed by 911
Abstract
In Central Europe, the extent of bark beetle infestation in spruce stands due to prolonged high temperatures and drought has created large areas of dead trees, which are difficult to monitor by ground surveys. Remote sensing is the only possibility for the assessment [...] Read more.
In Central Europe, the extent of bark beetle infestation in spruce stands due to prolonged high temperatures and drought has created large areas of dead trees, which are difficult to monitor by ground surveys. Remote sensing is the only possibility for the assessment of the extent of the dead tree areas. Several options exist for mapping individual dead trees, differing in both data source and processing technique. Possible sources include satellite images, aerial images, and images from UAVs; the processing techniques include machine and deep learning, although models are often presented without proper realistic validation. This paper compares methods of monitoring dead tree areas using three data sources: multispectral aerial imagery, multispectral PlanetScope satellite imagery, and multispectral Sentinel-2 imagery, as well as two processing methods. The classification methods used are Random Forest (RF) and neural network (NN) in two modalities: pixel- and object-based. In total, 12 combinations are presented. The results were evaluated using two types of reference data: the accuracy of the model on validation data and the accuracy on vector-format semi-automatic classification polygons created by a human evaluator, referred to as real Ground Truth. The aerial imagery was found to have the highest model accuracy, with the CNN model achieving up to 98% with object classification. A higher classification accuracy for satellite imagery was achieved by combining pixel classification and the RF model (87% accuracy for Sentinel-2). For PlanetScope imagery, the best result was 89%, using a combination of CNN and object-based classification. A comparison with the Ground Truth lowered the classification accuracy of the aerial imagery to 89% and that of the satellite imagery to around 70%. In conclusion, aerial imagery is the most effective tool for monitoring bark beetle calamity in terms of precision and accuracy, but satellite imagery has the advantages of fast availability, shorter data processing time, and larger coverage areas. Full article
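Both evaluation routes in the abstract (validation data and Ground Truth polygons) reduce to an error (confusion) matrix between reference and predicted classes. A minimal sketch with hypothetical per-pixel labels, not the study's rasters:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical per-pixel labels: 0 = green forest, 1 = dead tree.
truth = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1])  # e.g., Ground Truth polygons
pred  = np.array([0, 1, 1, 1, 0, 0, 1, 0, 0, 1])  # e.g., classified raster

cm = confusion_matrix(truth, pred)    # rows: reference, columns: prediction
print(cm)
print("overall accuracy:", accuracy_score(truth, pred))
```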
Show Figures

Figure 1
<p>Development of spruce stands due to drought and high temperatures.</p>
Full article ">Figure 2
<p>Location of interest. The image shows the location of the Czech Switzerland National Park (northwestern Czech Republic). The yellow polygon on the right shows the boundaries of the Czech Switzerland National Park.</p>
Full article ">Figure 3
<p>Data source in false-color view; component A shows the full image used for the survey, and component B shows the resolution detail. (<b>1A</b>,<b>1B</b>): Sentinel-2 satellite imagery at 10 m resolution; (<b>2A</b>,<b>2B</b>): PlanetScope satellite imagery at 3 m resolution; (<b>3A</b>,<b>3B</b>): Aerial images at 0.2 m resolution.</p>
Full article ">Figure 4
<p>Example of training samples in aerial images in false-color composition (NIR, R, B bands). Image (<b>A</b>) shows samples of the dead tree class and image (<b>B</b>) samples of the green forest class. The green circles represent samples of green forest, while the yellow circles denote dead trees.</p>
Full article ">Figure 5
<p>Structure of the rule set for classification using the Random Forest model.</p>
Full article ">Figure 6
<p>Structure of the rule set for classification using the CNN.</p>
Full article ">Figure 7
<p>Graphical representation of the procedure for deriving the input variables into the error matrix.</p>
Full article ">Figure 8
<p>Visualization of an aerial image (<b>A</b>) with Ground Truth layer (<b>B</b>) and object classification using RF (<b>C</b>).</p>
Full article ">Figure 9
<p>Visualization of an aerial image (<b>A</b>) with Ground Truth layer (<b>B</b>) and pixel-based classification using RF (<b>C</b>).</p>
Full article ">
18 pages, 4027 KiB  
Article
Effect of Albedo Footprint Size on Relationships between Measured Albedo and Forest Attributes for Small Forest Plots
by Eirik Næsset Ramtvedt, Hans Ole Ørka, Ole Martin Bollandsås, Erik Næsset and Terje Gobakken
Remote Sens. 2024, 16(16), 3085; https://doi.org/10.3390/rs16163085 - 21 Aug 2024
Cited by 1 | Viewed by 632
Abstract
The albedo of boreal forests depends on the properties of the forest and is a key parameter for understanding the climate impact of forest management practices at high northern latitudes. While high-resolution albedo retrievals from satellites remain challenging, unmanned aerial vehicles (UAVs) offer [...] Read more.
The albedo of boreal forests depends on the properties of the forest and is a key parameter for understanding the climate impact of forest management practices at high northern latitudes. While high-resolution albedo retrievals from satellites remain challenging, unmanned aerial vehicles (UAVs) offer the ability to obtain albedo corresponding to the typical size of forest stands or even smaller areas, such as forest plots. Plots and pixels of sizes in the typical range of 200–400 m² are used as the basic units in forest management in the Nordic countries. In this study, the aim was to evaluate the effect of the differences in the footprint size of the measured albedo and fixed-area forest plots on the relationship between albedo and forest attributes. This was performed by examining the correlation between albedo and field-measured forest attributes and metrics derived from airborne laser scanner data using linear regression models. The albedo was measured by a UAV above 400 m² circular forest plots (n = 128) for seven different flight heights above the top of the canopy. The flight heights were chosen so the plots were always smaller than the footprint of the measured albedo, with the area of a forest plot constituting 30–90% of the measured albedo. The pyranometer aboard the UAV measured the albedo with a cosine response across the footprint. We found the strongest correlation when the spatial size of the albedo footprint corresponded most closely to the size of the forest plots, i.e., when the target area constituted 80–90% of the measured albedo. The measured albedo of the plots in both regeneration forests and mature forests was highly sensitive (p-values ≤ 0.001) to the footprint size, with a mean albedo difference of 11% between the smallest and largest footprints. The mean albedo of regeneration forests was 33% larger than that of mature forests for footprint sizes corresponding to 90%. The study demonstrates the importance of matching the spatial sizes of albedo measurements and the target areas subject to measurement. Full article
(This article belongs to the Special Issue Remote Sensing of Solar Radiation Absorbed by Land Surfaces)
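For intuition on how footprint proportions map to flight heights: for an ideal cosine-response pyranometer looking straight down over a uniform Lambertian surface, the fraction of the signal contributed by a coaxial circle of radius r at height h is P = r²/(r² + h²), so h = r·√((1 − P)/P). The sketch below applies this idealized model to a 400 m² plot; the study's actual footprint derivation may differ:

```python
# Sketch: flight heights at which a 400 m² circular plot (r ≈ 11.28 m) would
# constitute 30–90% of the signal of an ideal cosine-response pyranometer over
# a uniform Lambertian surface: P = r² / (r² + h²)  =>  h = r * sqrt((1 - P) / P).
# This is an idealized model for illustration, not necessarily the paper's exact one.
import math

r = math.sqrt(400 / math.pi)  # radius of a 400 m² circular plot, ~11.28 m
for pct in range(30, 100, 10):  # the seven footprint proportions, 30% to 90%
    P = pct / 100
    h = r * math.sqrt((1 - P) / P)
    print(f"plot share {pct:2d}%  ->  flight height above canopy ~ {h:5.2f} m")
```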
Show Figures

Figure 1. (a) Location of the study area in southeastern Norway (black square); (b) the dots (n = 178) illustrate the systematic design of the forest plots from 1998 [26] for which field data were collected in 2022. The colored dots (n = 128, excluding the black ones) constitute the material of the current study. Blue dots (n = 83) correspond to fixed-area forest plots with albedo measurements for all seven footprint sizes (see Section 2.4.2), while orange dots (n = 45) illustrate forest plots with albedo measurements for only some of the footprint sizes. The green areas are classified as forest according to the official N50 topographic map series.
Figure 2. Set-up of the UAV platform with upward- and downward-looking pyranometers and their Bluetooth loggers (white devices).
Figure 3. The seven different measured albedo footprints (white lines), determined according to the proportion (%) a fixed-area forest plot (white shading) constitutes of the measured albedo. The innermost white line illustrates the albedo footprint when the fixed-area forest plot constitutes 90% of the measured albedo, while the outermost white line illustrates the footprint when the plot constitutes 30%. The white lines are given for each 10% interval.
Figure 4. Adjusted coefficient of determination (Rₐ², upper panel) and RMSE (lower panel) for LMs of forest attributes and different footprint sizes of measured albedo (i.e., the proportion (%) a fixed-area forest plot constitutes of the measured albedo; see Figure 3 and Table 2 for definition). The LMs are based on field data from the whole dataset (both regeneration and mature forests).
Figure 5. Spread of mean height against different footprint sizes of measured albedo (color grading) based on the whole dataset (both regeneration and mature forests). The footprint size of measured albedo corresponds to the proportion (%) a fixed-area (400 m²) forest plot constitutes of the measured albedo (see Figure 3 and Table 2 for definition).
Figure 6. Adjusted coefficient of determination (Rₐ², upper panel) and RMSE (lower panel) for LMs of forest attributes and different footprint sizes of measured albedo, as in Figure 4, but with the LMs based on ALS data from the whole dataset (both regeneration and mature forests).
18 pages, 20239 KiB  
Article
Geoclimatic Distribution of Satellite-Observed Salinity Bias Classified by Machine Learning Approach
by Yating Ouyang, Yuhong Zhang, Ming Feng, Fabio Boschetti and Yan Du
Remote Sens. 2024, 16(16), 3084; https://doi.org/10.3390/rs16163084 - 21 Aug 2024
Viewed by 816
Abstract
Sea surface salinity (SSS) observed by satellite has been widely used since the successful launch of the first salinity satellite in 2009. However, compared with other oceanographic satellite products (e.g., sea surface temperature, SST) that became operational in the 1980s, the SSS product [...] Read more.
Sea surface salinity (SSS) observed by satellite has been widely used since the successful launch of the first salinity satellite in 2009. However, compared with other oceanographic satellite products (e.g., sea surface temperature, SST) that became operational in the 1980s, the SSS product is less mature and lacks effective validation from the user end. We employed an unsupervised machine learning approach to classify the Level 3 SSS bias from the Soil Moisture Active Passive (SMAP) satellite and its observing environment. The classification model divides the samples into fifteen classes based on four variables: satellite SSS bias, SST, rain rate, and wind speed. SST is one of the most significant factors influencing the classification. In regions with cold SST, satellite SSS has an accuracy of less than 0.2 PSU (Practical Salinity Unit), mainly due to the higher uncertainty in the cold environment. A small number of observations near the seawater freezing point show a significant fresh bias caused by sea ice. A systematic bias of the SMAP SSS product is found in the mid-latitudes: positive bias tends to occur north (south) of 45°N(S) and negative bias is more common in 25°N(S)–45°N(S) bands, likely associated with the SMAP calibration scheme. A significant bias also occurs in regions with strong ocean currents and eddy activities, likely due to spatial mismatch in the highly dynamic background. Notably, satellite SSS and in situ data correlations remain good in similar environments with weaker ocean dynamic activities, implying that satellite salinity data are reliable in dynamically active regions for capturing high-resolution details. The features of the SMAP SSS shown in this work call for careful consideration by the data user community when interpreting biased values. Full article
(This article belongs to the Section Ocean Remote Sensing)
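The class-selection step described above, a Gaussian mixture model whose class count is chosen by the Bayesian information criterion (cf. Figure 2 below), can be sketched as follows; the four-variable sample matrix here is a synthetic stand-in for the collocated satellite/in situ data:

```python
# Sketch of GMM classification with BIC-based model selection, in the spirit of
# the workflow above. The four features (SSS bias, SST, rain rate, wind speed)
# are synthetic stand-ins; the real study uses collocated SMAP/in situ samples.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = np.column_stack([
    rng.normal(0.0, 0.2, 5000),   # ΔS: satellite-minus-in-situ SSS bias (PSU)
    rng.uniform(-2, 30, 5000),    # SST (°C)
    rng.exponential(2.0, 5000),   # rain rate (mm/day)
    rng.rayleigh(5.0, 5000),      # wind speed (m/s)
])
Xs = StandardScaler().fit_transform(X)

# Pick the number of classes by the Bayesian information criterion (BIC).
bics = {k: GaussianMixture(k, random_state=0).fit(Xs).bic(Xs) for k in range(2, 21)}
k_best = min(bics, key=bics.get)
labels = GaussianMixture(k_best, random_state=0).fit_predict(Xs)
print("selected number of classes:", k_best)
```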
Show Figures

Figure 1. Flowchart of the data selection, assembly processing, and production of the final classification. The 15 maps show the geographical distribution of the 15 classes, and the coloured dots represent the ΔS values of the samples.
Figure 2. Ensemble mean (blue line) and spread (grey shading) of the BIC score for an increasing number of GMM classes. The black bars are the standard deviation of the ensemble mean. The BIC scores are computed for 50 random sample groups, each consisting of 90% of the total profiles.
Figure 3. Visualisation of the classification results. For each class, the mean value of SST, rain rate, and wind speed is plotted as a 3D coordinate. (a) Mean values of each class; marker size represents the sample size of the class, and marker colour the mean ΔS. To better illustrate the spread of the classes without hiding the small ones, the classes are subdivided into three subplots by temperature range: (b) classes with mean SST below 10 °C (triangle markers in (a)); (c) between 10–20 °C (square markers); (d) above 20 °C (round markers). The x-axis is SST, the y-axis wind speed, and the z-axis rain rate (plotted in log scale in (b–d) for ease of visualisation). The details of each class are given in Table 1.
Figure 4. Classes with mean SST higher than 25 °C. (a,b,e–g) Scatterplot maps of ΔS (unit: PSU) in classes K11, K13, K3, K15, and K8, respectively. The dotted area in (a) is where the number of members exceeds 200 in a 5° × 5° grid cell and the samples exceed 12; regions where samples are insufficient for identifying the predominant season are discarded. (c,d,h–j) Prevailing season of the observations in the same classes. Colours indicate that over 50% of the observations in the area were taken in the same season: blue is December to February of the next year, green is March to May, red is June to August, orange is September to November, and grey means there is no prevailing season in the area.
Figure 5. Classes with mean SST between 10 °C and 25 °C. (a,b,e–g) Scatterplot maps of ΔS in classes K1, K6, K9, K14, and K7, respectively. (c,d,h–j) Prevailing season of the observations. The legend is the same as Figure 4.
Figure 6. Scatterplot of all SMAP SSS bias observations on a PSU (x-axis) versus latitude (y-axis) plane. The coloured shading represents the observation count in a 0.02 PSU × 0.5° grid. The overlaid dashed lines are the mean rain rate (black) and the mean SSS (red) along latitude; their values are on the top x-axis.
Figure 7. Classes with mean SST lower than 10 °C. (a,b) Scatterplot maps of ΔS in classes K2 and K10, respectively. (c,d) Prevailing season of the observations in classes K2 and K10, respectively. The legend is the same as Figure 4.
Figure 8. The distribution of members in K12 and its relationship with sea ice concentration. (a) Scatterplot map of K12, where colour represents ΔS. (b) Prevailing season of the observations. (c) Scatter plot of observations with sea ice present within 50 km, with colour representing the percentage of ice concentration. (d) Observations and mean ΔS with respect to sea ice concentration. (e) Scatterplot within the classification parameter space, with the x-, y-, and z-axes representing SST, wind speed, and rain rate, respectively, and marker colour representing ΔS.
Figure 9. The distribution of members in K4 and its relationship with precipitation. (a) Scatterplot map of K4. (b) Prevailing season of the observations. (c) Annual mean precipitation. (d) Relation between ΔS and rain rate; colour is the member count in bins of 0.1 PSU (x-axis) by 2.5 mm/day (y-axis). (e) Scatterplot of the classification parameters, as in Figure 8e.
Figure 10. The distribution of members in K5 and its relationship with the sea surface current. (a) Scatter plot of K5. (b) Prevailing season of the observations. (c) Annual mean eddy kinetic energy (EKE) of the surface current (shading) overlaid with the mean surface current velocity (contours, unit: m/s). (d) Snapshot of SMAP SSS and ocean surface current: colour shading is SSS, quivers are the current, and the red pentagram marks an Argo observation.
14 pages, 549 KiB  
Communication
Joint Constant-Modulus Waveform and RIS Phase Shift Design for Terahertz Dual-Function MIMO Radar and Communication System
by Rui Yang, Hong Jiang and Liangdong Qu
Remote Sens. 2024, 16(16), 3083; https://doi.org/10.3390/rs16163083 - 21 Aug 2024
Viewed by 871
Abstract
This paper considers a terahertz (THz) dual-function multi-input multi-output (MIMO) radar and communication system with the assistance of a reconfigurable intelligent surface (RIS) and jointly designs the constant modulus (CM) waveform and RIS phase shifts. A weighted optimization scheme is presented, to minimize [...] Read more.
This paper considers a terahertz (THz) dual-function multi-input multi-output (MIMO) radar and communication system assisted by a reconfigurable intelligent surface (RIS) and jointly designs the constant modulus (CM) waveform and the RIS phase shifts. A weighted optimization scheme is presented to minimize the weighted sum of three objectives: communication multi-user interference (MUI) energy, the negative of the multi-target illumination power, and the MIMO radar waveform similarity error, under a CM constraint. For the formulated non-convex problem, a novel alternating coordinate descent (ACD) algorithm is introduced to transform it into two subproblems for waveform and phase-shift design. Unlike existing optimization algorithms that solve each subproblem by iteratively approximating the optimal solution with iteration step-size selection, the ACD algorithm alternately solves each subproblem by dividing it into multiple simpler problems that admit closed-form solutions. Our numerical simulations demonstrate the superiority of the ACD algorithm over existing methods. In addition, the impacts of the weighting coefficients, the RIS, and the channel conditions on the radar communication performance of the THz system are analyzed. Full article
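The flavor of the ACD idea, closed-form per-coordinate updates under a constant-modulus constraint, can be conveyed on a stripped-down MUI-energy objective min‖Ax − b‖² with |xᵢ| = 1: fixing all other entries, the optimal xᵢ aligns with the phase of aᵢᴴrᵢ, where rᵢ is the residual excluding xᵢ. Below is a sketch under this simplification; the paper's full objective also weights radar illumination power, similarity error, and RIS phases:

```python
# Sketch: coordinate descent with closed-form per-entry updates for a
# constant-modulus least-squares problem min ||A x - b||^2 s.t. |x_i| = 1.
# A simplified stand-in for the ACD waveform subproblem described above.
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 16
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
b = rng.normal(size=m) + 1j * rng.normal(size=m)

x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # feasible constant-modulus start
for sweep in range(50):
    for i in range(n):
        r = b - A @ x + A[:, i] * x[i]  # residual with the x_i term removed
        # Closed-form minimizer over the unit circle: align x_i with a_i^H r.
        x[i] = np.exp(1j * np.angle(np.vdot(A[:, i], r)))
    # (A convergence check on the objective could stop the sweeps early.)
print("final MUI-like energy:", np.linalg.norm(A @ x - b) ** 2)
```

Each coordinate update decreases the objective monotonically, which is what lets the method dispense with step-size selection.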
Show Figures

Figure 1. RIS-aided THz dual-function MIMO radar and communication system with multiple targets and UEs.
Figure 2. Sum rate versus the transmit SNR for different algorithms.
Figure 3. Beampatterns of the transmit waveforms achieved by different algorithms.
Figure 4. Auto-correlation functions achieved by different algorithms.
Figure 5. Sum rate versus the transmit SNR of our algorithm under different weighting coefficients and RIS conditions.
Figure 6. Beampatterns of the transmit waveforms of our algorithm under different weighting coefficients and RIS conditions.
Figure 7. Average detection probability versus the radar transmit SNR of our algorithm under different weighting coefficients and RIS conditions.
Figure 8. Auto-correlation functions of the transmit waveforms of our algorithm under different weighting coefficients and RIS conditions.
Figure 9. Sum rate versus the transmit SNR under different channel conditions of the THz system.
24 pages, 4634 KiB  
Article
Multimodal Semantic Collaborative Classification for Hyperspectral Images and LiDAR Data
by Aili Wang, Shiyu Dai, Haibin Wu and Yuji Iwahori
Remote Sens. 2024, 16(16), 3082; https://doi.org/10.3390/rs16163082 - 21 Aug 2024
Cited by 1 | Viewed by 1172
Abstract
Although the collaborative use of hyperspectral images (HSIs) and LiDAR data in land cover classification tasks has demonstrated significant importance and potential, several challenges remain. Notably, the heterogeneity in cross-modal information integration presents a major obstacle. Furthermore, most existing research relies heavily on [...] Read more.
Although the collaborative use of hyperspectral images (HSIs) and LiDAR data in land cover classification tasks has demonstrated significant importance and potential, several challenges remain. Notably, the heterogeneity in cross-modal information integration presents a major obstacle. Furthermore, most existing research relies heavily on category names, neglecting the rich contextual information from language descriptions. Visual-language pretraining (VLP) has achieved notable success in image recognition within natural domains by using multimodal information to enhance training efficiency and effectiveness. VLP has also shown great potential for land cover classification in remote sensing. This paper introduces a dual-sensor multimodal semantic collaborative classification network (DSMSC2N). It uses large language models (LLMs) in an instruction-driven manner to generate land cover category descriptions enriched with domain-specific knowledge in remote sensing. This approach aims to guide the model to accurately focus on and extract key features. Simultaneously, we integrate and optimize the complementary relationship between HSI and LiDAR data, enhancing the separability of land cover categories and improving classification accuracy. We conduct comprehensive experiments on benchmark datasets like Houston 2013, Trento, and MUUFL Gulfport, validating DSMSC2N’s effectiveness compared to various baseline methods. Full article
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
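The language-guided matching that DSMSC2N builds on, scoring fused HSI/LiDAR features against embeddings of LLM-generated class descriptions, resembles CLIP-style classification. A minimal PyTorch sketch with assumed dimensions follows; the encoders, fusion layer, and similarity head here are illustrative placeholders, not the paper's ModaUnion architecture or its losses:

```python
# Minimal sketch of language-guided multimodal classification in the spirit of
# DSMSC2N: fuse HSI and LiDAR features, then score them against embeddings of
# class descriptions. Dimensions and layers are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSensorClassifier(nn.Module):
    def __init__(self, hsi_bands=144, lidar_dim=1, text_dim=512, n_classes=15):
        super().__init__()
        self.hsi_enc = nn.Sequential(nn.Linear(hsi_bands, 256), nn.ReLU(), nn.Linear(256, text_dim))
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU(), nn.Linear(64, text_dim))
        self.fuse = nn.Linear(2 * text_dim, text_dim)
        # Frozen text embeddings of the class descriptions (random stand-ins here;
        # the real model would embed LLM-generated descriptions).
        self.register_buffer("text_emb", F.normalize(torch.randn(n_classes, text_dim), dim=-1))

    def forward(self, hsi, lidar):
        z = self.fuse(torch.cat([self.hsi_enc(hsi), self.lidar_enc(lidar)], dim=-1))
        z = F.normalize(z, dim=-1)
        return z @ self.text_emb.T  # cosine-similarity logits per class

model = DualSensorClassifier()
logits = model(torch.randn(8, 144), torch.randn(8, 1))
print(logits.shape)  # torch.Size([8, 15])
```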
Show Figures

Figure 1. An overview of the proposed DSMSC²N.
Figure 2. Workflow for the automated construction of a high-dimensional spectral class descriptor collection.
Figure 3. Graphical representation of the ModaUnion encoder.
Figure 4. The mechanism of clustering.
Figure 5. Visualization of the Houston 2013 dataset. (a) Pseudo-color map of an HSI. (b) DSM of LiDAR. (c) Training sample map. (d) Testing sample map.
Figure 6. Visualization of the Trento dataset. (a) Pseudo-color map of an HSI. (b) DSM of LiDAR. (c) Training sample map. (d) Testing sample map.
Figure 7. Visualization of the MUUFL Gulfport dataset. (a) Pseudo-color map of an HSI. (b) DSM of LiDAR. (c) Training sample map. (d) Testing sample map.
Figure 8. t-SNE visualization of loss functions on Trento. (a) CE; (b) without HTBCL; (c) all.
Figure 9. Classification maps of Houston 2013. (a) Ground-truth map; (b) two-branch; (c) EndNet; (d) MDL-Middle; (e) MAHiDFNet; (f) FusAtNet; (g) CALC; (h) SepG-ResNet50; (i) DSMSC²N.
Figure 10. Classification maps of Trento. (a) Ground-truth map; (b) two-branch; (c) EndNet; (d) MDL-Middle; (e) MAHiDFNet; (f) FusAtNet; (g) CALC; (h) SepG-ResNet50; (i) DSMSC²N.
Figure 11. Classification maps of MUUFL Gulfport. (a) Ground-truth map; (b) two-branch; (c) EndNet; (d) MDL-Middle; (e) MAHiDFNet; (f) FusAtNet; (g) CALC; (h) SepG-ResNet50; (i) DSMSC²N.
20 pages, 2672 KiB  
Article
Low-Rank Discriminative Embedding Regression for Robust Feature Extraction of Hyperspectral Images via Weighted Schatten p-Norm Minimization
by Chen-Feng Long, Ya-Ru Li, Yang-Jun Deng, Wei-Ye Wang, Xing-Hui Zhu and Qian Du
Remote Sens. 2024, 16(16), 3081; https://doi.org/10.3390/rs16163081 - 21 Aug 2024
Viewed by 684
Abstract
Low-rank representation (LRR) is widely utilized in image feature extraction, as it can reveal the underlying correlation structure of data. However, the subspace learning methods based on LRR suffer from the problems of lacking robustness and discriminability. To address these issues, this paper [...] Read more.
Low-rank representation (LRR) is widely utilized in image feature extraction, as it can reveal the underlying correlation structure of data. However, subspace learning methods based on LRR suffer from a lack of robustness and discriminability. To address these issues, this paper proposes a new robust feature extraction method named weighted Schatten p-norm minimization via low-rank discriminative embedding regression (WSNM-LRDER), which integrates the weighted Schatten p-norm and linear embedding regression into the LRR model. In WSNM-LRDER, the weighted Schatten p-norm is adopted to relax the low-rank function, which can discover the underlying structural information of the image and enhance the robustness of projection learning. To improve the discriminability of the learned projection, an embedding regression regularization is constructed to make full use of prior information. Experimental results on three hyperspectral image datasets show that the proposed WSNM-LRDER achieves better performance than some advanced feature extraction methods. In particular, the proposed method yielded increases of more than 1.2%, 1.1%, and 2% in overall accuracy (OA) for the Kennedy Space Center, Salinas, and Houston datasets, respectively, when compared with the competing methods. Full article
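For the low-rank shrinkage step that a weighted Schatten p-norm model of this kind typically requires, the special case p = 1 (the weighted nuclear norm) has a closed-form proximal operator: shrink each singular value by its own threshold and rebuild the matrix, valid when the weights are non-decreasing in the order of the non-increasing singular values. Below is a sketch of this p = 1 case; general p < 1 instead needs an inner generalized soft-thresholding iteration:

```python
# Sketch: proximal operator of the weighted nuclear norm (weighted Schatten
# p-norm with p = 1), the kind of low-rank shrinkage step used in models like
# the one above. For general p < 1, each singular value needs an inner
# generalized soft-thresholding iteration; this closed form is the p = 1 case.
import numpy as np

def weighted_svt(Y, tau, weights):
    """Solve argmin_X 0.5*||X - Y||_F^2 + tau * sum_i w_i * sigma_i(X).

    Closed-form when the weights are non-decreasing in the order of the
    (non-increasing) singular values.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)  # per-value soft threshold
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 40)) @ rng.normal(size=(40, 40))
w = 1.0 / (np.linalg.svd(Y, compute_uv=False) + 1e-6)  # reweighting: shrink small values more
X = weighted_svt(Y, tau=5.0, weights=w)
print("rank before/after:", np.linalg.matrix_rank(Y), np.linalg.matrix_rank(X))
```

The inverse-magnitude weighting penalizes small singular values more than large ones, which is what makes the weighted relaxation a closer surrogate for the rank function than the plain nuclear norm.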
Show Figures

Figure 1. Convergence of WSNM-LRDER on three databases. (a) The Kennedy Space Center dataset; (b) the Salinas dataset; (c) the Houston dataset.
Figure 2. The change in classification OA for the HSI datasets with different dimensions. (a) The KSC dataset; (b) the Salinas dataset; (c) the Houston dataset.
Figure 3. Classification maps of different methods for the KSC. (a) Ground Truth. (b) All bands. (c) PCA. (d) LRE. (e) LRAGE. (f) DLRP. (g) RPL. (h) LRPER. (i) WSNM-LRDER (p = 2/3). (j) WSNM-LRDER (p = 0.8).
Figure 4. Classification maps of different methods for the Salinas. (a) Ground Truth. (b) All bands. (c) PCA. (d) LRE. (e) LRAGE. (f) DLRP. (g) RPL. (h) LRPER. (i) WSNM-LRDER (p = 2/3). (j) WSNM-LRDER (p = 0.8).
Figure 5. Classification maps of different methods for the Houston. (a) Ground Truth. (b) All bands. (c) PCA. (d) LRE. (e) LRAGE. (f) DLRP. (g) RPL. (h) LRPER. (i) WSNM-LRDER (p = 0.5). (j) WSNM-LRDER (p = 0.7).
Figure 6. The change in classification OA, AA, and Kappa for the HSI datasets yielded by WSNM-LRDER with different p. (a) The KSC dataset; (b) the Salinas dataset; (c) the Houston dataset.
Figure 7. The change in classification OA for the HSI datasets yielded by WSNM-LRDER with different λ and γ. (a) The KSC dataset; (b) the Salinas dataset; (c) the Houston dataset.
Figure 8. The change in classification OA, AA, and Kappa for the HSI datasets yielded by WSNM-LRDER with different λ under the condition γ = 0. (a) The KSC dataset; (b) the Salinas dataset; (c) the Houston dataset.