Remote Sens., Volume 16, Issue 9 (May-1 2024) – 174 articles

Cover Story: Sentinel-2 spectral data are regularly used in machine learning (ML) models, in conjunction with multitemporal and topographical data, to estimate soil organic carbon (SOC) content. However, topographical covariates are typically utilised in models with large study areas and lower spatial resolution. This study explores the use and importance of single-date and multitemporal Sentinel-2 spectral reflectance data, with the introduction of topographical covariates, to predict SOC content at an intra-field crop scale. Utilising high-resolution digital elevation models, ML models predict SOC content based on Sentinel-2 data and topography. The results demonstrate the efficacy of these models and suggest a potential negative correlation between the topographical wetness index and SOC at intra-field scales.
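As a concrete illustration of the covariate setup the cover story describes, the sketch below derives the topographic wetness index (TWI) from DEM products and feeds it, together with Sentinel-2 reflectance, into a random forest regressor. This is a minimal, hedged example: the array names, the TWI inputs (specific catchment area and slope), and the use of scikit-learn's RandomForestRegressor are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative inputs (assumed, not from the paper):
# s2_bands: (n_samples, n_bands) Sentinel-2 surface reflectance at sample points
# catchment_area: specific catchment area a (m) at each sample point
# slope_rad: terrain slope beta (radians) from a high-resolution DEM
def topographic_wetness_index(catchment_area, slope_rad, eps=1e-6):
    """TWI = ln(a / tan(beta)); eps guards against division by zero on flat cells."""
    return np.log(catchment_area / (np.tan(slope_rad) + eps))

def fit_soc_model(s2_bands, catchment_area, slope_rad, soc):
    twi = topographic_wetness_index(catchment_area, slope_rad)
    X = np.column_stack([s2_bands, twi])          # spectral + topographic covariates
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X, soc)                             # soc: measured SOC content (e.g., g/kg)
    return model
```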
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 15092 KiB  
Article
Terrace Extraction Method Based on Remote Sensing and a Novel Deep Learning Framework
by Yinghai Zhao, Jiawei Zou, Suhong Liu and Yun Xie
Remote Sens. 2024, 16(9), 1649; https://doi.org/10.3390/rs16091649 - 6 May 2024
Viewed by 1687
Abstract
Terraces, farmlands built along hillside contours, are common anthropogenically designed landscapes. Terraces control soil and water loss and improve land productivity; therefore, obtaining their spatial distribution is necessary for soil and water conservation and agricultural production. Spatial information on large-scale terraces can be obtained using satellite images and deep learning. However, when extracting terraces, accurately segmenting terrace boundaries and identifying small terraces in diverse scenarios remains challenging. To solve this problem, we combined two deep learning modules, ANB-LN and DFB, to produce a new deep learning framework (NLDF-Net) for terrace extraction from remote sensing images. The model first extracts terrace features through the encoding stage to obtain abstract semantic features and then gradually recovers the original size through the decoding stage using feature fusion. In addition, we constructed a terrace dataset (the HRT-set) for Guangdong Province and conducted a series of comparative experiments on it using the new framework. The experimental results show that our framework achieved the best extraction performance among the compared deep learning methods, providing a method and reference for extracting ground objects from remote sensing images.
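The encoder–decoder pattern described in the abstract (encode to abstract semantic features, then decode back to the original size with feature fusion) can be sketched in a few lines of PyTorch. The block below is a generic U-shaped stand-in: the ConvBlock placeholder and channel widths are assumptions, not the actual ANB-LN and DFB modules of NLDF-Net.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions with BN and ReLU (placeholder for ANB-LN/DFB)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class EncoderDecoder(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = ConvBlock(in_ch, 64), ConvBlock(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)  # recover the original size
        self.dec1 = ConvBlock(128, 64)                      # 128 = 64 (skip) + 64 (upsampled)
        self.head = nn.Conv2d(64, n_classes, 1)
    def forward(self, x):
        f1 = self.enc1(x)                   # encoder: abstract semantic features
        f2 = self.enc2(self.pool(f1))
        up = self.up(f2)
        fused = torch.cat([f1, up], dim=1)  # feature fusion with the encoder skip
        return self.head(self.dec1(fused))  # per-pixel terrace / background logits
```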
Figure 1. (a) Topographical map of Guangdong province and (b–d) images of classical terrace distribution areas, including (b) a terrace in Hongguan Town, Xinyi, Maoming, Guangdong; (c) a terrace in Chaotian Town, Lianzhou, Qingyuan, Guangdong; and (d) a terrace in Tanjitou, Fengkai County, Zhaoqing, Guangdong.
Figure 2. Schematic of the research workflow.
Figure 3. Architecture of the NLDF-Net framework.
Figure 4. Some classical examples of remote sensing images from the HRT-set: (a) rice terraces in the bare-soil stage; (b) rice terraces in the planting stage; (c) shrub terraces in the bare-soil stage; (d) shrub terraces in the planting stage; (e) neat forest belts; (f) ridges and furrows; (g) fields; (h) striped roads; and (i–l) fragmented terraces.
Figure 5. Evaluation metrics for the comparisons of different module combinations; the bolded part is the highest value for each indicator. (a) OA, precision, and recall results. (b) F1 results and corresponding t-test results, where * means 0.01 ≤ p < 0.05 and ** means p < 0.01. (c) IoU results and corresponding t-test results, with the same significance notation.
Figure 6. Visual comparison of the terrace extraction results among different experiment groups. (a) Original images. (b) Ground truth labels. (c–g) Predicted labels of No-attention, Softmax-attention, Add-fusion, Concat-fusion, and NLDF-Net, respectively.
Figure 7. Evaluation metrics for the comparisons with state-of-the-art deep learning models; the bolded part is the highest value for each indicator. (a) OA, precision, and recall results. (b) F1 results and corresponding t-test results, where ** means p < 0.01. (c) IoU results and corresponding t-test results, where ** means p < 0.01.
Figure 8. Visual comparison of the terrace extraction results with different comparison algorithms: (a) original images, (b) ground truth labels, and (c–g) the predicted labels of (c) PSP-Net, (d) U-Net, (e) IEU-Net, (f) D-Link, and (g) NLDF-Net.
23 pages, 16364 KiB  
Article
Mapping the Continuous Cover of Invasive Noxious Weed Species Using Sentinel-2 Imagery and a Novel Convolutional Neural Regression Network
by Fei Xing, Ru An, Xulin Guo and Xiaoji Shen
Remote Sens. 2024, 16(9), 1648; https://doi.org/10.3390/rs16091648 - 6 May 2024
Cited by 1 | Viewed by 1359
Abstract
Invasive noxious weed species (INWS) are typical poisonous plants and forbs that are considered an increasing threat to the native alpine grassland ecosystems of the Qinghai–Tibetan Plateau (QTP). Accurate knowledge of the continuous cover of INWS across complex alpine grassland ecosystems over a large scale is required for their control and management. However, the co-occurrence of INWS and native grass species results in highly heterogeneous grass communities and generates mixed pixels detected by remote sensors, which causes uncertainty in classification. Continuous coverage of INWS at the pixel level has not yet been achieved. In this study, objective 1 was to test the capability of Sentinel-2 imagery for estimating continuous INWS cover across complex alpine grasslands over a large scale, and objective 2 was to assess the performance of a state-of-the-art convolutional neural network-based regression (CNNR) model in estimating continuous INWS cover. Therefore, a novel CNNR model and a random forest regression (RFR) model were evaluated for estimating INWS continuous cover using Sentinel-2 imagery. INWS continuous cover was estimated directly from Sentinel-2 imagery with an R2 ranging from 0.88 to 0.93 using the CNNR model. The RFR model combined with multiple features had comparable accuracy, slightly lower than that of the CNNR model, with an R2 of approximately 0.85. Twelve green-band-, red-edge-band-, and near-infrared-band-related features made important contributions to the RFR model. Our results demonstrate that the CNNR model performs well when estimating INWS continuous cover directly from Sentinel-2 imagery, and that the RFR model combined with multiple features derived from Sentinel-2 imagery can also be used for INWS continuous cover mapping. Sentinel-2 imagery is suitable for mapping continuous INWS cover across complex alpine grasslands over a large scale. Our research provides information for the advanced mapping of the continuous cover of invasive species across complex grassland ecosystems or, more widely, terrestrial ecosystems over large spatial areas using remote sensors such as Sentinel-2.
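A minimal sketch of the patch-based CNN regression idea: a small network takes a Sentinel-2 patch centred on a sample plot and outputs fractional cover through a sigmoid head, so predictions stay in [0, 1]. Band count, channel widths, and layer sizes here are illustrative assumptions, not the paper's exact CNNR configuration.

```python
import torch
import torch.nn as nn

class CNNRegressor(nn.Module):
    """Patch-in, scalar-out regression: three conv layers, two FC layers,
    and a sigmoid head so the predicted cover lies in [0, 1]."""
    def __init__(self, n_bands=10, patch=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU())
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * patch * patch, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, x):          # x: (batch, n_bands, patch, patch)
        return self.regressor(self.features(x)).squeeze(1)  # fractional INWS cover
```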
Figure 1. Study area. (a) Location of the TRHR and QTP at the Asia scale. (b) Location of the study area in the TRHR. (c) Sampling sites and the study area shown by Sentinel-2 (R: b8, G: b2, B: b1) imagery.
Figure 2. Schematic diagram of the sample plots: (a) 30 m × 30 m quadrat; (b) 1 m × 1 m subplot quadrat in the field.
Figure 3. Schematic diagram of the convolutional neural network regression model used in this study. The input layer was composed of Sentinel-2 imagery and the field observation points with INWS continuous cover, followed by three convolutional layers, two fully connected layers, and a regression layer with a sigmoid activation function for INWS continuous cover estimation. Conv + BN + ReLU represents a convolutional layer followed by a batch normalization layer and a ReLU activation function.
Figure 4. Input patch size. The green point represents the center of the 30 m × 30 m sample plot. The red line represents an input image patch size of 3 × 3, the yellow line 5 × 5, and the blue line 7 × 7.
Figure 5. Field-observed INWS cover (%). (a) INWS cover distribution. (b) INWS cover variation in the longitudinal direction. (c) INWS cover variation in the latitudinal direction. (d) INWS cover variation along the elevation gradient.
Figure 6. Feature importance evaluation based on RFR: (a) Bands, (b) VIs, (c) Bands + VIs, (d) PCA + VIs, and (e) Bands + PCA + VIs.
Figure 7. Scatterplots showing model performance in estimating INWS continuous cover for the study area. (a) Random forest regression (RFR) model with Bands + VIs features. (b) Convolutional neural network (CNN) model with Sentinel-2 bands. (c) CNN model with PCA.
Figure 8. The spatial pattern of estimated INWS continuous cover over the study area. (a) INWS cover along the elevation gradient. (b) RFR model. (c) CNNR model with Sentinel-2 bands. (d) CNNR model with PCA. Areas other than grassland were masked using land cover/use data.
24 pages, 8326 KiB  
Article
High Resolution Ranging with Small Sample Number under Low SNR Utilizing RIP-OMCS Strategy and AHRC l1 Minimization for Laser Radar
by Min Xue, Mengdao Xing, Yuexin Gao, Jixiang Fu, Zhixin Wu and Wangshuo Tang
Remote Sens. 2024, 16(9), 1647; https://doi.org/10.3390/rs16091647 - 6 May 2024
Viewed by 1131
Abstract
This manuscript presents a novel scheme to achieve high-resolution laser-radar ranging with a small sample number under low signal-to-noise ratio (SNR) conditions. To reduce the sample number, the Restricted Isometry Property-based optimal multi-channel coprime-sampling (RIP-OMCS) strategy is established. In the RIP-OMCS strategy, the data collected across multiple channels with very low coprime-sampling rates can record accurate range information on each target. Further, the asynchronous problem caused by channel sampling-time errors is considered. The sampling-time errors are estimated using the cross-correlation function. After the asynchronous problem is resolved, the data collected by the multiple channels are merged into non-uniformly sampled signals. Using this combined data, target-range estimation is converted into a sparse-representation optimization problem over a non-uniform Fourier dictionary. This optimization problem is solved using adaptive hybrid re-weighted constraint (AHRC) l1 minimization. Two constraints are formed from statistical attributes of the targets and clutter. Moreover, as the detailed characteristics of the target, clutter, and noise are unknown before the solution, the two constraints can be adaptively modified, which guarantees that l1 minimization obtains the high-resolution range profile and accurate distance of all targets under a low SNR. Our experiments confirmed the effectiveness of the proposed method.
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
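The core recovery step, sparse representation over a non-uniform Fourier dictionary solved by l1 minimization, can be sketched as follows. A plain LASSO solver stands in for the adaptive hybrid re-weighted constraint (AHRC) scheme, and all variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def range_profile_l1(samples, t_nonuniform, freqs, alpha=0.01):
    """Sparse range-profile recovery from non-uniformly sampled echo data.
    samples: real-valued echo samples; t_nonuniform: merged sample times;
    freqs: candidate range frequencies (one per range bin)."""
    phase = 2 * np.pi * np.outer(t_nonuniform, freqs)
    # Non-uniform Fourier dictionary: cosine and sine atoms per candidate frequency.
    A = np.hstack([np.cos(phase), np.sin(phase)])
    solver = Lasso(alpha=alpha, max_iter=10000)   # l1-regularized least squares
    solver.fit(A, samples)
    c, s = np.split(solver.coef_, 2)
    return np.hypot(c, s)   # amplitude per candidate range bin; peaks mark targets
```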
Figure 1. Typical laser-radar ranging system.
Figure 2. Principle diagram of laser-radar ranging.
Figure 3. Multi-channel coprime low-sampling scheme.
Figure 4. Flowchart of the proposed method.
Figure 5. Asynchronous sampling-time error diagram.
Figure 6. Sampling-time error-estimation diagram.
Figure 7. Flowchart of the AHRC.
Figure 8. Flowchart of the range-estimation method based on AHRC l1 minimization.
Figure 9. Performance comparison of different coprime combinations. (a) Maximum and minimum eigenvalues of different coprime combinations; (b) RICs of different coprime combinations.
Figure 10. Performance comparison of optimal coprime combinations. (a) Range profile of Nyquist sampling. (b) Range profile of ADC1. (c) Range profile of ADC2. (d) Range profile of ADC3. (e) Range profile of the proposed method. (f) Target phase difference. (g) Two-dimensional figure of the target range phase. (h) CORR under 10 scenes.
Figure 11. Comparison of the results before and after sampling-time error compensation. (a) With compensation. (b) Without compensation. (c) Two-dimensional figure of the target range phase. (d) CORR under 10 sampling-time error scenarios.
Figure 12. Average correlation coefficient under different sampling-time error conditions. (a) Correlation coefficient under Channel 2 with a sampling-time error of 0–0.2·T_P. (b) Correlation coefficient under Channel 3 with a sampling-time error of 0–0.2·T_P. (c) Correlation coefficient under Channels 2 and 3 in 0–0.2·T_P sampling-time error conditions.
Figure 13. Target-range-estimation results of different algorithms under different SNRs.
Figure 14. Average correlation coefficients of the four methods under different SNRs in multiple scenarios.
Figure 15. Target-range-estimation results of different algorithms under 10 dB SNR when the sampling-time error is 0.02·T_P.
Figure 16. Target-range-estimation results of different algorithms under 10 dB SNR when the sampling-time error is 0.1·T_P.
Figure 17. Target-range-estimation results of different algorithms under 10 dB SNR when the sampling-time error is 0.2·T_P.
Figure 18. Average correlation coefficients of the four methods under different SNRs in multiple scenarios with sampling-time error compensation. (a) Sampling-time error of 0.02·T_P. (b) Sampling-time error of 0.1·T_P. (c) Sampling-time error of 0.2·T_P.
14 pages, 9313 KiB  
Article
Remote Detection of Geothermal Alteration Using Airborne Light Detection and Ranging Return Intensity
by Yan Restu Freski, Christoph Hecker, Mark van der Meijde and Agung Setianto
Remote Sens. 2024, 16(9), 1646; https://doi.org/10.3390/rs16091646 - 5 May 2024
Viewed by 1813
Abstract
The remote detection of hydrothermally altered grounds in geothermal exploration demands datasets capable of reliably detecting key outcrops with fine spatial resolution. While optical thermal or radar-based datasets have resolution limitations, airborne LiDAR offers point-based detection through its LiDAR return intensity (LRI) values, serving as a proxy for surface reflectivity. Despite this potential, few studies have explored LRI value variations in the context of hydrothermal alteration and their utility in distinguishing altered from unaltered rocks. Although the link between alteration degree and LRI values has been established under laboratory conditions, this relationship has yet to be demonstrated in airborne data. This study investigates the applicability of laboratory results to airborne LRI data for alteration detection. Utilising LRI data from an airborne LiDAR point cloud (wavelength 1064 nm, density 12 points per square metre) acquired over a prospective geothermal area in Bajawa, Indonesia, where rock sampling for a related laboratory study took place, we compare the airborne LRI values within each ground sampling area of a 3 m radius (due to hand-held GPS uncertainty) with laboratory LRI values of corresponding rock samples. Our findings reveal distinguishable differences between strongly altered and unaltered samples, with LRI discrepancies of approximately ~28 for airborne data and ~12 for laboratory data. Furthermore, the relative trends of airborne and laboratory-based LRI data concerning alteration degree exhibit striking similarity. These consistent results for alteration degree in laboratory and airborne data mark a significant step towards LRI-based alteration mapping from airborne platforms.
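The airborne-versus-laboratory comparison described above reduces to normalising both LRI datasets to a comparable range and fitting a linear regression, as sketched below. The numeric values are hypothetical placeholders, not the study's measurements; only the reported R² of 0.92 comes from the paper.

```python
import numpy as np
from scipy.stats import linregress

def minmax(x):
    """Normalize LRI values to [0, 1] so both datasets share a comparable range."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical per-sample mean LRI values (one entry per rock sample):
airborne_lri = minmax([41.0, 38.5, 55.2, 62.8, 66.1])
lab_lri      = minmax([20.1, 19.0, 26.3, 30.4, 31.9])

fit = linregress(lab_lri, airborne_lri)
print(f"R^2 = {fit.rvalue**2:.2f}")  # the paper reports R^2 = 0.92 for its samples
```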
Figure 1. The study area is located in the Bajawa area, central Flores Island, Indonesia, part of the Lesser Sunda volcanic arc (a,b), surrounded by active volcanoes (Mt. Inierie, Mt. Inielika, and Mt. Ebulobo in (b)) and monogenetic volcanoes (shaded orange in (c)). Surface expressions of volcanic activity indicate the presence of geothermal systems beneath Bajawa City and the Mataloko production well (c). The airborne datasets were obtained from Wawomuda and Manulalu (shaded red in (c)) and covered the sampling locations (red dots with sample names in (c)). The hill-shaded topographic map in the background (including the modified inset map) is available from ESRI.
Figure 2. Field alteration map and photographs showing the outcrops and sampling locations in Wawomuda (a,c) and Manulalu (b,d). In Wawomuda, the samples were collected at the foot of the crater wall (a,c). The outcrop of strongly altered rocks (i.e., the source of SA_PC, SA_PP, and SA_PF) builds up the lower section of the Wawomuda Crater wall, with weakly altered rocks above it (i.e., the source of WA_PF) and no solid boundary (red dashed line). The sampling location of WA_PP is behind the observer (c). In Manulalu, the outcrop is composed of a breccia of unaltered volcanic rock (i.e., the source of UA_PA; see the breccia fragments indicated by red arrows in (d)). The hill-shaded topographic map in the background is generated from the LiDAR dataset.
Figure 3. Comparison of LRI trends from the laboratory ((a), from [27]) and the airborne dataset (b). The increasing trends similarly show that higher LRI values result from higher alteration degrees (a,b). The standard deviations of the airborne LRI values are larger than those derived from samples in the laboratory (c).
Figure 4. The linear relationship between airborne and laboratory mean LRI (colours as in Figure 3). The regression line, with an R² of 0.92, indicates that the airborne and laboratory data share a strong linear relationship. LRI values increase in both datasets with alteration degree (with the exception of one weakly altered sample plotting between the strongly altered samples). Note that both LRI datasets have been normalised to a comparable range.
Figure 5. Sampling areas with filtered and coloured airborne LRI points (a,c,e) and the corresponding alteration degrees from fieldwork (b,d,f). Note that points with low LRI are found along gullies (arrows in (c,d)). For orientation relative to the outcrop photograph (Figure 2a), the ridge next to the sampling locations is indicated with an orange dashed line (f). The hill-shaded topographic map with contour lines in metres in the background is generated from the LiDAR point cloud.
18 pages, 5315 KiB  
Article
Analysis of Atmospheric Boundary Layer Characteristics on Different Underlying Surfaces of the Eastern Tibetan Plateau in Summer
by Xiaohang Wen, Jie Ma and Mei Chen
Remote Sens. 2024, 16(9), 1645; https://doi.org/10.3390/rs16091645 - 5 May 2024
Cited by 2 | Viewed by 1120
Abstract
The atmospheric boundary layer is a key region for human activities and the interaction of various layers and is an important channel for the transport of momentum, heat, and various substances between the free atmosphere and the surface, with a significant impact on the development of weather and climate change. During the Second Tibetan Plateau Scientific Expedition and Research Program (STEP) in June 2022, utilizing the comprehensive stereoscopic observation experiment of the "Plateau Low Vortex Network", this study analyzed the variation characteristics and influencing factors of the atmospheric boundary layer height (ABLH) at three stations with different underlying surface types on the Qinghai–Tibet Plateau (QTP): Qumalai Station (grassland), the Southeast Tibet Observation and Research Station for the Alpine Environment (SETORS, forest), and Sieshan Station (cropland). The analysis utilized sounding observation data, microwave radiometer data, and ERA5 reanalysis data. The results revealed that the temperature differences between the sounding observation data and microwave radiometer data were minor at the three stations, with a notable temperature inversion phenomenon observed at Sieshan Station. Regarding water vapor density, the differences between the sounding observation data and microwave radiometer data were relatively small at Sieshan Station. The relative humidity increased with height at Sieshan Station, whereas it first increased and then decreased with height at SETORS and Qumalai Station. The ABLH at all sites reached its maximum value around noon, approximately 1500 m, and mostly exhibited convective boundary layer (CBL) characteristics. During the night, the ABLH mostly showed a stable boundary layer (SBL) pattern, with heights around 250 m. In summer, latent heat flux (LE) and sensible heat flux (H) in the eastern plateau were generally lower than those in the western plateau, except at 20:00, when they were higher. Vertical velocity (w) in the eastern plateau was greater than in the western plateau. At Sieshan Station and SETORS, LE and H had the most significant impact on ABLH, while at Qumalai Station, ABLH was more influenced by surface long-wave radiation (Rlu). These four influencing factors showed a positive correlation with ABLH. The impact of different underlying surface types on ABLH primarily manifests in surface temperature variations, solar radiation intensity, vegetation cover, and terrain. Grasslands typically exhibit a larger range of ABLH variations, while the ABLH in forests and mountainous cropland areas is relatively stable.
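Figure 2 of this paper refers to CBL height determination by the gas block (parcel) method; a minimal sketch of that idea on a potential-temperature profile is shown below. The stability offset and the synthetic sounding are assumptions for illustration, not the paper's processing.

```python
import numpy as np

def cbl_height_parcel(z, theta, delta=0.5):
    """Parcel ('gas block') method: the CBL top is the lowest height where the
    environmental potential temperature exceeds the near-surface parcel value.
    z: heights (m, ascending); theta: potential temperature profile (K);
    delta: stability offset (K), an assumed tuning constant."""
    theta_parcel = theta[0] + delta
    above = np.where(theta > theta_parcel)[0]
    return z[above[0]] if above.size else z[-1]

# Example with a synthetic sounding: well-mixed to ~1500 m, inversion above.
z = np.arange(0, 3000, 50.0)
theta = 300.0 + np.where(z < 1500, 0.0, 0.006 * (z - 1500))
print(cbl_height_parcel(z, theta))  # 1600.0 m, consistent with a noon CBL of ~1500 m
```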
Figure 1. Spatial pattern of land use and station distribution [21].
Figure 2. Schematic diagram of CBL height determination by the gas block method.
Figure 3. Vertical distribution characteristics of mean temperature and water vapor density from microwave radiometer data (a,c) and sounding observation data (b,d) at Sieshan Station (subfigures (a–d) share a common legend).
Figure 4. Vertical distribution characteristics of mean relative humidity and specific humidity at Sieshan Station (a,d), SETORS (b,e), and Qumalai Station (c,f) (subfigures (a–f) share a common legend).
Figure 5. Vertical distribution characteristics of mean wind direction and wind speed at Sieshan Station (a,d), SETORS (b,e), and Qumalai Station (c,f) (subfigures (a–f) share a common legend).
Figure 6. Diurnal variation of mean ABLH at the three stations.
Figure 7. Diurnal variation of the sample numbers of CBL and SBL at the three stations (subfigures (a–c) share a common legend).
Figure 8. Box plot of ABLH at the three stations; * indicates outliers.
Figure 9. Hourly variation of the mean ABLH from the ERA5 reanalysis data and the observed data in June.
Figure 10. Spatial distribution of LE (a–d), H (e–h), and Rlu (i–l) for four time periods in June (unit: W/m²).
Figure 11. Latitudinal profiles of w at four times in June (positive w means rising, negative means sinking; unit: Pa/s).
Figure 12. Average daily variation of LE, H, Rlu, w, and ABLH (top: LE, H, Rlu, and ABLH; bottom: w and ABLH) (subfigures (a–c) share a common legend).
17 pages, 4891 KiB  
Article
A Technique for SAR Significant Wave Height Retrieval Using Azimuthal Cut-Off Wavelength Based on Machine Learning
by Shaijie Leng, Mengyu Hao, Weizeng Shao, Armando Marino and Xingwei Jiang
Remote Sens. 2024, 16(9), 1644; https://doi.org/10.3390/rs16091644 - 5 May 2024
Cited by 1 | Viewed by 1339
Abstract
This study introduces a new machine learning-based algorithm for retrieving the significant wave height (SWH) from synthetic aperture radar (SAR) images. The algorithm is based on the azimuthal cut-off wavelength and was developed for quad-polarized stripmap (QPS) mode in coastal waters. The collected images are collocated with a wave simulation from the numeric model WAVEWATCH-III (WW3) and the current speed from the HYbrid Coordinate Ocean Model (HYCOM). The sea surface wind is retrieved from the image at the vertical–vertical polarization channel using the geophysical model function (GMF) CSARMOD-GF. The results of the algorithm were validated against measurements from the Haiyang-2B (HY-2B) scatterometer, yielding a root mean squared error (RMSE) of 1.99 m/s with a 0.82 correlation (COR) and a 0.27 scatter index for wind speed. It was found that the SWH depends on the wind speed and the azimuthal cut-off wavelength, whereas the current speed has less of an influence on the azimuthal cut-off wavelength. Following this rationale, four widely known machine learning methods were employed that take the SAR-derived azimuthal cut-off wavelength, wind speed, and radar incidence angle as inputs and output the SWH. The validation shows that the SAR-derived SWH by eXtreme Gradient Boosting (XGBoost) against the HY-2B altimeter products has a 0.34 m RMSE with a 0.97 COR and a 0.07 bias, which is better than the results obtained using an existing algorithm (a 1.10 m RMSE with a 0.77 COR and a 0.44 bias) and the other three machine learning methods (a >0.58 m RMSE with a <0.95 COR), namely convolutional neural networks (CNNs), Support Vector Regression (SVR), and the ridge regression model (RR). As a result, XGBoost is a highly efficient approach for GF-3 wave retrieval at regular sea states.
(This article belongs to the Section Ocean Remote Sensing)
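The azimuthal cut-off wavelength that drives this retrieval is commonly estimated by fitting a Gaussian to the azimuthal autocorrelation function of a SAR sub-scene; a minimal sketch of that fit is below. The function names and the initial guess are assumptions, and the paper's exact implementation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def azimuth_cutoff(acf, dx):
    """Estimate the azimuthal cut-off wavelength lambda_c by fitting
    C(x) = exp(-(pi * x / lambda_c)**2) to the one-sided, normalized
    azimuthal autocorrelation function of the SAR sub-scene.
    acf: normalized azimuthal ACF samples; dx: azimuth pixel spacing (m)."""
    x = np.arange(acf.size) * dx
    gauss = lambda x, lam: np.exp(-(np.pi * x / lam) ** 2)
    popt, _ = curve_fit(gauss, x, acf, p0=[150.0])  # 150 m is an assumed first guess
    return popt[0]  # cut-off wavelength in metres
```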
Figure 1. A quick look at a Gaofen-3 (GF-3) synthetic aperture radar (SAR) image in vertical–vertical (VV) polarization after calibration, taken at 09:44 UTC on 29 September 2021.
Figure 2. Frames of all images. Black and blue rectangles represent the spatial coverage of the images.
Figure 3. Two-dimensional SAR spectrum at spatial scales between 800 m and 3 km extracted from the image in Figure 1; the red line represents wind direction with 180° ambiguity.
Figure 4. European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-5) wind at 10:00 UTC on 29 September 2021; the black rectangle represents the spatial coverage of the image in Figure 1.
Figure 5. Maps from the HY-2B scatterometer and altimeter on 25 October 2020: (a) wind and (b) significant wave height (SWH).
Figure 6. (a) Current map at 09:00 UTC on 29 September 2021 from the HYbrid Coordinate Ocean Model (HYCOM), and (b) the WW3-simulated SWH map at 10:00 UTC on 29 September 2021; the black rectangle represents the spatial coverage of the image in Figure 1.
Figure 7. The general processing flow diagram.
Figure 8. (a) SAR-derived wind map corresponding to the image in Figure 1, and (b) a comparison between SAR retrievals and wind speeds from the HY-2B scatterometer.
Figure 9. (a) Two-dimensional SAR intensity spectrum of the sub-scene in Figure 3a at spatial scales between 60 m and 1 km. (b) The one-dimensional SAR-derived wave spectrum.
Figure 10. Relation between SWH and two variables: (a) wind speed in 1 m/s bins and (b) azimuthal cut-off wavelength in 1 m bins. (c) Relation between azimuthal cut-off wavelength and current speed in 0.1 m/s bins.
Figure 11. Performance of the training process: (a) eXtreme Gradient Boosting (XGBoost), (b) convolutional neural networks (CNN), and (c) the SHAP value map.
Figure 12. (a) Retrieval results along the track corresponding to the image in Figure 1; (b) the retrieval results and HY-2B footprints with respect to latitude. The coloured circles represent the footprints of the HY-2B altimeter, and the black rectangles represent the spatial coverage of the image in Figure 1.
Figure 13. Validation of SAR retrievals by (a) XGBoost, (b) the parameterized first-guess spectrum method (PFSM), (c) the CNN, (d) the RR, and (e) the SVR against measurements from the HY-2B altimeter.
Figure 14. Variation in the bias (SAR retrievals minus HY-2B measurements) with respect to (a) SAR-derived azimuthal cut-off wavelength, (b) SAR-derived wind speed, and (c) SWH measured by the HY-2B altimeter.
21 pages, 11192 KiB  
Article
Estimating Urban Forests Biomass with LiDAR by Using Deep Learning Foundation Models
by Hanzhang Liu, Chao Mou, Jiateng Yuan, Zhibo Chen, Liheng Zhong and Xiaohui Cui
Remote Sens. 2024, 16(9), 1643; https://doi.org/10.3390/rs16091643 - 5 May 2024
Viewed by 2945
Abstract
Accurately estimating vegetation biomass in urban forested areas is of great interest to researchers, as it is a key indicator of the carbon sequestration capacity necessary for cities to achieve carbon neutrality. Emerging vegetation biomass estimation methods that use AI technologies with remote sensing images often suffer from large estimation errors due to the diversity of vegetation and the complex three-dimensional terrain environment in urban areas. However, the high resolution of Light Detection and Ranging (LiDAR) data provides an opportunity to accurately describe the complex 3D scenes of urban forests, thereby improving estimation accuracy. Additionally, deep learning foundation models have achieved wide success in industry and show great promise for estimating vegetation biomass by processing large amounts of complex urban LiDAR data efficiently and accurately. In this study, we propose an efficient and accurate method called 3D-CiLBE (3DCity Long-term Biomass Estimation) to estimate urban vegetation biomass by utilizing advanced deep learning foundation models. In the 3D-CiLBE method, the Segment Anything Model (SAM) is used to segment single-tree information from a large amount of complex urban LiDAR data. Then, we modified the Contrastive Language–Image Pre-training (CLIP) model to identify the species of each tree so that the classic anisotropic growth equation can be used to estimate biomass. Finally, we utilized the Informer model to predict biomass over the long term. We evaluated the method in eight urban areas across the United States. In the task of identifying urban greening areas, 3D-CiLBE achieves optimal performance with a mean Intersection over Union (mIoU) of 0.94. Additionally, for vegetation classification, 3D-CiLBE achieves an optimal recognition accuracy of 92.72%. The estimation of urban vegetation biomass using 3D-CiLBE achieves a Mean Square Error (MSE) of 0.045 kg/m², reducing the error by up to 8.2% compared to 2D methods. The MSE for biomass prediction with 3D-CiLBE was, on average, 0.06 kg/m² smaller than that of the linear regression model. The experimental results therefore indicate that the 3D-CiLBE method can accurately estimate urban vegetation biomass and has potential for practical application.
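Once a single tree is segmented and its species identified, biomass follows from a species-specific growth equation applied to the LiDAR-derived structure. The sketch below uses a generic allometric form W = a·(D²H)^b with placeholder coefficients; the paper's per-species growth equations will differ.

```python
def tree_biomass_kg(dbh_cm, height_m, a=0.05, b=0.9):
    """Generic allometric single-tree biomass model W = a * (D^2 * H)^b.
    Coefficients a and b are species-specific placeholders, not values from
    the paper; they would come from the growth equation chosen per species."""
    return a * (dbh_cm ** 2 * height_m) ** b

# Example: a tree segmented from the LiDAR point cloud, 20 cm DBH and 12 m tall.
print(f"{tree_biomass_kg(20.0, 12.0):.1f} kg")  # ~103 kg with these placeholder coefficients
```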
Figure 1. The framework of 3D-CiLBE. (1#) LiDAR-SAM: segmentation of vegetation regions and extraction of vegetation features in LiDAR. (2#) MLiDAR-CLIP: species identification of single trees. (3#) St-Informer: temporal biomass prediction.
Figure 2. LiDAR-SAM framework. Using multi-source data, the images, heights, and breast diameters of all individual vegetation in the input LiDAR images are extracted by projection, convolution, splicing, encoding, decoding, 3D reconstruction, cropping, and other methods.
Figure 3. MLiDAR-CLIP framework. Pre-training images go through projection and the RandMask process, then feature aggregation, and are finally input into the textual encoder. Single-tree images go through projection and feature aggregation and are input into the visual encoder.
Figure 4. St-Informer framework. On the left is the biomass calculation process and on the right is the biomass prediction process using Informer.
Figure 5. Comparative experimental results of MLiDAR-CLIP.
Figure 6. A comparison between the St-Informer prediction and the linear regression prediction. The results for each of the eight cities are presented in subfigures (a–h). On the left side of each subfigure, a scatterplot of the predicted values is shown; on the right side, the absolute errors of the St-Informer and linear regression predictions versus the true values are presented.
Figure 7. Comparison of LiDAR-SAM ablation experiments. (a) The impact of the Area Prompt on mIoU. (b) The impact of the OSM mask on mIoU. The straight lines show the mean mIoU partitioned via these techniques in the areas of the eight cities.
Figure 8. Comparison of MLiDAR-CLIP ablation experiments. (a) The impact of utilizing pre-training images on recognition accuracy. (b) The recognition accuracy for varying numbers of projected views. The horizontal line signifies the mean probability of correctly identifying species using these techniques across the eight cities.
Figure 9. The impact of the OSM image vegetation region determination threshold (OSM token rate) on mIoU.
Figure 10. Case study city diagram. (a) The geographic location of Chicago within the United States. (b) The specific area of the city of Chicago that is the subject of study. (c) An intercepted floor plan. (d–g) Some specific study areas.
Figure 11. Visualization of the segmentation effect of the LiDAR-SAM model. The bottom-left corner displays a top-view projection of the segmented vegetation area, while the top-right corner presents a flat-view cross-section of the same area. The feature extraction map for the single-tree point cloud is shown in the bottom-right corner.
Figure 12. Demonstration of the segmentation effect of different methods: (a) ground truth, (b) LiDAR-SAM, and (c) SAM. The colors of the point cloud represent height information. Top images are top-down projections, bottom images are front projections.
Figure 13. MSE for temporal and non-temporal forecasting in Chicago.
12 pages, 3256 KiB  
Article
Miniaturizing Hyperspectral Lidar System Employing Integrated Optical Filters
by Haibin Sun, Yicheng Wang, Zhipei Sun, Shaowei Wang, Shengli Sun, Jianxin Jia, Changhui Jiang, Peilun Hu, Haima Yang, Xing Yang, Mika Karjalnen, Juha Hyyppä and Yuwei Chen
Remote Sens. 2024, 16(9), 1642; https://doi.org/10.3390/rs16091642 - 4 May 2024
Cited by 2 | Viewed by 1828
Abstract
Hyperspectral LiDAR (HSL) has been utilized as an efficacious technique for object classification and recognition based on its unique capability to obtain ranges and spectra synchronously. Different kinds of HSL prototypes with varied structures have been developed and their performance measured. However, almost all of these HSL prototypes employ complex and large spectroscopic devices, such as an acousto-optic tunable filter (AOTF) or a liquid-crystal tunable filter, which makes the HSL system bulky and expensive and hinders its extensive application in many fields. In this paper, a compact spectroscopic component, an integrated optical filter (IOF), is proposed to miniaturize these HSL systems. System calibration, range precision, and spectral profile experiments were carried out to test the HSL prototype. Although the IOF employed here covers only a wavelength range of 699–758 nm with a six-channel passband and shows a transmittance of less than 50%, the HSL prototype showed excellent performance in ranging and in collecting spectral profiles. The spectral profiles collected are well in accordance with those acquired with the AOTF. The spectral profiles of fruit, vegetable, plant, and ore samples collected by the IOF-based HSL can effectively reveal the status of the plants, the component materials, and the ore species. Finally, we also show an integrated design of the HSL based on a three-dimensional IOF combined with a detector. The performance and designs of this IOF-based HSL system show great potential for miniaturization in specific applications.
(This article belongs to the Special Issue Remote Sensing and Lidar Data for Forest Monitoring)
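The reflectance calibration used throughout the experiments (e.g., Figure 3) ratios the target echo in each spectral channel to the echo from a standard reference board of known reflectance. A minimal sketch of that step, with hypothetical six-channel amplitudes standing in for real measurements:

```python
import numpy as np

def calibrate_reflectance(target_echo, srb_echo, srb_reflectance=0.99):
    """Channel-wise reflectance calibration against a reference board (SRB-99%):
    the target's echo amplitude in each spectral channel is ratioed to the
    board's echo at the same range, then scaled by the board's reflectance."""
    return srb_reflectance * np.asarray(target_echo) / np.asarray(srb_echo)

# Hypothetical 6-channel echo peak amplitudes (699-758 nm passbands):
target = [0.12, 0.15, 0.18, 0.22, 0.30, 0.35]
board  = [0.40, 0.42, 0.43, 0.45, 0.46, 0.47]
print(calibrate_reflectance(target, board).round(3))
```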
Figure 1. (a–e) A diagram of the procedure for fabricating the IOF; (f) the IOF component compared with a coin.
Figure 2. (a) The optical schematic and (b) system setup of the 6-channel IOF-HSL; (c) details of the structure of the IOF and the APD detector.
Figure 3. Echo waveforms from an SRB-60% collected by the (a) AOTF-HSL and (b) IOF-HSL at distances of 6 m, 7 m, 8 m, and 9 m; (c,d) the corresponding reflectance calibrated by an SRB-99%. The straight lines in (a,b) are the linear fitting results.
Figure 4. Waveforms from a paper box, an SRB-99%, and a white wall at different distances.
Figure 5. (a) An orange and a carrot; the arrows indicate the rotation direction. (b) The echo waveforms and (c) reflectance of each spectral channel; the solid lines in (b,c) are the spectral profiles obtained by the AOTF-HSL.
Figure 6. (a) Three apples with different appearances; the arrows indicate the rotation direction. (b) The echo waveform and (c) the reflectance of each spectral channel.
Figure 7. (a) Green/dry leaves and three kinds of dry wood, (b) the waveforms and (c) reflectance of each target, and (d) the Normalized Difference Vegetation Index (NDVI) parameters based on the spectral profiles.
Figure 8. (a) Three kinds of ores irradiated with the laser and (b) the corresponding spectral profiles collected by the IOF-HSL prototype.
Figure 9. (a) HSL based on a single-element APD, and IOF-HSL prototypes employing (b) a single-element APD with a strip IOF; (c) an array detector with a strip IOF; and (d) an array detector with an array IOF.
21 pages, 6695 KiB  
Article
MVT: Multi-Vision Transformer for Event-Based Small Target Detection
by Shilong Jing, Hengyi Lv, Yuchen Zhao, Hailong Liu and Ming Sun
Remote Sens. 2024, 16(9), 1641; https://doi.org/10.3390/rs16091641 - 4 May 2024
Cited by 2 | Viewed by 2200
Abstract
Object detection in remote sensing plays a crucial role in various ground identification tasks. However, small targets contain limited feature information and are easily buried by complex backgrounds, especially in extreme environments (e.g., low-light or motion-blur scenes). Meanwhile, event cameras offer a unique paradigm with high temporal resolution and wide dynamic range for object detection. These advantages enable event cameras, which are not limited by light intensity, to perform better in challenging conditions than traditional cameras. In this work, we introduce the Multi-Vision Transformer (MVT), which comprises three efficiently designed components: the downsampling module, the Channel Spatial Attention (CSA) module, and the Global Spatial Attention (GSA) module. This architecture simultaneously considers short-term and long-term dependencies in semantic information, resulting in improved performance for small object detection. Additionally, we propose Cross Deformable Attention (CDA), which progressively fuses high-level and low-level features instead of considering all scales at each layer, thereby reducing the computational complexity of multi-scale features. Given the scarcity of event-camera remote sensing datasets, we also provide the Event Object Detection (EOD) dataset, the first dataset covering various extreme scenarios specifically introduced for remote sensing with event cameras. Moreover, we conducted experiments on the EOD dataset and two typical unmanned aerial vehicle remote sensing datasets (VisDrone2019 and UAVDT). The comprehensive results demonstrate that the proposed MVT-Net achieves promising and competitive performance.
(This article belongs to the Special Issue Remote Sensing of Target Object Detection and Identification II)
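Before the MVT backbone can process an event stream, the stream must be converted into a dense tensor (component 1 of Figure 2). A minimal sketch of one common encoding, a per-polarity event-count frame, is below; the (x, y, t, polarity) layout and the two-channel histogram are assumptions, and the paper's exact encoding may differ.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate an event stream into a dense 2-channel tensor (one channel
    per polarity). `events` is an (N, 4) ndarray of (x, y, t, polarity),
    with polarity in {-1, +1}."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = (events[:, 3] > 0).astype(int)   # 0 = negative, 1 = positive events
    np.add.at(frame, (pol, y, x), 1.0)     # per-pixel event counts per polarity
    return frame
```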
Figure 1. The process by which a DVS generates events. Each pixel serves as an independent detection unit for changes in brightness. An event is generated when the logarithmic intensity change at a pixel exceeds a specified threshold V_th. The continuous generation of events forms an event stream with two polarities: when the light intensity changes from strong to weak and reaches the threshold, the DVS outputs a negative event (red arrow); when it changes from weak to strong and reaches the threshold, the DVS outputs a positive event (blue arrow).
Figure 2. Overview of the MVT framework, which contains five main components: (1) the data preprocessing method that converts event streams into dense tensors; (2) the proposed MVT backbone used to extract multi-scale features; (3) the designed feature fusion module for encoding and aggregating features at different scales; (4) the detection head, which applies a bipartite matching strategy; (5) each MVT block, composed of three designed components.
Figure 3. Architecture of the CSA module, which consists of channel attention and spatial attention modules to extract short-term-dependent attention.
Figure 4. Architecture of the GSA module, which consists of window attention and grid attention to extract long-term-dependent attention.
Figure 5. Overview of the cross-scale deformable encoder layer. The three high-level features are used as the basic tokens to fuse low-level features layer by layer using Cross-scale Deformable Attention, finally building the architecture of the transformer encoder.
Figure 6. Prediction examples on the EOD dataset. The MVT-B/S/T variants are applied to detect in normal, motion-blur, and low-light scenarios, respectively.
Figure 7. Visualization of attention maps. (a) Feature maps generated by the model without CDA. (b) Feature maps generated by the model with CDA; the attention applied by CDA is more focused on small targets. (c) Detection results with CDA applied.
Figure 8. Comparison of the detection results using CSA alone, GSA alone, and both CSA and GSA in the MVT network. (a) Baseline. (b) Baseline + CSA. (c) Baseline + GSA. (d) Baseline + CSA + GSA.
Figure 9. Prediction examples on the EOD dataset using different approaches: Faster R-CNN, YOLOv7, Deformable DETR, and the proposed method.
Figure 10. Prediction examples on the VisDrone2019 dataset using different approaches: YOLOv5, DMNet, and the proposed method.
Figure 11. Prediction examples on the UAVDT dataset using different approaches: Faster R-CNN, DMNet, and the proposed method.
Figure 12. Extreme scenarios in the UAVDT dataset. These scenes, captured by traditional cameras, pose challenges for object detection.
19 pages, 855 KiB  
Article
Space–Air–Ground–Sea Integrated Network with Federated Learning
by Hao Zhao, Fei Ji, Yan Wang, Kexing Yao and Fangjiong Chen
Remote Sens. 2024, 16(9), 1640; https://doi.org/10.3390/rs16091640 - 4 May 2024
Cited by 3 | Viewed by 1908
Abstract
A space–air–ground–sea integrated network (SAGSIN) is a promising heterogeneous network framework for the next generation of mobile communications. Moreover, federated learning (FL), as a widely used distributed intelligence approach, can improve advanced network performance. In view of the combination and cooperation of SAGSINs and FL, an FL-based SAGSIN framework faces a number of unprecedented challenges, not only from the communication aspect but also on the security and privacy side. Motivated by these observations, in this article, we first give a detailed state-of-the-art review of recent progress and ongoing research works on FL-based SAGSINs. Then, the challenges of FL-based SAGSINs are discussed. After that, for different service demands, basic applications are introduced with their benefits and functions. In addition, two case studies are proposed, in order to improve SAGSINs' communication efficiency under a significant communication latency difference and to protect user-level privacy for SAGSIN participants, respectively. Simulation results show the effectiveness of the proposed algorithms. Moreover, future trends of FL-based SAGSINs are discussed.
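At the heart of any FL-based SAGSIN is a server-side aggregation step; the sketch below shows the standard FedAvg rule of averaging client parameters weighted by local dataset size. It is a minimal stand-in that omits the scheduling, latency, and privacy mechanisms the article's case studies address.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: the server averages client model
    parameters layer by layer, weighted by each client's local dataset size.
    client_weights: list (per client) of lists of parameter arrays."""
    total = float(sum(client_sizes))
    layers = zip(*client_weights)  # iterate layer-by-layer across clients
    return [sum(w * (n / total) for w, n in zip(layer, client_sizes))
            for layer in layers]

# Two clients (e.g., a satellite node and a ground node) with a one-layer model:
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
print(fed_avg([w_a, w_b], client_sizes=[100, 300]))  # -> [array([2.5, 3.5])]
```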
Figure 1. The framework of an SAGSIN.
Figure 2. Multi-scale delay characteristics of an FL-based SAGSIN.
Figure 3. Time consumption vs. scheduling ratio of the SAGSIN.
Figure 4. The training process of NbFML, where the training of the local UE model is carried out by meta learning.
Figure 5. Accuracy vs. communication rounds with different protection levels, where models were trained on MNIST and validated on MNIST-m.
16 pages, 4159 KiB  
Article
Urban Land Surface Temperature Downscaling in Chicago: Addressing Ethnic Inequality and Gentrification
by Jangho Lee, Max Berkelhammer, Matthew D. Wilson, Natalie Love and Ralph Cintron
Remote Sens. 2024, 16(9), 1639; https://doi.org/10.3390/rs16091639 - 4 May 2024
Cited by 1 | Viewed by 1435
Abstract
In this study, we developed an XGBoost-based algorithm to downscale 2 km-resolution land surface temperature (LST) data from the GOES satellite to a finer 70 m resolution, using ancillary variables including NDVI, NDBI, and DEM. This method demonstrated superior performance over the conventional TsHARP technique, achieving a reduced RMSE of 1.90 °C, compared to 2.51 °C with TsHARP. Our approach utilizes geostationary GOES satellite data alongside high-resolution ECOSTRESS data, enabling hourly LST downscaling to 70 m, a significant advancement over previous methodologies that typically measure LST only once daily. Applying these high-resolution LST data, we examined the hottest days in Chicago and their correlation with ethnic inequality. Our analysis indicated that Hispanic/Latino communities endure the highest LSTs, with a maximum LST that is 1.5 °C higher in blocks predominantly inhabited by Hispanic/Latino residents compared to those predominantly occupied by White residents. This study highlights the intersection of urban development, ethnic inequality, and environmental inequities, emphasizing the need for targeted urban planning to mitigate these disparities. The enhanced spatial and temporal resolution of our LST data provides deeper insights into diurnal temperature variations, crucial for understanding and addressing urban heat distribution and its impact on vulnerable communities.
(This article belongs to the Special Issue Remote Sensing for Land Surface Temperature and Related Applications)
Show Figures
Figure 1: Location and land use characteristics of the study area. (a) A red box on the map of the US highlights the study region. (b) Local climate zones within the study area, providing an overview of different land use patterns.
Figure 2: Data visualizations for the study, with permanent water bodies masked out in navy. (a) Cloud-free average LST from ECOSTRESS at 70 m resolution for the entire study period. (b) Same as (a), but for GOES estimates at 2 km resolution. (c) Average NDVI data at 10 m resolution for the study period. (d) Similar to (c), but for NDBI; NDBI values range from 0 to 1, but the color bar is scaled from 0 to 0.1 for better visualization, since the NDBI map contains a high frequency of values near zero. (e) 10 m-resolution ELEV map for the year 2017, derived from DEM data.
Figure 3: Schematic of data collection, model development, and application of the XGBoost LST downscaling model.
Figure 4: Comparative analysis of LST downscaling methods. (a–d) Downscaling results for 15 July 2020 at 00:22: (a) LST data from ECOSTRESS (LST70m), (b) GOES LST data (LST2km), (c) TsHARP-downscaled LST (LSTTsHARP), and (d) XGB-downscaled LST (LSTXGB). Panels (e–h) show the same set for 23 June 2022 at 12:13. (i) Heatmap comparing LST70m with LSTTsHARP across the entire collocated dataset, including a linear fit equation in orange and statistical metrics for the evaluation set in blue and for the final model in red. (j) Same as (i), but for LST70m and LSTXGB. R-squared, RMSE, and MAE values are depicted for the evaluation set (Eval: trained on 92% of the data, tested on the remaining 8%) and the final model (Final: using all available data for training and testing).
Figure 5: (a–d) Distribution of ethnicity (map) and the corresponding diurnal cycle of LST obtained from LST2km (blue) and LSTXGB (red) for the (a) White, (b) Black, (c) Asian, and (d) Hispanic/Latino populations. (e–h) Heat metrics for each ethnicity, calculated relative to the White population; metrics calculated with LSTXGB are shown as gray bars, and values from LST2km as a red line. (e) Mean LST, (f) maximum LST, (g) minimum LST, and (h) DTR of LST.
Figure 6: (a) Ethnic map of the Humboldt Park region, where ethnic difference is calculated as the Hispanic/Latino percentage minus the White percentage; red shows blocks predominantly inhabited by Hispanic/Latino residents, while blue shows blocks that are home to White residents. (b) Map of the economic hardship index for the Humboldt Park region. (c) Map of educational attainment in the Humboldt Park region. (d) Maximum LSTXGB map for the Humboldt Park region, calculated from the 10 clear-sky hottest days in Chicago. (e) Scatterplot and linear fit of the relationship between ethnic difference and maximum LSTXGB. (f,g) Same as (d,e), but with LST2km.
17 pages, 7188 KiB  
Article
Spatial and Temporal Evolution of Precipitation in the Bahr el Ghazal River Basin, Africa
by Jinyu Meng, Zengchuan Dong, Guobin Fu, Shengnan Zhu, Yiqing Shao, Shujun Wu and Zhuozheng Li
Remote Sens. 2024, 16(9), 1638; https://doi.org/10.3390/rs16091638 - 3 May 2024
Cited by 2 | Viewed by 1749
Abstract
Accurate and timely precipitation data are fundamental to understanding regional hydrology and are a critical reference point for regional flood control. The aims of this study are to evaluate the performance of three widely used precipitation datasets (CRU TS, ERA5, and NCEP) as potential alternatives [...] Read more.
Accurate and timely precipitation data are fundamental to understanding regional hydrology and are a critical reference point for regional flood control. The aims of this study are to evaluate the performance of three widely used precipitation datasets (CRU TS, ERA5, and NCEP) as potential alternatives for hydrological applications in the Bahr el Ghazal River Basin in South Sudan, Africa, and to examine the spatial and temporal evolution of regional precipitation using the most accurate of these datasets. The findings indicate that CRU TS is the best precipitation dataset for the Bahr el Ghazal Basin. The spatial and temporal distributions of precipitation from CRU TS reveal that precipitation in the Bahr el Ghazal Basin has a clear wet season, with June–August accounting for half of the annual precipitation and peaking in July and August. The long-term annual total precipitation exhibits a gradual increasing trend from the north to the south, with the southwestern part of the Basin having the largest percentage of wet-season precipitation. Notably, the Bahr el Ghazal Basin witnessed a significant precipitation shift in 1967, followed by an increasing trend. Moreover, the spatial and temporal precipitation evolutions reveal an ongoing risk of flooding in the lower part of the Basin; therefore, increased engineering countermeasures might be needed for effective flood prevention. Full article
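The dataset evaluation above is summarized with a Taylor diagram (see Figure 3 below). Here is a minimal sketch of the statistics behind such a diagram (correlation, standard-deviation ratio, and centered RMSE of a candidate dataset against gauges); the synthetic series are purely illustrative.

```python
# Taylor-diagram statistics: correlation, std ratio, centered RMSE.
import numpy as np

def taylor_stats(obs, model):
    obs, model = np.asarray(obs), np.asarray(model)
    r = np.corrcoef(obs, model)[0, 1]
    crmse = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return r, model.std() / obs.std(), crmse

rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 50.0, size=240)            # 20 years of monthly totals (mm)
candidate = gauge + rng.normal(0, 20, size=240)   # hypothetical reanalysis estimate
print(taylor_stats(gauge, candidate))
```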
Show Figures
Figure 1: (a) Geographic location of the Bahr el Ghazal Basin; (b) spatial distribution of climate types in the Bahr el Ghazal Basin; (c) elevation map of the Bahr el Ghazal Basin.
Figure 2: Total monthly precipitation and fitted monthly precipitation. (a) WAU; (b) MALAKAL.
Figure 3: Taylor diagram of precipitation information for three reanalysis datasets.
Figure 4: Average precipitation in the Bahr el Ghazal River Basin from CRU TS for the years 1961–2022.
Figure 5: Spatial distribution of multi-year monthly average precipitation in the Bahr el Ghazal River Basin from 1961 to 2022 using CRU TS precipitation data.
Figure 6: The annual precipitation series for different climatic zones in the Bahr el Ghazal River Basin from 1901 to 2022.
Figure 7: Monthly and seasonal precipitation processes for three climate zones, 1901–2022.
Figure 8: Spatial distribution of decade-average precipitation every ten years.
27 pages, 9009 KiB  
Article
Temporal Variations in Land Surface Temperature within an Urban Ecosystem: A Comprehensive Assessment of Land Use and Land Cover Change in Kharkiv, Ukraine
by Gareth Rees, Liliia Hebryn-Baidy and Vadym Belenok
Remote Sens. 2024, 16(9), 1637; https://doi.org/10.3390/rs16091637 - 3 May 2024
Cited by 7 | Viewed by 3208
Abstract
Remote sensing technologies are critical for analyzing the escalating impacts of global climate change and increasing urbanization, providing vital insights into land surface temperature (LST), land use and cover (LULC) changes, and the identification of urban heat island (UHI) and surface urban heat [...] Read more.
Remote sensing technologies are critical for analyzing the escalating impacts of global climate change and increasing urbanization, providing vital insights into land surface temperature (LST), land use and cover (LULC) changes, and the identification of urban heat island (UHI) and surface urban heat island (SUHI) phenomena. This research focuses on the nexus between LULC alterations and variations in LST and air temperature (Tair), with a specific emphasis on the intensified SUHI effect in Kharkiv, Ukraine. Employing an integrated approach, this study analyzes time-series data from Landsat and MODIS satellites, alongside Tair climate records, utilizing machine learning techniques and linear regression analysis. Key findings indicate a statistically significant upward trend in Tair and LST during the summer months from 1984 to 2023, with a notable positive correlation between Tair and LST across both datasets. MODIS data exhibit a stronger correlation (R² = 0.879) than Landsat (R² = 0.663). The application of supervised classification with Random Forest algorithms and vegetation indices to the LULC data reveals significant alterations: a 70.3% increase in urban land and a decrease in vegetative cover, comprising a 15.5% reduction in dense vegetation and a 62.9% decrease in sparse vegetation. Change detection analysis elucidates a 24.6% conversion of sparse vegetation into urban land, underscoring a pronounced trajectory towards urbanization. Temporal and seasonal LST variations across different LULC classes were analyzed using kernel density estimation (KDE) and boxplot analysis. Urban areas and sparse vegetation had the smallest average LST fluctuations, at 2.09 °C and 2.16 °C, respectively, but recorded the most extreme LST values. The water and dense vegetation classes exhibited slightly larger fluctuations of 2.30 °C and 2.24 °C, with the bare land class showing the highest fluctuation, 2.46 °C, but fewer extremes. Quantitative analysis with Kolmogorov-Smirnov tests across various LULC classes substantiated the normality of LST distributions (p > 0.05) for both monthly and annual datasets. Conversely, the Shapiro-Wilk test validated the normal distribution hypothesis exclusively for the monthly data, indicating deviations from normality in the annual data. Thresholded LST identifies urban and bare land as the warmest classes, at 39.51 °C and 38.20 °C, respectively, and water (35.96 °C), dense vegetation (35.52 °C), and sparse vegetation (37.71 °C) as the coldest, a trend that is consistent both annually and monthly. The analysis of SUHI effects demonstrates an increasing trend in UHI intensity, with statistical trends indicating growth in average SUHI values over time. This comprehensive study underscores the critical role of remote sensing in understanding and addressing the impacts of climate change and urbanization on local and global climates, emphasizing the need for sustainable urban planning and green infrastructure to mitigate UHI effects. Full article
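A minimal sketch of the normality checks mentioned above, assuming per-class LST samples as arrays; the class means, spreads, and sample sizes are invented for illustration.

```python
# Shapiro-Wilk and Kolmogorov-Smirnov normality tests per LULC class.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lst_by_class = {"urban": rng.normal(39.5, 2.1, 40),
                "water": rng.normal(36.0, 2.3, 40)}   # illustrative values only

for name, sample in lst_by_class.items():
    sw_stat, sw_p = stats.shapiro(sample)
    # K-S test against a normal fitted to the sample (a simplification;
    # strictly, estimated parameters call for the Lilliefors correction)
    ks_stat, ks_p = stats.kstest(sample, "norm",
                                 args=(sample.mean(), sample.std(ddof=1)))
    print(f"{name}: Shapiro-Wilk p={sw_p:.3f}, K-S p={ks_p:.3f}")
```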
Show Figures
Figure 1: The spatial location of Kharkiv, Ukraine.
Figure 2: Monthly average Tair in °C over a 42-year period from CRU TS, Kharkiv, Ukraine.
Figure 3: Decadal analysis of global Tair trends, 1981–2022 (CRU TS).
Figure 4: LST maps for July 1984–2023 based on Landsat.
Figure 5: Mean LST for April–September 1984–2023 based on Landsat.
Figure 6: Mean LST for April–September 1984–2023 based on MODIS.
Figure 7: Linear relationship between Tair and LST values.
Figure 8: Dynamics of LULC classes over the years.
Figure 9: Change detection between 1984 and 2023.
Figure 10: Quantitative assessment of deviations from normal distribution patterns of LST delineated by average annual values within various LULC classes (based on Shapiro-Wilk and Kolmogorov-Smirnov statistical tests).
Figure 11: Quantitative assessment of deviations from normal distribution patterns of LST delineated by average monthly values within various LULC classes (based on Shapiro-Wilk and Kolmogorov-Smirnov statistical tests). Emp. Exc.: empirical excess; Th. Exc.: theoretical excess; Emp. KDE: empirical kernel density estimation; Norm. Dist.: normal distribution.
Figure 12: Comparison of the SUHI effect for July (a) 1984 and (b) 2023.
22 pages, 14050 KiB  
Article
An Evaluation and Improvement of Microphysical Parameterization for a Heavy Rainfall Process during the Meiyu Season
by Zhimin Zhou, Muyun Du, Yang Hu, Zhaoping Kang, Rong Yu and Yinglian Guo
Remote Sens. 2024, 16(9), 1636; https://doi.org/10.3390/rs16091636 - 3 May 2024
Cited by 3 | Viewed by 1464
Abstract
The present study assesses the simulated precipitation and cloud properties using three microphysics schemes (Morrison, Thompson and MY) implemented in the Weather Research and Forecasting model. The precipitation, differential reflectivity (ZDR), specific differential phase (KDP) and mass-weighted mean diameter [...] Read more.
The present study assesses simulated precipitation and cloud properties using three microphysics schemes (Morrison, Thompson and MY) implemented in the Weather Research and Forecasting model. The precipitation, differential reflectivity (ZDR), specific differential phase (KDP) and mass-weighted mean diameter of raindrops (Dm) are compared with measurements from a heavy rainfall event that occurred on 27 June 2020 during the Integrative Monsoon Frontal Rainfall Experiment (IMFRE). The results indicate that all three microphysics schemes generally capture the characteristics of rainfall, ZDR, KDP and Dm but tend to overestimate their intensity. To enhance model performance, adjustments are made based on the MY scheme, which exhibited the best performance. Specifically, the overall coalescence and collision parameter (Ec) is reduced, which effectively decreases Dm and makes it more consistent with observations. Reducing Ec generally leads to an increase in the simulated content (Qr) and number concentration (Nr) of raindrops across most time steps and altitudes. With a smaller Ec, the impact of microphysical processes on Nr and Qr varies with time and altitude: the autoconversion of cloud droplets to raindrops primarily contributes to Nr, while the accretion of cloud droplets by raindrops plays a more significant role in increasing Qr. This study emphasizes that even when precipitation characteristics are adequately reproduced, accurately simulating microphysical characteristics remains challenging, and adjustments to even the most physically based parameterizations are still needed to achieve more accurate simulations. Full article
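For context, the mass-weighted mean diameter Dm used above is conventionally the ratio of the fourth to the third moment of the raindrop size distribution. A minimal sketch under an assumed exponential size distribution with invented parameters:

```python
# Dm = sum(N(D) * D^4 * dD) / sum(N(D) * D^3 * dD) for a binned drop spectrum.
import numpy as np

def mass_weighted_diameter(diam_mm, number_conc):
    d = np.asarray(diam_mm, dtype=float)
    n = np.asarray(number_conc, dtype=float)
    return np.sum(n * d**4) / np.sum(n * d**3)

bins = np.linspace(0.2, 6.0, 30)        # drop diameters (mm)
n0, lam = 8000.0, 2.2                   # hypothetical exponential DSD parameters
conc = n0 * np.exp(-lam * bins)         # N(D) per bin
print(f"Dm = {mass_weighted_diameter(bins, conc):.2f} mm")
```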
Show Figures
Figure 1: Background circulation at different times: (a1–a3) high-level jet (shaded, unit: m/s) at 200 hPa, with geopotential height (blue solid contours, units: 10 gpm), temperature (red solid contours, units: K) and winds (arrows, units: m s^−1) at 500 hPa; (b1–b3) generalized potential temperature (shaded, unit: K), geopotential height (blue solid contours, units: 10 gpm) and winds (arrows, units: m s^−1) at 700 hPa; (c1–c3) moisture flux (units: g s^−1 hPa^−1 cm^−1) and moisture flux divergence (shaded; units: 10^−7 g s^−1 hPa^−1 cm^−2) at 850 hPa. Panels 1 to 3 represent 00:00 UTC, 14:00 UTC and 23:00 UTC on 26 June 2020. The Yangtze River and Huang River are indicated by purple lines.
Figure 2: Nested model domains. The red circle indicates the center of the domain.
Figure 3: Spatial distribution of cumulative rainfall from 00:00 UTC on the 27th to 12:00 UTC on the 28th June 2020 for (a) the observations and (b–d) the WRF simulations using three microphysics schemes: (b) Morrison, (c) Thompson, (d) MY. The red solid squares represent the regions of Hubei province, central China, and the black dashed rectangle represents the detecting area of the S-band polarimetric radar.
Figure 4: The hourly precipitation averaged over the red rectangle in Figure 3. The green line is from observations, the blue line from the MY run, the red line from the Thompson run and the black line from the Morrison run.
Figure 5: Distribution of composite echo reflectivity at 08:00, 15:00 and 22:00 (represented by 1, 2 and 3, respectively) UTC 27 June 2020. (a1–a3) Observations, (b1–b3) Morrison run, (c1–c3) Thompson run, (d1–d3) MY run. The red rectangle in (a1–a3) is chosen based on the detection coverage of the polarimetric radar and the evolution of the composite echo reflectivity; the red rectangle in the modeled runs is shifted northward by 0.3°. Labels A and B in (a1) and (b1) indicate the composite echo reflectivity of the low vortex and the shear line, respectively.
Figure 6: Differential reflectivity (ZDR) contoured frequency with altitude diagrams (CFADs) for the region represented by the red rectangle in Figure 5 on 27 June 2020, derived from (a1–a3) S-Pol radar observations, (b1–b3) Morrison simulations, (c1–c3) Thompson simulations and (d1–d3) MY simulations. 1, 2 and 3 indicate 08:00, 15:00 and 22:00, respectively.
Figure 7: Specific differential phase (KDP) contoured frequency with altitude diagrams (CFADs) for the region represented by the red rectangle in Figure 5 on 27 June 2020, derived from (a1–a3) S-Pol radar observations, (b1–b3) Morrison simulations, (c1–c3) Thompson simulations and (d1–d3) MY simulations. 1, 2 and 3 indicate 08:00, 15:00 and 22:00, respectively.
Figure 8: Profiles of area-averaged Dm derived from observations and simulations over the domain represented by the red rectangle in Figure 5 on 26 June 2020: (a) 08:00, (b) 15:00, (c) 22:00.
Figure 9: Profiles of area-averaged Dm derived from observations and simulations (B2, B3 and MY) over the domain represented by the red rectangle in Figure 5 on 26 June 2020: (a) 08:00, (b) 15:00, (c) 22:00.
Figure 10: Vertical profiles of the area-averaged (over the red rectangle in Figure 5) differences in raindrop content (Qr, left, unit: 10^−3 g m^−3) and raindrop number concentration (Nr, right, unit: m^−3) between the B3 (red solid line) or B2 (blue solid line) run and the CTRL run: (a1,b1) 08:00, (a2,b2) 15:00, (a3,b3) 22:00.
Figure 11: Vertical profiles of area-averaged (over the red rectangle in Figure 5) source and sink terms of Nr on 27 June 2020: (a1–c1) CTRL run (MY), (a2–c2) difference between the B2 and CTRL runs, (a3–c3) difference between the B3 and B2 runs. a, b and c represent 08:00, 15:00 and 22:00, respectively.
Figure 12: Vertical profiles of the hourly variation of area-averaged (over the red rectangle in Figure 5) source and sink terms of Qr on 27 June 2020: (a1–c1) control test (MY), (a2–c2) difference between the B2 and CTRL runs, (a3–c3) difference between the B3 and B2 runs. a, b and c represent 08:00, 15:00 and 22:00, respectively.
24 pages, 4272 KiB  
Article
JPSSL: SAR Terrain Classification Based on Jigsaw Puzzles and FC-CRF
by Zhongle Ren, Yiming Lu, Biao Hou, Weibin Li and Feng Sha
Remote Sens. 2024, 16(9), 1635; https://doi.org/10.3390/rs16091635 - 3 May 2024
Viewed by 1320
Abstract
Effective features play an important role in synthetic aperture radar (SAR) image interpretation. However, since SAR images contain a variety of terrain types, it is not easy to extract effective features of different terrains from SAR images. Deep learning methods require a large [...] Read more.
Effective features play an important role in synthetic aperture radar (SAR) image interpretation. However, since SAR images contain a variety of terrain types, it is not easy to extract effective features for the different terrains they contain. Deep learning methods require a large amount of labeled data, but the difficulty of SAR image annotation limits the performance of deep learning models. SAR images also suffer from inevitable geometric distortion and coherent speckle noise, which further complicate feature extraction. If effective semantic context features cannot be learned from SAR images, the extracted features struggle to distinguish different terrain categories. Moreover, some existing terrain classification methods are very limited and can only be applied to specific SAR images. To solve these problems, a jigsaw puzzle self-supervised learning (JPSSL) framework is proposed. The framework comprises a jigsaw puzzle pretext task and a terrain classification downstream task. In the pretext task, the information in the SAR image is learned by completing a SAR image jigsaw puzzle, so that effective features are extracted. The terrain classification downstream task is trained using only a small amount of labeled data. Finally, fully connected conditional random field (FC-CRF) processing is performed to eliminate noise points and obtain a high-quality terrain classification result. Experimental results on three large-scene high-resolution SAR images confirm the effectiveness and generalization ability of our method. Compared with supervised methods, the features learned in JPSSL are highly discriminative, and JPSSL achieves good classification accuracy when using only a small amount of labeled data. Full article
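A minimal sketch of the jigsaw-puzzle pretext task described above: an image patch is cut into a 3 × 3 grid, the tiles are shuffled with one of a fixed set of permutations, and the permutation index serves as the self-supervised label. The grid size, permutation count, and patch data are assumptions, not the paper's exact settings.

```python
# Jigsaw pretext task: build (shuffled tiles, permutation index) training pairs.
import numpy as np

def make_puzzle(img, permutations, rng):
    h, w = img.shape[:2]
    th, tw = h // 3, w // 3
    tiles = [img[r*th:(r+1)*th, c*tw:(c+1)*tw] for r in range(3) for c in range(3)]
    label = int(rng.integers(len(permutations)))
    shuffled = [tiles[i] for i in permutations[label]]
    return shuffled, label              # network input and classification target

rng = np.random.default_rng(0)
perms = [rng.permutation(9) for _ in range(64)]  # in practice, chosen to be maximally distinct
sar_patch = rng.random((96, 96))                 # placeholder SAR image patch
tiles, target = make_puzzle(sar_patch, perms, rng)
```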
Show Figures
Figure 1: JPSSL framework. JPSSL consists of the pretext jigsaw puzzle task and the downstream terrain classification task. In the pretext jigsaw puzzle task, shuffled image patches are input to predict which permutation was used to shuffle them. The downstream terrain classification task includes the SAR image terrain classification training and testing parts.
Figure 2: Comparing SAR images and optical images to obtain large-scale accumulation areas. The square area in the figure represents the large-scale aggregation area found in the SAR image.
Figure 3: Permutation selection diagram. (a) The original permutation; (b) a permutation that changes the position of only a few image blocks; (c) the proposed permutation selection method.
Figure 4: Image retrieval graph for different images. 1, 2, and 3 represent the nearest neighbor image, the second nearest neighbor image, and the third nearest neighbor image of the input image, in turn.
Figure 5: Randomly selected data and manually selected data.
Figure 6: Visualization of terrain classification results with different methods on Jiujiang data. (a) SAR image. (b) Ground truth. (c) Deeplabv3+. (d) Segformer. (e) SimCLR. (f) JPSSL (no pre-training). (g) JPSSL (pre-training). (h) JPSSL (pre-training + FC-CRF).
Figure 7: Visualization of terrain classification results with different methods on Napoli data. (a) SAR image. (b) Ground truth. (c) Deeplabv3+. (d) Segformer. (e) SimCLR. (f) JPSSL (no pre-training). (g) JPSSL (pre-training). (h) JPSSL (pre-training + FC-CRF).
Figure 8: Visualization of terrain classification results with different methods on PoDelta data. (a) SAR image. (b) Ground truth. (c) Deeplabv3+. (d) Segformer. (e) SimCLR. (f) JPSSL (no pre-training). (g) JPSSL (pre-training). (h) JPSSL (pre-training + FC-CRF).
19 pages, 3476 KiB  
Article
Early Detection of Rubber Tree Powdery Mildew by Combining Spectral and Physicochemical Parameter Features
by Xiangzhe Cheng, Mengning Huang, Anting Guo, Wenjiang Huang, Zhiying Cai, Yingying Dong, Jing Guo, Zhuoqing Hao, Yanru Huang, Kehui Ren, Bohai Hu, Guiliang Chen, Haipeng Su, Lanlan Li and Yixian Liu
Remote Sens. 2024, 16(9), 1634; https://doi.org/10.3390/rs16091634 - 3 May 2024
Cited by 2 | Viewed by 1419
Abstract
Powdery mildew, one of the predominant diseases affecting rubber trees, significantly impacts the yield of natural rubber. Accurate, non-destructive recognition of powdery mildew in the early stage is essential for the cultivation management of rubber trees. The objective of this [...] Read more.
Powdery mildew, one of the predominant diseases affecting rubber trees, significantly impacts the yield of natural rubber. Accurate, non-destructive recognition of powdery mildew in the early stage is essential for the cultivation management of rubber trees. The objective of this study is to establish a technique for the early detection of powdery mildew in rubber trees by combining spectral and physicochemical parameter features. At three field experiment sites and in the laboratory, a spectroradiometer and a hand-held optical leaf-clip meter were utilized, respectively, to measure the hyperspectral reflectance data (350–2500 nm) and physicochemical parameter data of both healthy and early-stage powdery-mildew-infected leaves. Initially, vegetation indices were extracted from the hyperspectral reflectance data, and wavelet energy coefficients were obtained through continuous wavelet transform (CWT). Subsequently, significant vegetation indices (VIs) were selected using the ReliefF algorithm, and the optimal wavelengths (OWs) were chosen via competitive adaptive reweighted sampling (CARS). Principal component analysis was used for the dimensionality reduction of significant wavelet energy coefficients, resulting in wavelet features (WFs). To evaluate the detection capability of these features, the three spectral features extracted above, along with their combinations with physicochemical parameter features (PFs) (VIs + PFs, OWs + PFs, WFs + PFs), were used to construct six classes of features. These features were then input into support vector machine (SVM), random forest (RF), and logistic regression (LR) classifiers, respectively, to build early detection models for powdery mildew in rubber trees. The results revealed that models based on WFs perform well, markedly outperforming those constructed using VIs and OWs as inputs. Moreover, models incorporating combined features surpass those relying on single features, with an overall accuracy (OA) improvement of over 1.9% and an increase in F1-Score of over 0.012. The model that combines WFs and PFs outperforms all the other models, achieving OAs of 94.3%, 90.6%, and 93.4%, and F1-Scores of 0.952, 0.917, and 0.941 with SVM, RF, and LR, respectively. Compared to using WFs alone, the OAs improved by 1.9%, 2.8%, and 1.9%, and the F1-Scores increased by 0.017, 0.017, and 0.016, respectively. This study showcases the viability of the early detection of powdery mildew in rubber trees. Full article
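A minimal sketch of the wavelet-feature pipeline outlined above: a continuous wavelet transform of each leaf reflectance spectrum followed by PCA compression. Only the library calls are real; the spectra, scales, and wavelet choice are assumptions rather than the paper's exact settings.

```python
# CWT of reflectance spectra (PyWavelets) + PCA to obtain compact wavelet features.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.random((60, 2151))                 # 60 leaves, 350-2500 nm at 1 nm
scales = np.arange(1, 11)

coeffs = np.stack([pywt.cwt(s, scales, "mexh")[0].ravel() for s in spectra])
wavelet_features = PCA(n_components=10).fit_transform(coeffs)
print(wavelet_features.shape)                    # (60, 10)
```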
(This article belongs to the Special Issue Advancements in Remote Sensing for Sustainable Agriculture)
Show Figures
Figure 1: Examples of rubber tree leaves: (a) healthy; (b) early-stage diseased.
Figure 2: Physicochemical parameter measurement of rubber tree leaves.
Figure 3: Flowchart of data analysis and processing.
Figure 4: (a) Average spectral reflectance curves of healthy and early-stage diseased samples and (b) the spectral reflectance ratio between the two.
Figure 5: Physicochemical parameter responses: (a) chlorophyll; (b) anthocyanin.
Figure 6: The weights of VIs obtained by ReliefF.
Figure 7: CARS results: (a) variation in the number of selected features; (b) variation in RMSECV; (c) selected wavelengths.
Figure 8: PCA feature contribution rate distribution.
23 pages, 7072 KiB  
Article
Multi-Year Cropland Mapping Based on Remote Sensing Data: A Case Study for the Khabarovsk Territory, Russia
by Konstantin Dubrovin, Andrey Verkhoturov, Alexey Stepanov and Tatiana Aseeva
Remote Sens. 2024, 16(9), 1633; https://doi.org/10.3390/rs16091633 - 3 May 2024
Viewed by 1259
Abstract
Cropland mapping using remote sensing data is the basis for effective crop monitoring, crop rotation control, and the detection of irrational land use. Classification using Normalized Difference Vegetation Index (NDVI) time series from multi-year data incurs additional time costs, especially when [...] Read more.
Cropland mapping using remote sensing data is the basis for effective crop monitoring, crop rotation control, and the detection of irrational land use. Classification using Normalized Difference Vegetation Index (NDVI) time series from multi-year data incurs additional time costs, especially when Sentinel data are sparse. Approximation by nonlinear functions was proposed to solve this problem. Time series of weekly NDVI composites were constructed using multispectral Sentinel-2 (Level-2A) images at a resolution of 10 m for sites in Khabarovsk District from April to October in the years 2021 and 2022. Missing values due to the lack of suitable images for analysis were recovered using cubic polynomial, Fourier series, and double sinusoidal function approximation. The classes that were considered included crops, namely, soybean, buckwheat, oat, and perennial grasses, and fallow. The mean absolute percentage error (MAPE) of each class fitting was calculated. It was found that Fourier series fitting showed the highest accuracy, with a mean error of 8.2%. Different classifiers, such as the support vector machine (SVM), random forest (RF), and gradient boosting (GB), were comparatively evaluated. The overall accuracy (OA) for the site pixels during cross-validation (Fourier series restored) was 67.3%, 87.2%, and 85.9% for the SVM, RF, and GB classifiers, respectively. Thus, it was established that the best overall result in terms of accuracy, performance, and operational constraints in cropland mapping was achieved by composite construction using Fourier series and machine learning using GB. Similar results should be expected in regions with similar cropland structures and crop phenological cycles, including other regions of the Far East. Full article
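A minimal sketch of the Fourier-series gap-filling idea above: a second-order Fourier series is fitted by least squares to the available weekly composites and then evaluated over the full season. The period, dates, and NDVI values are invented for illustration.

```python
# Restore a sparse NDVI time series with a second-order Fourier series fit.
import numpy as np
from scipy.optimize import curve_fit

def fourier2(t, a0, a1, b1, a2, b2):
    w = 2 * np.pi / 30.0     # period fixed to ~30 weekly composites (Apr-Oct)
    return (a0 + a1*np.cos(w*t) + b1*np.sin(w*t)
               + a2*np.cos(2*w*t) + b2*np.sin(2*w*t))

weeks = np.array([0, 2, 5, 9, 12, 16, 20, 24, 28])  # weeks with usable scenes
ndvi = np.array([0.21, 0.25, 0.38, 0.55, 0.71, 0.78, 0.66, 0.44, 0.30])

params, _ = curve_fit(fourier2, weeks, ndvi)
full_series = fourier2(np.arange(30), *params)      # restored weekly composites
```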
(This article belongs to the Special Issue Advancements in Remote Sensing for Sustainable Agriculture)
Show Figures
Figure 1: Study area.
Figure 2: Crop phenology in Khabarovsk District.
Figure 3: Flowchart for NDVI time series generation.
Figure 4: Selected Sentinel-2 scenes for the 2021 and 2022 seasons for the study area. Green marks scenes with no more than 20% cloudiness; red marks scenes with over 20% cloudiness.
Figure 5: Flowchart for crop mapping.
Figure 6: NDVI time series example for 2022.
Figure 7: Restored NDVI composite time series: (a) Cube fitted in 2021, (b) Cube fitted in 2022, (c) DF fitted in 2021, (d) DF fitted in 2022, (e) DS fitted in 2021, (f) DS fitted in 2022. The points corresponding to the weeks when satellite observations were made are marked on the graphs.
Figure 8: SVM confusion matrix (DF fitted time series).
Figure 9: RF confusion matrix (DF fitted time series).
Figure 10: GB confusion matrix (DF fitted time series).
Figure 11: Crop mapping in test sites: (a) soybean site (marked blue), (b) fallow land recognition (marked black), (c) oat site (marked red), (d) grass sites (marked green), (e) buckwheat sites (marked yellow), (f) misclassified site.
Figure 12: Contribution of NDVI values to GB prediction for (a) soybean, (b) fallow, (c) oat, (d) grasses, and (e) buckwheat.
29 pages, 2637 KiB  
Article
Four Years of Atmospheric Boundary Layer Height Retrievals Using COSMIC-2 Satellite Data
by Ginés Garnés-Morales, Maria João Costa, Juan Antonio Bravo-Aranda, María José Granados-Muñoz, Vanda Salgueiro, Jesús Abril-Gago, Sol Fernández-Carvelo, Juana Andújar-Maqueda, Antonio Valenzuela, Inmaculada Foyo-Moreno, Francisco Navas-Guzmán, Lucas Alados-Arboledas, Daniele Bortoli and Juan Luis Guerrero-Rascado
Remote Sens. 2024, 16(9), 1632; https://doi.org/10.3390/rs16091632 - 3 May 2024
Cited by 3 | Viewed by 2050
Abstract
This work aimed to study the atmospheric boundary layer height (ABLH) from COSMIC-2 refractivity data, endeavoring to refine existing ABLH detection algorithms and scrutinize the resulting spatial and seasonal distributions. Through validation analyses against different ground-based methodologies (using data from lidar, ceilometer, microwave [...] Read more.
This work aimed to study the atmospheric boundary layer height (ABLH) from COSMIC-2 refractivity data, endeavoring to refine existing ABLH detection algorithms and scrutinize the resulting spatial and seasonal distributions. Through validation analyses against different ground-based methodologies (using data from lidar, ceilometer, microwave radiometers, and radiosondes), the optimal ABLH determination relied on identifying the lowest negative peak of the refractivity gradient whose magnitude is at least τ% of the minimum refractivity gradient magnitude, where τ is a fitting parameter representing the minimum peak strength relative to the absolute minimum refractivity gradient. Different τ values were derived depending on the time of day (daytime, nighttime, or sunrise/sunset) and the underlying surface (land or sea). Results show discernible relations between ABLH and various features, notably land cover and latitude. On average, the ABLH is higher over oceans (≈1.5 km), but extreme values (maxima > 2.5 km and minima < 1 km) are reached over intertropical lands. Variability is generally subtle over oceans, whereas seasonality and daily evolution are pronounced over continents, with higher ABLHs during daytime and during local wintertime (summertime) in intertropical (middle) latitudes. Full article
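A minimal sketch of the peak-selection rule described above: compute the vertical gradient of refractivity, find its negative peaks, and take the lowest one whose magnitude is at least τ% of the absolute minimum gradient. The synthetic profile and τ value are illustrative.

```python
# ABLH from a refractivity gradient profile via relative peak-strength thresholding.
import numpy as np
from scipy.signal import find_peaks

def ablh_from_refractivity(z_km, refractivity, tau=0.8):
    grad = np.gradient(refractivity, z_km)
    peaks, _ = find_peaks(-grad)                         # negative peaks of the gradient
    strong = peaks[-grad[peaks] >= tau * (-grad.min())]  # keep sufficiently deep peaks
    return z_km[strong[0]] if strong.size else np.nan    # lowest qualifying peak

z = np.linspace(0.1, 5.0, 200)
n_prof = 350 * np.exp(-z / 8) - 15 * np.tanh((z - 1.4) / 0.1)  # sharp drop near 1.4 km
print(f"ABLH ~ {ablh_from_refractivity(z, n_prof):.2f} km")
```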
(This article belongs to the Special Issue Observation of Atmospheric Boundary-Layer Based on Remote Sensing)
Show Figures
Figure 1: Location of the measurement stations used in this work, marked with a yellow star. (a) Stations in the Iberian Peninsula. (b) Station in the Azores (Portugal). (c) Station in California (USA).
Figure 2: Example of ABLH determination from a refractivity gradient profile according to Santosh's method. The dashed grey line indicates the peak with minimum magnitude (MRG method; ABLH_MRG). The ABLH is attributed to the lowest negative peak whose absolute magnitude is at least 80% that of the MRG peak (dashed red line; ABLH_τ).
Figure 3: Values of the GF function resulting from comparing the ABLHs obtained through COSMIC-2 refractivity data with those computed from data collected at the Granada and Évora stations. (a) GF values for daytime. (b) GF values for nighttime. (c) GF values for transition periods (sunrise and sunset). The vertical axis refers to the τ value used in the COSMIC-2 data. The horizontal axis corresponds to the validation methodologies: MWR_x: refractivity method from MWR data using x% as the τ value; MWR_PM: parcel method applied to MWR data; MWR_LL: Liu and Liang method applied to MWR data; Ceil: gradient method applied to RCS data from the ceilometer; Lidar: gradient method applied to RCS data from the lidar. Blank cells indicate the non-availability of data to carry out the comparison.
Figure 4: Values of the GF function resulting from comparing ABLHs obtained through COSMIC-2 refractivity data with those computed from data collected at ARM stations. The vertical axis refers to the τ value used in the COSMIC-2 data. The horizontal axis corresponds to the validation methodologies: RS_x: refractivity method from radiosounding data using x% as the τ value.
Figure 5: Seasonal mean fields of ABLH above ground level resulting from applying the algorithm derived in Section 3.1 to refractivity data of the COSMIC-2 mission. Each subpanel refers to one season: (a) December, January, and February (DJF); (b) March, April, and May (MAM); (c) June, July, and August (JJA); (d) September, October, and November (SON). Locations at latitudes beyond ±45° are outside the COSMIC-2 spatial domain.
Figure 6: Quantification of the ABLH variability. (a) Standard deviation over the whole period. (b) Seasonal amplitude: difference between the mean ABLH fields in JJA and DJF. Locations at latitudes beyond ±45° are outside the COSMIC-2 spatial domain.
Figure 7: Intradiurnal fields of ABLH above ground level resulting from applying the algorithm derived in Section 3 to refractivity data of the COSMIC-2 mission. (a,b) Daytime and nighttime means, respectively. (c) Diurnal amplitude, i.e., the difference between the daytime and nighttime mean fields (ABLH_daytime − ABLH_nighttime). Locations at latitudes beyond ±45° are outside the COSMIC-2 spatial domain.
Figure 8: Mean seasonal ABLH differences between the proposed algorithm (using a relative peak strength threshold, ABLH_τ) and the MRG method (equivalent to using τ = 100%, ABLH_MRG). Each subpanel refers to one season: (a) December, January, and February (DJF); (b) March, April, and May (MAM); (c) June, July, and August (JJA); (d) September, October, and November (SON). Locations at latitudes beyond ±45° are outside the COSMIC-2 spatial domain.
Figure 9: Mean intradiurnal ABLH differences between the proposed algorithm (using a relative peak strength threshold, ABLH_τ) and the MRG method (equivalent to using τ = 100%, ABLH_MRG). (a) Daytime; (b) nighttime. Locations at latitudes beyond ±45° are outside the COSMIC-2 spatial domain.
16 pages, 5049 KiB  
Technical Note
Impact of Urbanization on Cloud Characteristics over Sofia, Bulgaria
by Ventsislav Danchovski
Remote Sens. 2024, 16(9), 1631; https://doi.org/10.3390/rs16091631 - 2 May 2024
Viewed by 1281
Abstract
Urban artificial surfaces and structures induce modifications in land–atmosphere interactions, affecting the exchange of energy, momentum, and substances. These modifications stimulate urban climate formation by altering the values and dynamics of atmospheric parameters, including cloud-related features. This study evaluates the presence and quantifies [...] Read more.
Urban artificial surfaces and structures induce modifications in land–atmosphere interactions, affecting the exchange of energy, momentum, and substances. These modifications stimulate urban climate formation by altering the values and dynamics of atmospheric parameters, including cloud-related features. This study evaluates the presence and quantifies the extent of such changes over Sofia, Bulgaria. The findings reveal that estimations of low-level cloud base height (CBH) derived from lifting condensation level (LCL) calculations may produce unexpected outcomes due to microclimate influence. Ceilometer data indicate that the CBH of low-level clouds over urban areas exceeds that of surrounding regions by approximately 200 m during warm months and afternoon hours. Moreover, urban clouds exhibit reduced persistence relative to rural counterparts, particularly pronounced in May, June, and July afternoons. Reanalysis-derived low-level cloud cover (LCC) shows no significant disparities between urban and rural areas, although increased LCC is observed above the western and northern city boundaries. Satellite-derived cloud products reveal that the optically thinnest low-level clouds over urban areas exhibit slightly higher cloud tops, but the optically thickest clouds are more prevalent during warm months. These findings suggest an influence of urbanization on cloudiness, albeit nuanced and potentially influenced by the city size and surrounding physical and geographical features. Full article
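The LCL-based CBH estimates discussed above can be approximated with a widely used rule of thumb (Lawrence, 2005): the lifting condensation level rises by roughly 125 m per degree of dewpoint depression. A minimal sketch with invented station values:

```python
# Approximate LCL height from 2 m temperature and dewpoint (Lawrence 2005 rule).
def lcl_height_m(t_air_c, t_dew_c):
    return 125.0 * (t_air_c - t_dew_c)   # meters above ground, approximate

# Hypothetical warm, dry city center vs. a cooler, moister airport site
print(lcl_height_m(28.0, 14.0))   # urban: 125 * 14 = 1750 m
print(lcl_height_m(25.0, 15.0))   # rural: 125 * 10 = 1250 m
```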
Show Figures
Figure 1: Sofia valley and the locations of LBSF (cyan), NIMH (blue) and SU (pink). The city building boundaries are depicted by a red polyline; the magenta polyline encloses the valley. The rural area is denoted by the difference between the magenta and red polygons.
Figure 2: Diurnal and seasonal variations (heat map) of the mean LCL at LBSF, NIMH, and SU.
Figure 3: Heat maps of the mean difference in air temperature (a), relative humidity (b), and lifting condensation level (c) between SU and LBSF. Circles indicate statistically significant differences (t-test with p-value 0.05).
Figure 4: Diurnal and seasonal variations in the mean CBH of low-level clouds at LBSF and SU, respectively.
Figure 5: The mean difference in CBH (measured by the ceilometers) of low-level clouds (CBH < 2500 m) over SU (city center) and over LBSF (the airport at the city edge), as a heat map for different months and hours (a), where circles indicate statistically significant differences (t-test with p-value 0.05), and as a bivariate polar plot of the difference in CBH varying by wind speed (ws) and wind direction at 700 hPa (b).
Figure 6: Diurnal and seasonal variations of the mean low-level cloud persistence at LBSF and SU, respectively.
Figure 7: The mean difference in cloud persistence (measured by the ceilometers) of low clouds (CBH < 2500 m) at SU (city center) and LBSF (the airport at the city edge), as a heat map for different months and hours (a), where circles indicate statistically significant differences (test of equal proportions at p-value 0.05), and as a bivariate polar plot of the difference in cloud persistence varying by wind speed (ws) and wind direction at 700 hPa (b).
Figure 8: BIAS of CBH determined from rawinsonde RH profiles against CBH obtained by the ceilometers, CL31 (at the airport) and CHM15k (in the city center), as a function of the method parameters RHmin and RHjump.
Figure 9: Seasonal variations in CERRA's LCC. The magenta and red polylines enclose the valley and built-up areas, respectively.
Figure 10: The mean difference in CERRA's low cloud cover (CBH < 2500 m) over the built-up area and over the rural area, as a heat map for different months and hours (a), where circles indicate statistically significant differences (test of equal proportions at p-value 0.05), and as a bivariate polar plot of the difference in LCC varying by wind speed (ws) and wind direction at 700 hPa (b).
Figure 11: COT-CTP histograms for ice-cloud-top (left side) and water-cloud-top (right side) clouds detected over the rural and urban areas, respectively, during different seasons (MAM: spring, JJA: summer, SON: autumn, DJF: winter).
All figures summarize the dataset spanning the period from 2011 to 2020.
18 pages, 15686 KiB  
Article
From Point Cloud to BIM: A New Method Based on Efficient Point Cloud Simplification by Geometric Feature Analysis and Building Parametric Objects in Rhinoceros/Grasshopper Software
by Massimiliano Pepe, Alfredo Restuccia Garofalo, Domenica Costantino, Federica Francesca Tana, Donato Palumbo, Vincenzo Saverio Alfio and Enrico Spacone
Remote Sens. 2024, 16(9), 1630; https://doi.org/10.3390/rs16091630 - 2 May 2024
Cited by 4 | Viewed by 1927
Abstract
The aim of the paper is to identify an efficient method for transforming point clouds into parametric objects in the fields of architecture, engineering and construction through four main steps: 3D survey of the structure under investigation, generation of a new point [...] Read more.
The aim of the paper is to identify an efficient method for transforming point clouds into parametric objects in the fields of architecture, engineering and construction through four main steps: 3D survey of the structure under investigation, generation of a new point cloud based on feature extraction and the identification of suitable threshold values, geometry reconstruction by a semi-automatic process performed in Rhinoceros/Grasshopper, and BIM implementation. The developed method made it possible to quickly obtain geometries that closely match the original ones, as shown in the case study described in the paper. In particular, the application of the ShrinkWrap algorithm to the simplified point cloud allowed us to obtain a polygonal mesh model without errors such as holes, non-manifold surfaces, interpenetrating surfaces, etc. Full article
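A minimal sketch of the geometric feature analysis used for point-cloud simplification: per-point surface variation computed from the eigenvalues of the local covariance (PCA over k nearest neighbors), with points below a threshold discarded. The neighborhood size, threshold, and random cloud are assumptions for illustration.

```python
# Surface variation = lambda_min / (lambda_1 + lambda_2 + lambda_3) per point.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbors per point
    out = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)            # 3x3 local covariance
        eig = np.linalg.eigvalsh(cov)         # ascending eigenvalues
        out[i] = eig[0] / eig.sum()
    return out

pts = np.random.default_rng(0).random((2000, 3))
sv = surface_variation(pts)
key_points = pts[sv > 0.02]   # keep geometrically salient points (threshold assumed)
```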
(This article belongs to the Special Issue Remote Sensing in Geomatics)
Show Figures
Figure 1: Pipeline of the method developed for BIM design from a point cloud.
Figure 2: Geometric features as the radius changes; on the left, the verticality results and, on the right, the surface variation results obtained for a radius equal to the minimum (a,b), 2 times (c,d), 3 times (e,f) and 4 times (g,h) this minimum value.
Figure 3: Point cloud reduction process: from the raw point cloud (a) to the key point cloud (b).
Figure 4: Three-dimensional modelling of the porthole, from the point cloud to NURBS: raw point cloud (a), editable point cloud (b), edge limits (paths) by points and curves (c), mesh generated by ShrinkWrap (d), quad mesh generation (e).
Figure 5: Workflow for generating parametric objects in the Rhinoceros/Grasshopper environment.
Figure 6: BIM implementation of Beach Patrol in Autodesk Revit.
Figure 7: Application of the method on several structures: dense point cloud (left side) and extraction of only the significant point cloud (right side). Case studies: Pagoda of "Buziaș Colonnade" in Romania (a,b), Baroque staircase of the church of San Domenico in Taranto, Italy (c,d), Roman Catholic Diocese of Gurk-Klagenfurt, Austria (e,f).
Figure 8: 3D quad-mesh models obtained from the point cloud of a vase as the parameters of the ShrinkWrap tool vary.
Figure 9: 3D quad-mesh models obtained from the point cloud of a statue as the parameters of the ShrinkWrap tool vary.
Figure 10: 3D quad-mesh models obtained from the point cloud of a church façade as the parameters of the ShrinkWrap tool vary.
22 pages, 8260 KiB  
Article
Spatiotemporal Distribution Characteristics and Influencing Factors of Freeze–Thaw Erosion in the Qinghai–Tibet Plateau
by Zhenzhen Yang, Wankui Ni, Fujun Niu, Lan Li and Siyuan Ren
Remote Sens. 2024, 16(9), 1629; https://doi.org/10.3390/rs16091629 - 2 May 2024
Cited by 4 | Viewed by 1322
Abstract
Freeze–thaw (FT) erosion intensity may exhibit an increasing trend in the future with climate warming, humidification, and permafrost degradation in the Qinghai–Tibet Plateau (QTP). The present study provides a reference for the prevention and control of FT erosion in the QTP, as well as for [...] Read more.
Freeze–thaw (FT) erosion intensity may exhibit an increasing trend in the future with climate warming, humidification, and permafrost degradation in the Qinghai–Tibet Plateau (QTP). The present study provides a reference for the prevention and control of FT erosion in the QTP, as well as for the protection and restoration of the regional ecological environment. FT erosion is the third major type of soil erosion after water and wind erosion. Although FT erosion is one of the major soil erosion types in cold regions, it has been studied relatively little in the past because of the complexity of its several influencing factors and because it involves shallow surface layers at certain depths. The QTP is an important ecological barrier area in China. However, this area is characterized by a harsh climate and a fragile environment, as well as by frequent FT erosion events, making research on FT erosion necessary. In this paper, a total of 11 meteorological, vegetation, topographic, geomorphological, and geological factors were selected and assigned analytic hierarchy process (AHP)-based weights to evaluate the FT erosion intensity in the QTP using a comprehensive evaluation index method. In addition, the individual effects of the selected influencing factors on the FT erosion intensity were further evaluated in this study. According to the obtained results, the total FT erosion area covered 1.61 × 10⁶ km², accounting for 61.33% of the total area of the QTP. The moderate and strong FT erosion intensity classes covered 6.19 × 10⁵ km², accounting for 38.37% of the total FT erosion area in the QTP. The results revealed substantial variations in the spatial distribution of the FT erosion intensity in the QTP. Indeed, the moderate and strong erosion areas were mainly located in the high mountain areas and the hilly part of the Hoh Xil frozen soil region. Full article
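A minimal sketch of the AHP weighting step mentioned above: the factor weights are the normalized principal eigenvector of a pairwise comparison matrix, checked with the consistency ratio. The 3 × 3 matrix below is invented for illustration and is not the study's actual judgment matrix.

```python
# AHP weights from a pairwise comparison matrix, with a consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])     # hypothetical pairwise judgments for 3 factors

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized factor weights

lam_max = eigvals[k].real
ci = (lam_max - len(A)) / (len(A) - 1)         # consistency index
cr = ci / 0.58                                 # random index for n = 3 is 0.58
print(weights, f"CR = {cr:.3f}")               # CR < 0.1 is conventionally acceptable
```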
Show Figures
Figure 1: Land use types in the QTP.
Figure 2: Distribution of the freeze–thaw erosion areas in the QTP.
Figure 3: Spatial distributions of the 11 freeze–thaw erosion factors in the QTP. (a) Annual temperature difference. (b) Annual mean precipitation. (c) Vegetation coverage. (d) Slope. (e) Slope aspect. (f) Elevation. (g) Sand content. (h) Maximum freezing depth. (i) Active layer thickness. (j) Distribution of the thaw slumping points. (k) Distribution of the rock glacier points.
Figure 4: Hierarchical model for the evaluation of freeze–thaw erosion strength in Yaahp software.
Figure 5: Classification of the freeze–thaw erosion intensity in the study area.
Figure 6: Comparison of the obtained freeze–thaw erosion results in the QTP: volumes of soil erosion revealed by Jiao [47] (a); results of this article (b).
Figure 7: Freeze–thaw erosion in the Hoh Xil region.
Figure 8: Comparison of the obtained freeze–thaw erosion results in the QTP: thaw slump susceptibility revealed by Yin [48] (a); results of this article (b).
Figure 9: Effects of the meteorological and vegetation factors on the FT erosion intensity.
Figure 10: Influences of the topographical and geomorphological factors on the FT erosion intensity.
Figure 11: Influences of the geological factors on the freeze–thaw erosion intensity.
18 pages, 11407 KiB  
Article
Estimation of Rice Plant Coverage Using Sentinel-2 Based on UAV-Observed Data
by Yuki Sato, Takeshi Tsuji and Masayuki Matsuoka
Remote Sens. 2024, 16(9), 1628; https://doi.org/10.3390/rs16091628 - 2 May 2024
Cited by 1 | Viewed by 1623
Abstract
Vegetation coverage is a crucial parameter in agriculture, as it offers essential insight into crop growth and health conditions. The spatial resolution of spaceborne sensors is limited, hindering the precise measurement of vegetation coverage. Consequently, fine-resolution ground observation data are indispensable for establishing [...] Read more.
Vegetation coverage is a crucial parameter in agriculture, as it offers essential insight into crop growth and health conditions. The spatial resolution of spaceborne sensors is limited, hindering the precise measurement of vegetation coverage. Consequently, fine-resolution ground observation data are indispensable for establishing correlations between remotely sensed reflectance and plant coverage. We estimated rice plant coverage per pixel using time-series Sentinel-2 Multispectral Instrument (MSI) data, enabling the monitoring of rice growth conditions over a wide area. Coverage was calculated from unmanned aerial vehicle (UAV) data with a spatial resolution of 3 cm using the spectral unmixing method. Coverage maps were generated every 2–3 weeks throughout the rice-growing season. Subsequently, crop growth was estimated at 10 m resolution through multiple linear regression utilizing Sentinel-2 MSI reflectance data and the coverage maps. In this process, a geometric registration of the MSI and UAV data was conducted to improve their spatial agreement. The coefficients of determination (R²) of the multiple linear regression models were 0.92 and 0.94 for the Level-1C and Level-2A products of Sentinel-2 MSI, respectively. The root mean square errors of the estimated rice plant coverage were 10.77% and 9.34%, respectively. This study highlights the promise of satellite time-series models for the accurate estimation of rice plant coverage. Full article
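A minimal sketch of the linear spectral unmixing used to derive per-pixel coverage: a mixed pixel's reflectance is decomposed into endmember fractions via nonnegative least squares. The band set and endmember spectra are invented placeholders, not the study's measured endmembers.

```python
# Linear spectral unmixing of one pixel with nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

# Endmember reflectance in [green, red, red-edge, NIR] bands (illustrative)
E = np.array([[0.08, 0.06, 0.30, 0.55],    # rice plants
              [0.04, 0.03, 0.03, 0.02]]).T # water / shaded background

pixel = np.array([0.06, 0.045, 0.17, 0.29])  # observed mixed-pixel reflectance
fractions, _ = nnls(E, pixel)                # nonnegative abundance estimates
coverage = fractions[0] / fractions.sum()    # rice plant coverage of the pixel
print(f"rice coverage ~ {coverage:.0%}")
```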
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
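The two-stage workflow lends itself to a compact illustration. The sketch below is not the authors' code: the endmember spectra, band choices, and data are placeholder assumptions. It shows least-squares unmixing of a UAV pixel between rice and water endmembers, followed by a multiple linear regression from Sentinel-2 band reflectance to coverage, scored with R2 and RMSE as in the paper.

```python
# Hedged sketch of the paper's workflow; all spectra and data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)

# Stage 1: two-endmember linear unmixing of a UAV pixel (G, R, NIR bands).
rice = np.array([0.06, 0.05, 0.48])    # assumed rice endmember spectrum
water = np.array([0.04, 0.03, 0.02])   # assumed water endmember spectrum

def unmix_coverage(reflectance):
    """Least-squares rice fraction for a 2-endmember mixture, clipped to [0, 1]."""
    d = rice - water
    f = (reflectance - water) @ d / (d @ d)
    return np.clip(f, 0.0, 1.0)

print(unmix_coverage(0.3 * rice + 0.7 * water))  # recovers ~0.3

# Stage 2: regress UAV-derived coverage (%) on Sentinel-2 MSI reflectance.
X = rng.uniform(0.0, 0.5, (300, 4))              # stand-in MSI band matrix
beta = np.array([20.0, -35.0, 160.0, -10.0])     # synthetic band/coverage link
y = np.clip(X @ beta + rng.normal(0, 5, 300), 0, 100)

model = LinearRegression().fit(X, y)
pred = model.predict(X)
print(f"R2 = {r2_score(y, pred):.2f}, "
      f"RMSE = {mean_squared_error(y, pred) ** 0.5:.2f}%")
```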
Show Figures
Figure 1. The study field. Labels A to H represent the study plots.
Figure 2. Flowchart of this study's methodology.
Figure 3. Geometric registration process of UAV and Sentinel-2 images: (a) schematic diagram of the geometric registration process; (b) downscaled UAV NIR image; (c) Sentinel-2 band 8 image.
Figure 4. Extraction of paddy field pixels: (a) UAV image on 16 June 2023; (b) Sentinel-2 image on 19 June 2023.
Figure 5. Five hundred random points on the RGB image: (a) water endmember on 16 June 2023; (b) rice endmember on 2 September 2023.
Figure 6. Normalization results: (a) before normalization; (b) after normalization.
Figure 7. Rice plant coverage map based on UAV images.
Figure 8. Comparison of UAV images and rice plant coverage maps for Plot A.
Figure 9. Seasonal coverage changes in UAV images by plot.
Figure 10. Correlation analysis between estimated coverage and correct labels: (a) Sentinel-2 Level-1C product; (b) Sentinel-2 Level-2A product.
Figure 11. Rice plant coverage map based on Sentinel-2 images.
Figure 12. Seasonal changes in coverage in Sentinel-2 images by plot: (a) Sentinel-2 Level-1C product; (b) Sentinel-2 Level-2A product.
Figure 13. Seasonal changes in coverage in Sentinel-2 images of Yamada Nishiki rice plants.
Figure 14. UAV image taken on 29 July: (a) floating weeds in plot H; (b) rice plant variability in plot G.
22 pages, 46483 KiB  
Article
SWIFT: Simulated Wildfire Images for Fast Training Dataset
by Luiz Fernando, Rafik Ghali and Moulay A. Akhloufi
Remote Sens. 2024, 16(9), 1627; https://doi.org/10.3390/rs16091627 - 2 May 2024
Cited by 1 | Viewed by 1875
Abstract
Wildland fires cause economic and ecological damage with devastating consequences, including loss of life. To reduce these risks, numerous fire detection and recognition systems using deep learning techniques have been developed. However, the limited availability of annotated datasets has decelerated the development of [...] Read more.
Wildland fires cause economic and ecological damage with devastating consequences, including loss of life. To reduce these risks, numerous fire detection and recognition systems using deep learning techniques have been developed. However, the limited availability of annotated datasets has slowed the development of reliable deep learning techniques for detecting and monitoring fires. To address this gap, a novel dataset, SWIFT, is presented in this paper for detecting and recognizing wildland smoke and fires. SWIFT includes a large number of synthetic images and videos of smoke and wildfire with their corresponding annotations, as well as environmental data including temperature, humidity, wind direction, and wind speed. It represents various wildland fire scenarios captured from multiple viewpoints: forest interior views, views near active fires, ground views, and aerial views. In addition, three deep learning models, BoucaNet, DC-Fire, and CT-Fire, are adopted to recognize forest fires and address their related challenges. These models are trained on the SWIFT dataset and tested on real fire images. BoucaNet performed well in recognizing wildland fires and overcoming challenging limitations, including complex backgrounds, variation in smoke and wildfire features, and the detection of small wildland fire areas. This demonstrates the potential of sim-to-real deep learning for wildland fire monitoring. Full article
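The sim-to-real protocol described above — train on synthetic imagery, test on real fire photos — can be sketched in a few lines of PyTorch. Everything here is illustrative: the directory names and class layout are assumptions, and a generic ResNet-50 stands in for BoucaNet/DC-Fire/CT-Fire (the weights enum requires torchvision >= 0.13).

```python
# Hedged sim-to-real sketch: fine-tune on synthetic data, test on real images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Assumed layout: <root>/{fire,smoke,no_fire}/*.png  (hypothetical folders)
train_ds = datasets.ImageFolder("swift_synthetic", transform=tf)
test_ds = datasets.ImageFolder("real_fire_images", transform=tf)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
model = model.to(device)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # train on synthetic imagery only
    model.train()
    for x, y in DataLoader(train_ds, batch_size=32, shuffle=True):
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model.eval()  # zero-shot evaluation on real imagery: the sim-to-real test
correct = total = 0
with torch.no_grad():
    for x, y in DataLoader(test_ds, batch_size=32):
        pred = model(x.to(device)).argmax(1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"real-image accuracy: {correct / total:.3f}")
```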
Show Figures
Figure 1. Images of developed biomes for SWIFT, from left to right: boreal, temperate, and tundra.
Figure 2. Background image examples.
Figure 3. Fire image examples. Top: RGB fire images. Bottom: their corresponding ground-truth images.
Figure 4. Fire and smoke image examples. Top: RGB images. Bottom: their corresponding ground-truth images.
Figure 5. Smoke image examples. Top to bottom: RGB smoke images, their corresponding grayscale ground truth, and their corresponding binary ground truth.
Figure 6. The proposed architecture of the BoucaNet and CT-Fire methods. L and L1 refer to the likelihood of the input image being classified as fire or no-fire.
Figure 7. The proposed architecture of DC-Fire. L and L1 refer to the likelihood of the input image being classified as fire or no-fire.
Figure 8. Loss curves for the proposed DL methods (BoucaNet, DC-Fire, CT-Fire, RegNetY-16GF, and ResNeXt-101) during the training and validation steps.
Figure 9. Confusion matrices of BoucaNet, CT-Fire, and DC-Fire using real images, from left to right: BoucaNet, CT-Fire, and DC-Fire results.
Figure 10. Fire classification results of the proposed models.
Figure 11. No-fire classification results of the proposed models.
15 pages, 4589 KiB  
Article
Domain Feature Decomposition for Efficient Object Detection in Aerial Images
by Ren Jin, Zikai Jia, Xingyu Yin, Yi Niu and Yuhua Qi
Remote Sens. 2024, 16(9), 1626; https://doi.org/10.3390/rs16091626 - 2 May 2024
Cited by 3 | Viewed by 1494
Abstract
Object detection in UAV aerial images faces domain-adaptive challenges, such as changes in shooting height, viewing angle, and weather. These changes constitute a large number of fine-grained domains that place greater demands on the network’s generalizability. To tackle these challenges, we initially decompose [...] Read more.
Object detection in UAV aerial images faces domain-adaptation challenges, such as changes in shooting height, viewing angle, and weather. These changes constitute a large number of fine-grained domains that place greater demands on the network's generalizability. To tackle these challenges, we first decompose image features into domain-invariant and domain-specific features using practical imaging condition parameters. The composite feature improves domain generalization and single-domain accuracy compared with conventional fine-grained domain-detection methods. Then, to reduce overfitting to high-frequency imaging condition parameters, we mix images from different imaging conditions in a balanced sampling manner as input for training the detection network. This data-augmentation method improves the robustness of training and reduces overfitting to high-frequency imaging parameters. The proposed algorithm is compared with state-of-the-art fine-grained domain detectors on the UAVDT and VisDrone datasets, where it achieves average detection precision improvements of 5.7 and 2.4 points, respectively. Airborne experiments validate that the algorithm achieves 20 Hz processing of 720P images on an onboard Nvidia Jetson Xavier NX computer. Full article
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)
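The 'GRL' block named in the framework figure is the standard Gradient Reversal Layer used for adversarial domain-invariant feature learning: identity on the forward pass, negated (and scaled) gradient on the backward pass. Below is a minimal PyTorch version; the domain head and its bin count are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal Gradient Reversal Layer (GRL) sketch for domain-invariant features.
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)           # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lam * grad_output, None

class GRL(nn.Module):
    def __init__(self, lam=1.0):
        super().__init__()
        self.lam = lam

    def forward(self, x):
        return GradientReversal.apply(x, self.lam)

# Usage: a domain classifier behind a GRL pushes the backbone toward features
# the classifier cannot separate by imaging condition (height, angle, weather),
# i.e., the domain-invariant half of the decomposition.
backbone_feat = torch.randn(8, 256, requires_grad=True)
domain_head = nn.Linear(256, 4)       # e.g., 4 coarse height bins (assumed)
logits = domain_head(GRL(lam=0.5)(backbone_feat))
logits.sum().backward()               # gradients into the backbone are reversed
```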
Show Figures
Figure 1. Imaging diagram of the equipment mounted on the UAV platform at different flight heights and pan/tilt angles; the altitude and angle data can be read out by sensors on the UAV.
Figure 2. Fine-grained domains in aerial images under different imaging conditions.
Figure 3. Illustration of the proposed main framework, including the fine-grained domain mix augmentation and fine-grained domain feature disentanglement modules, combined as components in the YOLOv5 series. 'Diff' and 'GRL' denote the difference disentanglement and the Gradient Reversal Layer [26,27].
Figure 4. Visualization results of the proposed algorithm compared with NDFT and YOLOv5m. Red circles mark representative hard-to-detect objects.
Figure 5. Experimental flight platform hardware and definition of the flight height and gimbal angle conditions.
Figure 6. Visualization results of the proposed algorithm compared to NDFT in our hanging flight experiment.
22 pages, 14624 KiB  
Article
Drought Risk Assessment of Winter Wheat at Different Growth Stages in Huang-Huai-Hai Plain Based on Nonstationary Standardized Precipitation Evapotranspiration Index and Crop Coefficient
by Wenhui Chen, Rui Yao, Peng Sun, Qiang Zhang, Vijay P. Singh, Shao Sun, Amir AghaKouchak, Chenhao Ge and Huilin Yang
Remote Sens. 2024, 16(9), 1625; https://doi.org/10.3390/rs16091625 - 2 May 2024
Cited by 4 | Viewed by 1584
Abstract
Soil moisture plays a crucial role in determining the yield of winter wheat. The Huang-Huai-Hai (HHH) Plain is the main growing area of winter wheat in China, and frequent occurrence of drought seriously restricts regional agricultural development. Hence, a daily-scale Non-stationary Standardized Precipitation [...] Read more.
Soil moisture plays a crucial role in determining the yield of winter wheat. The Huang-Huai-Hai (HHH) Plain is the main winter wheat growing area in China, and the frequent occurrence of drought there seriously restricts regional agricultural development. Hence, a daily-scale Nonstationary Standardized Precipitation Evapotranspiration Index (NSPEI) based on the winter wheat crop coefficient (Kc) was developed in the present study to evaluate the impact of drought characteristics on winter wheat at different growth stages. Results showed that the water demand of winter wheat decreased with increasing latitude, and the water shortage was affected by effective precipitation, showing a decreasing trend from the middle of the HHH Plain toward both sides. Water demand and water shortage showed an increasing trend at the jointing and heading stages, while the other growth stages showed a decreasing trend. The spatial distributions of drought duration and intensity were consistent, being higher in the northern region than in the southern region. Moreover, water shortage and drought intensity at the jointing and heading stages showed an increasing trend. Drought had the greatest impact on winter wheat yield at the tillering, jointing, and heading stages, with drought vulnerability weights of 0.25, 0.21, and 0.19, respectively. The high-value areas of drought-induced winter wheat loss were mainly distributed in the northeastern and south-central regions. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
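Drought duration and intensity in studies like this are typically extracted from the index series with run theory (the paper's Figure 3): a drought event is a maximal run of days on which the index stays below a threshold. Below is a minimal sketch; the -0.5 threshold, the summed-deficit intensity definition, and the synthetic series are assumptions, not the paper's exact settings.

```python
# Hedged run-theory sketch: extract drought events from a daily index series.
import numpy as np

def drought_runs(index, threshold=-0.5):
    """Return (start, duration, intensity) for each below-threshold run."""
    events, start = [], None
    for i, below in enumerate(index < threshold):
        if below and start is None:
            start = i                              # a run begins
        elif not below and start is not None:
            seg = index[start:i]                   # a run ends at day i - 1
            events.append((start, i - start, float(np.sum(threshold - seg))))
            start = None
    if start is not None:                          # run still open at series end
        seg = index[start:]
        events.append((start, len(index) - start, float(np.sum(threshold - seg))))
    return events

rng = np.random.default_rng(1)
nspei = rng.normal(0, 1, 365)                      # stand-in daily NSPEI series
for start, dur, inten in drought_runs(nspei):
    if dur >= 10:                                  # keep only sustained droughts
        print(f"day {start}: duration {dur} d, intensity {inten:.1f}")
```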
Show Figures
Figure 1. Location of the study area.
Figure 2. Framework of drought risk assessment in different growth stages of winter wheat.
Figure 3. Schematic diagram of the run theory.
Figure 4. Comparison of evaluation metrics for the machine learning models (the ordinate is the value of each evaluation index).
Figure 5. Spatial distribution of water demand in different growth stages of winter wheat.
Figure 6. Spatial distribution of effective precipitation in different growth periods of winter wheat.
Figure 7. Spatial distribution of water deficiency in different growth stages of winter wheat.
Figure 8. Temporal trends of water demand and water deficit in different growth periods from 1970 to 2019.
Figure 9. Spatial distribution of drought duration in each growth period.
Figure 10. Spatial distribution of drought intensity in different growth periods.
Figure 11. Temporal variation of drought duration from 1970 to 2019. In (c), J-H-M refers to jointing–heading–maturity time.
Figure 12. Temporal variation of drought intensity from 1970 to 2019.
Figure 13. Drought vulnerability weights of winter wheat at different growth stages in different provinces.
Figure 14. Spatial distribution pattern of yield reduction rate in the Huang-Huai-Hai Plain.
Figure 15. Spatial and temporal distribution pattern of annual mean drought risk in the growing period of the Huang-Huai-Hai Plain.
Figure 16. Spatial and temporal distribution of the daily mean value of drought risk during the growth period in the Huang-Huai-Hai Plain.
22 pages, 9502 KiB  
Article
Mapping Foliar C, N, and P Concentrations in an Ecological Restoration Area with Mixed Plant Communities Based on LiDAR and Hyperspectral Data
by Yongjun Yang, Jing Dong, Jiajia Tang, Jiao Zhao, Shaogang Lei, Shaoliang Zhang and Fu Chen
Remote Sens. 2024, 16(9), 1624; https://doi.org/10.3390/rs16091624 - 2 May 2024
Cited by 2 | Viewed by 1489
Abstract
Interactions between carbon (C), nitrogen (N), and phosphorus (P), the vital indicators of ecological restoration, play an important role in signaling the health of ecosystems. Rapidly and accurately mapping foliar C, N, and P is essential for interpreting community structure, nutrient limitation, and [...] Read more.
Interactions between carbon (C), nitrogen (N), and phosphorus (P), the vital indicators of ecological restoration, play an important role in signaling the health of ecosystems. Rapidly and accurately mapping foliar C, N, and P is essential for interpreting community structure, nutrient limitation, and primary production during ecosystem recovery. However, research on how to rapidly map C, N, and P in restored areas with mixed plant communities is limited. This study employed laser imaging, detection, and ranging (LiDAR) and hyperspectral data to extract spectral, textural, and height features of vegetation as well as vegetation indices and structural parameters. Causal band, multiple linear regression, and random forest models were developed and tested in a restored area in northern China. Important parameters were identified including (1), for C, red-edge bands, canopy height, and vegetation structure; for N, textural features, height percentile of 40–95%, and vegetation structure; for P, spectral features, height percentile of 80%, and 1 m foliage height diversity. (2) R2 was used to compare the accuracy of the three models as follows: R2 values for C were 0.07, 0.42, and 0.56, for N they were 0.20, 0.48, and 0.53, and for P they were 0.32, 0.39, and 0.44; the random forest model demonstrated the highest accuracy. (3) The accuracy of the concentration estimates could be ranked as C > N > P. (4) The inclusion of LiDAR features significantly improved the accuracy of the C concentration estimation, with increases of 22.20% and 47.30% in the multiple linear regression and random forest models, respectively, although the inclusion of LiDAR features did not notably enhance the accuracy of the N and P concentration estimates. Therefore, LiDAR and hyperspectral data can be used to effectively map C, N, and P concentrations in a mixed plant community in a restored area, revealing their heterogeneity in terms of species and spatial distribution. Future efforts should involve the use of hyperspectral data with additional bands and a more detailed classification of plant communities. The application of this information will be useful for analyzing C, N, and P limitations, and for planning for the maintenance of restored plant communities. Full article
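As a rough illustration of the best-performing setup (not the authors' code), the sketch below fits a random forest regressor on a fused spectral-plus-LiDAR feature matrix and then refits on the spectral columns alone to quantify the LiDAR contribution, mirroring the paper's ablation for the C model. All feature names and data are synthetic placeholders.

```python
# Hedged random forest sketch on a fused hyperspectral + LiDAR feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n, n_spec, n_lidar = 400, 20, 8
X_spec = rng.uniform(0, 1, (n, n_spec))    # e.g., band reflectance + texture
X_lidar = rng.uniform(0, 1, (n, n_lidar))  # e.g., height percentiles, FHD
X = np.hstack([X_spec, X_lidar])
y = X[:, 3] * 30 + X[:, n_spec] * 15 + rng.normal(0, 2, n)  # synthetic "C"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"fused: R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f}, "
      f"MAE = {mean_absolute_error(y_te, pred):.2f}")

# Dropping the LiDAR columns quantifies their contribution.
rf_spec = RandomForestRegressor(n_estimators=500, random_state=0)
rf_spec.fit(X_tr[:, :n_spec], y_tr)
spec_pred = rf_spec.predict(X_te[:, :n_spec])
print(f"spectral-only: R2 = {r2_score(y_te, spec_pred):.2f}")
```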
Show Figures
Figure 1. Geographical situation of the study area (a,b) in Shaanxi Province, north China; (c) orthophoto view of the study area.
Figure 2. Foliar carbon (C), nitrogen (N), and phosphorus (P) concentrations of the eight dominant plant communities in the study area.
Figure 3. Technical roadmap for foliar C, N, and P concentration mapping. Note: LiDAR, laser imaging, detection, and ranging; MAE, mean absolute error; RMSE, root mean squared error; R2, coefficient of determination.
Figure 4. Selection results for the features of (a) C, (b) N, and (c) P. All features were ranked according to the Z score calculated by the Boruta algorithm; the red, yellow, and green box plots represent the Z scores of the rejected, tentative, and confirmed features, respectively. For C, red-edge bands, height variables, and vegetation structure parameters were identified as comparatively important; for N, textural features, the 40–95% height percentiles, and vegetation structure parameters; for P, spectral features, the 80% height percentile, and 1 m foliage height diversity.
Figure 5. Accuracy of the estimation model using causal bands for (a) C, (b) N, and (c) P.
Figure 6. Accuracy of the multiple linear regression estimation model for (a) C, (b) N, and (c) P.
Figure 7. Accuracy of the random forest estimation model for (a) C, (b) N, and (c) P.
Figure 8. Study site maps produced using the random forest model for foliar (a) C, (b) N, and (c) P concentrations.
Figure 9. Maps of the foliar (a) C:N and (b) N:P ratios in the study area.
32 pages, 7440 KiB  
Review
A Systematic Review of the Application of the Geostationary Ocean Color Imager to the Water Quality Monitoring of Inland and Coastal Waters
by Shidi Shao, Yu Wang, Ge Liu and Kaishan Song
Remote Sens. 2024, 16(9), 1623; https://doi.org/10.3390/rs16091623 - 1 May 2024
Cited by 2 | Viewed by 2322
Abstract
In recent decades, eutrophication in inland and coastal waters (ICWs) has increased due to anthropogenic activities and global warming, thus requiring timely monitoring. Compared with traditional sampling and laboratory analysis methods, satellite remote sensing technology can provide macro-scale, low-cost, and near real-time water [...] Read more.
In recent decades, eutrophication in inland and coastal waters (ICWs) has increased due to anthropogenic activities and global warming, requiring timely monitoring. Compared with traditional sampling and laboratory analysis, satellite remote sensing technology can provide macro-scale, low-cost, and near real-time water quality monitoring services. The Geostationary Ocean Color Imager (GOCI), aboard the Communication, Ocean and Meteorological Satellite (COMS) of the Republic of Korea, marked a significant milestone as the world's first geostationary ocean color observation satellite. Its operational tenure spanned from 1 April 2011 to 31 March 2021. Over those ten years, the GOCI observed oceans, coastal waters, and inland waters within its 2500 km × 2500 km target area centered on the Korean Peninsula. The most attractive feature of the GOCI, compared with other commonly used ocean color sensors, was its high temporal resolution (hourly, eight times daily from 00:00 to 07:00 UTC), providing an opportunity to monitor ICWs, whose water quality can change significantly within a day. This study comprehensively reviews GOCI features and applications in ICWs, analyzing progress in atmospheric correction algorithms and water quality monitoring. Analyzing 123 articles from the Web of Science and the China National Knowledge Infrastructure (CNKI) through a quantitative bibliometric approach, we examined the GOCI's strengths and performance under different processing methods. These articles reveal that the GOCI played an essential role in monitoring the ecological health of ICWs within its observation coverage in East Asia. The GOCI has opened a new era of geostationary ocean color satellites, providing new technical means for monitoring water quality in oceans, coastal zones, and inland lakes. We also discuss the challenges geostationary ocean color sensors encounter in monitoring water quality and provide suggestions for future sensors to better monitor ICWs. Full article
Show Figures
Figure 1. Structure of this paper.
Figure 2. The regional observation area of the GOCI (https://kosc.kiost.ac.kr/index.nm?menuCd=43&lang=en, accessed on 25 January 2024).
Figure 3. Number and countries of papers published related to the GOCI between 2010 and 2023.
Figure 4. Map of the study area.
Figure 5. Journals and numbers of publications.
Figure 6. Keyword mapping from the Web of Science, searched by author name.
Figure 7. Overview of the applications of the GOCI retrieved by subject terms from the CNKI.
Figure 8. Summary and proportions of the applications of the GOCI in inland and coastal waters.
Figure 9. Schematic diagram of integrated ground–air space.
Figure 10. Schematic diagram of image fusion, taking GOCI-II and Himawari-8/9 as an example.
12 pages, 7133 KiB  
Communication
Deterministic Global 3D Fractal Cloud Model for Synthetic Scene Generation
by Aaron M. Schinder, Shannon R. Young, Bryan J. Steward, Michael Dexter, Andrew Kondrath, Stephen Hinton and Ricardo Davila
Remote Sens. 2024, 16(9), 1622; https://doi.org/10.3390/rs16091622 - 30 Apr 2024
Cited by 1 | Viewed by 1380
Abstract
This paper describes the creation of a fast, deterministic, 3D fractal cloud renderer for the AFIT Sensor and Scene Emulation Tool (ASSET). The renderer generates 3D clouds by ray marching through a volume and sampling the level-set of a fractal function. The fractal [...] Read more.
This paper describes the creation of a fast, deterministic, 3D fractal cloud renderer for the AFIT Sensor and Scene Emulation Tool (ASSET). The renderer generates 3D clouds by ray marching through a volume and sampling the level-set of a fractal function. The fractal function is distorted by a displacement map generated from horizontal wind data in a Global Forecast System (GFS) weather file. The vertical wind speed and relative humidity are used to mask cloud creation so that it matches realistic large-scale weather patterns over the Earth. Small-scale detail is provided by the fractal functions, which are tuned to match natural cloud shapes. The model is designed for speed, rendering each cloud type in about 700 ms. It generates clouds that visually match large-scale satellite imagery and reproduces natural small-scale shapes. This should enable future versions of ASSET to generate scenarios in which the same scene is viewed consistently from multiple GEO and LEO satellite perspectives. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
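The renderer's core loop — marching rays through a bounded volume and testing a fractal field against a level-set threshold — can be sketched compactly. In the toy version below, a cheap sine-based field stands in for the paper's tuned Perlin/Worley fractals, and the camera and slab geometry are our assumptions rather than ASSET's.

```python
# Toy level-set ray marcher; the sine-based field is a stand-in fractal.
import numpy as np

def fractal(p, octaves=4):
    """Cheap fBm-like field: summed sine octaves of 3D points (stand-in)."""
    v = np.zeros(p.shape[:-1])
    for o in range(octaves):
        f, a = 2.0 ** o, 0.5 ** o
        v += (a * np.sin(f * p[..., 0] + 1.7 * o)
                * np.sin(f * p[..., 1] + 0.3 * o)
                * np.sin(f * p[..., 2]))
    return v

H, W, steps, level = 64, 64, 128, 0.4
origin = np.array([0.0, 0.0, -3.0])

# One ray per pixel through a simple pinhole camera.
u, v = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
dirs = np.stack([u, v, np.ones_like(u)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

depth = np.full((H, W), np.inf)
for s in range(steps):                 # march through the slab t in [2, 4]
    t = 2.0 + 2.0 * s / steps
    p = origin + t * dirs
    hit = (fractal(p) > level) & np.isinf(depth)
    depth[hit] = t                     # record the first level-set crossing

print(f"cloud pixels: {np.isfinite(depth).mean():.1%}")
```

In the full renderer, a hit would trigger the solar-reflection and thermal-emission radiance calculation described in the Figure 1 caption; here we only record the hit depth.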
Show Figures
Figure 1. Ray march geometry: (A) sensor position, (B) first surface normal, (C) sun-pointing vector, (D) cloud shell, (E) Earth surface boundary. For each pixel in the field of view, a ray is generated, which intersects the cloud shell. Within the cloud shell, a fractal function is sampled to find a level-set boundary. If the ray crosses a cloud, radiance due to solar reflection and thermal emission is calculated for the pixel.
Figure 2. Fractal functions and level-sets: (A) coarse Perlin fractal, (B) fine Perlin fractal, (C) Worley fractal, (D) weighted sum, (E) level-set for cloud surface, (F) altitude boundary shape function. The top row shows a top-down perspective and the bottom row a horizontal perspective; arrows indicate the weighted-sum and level-set processing steps.
Figure 3. A weather mask for low-altitude cumulus cloud cover derived from a GFS grid-4 model file from the National Centers for Environmental Information (NCEI), processed by the ASSET cloud model decision procedure.
Figure 4. Displacement map: equirectangular grid displaced in a 2000 m wind field for 50 ks.
Figure 5. Clouds distorted by (A) no displacement, (B) 7 h displacement, and (C) 13 h displacement in the GFS-supplied wind field.
Figure 6. Light scattered from the illuminated cloud surface in a simple 1D model for several values of the single-scattering albedo.
Figure 7. Stratus and cumulus extinction coefficients and single-scattering albedo, derived from [11,14] for water clouds and [13,15] for cirrus clouds.
Figure 8. A comparison of fine features, maritime cumulus over stratus clouds: (A) GOES-16 Band-2 image, (B) fractal cloud simulation. The yellow line indicates an equivalent scale in the satellite and rendered images.
Figure 9. Histograms of cloud-top spectral radiance derived from full-disc GOES ABI and simulated images: (A) visible, (B) NIR, (C) long-wave infrared.
Figure 10. GOES ABI Band-1 (visible band) compared with simulated fractal clouds, showing plausible large-scale weather patterns and comparable cloud-top radiances.
Figure 11. Full-disc comparison in the NIR band (0.86 µm), 1200 EST.
Figure 12. Long-wave infrared full-disc comparison. High-altitude tropical altostratus clouds are cooler and dimmer in this band; lower-altitude clouds are warmer and brighter but also attenuated by the atmospheric transmission.
Figure 13. Three views of a cloud bank, 90 degrees apart, taken at 15 degrees of elevation with a 60-degree sun angle to the horizon.
23 pages, 8941 KiB  
Article
DS-Trans: A 3D Object Detection Method Based on a Deformable Spatiotemporal Transformer for Autonomous Vehicles
by Yuan Zhu, Ruidong Xu, Chongben Tao, Hao An, Huaide Wang, Zhipeng Sun and Ke Lu
Remote Sens. 2024, 16(9), 1621; https://doi.org/10.3390/rs16091621 - 30 Apr 2024
Viewed by 1553
Abstract
Facing the significant challenge of 3D object detection in complex weather conditions and road environments, existing algorithms based on single-frame point cloud data struggle to achieve desirable results. These methods typically focus on spatial relationships within a single frame, overlooking the semantic correlations [...] Read more.
Facing the significant challenge of 3D object detection in complex weather conditions and road environments, existing algorithms based on single-frame point cloud data struggle to achieve desirable results. These methods typically focus on spatial relationships within a single frame, overlooking the semantic correlations and spatiotemporal continuity between consecutive frames, which leads to discontinuities and abrupt changes in the detection outcomes. To address this issue, this paper proposes a multi-frame 3D object detection algorithm based on a deformable spatiotemporal Transformer. Specifically, a deformable cross-scale Transformer module is devised, incorporating a multi-scale offset mechanism that non-uniformly samples features at different scales, enhancing the spatial information aggregation capability of the output features. Simultaneously, to address feature misalignment during multi-frame feature fusion, a deformable cross-frame Transformer module is proposed. This module incorporates independently learnable offset parameters for the features of different frames, enabling the model to adaptively correlate dynamic features across multiple frames and make better use of temporal information. A proposal-aware sampling algorithm is introduced to significantly increase the foreground point recall, further improving the efficiency of feature extraction. The resulting multi-scale and multi-frame voxel features are passed through an adaptive fusion weight extraction module, referred to as the hybrid voxel set extraction module, which allows the model to adaptively obtain mixed features containing both spatial and temporal information. The effectiveness of the proposed algorithm is validated on the KITTI, nuScenes, and self-collected urban datasets, where it achieves an average precision improvement of 2.1% over the latest multi-frame algorithms. Full article
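The proposal-aware sampling step can be illustrated with a simple stand-in (the name comes from the abstract; the exact procedure below is our assumption): keep the usual sampling budget, but bias selection toward points that fall inside detection proposals so that foreground recall rises.

```python
# Hedged sketch of proposal-biased point sampling; not the paper's procedure.
import numpy as np

def proposal_aware_sample(points, boxes, n_samples, fg_ratio=0.7, rng=None):
    """points: (N, 3); boxes: (M, 6) axis-aligned [xmin, ymin, zmin, xmax, ymax, zmax]."""
    if rng is None:
        rng = np.random.default_rng()
    inside = np.zeros(len(points), dtype=bool)
    for b in boxes:                    # mark points inside any proposal box
        inside |= np.all((points >= b[:3]) & (points <= b[3:]), axis=1)
    fg, bg = np.where(inside)[0], np.where(~inside)[0]
    n_fg = min(len(fg), int(n_samples * fg_ratio))
    pick = np.concatenate([
        rng.choice(fg, n_fg, replace=False),       # oversample foreground
        rng.choice(bg, n_samples - n_fg, replace=False),
    ])
    return points[pick]

rng = np.random.default_rng(3)
cloud = rng.uniform(-40, 40, (20000, 3))           # synthetic LiDAR sweep
proposals = np.array([[0, 0, -2, 4, 2, 1],
                      [-10, 5, -2, -6, 9, 1]], dtype=float)
sampled = proposal_aware_sample(cloud, proposals, 4096, rng=rng)
print(sampled.shape)  # (4096, 3)
```

Compared with farthest point sampling, which spreads the budget uniformly over the scene, this kind of biasing concentrates points on likely objects, which is the effect the abstract attributes to the proposal-aware sampler.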
Show Figures
Figure 1. Framework of the deformable spatiotemporal Transformer.
Figure 2. The deformable cross-scale Transformer module.
Figure 3. The deformable cross-frame Transformer module.
Figure 4. Visualization results of different sampling methods: (a) the target vehicle in the original image; (b) FPS; (c) SPC; (d) PAS (proposed).
Figure 5. Diagram of the hybrid voxel set extraction module.
Figure 6. The hardware platform for the self-collected dataset.
Figure 7. Visualization results of multi-frame point clouds in the nuScenes dataset: (a) the image of frame t; (b) PointPillars results over three consecutive frames; (c) results of the proposed method over three consecutive frames. Green boxes are the boxes predicted by the algorithm.
Figure 8. Comparison of detection results for different voxel setting parameters (%).
Figure 9. Detection results of the proposed method in rainy conditions. Green boxes are the boxes predicted by the algorithm.
Figure 10. Visualization results on the KITTI dataset: (a) image of an enclosed road; (b) PV-RCNN detection results for the enclosed road; (c) results of the proposed method for the enclosed road; (d) image of an intersection; (e) PV-RCNN detection results for the intersection; (f) results of the proposed method for the intersection. Green boxes are the boxes predicted by the algorithm.
Figure 11. Visualization results on the self-collected dataset: detection results of the proposed method for (a) an intersection, (b) a main road, and (c) a suburban road. Green boxes are the boxes predicted by the algorithm.
Figure 12. Different objects in the point cloud over four consecutive frames.
19 pages, 12576 KiB  
Article
A Mars Local Terrain Matching Method Based on 3D Point Clouds
by Binliang Wang, Shuangming Zhao, Xinyi Guo and Guorong Yu
Remote Sens. 2024, 16(9), 1620; https://doi.org/10.3390/rs16091620 - 30 Apr 2024
Cited by 2 | Viewed by 1473
Abstract
To address the matching challenge between the High Resolution Imaging Science Experiment (HiRISE) Digital Elevation Model (DEM) and the Mars Orbiter Laser Altimeter (MOLA) DEM, we propose a terrain matching framework based on the combination of point cloud coarse alignment and fine alignment [...] Read more.
To address the matching challenge between the High Resolution Imaging Science Experiment (HiRISE) Digital Elevation Model (DEM) and the Mars Orbiter Laser Altimeter (MOLA) DEM, we propose a terrain matching framework that combines coarse and fine point cloud alignment. First, we achieve global coarse localization of the HiRISE DEM through nearest-neighbor matching of key Intrinsic Shape Signatures (ISS) points in the Fast Point Feature Histograms (FPFH) feature space. We introduce a graph matching strategy to mitigate gross errors in feature matching, employing a numerical method from non-cooperative game theory to solve the extremal optimization problem under Karush–Kuhn–Tucker (KKT) conditions. Second, to handle the substantial resolution disparity between the MOLA and HiRISE DEMs, we devise a smoothing weighting method that enhances the Voxelized Generalized Iterative Closest Point (VGICP) approach for fine terrain registration. This involves using the Euclidean distance between distributions to weight the loss and covariance, thereby reducing the results' sensitivity to the choice of voxel radius. Our experiments show that the proposed algorithm improves terrain registration accuracy on data from the proposed Curiosity landing area, Mawrth Vallis, by nearly 20%, with faster convergence and better robustness. Full article
(This article belongs to the Special Issue Remote Sensing and Photogrammetry Applied to Deep Space Exploration)
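A coarse-to-fine pipeline of the same shape can be assembled from Open3D's stock building blocks. This is a sketch, not the paper's method: ISS keypoint extraction is simplified to voxel downsampling, the graph-matching outlier rejection is replaced by Open3D's feature-matching RANSAC, and plain generalized ICP stands in for the distance-weighted VGICP. File names and radii are placeholders, and the calls assume the Open3D >= 0.12 pipelines API.

```python
# Hedged coarse-to-fine registration sketch with Open3D stand-ins.
import open3d as o3d

def preprocess(path, voxel):
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel)            # stands in for ISS keypoints
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return pcd, fpfh

src, src_fpfh = preprocess("hirise_dem.ply", voxel=50.0)   # placeholder paths
tgt, tgt_fpfh = preprocess("mola_dem.ply", voxel=50.0)

# Coarse: RANSAC over FPFH correspondences (replaces the graph matching step).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_fpfh, tgt_fpfh, True, 150.0,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine: generalized ICP seeded with the coarse transform (stands in for VGICP).
fine = o3d.pipelines.registration.registration_generalized_icp(
    src, tgt, 75.0, coarse.transformation)
print(f"fitness={fine.fitness:.3f}, inlier RMSE={fine.inlier_rmse:.2f}")
```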
Show Figures
Figure 1. Framework of the terrain registration method based on 3D point clouds.
Figure 2. Point cloud feature calculation neighborhoods: (a) PFH descriptor; (b) FPFH descriptor.
Figure 3. Example of a PFH feature histogram (n = 5).
Figure 4. Construction of the graph matching problem. (a) Key point matching results; M4 and M8 indicate gross mismatches. The left image is the MOLA DEM and the right image is the HiRISE DEM; elevation is rendered in color, with red indicating higher and blue indicating lower elevations. (b) The association graph corresponding to (a).
Figure 5. Geometric illustration of the distance cost [49].
Figure 6. Distance cost of our method (lines of different colors represent different weights).
Figure 7. MOLA experimental data: (a) global MOLA DEM image; (b) MOLA sampling result.
Figure 8. HiRISE experimental data: (a) HiRISE stereo image; (b) HiRISE DTM.
Figure 9. Key point matching results: (a) FPFH feature matching; (b) optimization based on graph matching. The initial feature matching contains numerous mismatches (a); after applying graph matching to remove gross errors, the remaining matches are correct (b).
Figure 10. Coarse registration results: (a) coarse registration mapping; (b) 3D views of the registration result; (c) top view of the point cloud registration. The black dots, lines, and boundaries in (a) represent elevation discrepancies at the edges of DEM registration; the ellipses in (b) highlight areas with significant elevation differences.
Figure 11. Comparison of algorithm convergence performance.
Figure 12. Visualization of the mean spatial registration error m_i within the grids using Equation (19). Uneven spatial errors are caused by the difference in spatial resolution between the MOLA DEM and the HiRISE DEM.
Figure 13. Top view of fine registration results. The quality of the registration can be judged from the degree of mosaicking of the HiRISE DEM and the MOLA DEM; greater uniformity indicates better alignment.
Figure 14. Influence of the voxel radius r. The overlapping lines indicate nearly identical results, demonstrating that our method retains its best performance regardless of the chosen voxel radius r.
Figure 15. Fine registration results: (a) registration mapping; (b) 3D visualization.