Remote Sens., Volume 15, Issue 11 (June-1 2023) – 252 articles

Cover Story: Wildfires are a major disaster, burning millions of acres annually in the US alone, and their accurate detection is crucial for mitigation. Uncertainty quantification is also vital, as it provides more insight for decision-making. This paper proposes a supervised deep generative machine learning model for wildfire detection and projection that generates plausible segmentations representing expert disagreement. The model combines latent distributions with visual features, forming a supervised stochastic image-to-image detection model. Experiments suggest better agreement between the proposed model and ground-truth segmentations, as well as superior comprehension of wildfire dynamics. The model enables accurate detection and uncertainty quantification, aiding authorities and researchers in combating wildfires.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
38 pages, 14304 KiB  
Article
Unambiguous Wind Direction Estimation Method for Shipborne HFSWR Based on Wind Direction Interval Limitation
by Yunfeng Zhang, Yiming Wang, Yonggang Ji and Ming Li
Remote Sens. 2023, 15(11), 2952; https://doi.org/10.3390/rs15112952 - 5 Jun 2023
Viewed by 1567
Abstract
Due to its maneuverability and agility, shipborne high-frequency surface wave radar (HFSWR) provides a new way of monitoring large-area marine dynamics and environmental information. However, wind direction ambiguity is problematic when monostatic shipborne HFSWR is used for wind direction inversion. In this article, an unambiguous wind direction measurement method based on wind direction interval limitation is proposed. Two first-order-spectrum wind direction estimation methods are first presented, using the relationship between the wind direction and the normalized amplitude differences or ratios of the broadened Doppler spectrum. Then, exploiting the fact that the wind direction estimation error is small when the included angle between the spectral wind direction and the radar beam is large, the wind direction interval is obtained by counting the distribution of radar-measured wind directions within this included angle. The ambiguity is eliminated by judging the relationship between the wind direction interval and the two curves that relate the spreading parameter to the wind direction. Remote sensing of ocean surface wind direction fields can thereby be realized by shipborne HFSWR. Simulations are used to evaluate the performance of the proposed method and of the multi-beam sampling method for wind direction inversion. The experimental results show that the errors of the wind directions estimated by the multi-beam sampling method and the equivalent dual-station model are large, whereas the proposed method improves the accuracy of wind direction measurement. Three widely used wave directional spreading models were applied for performance comparison. The wind direction field measured by the proposed method under a modified cosine model agrees well with that observed by the China-France Oceanography Satellite (CFOSAT).
(This article belongs to the Special Issue Feature Paper Special Issue on Ocean Remote Sensing - Part 2)
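
The disambiguation step lends itself to a brief illustration. The sketch below is not the authors' code: it assumes the ambiguous candidate directions and the wind direction interval have already been estimated, and simply keeps the candidate that falls inside the interval. The 156° and 240° candidates mirror the example in Figure 12; the interval bounds are made-up values.

```python
def resolve_ambiguity(candidates_deg, interval_deg):
    """Keep the candidate wind direction that falls inside the
    measured wind-direction interval (a minimal sketch; the paper's
    decision rules for multiple/no intersections are richer).

    candidates_deg : iterable of ambiguous direction solutions (degrees)
    interval_deg   : (lo, hi) interval from the radar-measured
                     direction statistics (degrees, lo <= hi)
    """
    lo, hi = interval_deg
    inside = [d for d in candidates_deg
              if lo <= d % 360.0 <= hi]   # wrap candidates to [0, 360)
    if len(inside) == 1:
        return inside[0]                  # unique intersection: resolved
    # multiple or no intersections would need the paper's extra rules
    return None

# The two ambiguous solutions of Figure 12; the interval is hypothetical.
print(resolve_ambiguity([156.0, 240.0], (120.0, 200.0)))  # -> 156.0
```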
Graphical abstract
Figure 1: Incident directions of different sea echoes with adjacent sea cells.
Figure 2: Diagram of a modified cosine model based on the relationship between the sailing platform, the sea echo, and the wind direction.
Figure 3: Relationship between the wind direction interval and the two curves relating the spreading parameter to the wind direction: (a) unique intersection in the wind direction interval; (b) multiple intersections in the wind direction interval; (c) no intersection in the wind direction interval; (d) disjoint between the wind direction interval and the curve of cell A.
Figure 4: Processing of the unambiguous wind direction measurement method.
Figure 5: The broadened Doppler spectra for different wind directions: (a) 0° and 180°; (b) 45° and 135°; (c) 90° and 270°; (d) 225° and 315°.
Figure 6: Relationship between the normalized amplitude differences and the wind directions: (a) ship speed of 5 m/s; (b) ship speed of 2.3 m/s.
Figure 7: Relationship between ship speed and the normalized amplitude differences of the boundary regions.
Figure 8: Relationship between noise and the normalized amplitude differences of the boundary regions.
Figure 9: Relationship between the normalized amplitude ratios of the broadened regions and the wind direction: (a) positive-to-negative broadened region; (b) negative-to-positive broadened region.
Figure 10: Relationship between the ship speed and the normalized amplitude ratios of the broadened regions.
Figure 11: Relationship between noise and the normalized amplitude ratios of the broadened regions.
Figure 12: Diagram of the wind direction relative to the radar beam: (a) estimation solution for the 156° wind direction; (b) estimation solution for the 240° wind direction.
Figure 13: Relationship between the Bragg peak ratio and the included angle of wind direction and radar beam: (a) solutions corresponding to the three wave directional spreading models; (b) solutions corresponding to different spreading parameters.
Figure 14: Histograms of wind direction measurement error: (a) the proposed method; (b) the multi-beam sampling method.
Figure 15: Wind direction error versus SNR.
Figure 16: Real-time ship speed.
Figure 17: Real-time course of the shipborne platform.
Figure 18: Wind direction estimation error versus the comprehensive factors.
Figure 19: Radar-measured Doppler spectrum for experiment I.
Figure 20: Statistical results of wind direction measurement from experiment I.
Figure 21: Radar-measured wind direction maps with the modified cosine model from experiment I: (a) the proposed method; (b) the multi-beam sampling method.
Figure 22: Histograms of the radar-measured wind field with the modified cosine model from experiment I: (a) the proposed method; (b) the multi-beam sampling method.
Figure 23: Radar-measured wind direction maps under two models from experiment I: (a) cosine model; (b) hyperbolic secant model.
Figure 24: Histograms of the radar-measured wind field with two models from experiment I: (a) cosine model; (b) hyperbolic secant model.
Figure 25: Radar-measured Doppler spectrum for experiment II.
Figure 26: Statistical results of wind direction measurement from experiment II.
Figure 27: Radar-measured wind direction maps with the modified cosine model from experiment II: (a) the proposed method; (b) the multi-beam sampling method.
Figure 28: Histograms of the radar-measured wind field with the modified cosine model from experiment II: (a) the proposed method; (b) the multi-beam sampling method.
Figure 29: Radar-measured wind direction maps with the two models from experiment II: (a) cosine model; (b) hyperbolic secant model.
Figure 30: Histograms of the radar-measured wind field under the two models from experiment II: (a) cosine model; (b) hyperbolic secant model.
Figure 31: Sailing diagram of the shipborne platform.
Figure 32: Radar-B-measured wind direction map under an equivalent dual-station model from experiment III.
Figure 33: Histograms of the radar-measured wind field under an equivalent dual-station model from experiment III.
Figure 34: Comparison of wind direction maps obtained by CFOSAT (black) and the radar (red and blue) from experiment I.
Figure 35: Scatterplots comparing wind direction between the radar (the proposed method using the modified cosine model) and CFOSAT from experiment I.
Figure 36: Scatterplots comparing wind direction between the radar (the multi-beam sampling method using the modified cosine model) and CFOSAT from experiment I.
Figure 37: Histograms of wind direction field measurement error between the radar and CFOSAT from experiment I.
Figure 38: Comparison of wind direction maps obtained by CFOSAT (black) and the radar using the three models from experiment I.
Figure 39: Scatterplots comparing wind direction between CFOSAT and the radar using the cosine model from experiment I.
Figure 40: Scatterplots comparing wind direction between CFOSAT and the radar using the hyperbolic secant model from experiment I.
Figure 41: Histograms of wind direction field measurement error between CFOSAT and the radar using the three models from experiment I.
Figure 42: Comparison of wind direction maps obtained by CFOSAT (black) and the radar (red and blue) from experiment II.
Figure 43: Scatterplots comparing wind direction between the radar (the proposed method using the modified cosine model) and CFOSAT from experiment II.
Figure 44: Scatterplots comparing wind direction between the radar (the multi-beam sampling method using the modified cosine model) and CFOSAT from experiment II.
Figure 45: Histograms of wind direction field measurement error between the radar and CFOSAT from experiment II.
Figure 46: Comparison of wind direction maps obtained by CFOSAT (black) and the radar using the three models from experiment II.
Figure 47: Scatterplots comparing wind direction between CFOSAT and the radar using the cosine model from experiment II.
Figure 48: Scatterplots comparing wind direction between CFOSAT and the radar using the hyperbolic secant model from experiment II.
Figure 49: Histograms of wind direction field measurement error between CFOSAT and the radar using the three models from experiment II.
Figure 50: Comparison of wind direction maps obtained by CFOSAT (black) and the radar (red and blue) from experiment III.
Figure 51: Scatterplots comparing wind direction between the radar using an equivalent dual-station model and CFOSAT from experiment III.
Figure 52: Histograms of wind direction field measurement error between the radar using the two methods and CFOSAT from experiment III.
29 pages, 12696 KiB  
Article
Landsat 8 and Sentinel-2 Fused Dataset for High Spatial-Temporal Resolution Monitoring of Farmland in China’s Diverse Latitudes
by Haiyang Zhang, Yao Zhang, Tingyao Gao, Shu Lan, Fanghui Tong and Minzan Li
Remote Sens. 2023, 15(11), 2951; https://doi.org/10.3390/rs15112951 - 5 Jun 2023
Cited by 3 | Viewed by 3441
Abstract
Crop growth and development exhibit high temporal heterogeneity, so it is crucial to capture the dynamic characteristics of crop growth using intensive time-series data. However, single satellites are limited by revisit cycles and weather conditions in providing dense time-series earth observations, and up until now no remote sensing fusion product offering high spatial-temporal resolution specifically for farmland monitoring has been proposed. Therefore, for the demands of farmland remote sensing monitoring, identifying quantitative conversion relationships between multiple sensors and providing high spatial-temporal resolution products is the first step that needs to be addressed. In this study, a fused Landsat 8 (L8) Operational Land Imager (OLI) and Sentinel-2 (S-2) Multispectral Instrument (MSI) data product for regional monitoring of farmland at high, mid, and low latitudes in China is proposed. Two image pairs for each study area, covering different years, were acquired from simultaneous transits of the L8 OLI and S-2 MSI sensors. The isolation forest (iForest) algorithm was then employed to remove anomalous pixels from the image pairs and eliminate the influence of anomalous data on the conversion relationships. Subsequently, the adjustment coefficients for multi-source sensors at mixed latitudes with high spatial resolution were obtained using ordinary least squares (OLS) regression. Finally, an L8-S-2 fused dataset based on the adjustment coefficients is proposed, which is suitable for farming areas at different latitudes in China. The results showed that the iForest algorithm could effectively improve the correlation between the corresponding spectral bands of the two sensors at a spatial resolution of 10 m. After the removal of anomalous pixels, excellent correlation and consistency were obtained in the three study areas, and the Pearson correlation coefficients between the corresponding spectral bands almost all exceeded 0.88. Furthermore, we mixed the six image pairs from the three latitudes to obtain the adjustment coefficients for integrating L8 and S-2 data at high spatial resolution. The significance and accuracy of the adjustment coefficients were thoroughly examined from three dimensions: qualitative analysis, quantitative analysis, and spatial heterogeneity assessment. The results were highly satisfactory, affirming the validity and precision of the adjustment coefficients. Finally, we applied the adjustment coefficients to crop monitoring at the three latitudes. The normalized difference vegetation index (NDVI) time-series curves drawn from the integrated dataset could accurately describe the cropping system and capture the intensity changes of crop growth within the high, middle, and low latitudes of China. This study provides valuable insights into enhancing the application of multi-source remote sensing satellite data for long-term, continuous quantitative inversion of surface parameters and is of great significance for crop remote sensing monitoring.
(This article belongs to the Special Issue Digital Farming with Remote Sensing)
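
The per-band harmonization step described in the abstract can be sketched in a few lines. The code below is an illustration, not the authors' implementation: it assumes co-located L8 and S-2 reflectance samples for one band, removes anomalous pixel pairs with scikit-learn's IsolationForest, and fits the OLS adjustment coefficients. The simulated reflectance values are stand-ins, not study data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def band_adjustment(l8_band, s2_band, contamination=0.05, seed=0):
    """Sketch of one band's harmonization: drop anomalous pixel pairs
    with an isolation forest, then fit OLS adjustment coefficients that
    map L8 OLI reflectance onto S-2 MSI reflectance.

    l8_band, s2_band : 1-D arrays of co-located surface reflectance.
    Returns (slope, intercept, pearson_r) after outlier removal.
    """
    X = np.column_stack([l8_band, s2_band])
    keep = IsolationForest(contamination=contamination,
                           random_state=seed).fit_predict(X) == 1
    x, y = l8_band[keep], s2_band[keep]
    slope, intercept = np.polyfit(x, y, deg=1)   # OLS line y = a*x + b
    r = np.corrcoef(x, y)[0, 1]                  # Pearson correlation
    return slope, intercept, r

# Toy usage with simulated red-band reflectance (values are made up):
rng = np.random.default_rng(0)
l8 = rng.uniform(0.02, 0.4, 2000)
s2 = 0.98 * l8 + 0.005 + rng.normal(0, 0.01, 2000)
print(band_adjustment(l8, s2))
```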
Figure 1: Location of the three study areas. The areas connected by the blue dotted line, the red dotted line, and the yellow dotted line are SA1, SA2, and SA3, respectively; the purple-filled pentagram indicates SA4.
Figure 2: SRF curves of L8 OLI (green solid lines), S-2A MSI (blue dashed lines), and S-2B MSI (red dashed lines). (a–g) denote the blue, green, red, NIR, NIR-8A, SWIR1, and SWIR2 bands, respectively. The SRF curves for L8 OLI were obtained from the metadata provided by the United States Geological Survey at https://landsat.usgs.gov/spectral-characteristics-viewer (accessed on 15 September 2022), while the SRF curves for S-2A/B MSI were obtained using metadata provided by the ESA at https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi/document-library/-/asset_publisher/Wk0TKajiISaR/content/sentinel-2a-spectral-responses (accessed on 15 September 2022).
Figure 3: RGB images of the seven image pairs from L8 OLI and S-2A/B MSI for the four study areas (in the background images, the RGB composites are red, green, and blue). (a) S-2 MSI image on 10 June 2020; (b) L8 OLI image on 10 June 2020; (c) S-2 MSI image on 26 April 2021; (d) L8 OLI image on 26 April 2021; (e) S-2 MSI image on 4 September 2020; (f) L8 OLI image on 4 September 2020; (g) S-2 MSI image on 2 May 2021; (h) L8 OLI image on 2 May 2021; (i) S-2 MSI image on 10 August 2020; (j) L8 OLI image on 10 August 2020; (k) S-2 MSI image on 3 December 2021; (l) L8 OLI image on 3 December 2021; (m) S-2 MSI image on 26 April 2021; (n) L8 OLI image on 26 April 2021; (o) S-2 image of the farmland scene at 10 m spatial resolution; (p) L8 image of the farmland scene at 30 m spatial resolution.
Figure 4: Process for the harmonized consistency analysis of L8 and S-2 sensor images.
Figure 5: Processing results of the SA1T1 image pair using the iForest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure 6: Processing results of the SA1T2 image pair using the iForest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure 7: Scatterplots for the corresponding spectral bands of the SA1T1 image pair at 10 m spatial resolution, processed using the iForest algorithm. (a–g) denote the scatterplots between the corresponding blue, green, red, NIR-8, NIR-8A, SWIR1, and SWIR2 bands of the S-2 and L8 sensors, respectively. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure 8: Scatterplots for the corresponding spectral bands of the SA1T1 image pair at 30 m spatial resolution, processed using the iForest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure 9: Scatterplots for the corresponding spectral bands of the SA1T2 image pair at 10 m spatial resolution, processed using the iForest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure 10: Scatterplots for the corresponding spectral bands of the SA1T2 image pair at 30 m spatial resolution, processed using the iForest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure 11: Corresponding spectral band relationships of L8 OLI and S-2 MSI after processing with the iForest algorithm. (a) 10 m spatial resolution; (b) 30 m spatial resolution.
Figure 12: Scatterplots for the corresponding spectral bands of the L8 OLI and S-2 MSI sensors at 10 m spatial resolution, processed using the iForest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure 13: NDVI time-series curves for three pixel sites using the integrated L8 and S-2 images. The red unfilled pentagon, blue unfilled circle, and green unfilled pentagram indicate the NDVI values of the S-2, adjusted L8, and original L8 images, respectively, and the purple lines represent the curves smoothed by the Savitzky-Golay smoother using the S-2 and adjusted L8 NDVI values. The pixel site locations of (a–c) are 134.1276°E, 47.7272°N (SA1); 114.3339°E, 35.1436°N (SA2); and 109.30203°E, 19.62935°N (SA3).
Figure 14: Comparison of the correlation coefficients between the L8 OLI and S-2 MSI sensors with and without the iForest algorithm for the corresponding spectral bands. (a) SA1T1 image pair; (b) SA1T2 image pair; (c) SA2T1 image pair; (d) SA2T2 image pair; (e) SA3T1 image pair; (f) SA3T2 image pair.
Figure A1: Processing results of the SA2T1 image pair using the isolation forest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure A2: Processing results of the SA2T2 image pair using the isolation forest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure A3: Processing results of the SA3T1 image pair using the isolation forest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure A4: Processing results of the SA3T2 image pair using the isolation forest algorithm. (a–g) have a blue background with 10 m spatial resolution, and (h–n) have a green background with 30 m spatial resolution.
Figure A5: Scatterplots for the corresponding spectral bands of the SA2T1 image pair at 10 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A6: Scatterplots for the corresponding spectral bands of the SA2T1 image pair at 30 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A7: Scatterplots for the corresponding spectral bands of the SA2T2 image pair at 10 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A8: Scatterplots for the corresponding spectral bands of the SA2T2 image pair at 30 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A9: Scatterplots for the corresponding spectral bands of the SA3T1 image pair at 10 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A10: Scatterplots for the corresponding spectral bands of the SA3T1 image pair at 30 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A11: Scatterplots for the corresponding spectral bands of the SA3T2 image pair at 10 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
Figure A12: Scatterplots for the corresponding spectral bands of the SA3T2 image pair at 30 m spatial resolution, processed using the isolation forest algorithm. (a–g) as in Figure 7. The black dashed line is the 1:1 line; the red solid line is the OLS regression.
36 pages, 7870 KiB  
Article
Trade-Off and Synergy Relationships and Spatial Bundle Analysis of Ecosystem Services in the Qilian Mountains
by Yipeng Wang, Hongyi Cheng, Naiang Wang, Chufang Huang, Kaili Zhang, Bin Qiao, Yuanyuan Wang and Penghui Wen
Remote Sens. 2023, 15(11), 2950; https://doi.org/10.3390/rs15112950 - 5 Jun 2023
Cited by 8 | Viewed by 2111
Abstract
Significant heterogeneity has been observed among different ecosystem services (ES). Understanding the trade-offs and synergies among ES and delineating ecological functional zones is crucial for formulating regional management policies that improve human well-being and sustainably develop and maintain ecosystems. In this study, we used the Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) and Carnegie–Ames–Stanford Approach (CASA) models to evaluate the spatial distribution patterns of nine ES (food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, water purification, habitat quality, and entertainment tourism) in the Qilian Mountains from 2000 to 2018. We also investigated the trade-offs and synergistic relationships among ES through Spearman correlation analysis, identified ES hotspots through exploratory spatial data analysis, and identified ES bundles (ESB) using K-means clustering. Our results revealed that water purification and habitat quality remained relatively stable, while food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, and entertainment tourism increased by 1038.83 Yuan·ha−1, 448.21 Yuan·ha−1, 55.45 mm, 7.80 mm, 0.60 tc·ha−1, 40.01 t·ha−1 and 4.82, respectively. High-value areas for water resource supply were mainly concentrated in the high-altitude mountainous area, whereas high-value areas for soil conservation were found in the western and eastern parts of the study area. The low-value areas of water purification were primarily located in the east, while the remaining six services were highly distributed in the east and less common in the west. Correlation analysis showed that water resource supply, water conservation, and soil conservation exhibited synergistic relationships in the Qilian Mountains, as did food supply, raw material supply, climate regulation, habitat quality, and entertainment tourism. However, there were trade-offs between food supply and both water purification and water resource supply, and habitat quality showed trade-offs with water resource supply, water conservation, and soil conservation. We identified four ESB. The food supply bundle consisted mainly of farmland ecosystems, while the windbreak and sand fixation and ecological coordination bundles were dominant in the Qilian Mountains. Notably, the area of the water conservation bundle increased significantly. Our comprehensive findings on ES and ESB can provide a theoretical foundation for the formulation of ecological management policies and the sustainable development of ecosystems in the Qilian Mountains.
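
The trade-off/synergy and bundling analyses follow standard recipes, sketched below for illustration (not the authors' code): pairwise Spearman correlations flag synergies (positive) and trade-offs (negative) among the nine services, and K-means with k = 4 groups grid cells into bundles. The service matrix here is random stand-in data, not study values.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# es is an (n_pixels, 9) matrix of the nine ecosystem services per
# grid cell; random stand-ins are used here.
rng = np.random.default_rng(1)
es = rng.random((500, 9))
names = ["FS", "RMS", "WS", "WC", "RC", "SC", "NL", "HQ", "ET"]

# Pairwise Spearman correlations: rho > 0 reads as synergy,
# rho < 0 as a trade-off (subject to the p < 0.05 test).
rho, pval = spearmanr(es)
for i in range(9):
    for j in range(i + 1, 9):
        if pval[i, j] < 0.05:
            kind = "synergy" if rho[i, j] > 0 else "trade-off"
            print(f"{names[i]}-{names[j]}: rho={rho[i, j]:+.2f} ({kind})")

# ES bundles: K-means on standardized services, k = 4 as in the paper.
bundles = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(
    StandardScaler().fit_transform(es))
print(np.bincount(bundles))  # pixels per bundle
```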
Figure 1: Overview of the study area.
Figure 2: Overview of the study area: (a) geographic location of the data validation basin; (b) distribution of NPP validation points.
Figure 3: Validation of model results: (a) water resource supply validation; (b) soil conservation validation; (c) water purification validation; (d) net primary productivity (NPP) validation. CMB, YLX, DCW, JTL, and ZMS represent the Changmabao, Yingluoxia, Dangchengwan, Jiutiaoling, and Zamusi hydrological stations, respectively.
Figure 4: Mean value of ecosystem services in the Qilian Mountains: (a–i) food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, nitrogen export, habitat quality, and entertainment tourism services, respectively.
Figure 5: Local Moran's I of ecosystem services in the Qilian Mountains: (a1–i3) represent food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, nitrogen export, habitat quality, and entertainment tourism services, respectively; the subscripts (1–3) represent the years 2000, 2010, and 2018, respectively.
Figure 6: Ecosystem trade-offs and synergies in the Qilian Mountains in 2000, 2010, and 2018. * indicates that the test for significance has been passed (p-value less than 0.05). FS, RMS, WS, WC, RC, SC, NL, HQ, and ET represent food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, nitrogen export, habitat quality, and entertainment tourism services, respectively. Red and blue indicate positive and negative correlations, respectively.
Figure 7: Spatial distribution and composition structure of the ecosystem service bundles (ESB) in the Qilian Mountains: (a–c) ESB in 2000, 2010, and 2018, respectively; (d–f) service composition structure of the different ESB in 2000, 2010, and 2018, respectively; (g) area of each service bundle in different years.
Figure A1: Spatial distribution of ES in the Qilian Mountains from 2000 to 2018: (a1–i3) represent food supply, raw material supply, water resource supply, water conservation, climate regulation, soil conservation, nitrogen export, habitat quality, and entertainment tourism services, respectively; the subscripts (1–3) represent the years 2000, 2010, and 2018, respectively.
19 pages, 6623 KiB  
Article
A Bio-Inspired MEMS Wake Detector for AUV Tracking and Coordinated Formation
by Qingyu Qiao, Xiangzheng Kong, Shufeng Wu, Guochang Liu, Guojun Zhang, Hua Yang, Wendong Zhang, Yuhua Yang, Licheng Jia, Changde He, Jiangong Cui and Renxin Wang
Remote Sens. 2023, 15(11), 2949; https://doi.org/10.3390/rs15112949 - 5 Jun 2023
Cited by 3 | Viewed by 1699
Abstract
AUV (Autonomous Underwater Vehicle) coordinated formation can expand the detection range, improve detection efficiency, and complete complex tasks, which requires each AUV to have the ability to track and localize. A wake detector provides a new technical approach for AUV cooperative formation warfare. At present, most existing artificial lateral-line detectors are designed for one-dimensional flow field applications and are difficult to use for wake detection of AUVs. Therefore, based on the pressure gradient sensing mechanism of the canal neuromasts, we apply Micro-Electro-Mechanical System (MEMS) technology to develop a lateral line-inspired MEMS wake detector. The sensing mechanism, design, and fabrication are demonstrated in detail. Experimental results show that the detector's sensitivity is 147 mV·(m/s)−1 and the detection threshold is 0.3 m/s. In addition, the vector test results verify that it has vector-detecting capacity. This wake detector can serve AUV wake detection and tracking, which will be promising in AUV positioning and coordinated formation.
(This article belongs to the Special Issue Advances on Autonomous Underwater Vehicles (AUV))
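
Assuming the reported calibration is linear over the working range (an assumption; the abstract reports only the sensitivity and the threshold), converting the amplified detector output to a wake flow speed is simple arithmetic:

```python
SENSITIVITY_MV_PER_MPS = 147.0   # reported detector sensitivity
THRESHOLD_MPS = 0.3              # reported detection threshold

def wake_speed_from_output(v_out_mv):
    """Invert the (assumed linear) calibration: flow speed in m/s from
    the amplified output voltage in mV. Returns None below threshold."""
    speed = v_out_mv / SENSITIVITY_MV_PER_MPS
    return speed if speed >= THRESHOLD_MPS else None

# 44.1 mV corresponds exactly to the 0.3 m/s threshold; 20 mV is below it.
for mv in (20.0, 44.1, 147.0):
    print(mv, "mV ->", wake_speed_from_output(mv), "m/s")
```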
Figure 1: Diagram of the wake detector applied to AUV coordinated formation.
Figure 2: Biomimetic principles of the microstructure. (a) The lateral line system. (b) The sensitive microstructure of the detector. (c) Bionic diversion shell. (d) Pressure gradient simulation.
Figure 3: Detector circuit. (a) Principle diagram of the detector circuit. (b) Schematic diagram of the resistance arrangement and beam structure.
Figure 4: Diagram of the force-bearing analysis of the single-side cantilever beam.
Figure 5: Parameterized scan results of the model. (a) Correlation between the maximum stress on the beam and parameters h, r. (b) Correlation between the resonant frequency and parameters h, r. (c) Correlation between the maximum stress on the beam and parameters t, b. (d) Correlation between the resonant frequency and parameters t, b.
Figure 6: Stress distribution on the crossbeam.
Figure 7: Modal simulation of the detector structure. (a) Simulation results in air; (b) simulation results in water.
Figure 8: The bionic diversion cap of the lateral line canal.
Figure 9: Wake environmental model.
Figure 10: Scanning results of stress. The influence of (a) n and m on stress; (b) s on stress.
Figure 11: Results of the finite element analysis. (a) Flow velocity distribution around and inside the encapsulation cap when vx = 0 m/s; (b) velocity distribution around and inside the encapsulation cap and displacement of the cilium crossbeam when vx = 0.5 m/s; (c) stress distribution curves on the crossbeams at different flow rates; (d) displacement curve in (b).
Figure 12: Simulation curve of the wake detector output sensitivity after 300× amplification.
Figure 13: General fabrication of the sensitive unit. (a) Making the piezoresistive and heavily doped regions. (b) Sputtering and patterning the metal leads. (c) Forming the crossbeam. (d) Releasing the crossbeam structure.
Figure 14: Encapsulation of the wake detector.
Figure 15: Photo of the detector test platform.
Figure 16: Diagram of detector positions. (a) Detector position during testing. (b) Detector position when measuring the X-channel signal. (c) Detector position when measuring the Y-channel signal.
Figure 17: Fitting curve of the detector sensitivity.
Figure 18: (a,b) Histograms of the detector output signal in the vector property test; (c,d) distribution diagrams of the vector property test points.
Figure 19: Photo of the detector test platform.
Figure 20: Detection range scatter of the detector at different AUV speeds.
24 pages, 6832 KiB  
Article
Developing Spatial and Temporal Continuous Fractional Vegetation Cover Based on Landsat and Sentinel-2 Data with a Deep Learning Approach
by Zihao Wang, Dan-Xia Song, Tao He, Jun Lu, Caiqun Wang and Dantong Zhong
Remote Sens. 2023, 15(11), 2948; https://doi.org/10.3390/rs15112948 - 5 Jun 2023
Cited by 6 | Viewed by 2386
Abstract
Fractional vegetation cover (FVC) plays a significant role in indicating changes in ecosystems and is useful for simulating growth processes and modeling land surfaces. Fine-resolution FVC products represent detailed vegetation cover information within fine grids. However, the long revisit cycles of satellites with fine-resolution sensors and cloud contamination have resulted in poor spatial and temporal continuity. In this study, we propose to derive a spatially and temporally continuous FVC dataset by comparing multiple methods, including a data-fusion method (STARFM), curve-fitting reconstruction (S-G filtering), and deep learning prediction (Bi-LSTM). By combining Landsat and Sentinel-2 data, the integrated FVC was used to construct the initial input of fine-resolution FVC with gaps. The results showed that the FVC of the gaps was estimated and the time-series FVC was reconstructed. The Bi-LSTM method was the most effective and achieved the highest accuracy (R2 = 0.857), followed by the data-fusion method (R2 = 0.709) and the curve-fitting method (R2 = 0.705), and the optimal time step was 3. The inclusion of relevant variables in the Bi-LSTM model, including LAI, albedo, and FAPAR derived from coarse-resolution products, further reduced the RMSE from 5.022 to 2.797. By applying the optimized Bi-LSTM model to Hubei Province, a time-series 30 m FVC dataset characterized by spatial and temporal continuity was generated. For the major vegetation types in Hubei (e.g., evergreen and deciduous forests, grass, and cropland), the seasonal trends as well as the spatial details were captured by the reconstructed 30 m FVC. We conclude that the proposed method is applicable to reconstructing time-series FVC over a large spatial scale, and the produced fine-resolution dataset can support the data needs of many Earth system science studies.
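
The multivariate Bi-LSTM described above can be sketched as follows. This is an illustrative stand-in, not the authors' network: it assumes windows of 3 time steps (the reported optimal step) with 4 features (FVC plus coarse-resolution LAI, albedo, and FAPAR) predicting the next FVC value; the layer size and training data are placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

TIME_STEPS, N_FEATURES = 3, 4  # window of 3 dates, 4 input variables
model = tf.keras.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FEATURES)),
    layers.Bidirectional(layers.LSTM(32)),   # reads the window both ways
    layers.Dense(1, activation="sigmoid"),   # FVC is bounded in [0, 1]
])
model.compile(optimizer="adam", loss="mse")

# Random stand-in samples: (batch, time steps, features) -> next FVC.
rng = np.random.default_rng(0)
X = rng.random((256, TIME_STEPS, N_FEATURES)).astype("float32")
y = rng.random((256, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```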
Figure 1: The study areas and the major land cover types of Hubei province.
Figure 2: Framework of the spatio-temporal reconstruction of 30 m FVC using different methods.
Figure 3: Structure of the LSTM/Bi-LSTM network.
Figure 4: Comparison between the Landsat and Sentinel-2 FVC acquired on the same date over (a) forest, (b) grassland, and (c) cropland.
Figure 5: Comparisons between the real FVC and the predicted FVC on 24 December 2017: (a) real FVC with gaps; (b) FVC predicted by the STARFM method; (c) FVC predicted by the S-G filter; (d) FVC predicted by LSTM.
Figure 6: Scatter plot comparisons of the real FVC and the predicted FVC on 24 December 2017: (left) real FVC versus STARFM-predicted FVC; (middle) real FVC versus S-G-filter-predicted FVC; (right) real FVC versus LSTM-predicted FVC.
Figure 7: Time-series curves from the real data and from the other time reconstruction methods.
Figure 8: Comparison of the validation results of the LSTM model (blue) and the Bi-LSTM model (red).
Figure 9: Model validation results with different time steps (the blue line is the result for three steps; the red line is the result for one step).
Figure 10: Phik (φk) correlation coefficients for different GLASS products and FVC.
Figure 11: Image pairs of 30 m FVC before and after reconstruction using the optimized Bi-LSTM method with multiple variables.
Figure 12: Comparison of the reconstructed FVC in 2017 using the optimized multivariate Bi-LSTM model against three reference FVC products for different vegetation types, including grassland, cropland, and forest. The 30 m FVC has been aggregated to 500 m for the purpose of comparison.
Figure 13: The reconstructed 30 m/16-day FVC of Hubei province in 2017 using the optimized Bi-LSTM method. The year and day-of-year are labeled below each mosaic in the format YEARDOY.
Figure 14: Comparison between the reconstructed FVC in January and coarse-resolution products for the major vegetation types in Hubei. For illustration purposes, all FVC pixels have been aggregated to 1 km resolution.
Figure 15: Comparison between the reconstructed FVC in July and coarse-resolution products for the major vegetation types in Hubei. For illustration purposes, all FVC pixels have been aggregated to 1 km resolution.
46 pages, 57708 KiB  
Article
Impacts of Climate Change and Human Activities on Plant Species α-Diversity across the Tibetan Grasslands
by Shaolin Huang and Gang Fu
Remote Sens. 2023, 15(11), 2947; https://doi.org/10.3390/rs15112947 - 5 Jun 2023
Cited by 21 | Viewed by 2651
Abstract
Plant species α-diversity is closely correlated with ecosystem structures and functions. However, whether climate change and human activities will reduce plant species α-diversity remains controversial. In this study, potential (i.e., potential species richness: SRp, Shannonp, Simpsonp and Pieloup) and actual plant species α-diversity (i.e., actual species richness: SRa, Shannona, Simpsona and Pieloua) during 2000–2020 were quantified based on random forests in grasslands on the Tibetan Plateau. Overall, climate change had positive influences on potential plant species α-diversity across all the grassland systems. However, more than one-third of the area showed decreasing trends in potential plant species α-diversity. Climate change increased the SRp at rates of 0.0060 and 0.0025 yr−1 in alpine steppes and alpine meadows, respectively. Temperature change predominated in the variations of Shannonp and Simpsonp, and radiation change predominated in the variations of SRp and Pieloup. Geographic position and local temperature, precipitation, and radiation conditions regulated the impacts of climate change on potential species α-diversity. On average, human activities caused a 1% loss of plant species but elevated the Shannon, Simpson and Pielou indices by 26%, 4% and 5%, respectively. Positive effects of human activities were found over 46.51%, 81.08%, 61.26% and 61.10% of the area for plant species richness, Shannon, Simpson and Pielou, respectively. Less than 48% of the area showed increasing trends in the impacts of human activities on plant species α-diversity. Human activities increased plant species richness by 2% in alpine meadows but decreased it by 1% in alpine steppes. Accordingly, the impacts of climate change and human activities on plant species α-diversity were not always negative and varied with space and grassland type. The study warns that climate change and human activities may not cause as much species loss as expected, and cautions that the impacts of radiation change on plant species α-diversity should be put at least on the same level as the impacts of climate warming and precipitation change.
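
The four α-diversity metrics have standard definitions, which the short sketch below computes from a plot's species abundances. One common convention is used for Simpson (the Gini–Simpson form, 1 − Σp²); the paper does not specify its exact variant, so treat that choice as an assumption.

```python
import numpy as np

def alpha_diversity(abundances):
    """Standard alpha-diversity metrics from one plot's species
    abundances: richness SR, Shannon H' = -sum p_i ln p_i,
    Simpson (Gini-Simpson) D = 1 - sum p_i^2, and Pielou J = H'/ln SR.
    """
    a = np.asarray(abundances, dtype=float)
    a = a[a > 0]
    p = a / a.sum()                      # relative abundances
    richness = p.size                    # SR: number of species
    shannon = -np.sum(p * np.log(p))
    simpson = 1.0 - np.sum(p ** 2)
    # Pielou is undefined for a single species; 0.0 is a convention here.
    pielou = shannon / np.log(richness) if richness > 1 else 0.0
    return richness, shannon, simpson, pielou

print(alpha_diversity([12, 7, 3, 1]))  # toy abundances, not study data
```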
Figure 1: Spatial patterns of the change rate in (a) potential species richness (slope_SRp), (b) actual species richness (slope_SRa), (c) potential Shannon (slope_Shannonp), (d) actual Shannon (slope_Shannona), (e) potential Simpson (slope_Simpsonp), (f) actual Simpson (slope_Simpsona), (g) potential Pielou (slope_Pieloup), and (h) actual Pielou (slope_Pieloua).
Figure 2: Spatial patterns of the significance of the change rate in (a) potential species richness (slope_SRp), (b) actual species richness (slope_SRa), (c) potential Shannon (slope_Shannonp), (d) actual Shannon (slope_Shannona), (e) potential Simpson (slope_Simpsonp), (f) actual Simpson (slope_Simpsona), (g) potential Pielou (slope_Pieloup), and (h) actual Pielou (slope_Pieloua).
Figure 3: Relative contributions of geographic position (longitude, latitude, and elevation), mean climate conditions (mean annual temperature, precipitation, and radiation in 2000–2020), and climate change (change rates of annual temperature, precipitation, and radiation in 2000–2020) to the change rates of (a) potential species richness (slope_SRp), (b) actual species richness (slope_SRa), (c) potential Shannon (slope_Shannonp), (d) actual Shannon (slope_Shannona), (e) potential Simpson (slope_Simpsonp), (f) actual Simpson (slope_Simpsona), (g) potential Pielou (slope_Pieloup), and (h) actual Pielou (slope_Pieloua).
Figure 4: Spatial patterns of (a) the ratio of mean actual to potential species richness (RatioSR), (b) the ratio of mean actual to potential Shannon (RatioShannon), (c) the ratio of mean actual to potential Simpson (RatioSimpson), and (d) the ratio of mean actual to potential Pielou (RatioPielou).
Figure 5: Relative contributions of geographic position (longitude, latitude, and elevation), mean climate conditions + NDVImax (mean annual temperature, precipitation, and radiation, plus mean maximum growing-season normalized difference vegetation index in 2000–2020), and climate change + slope_NDVImax (change rates of annual temperature, precipitation, radiation, and maximum growing-season normalized difference vegetation index in 2000–2020) to (a) the mean effect of human activities on plant species richness (RatioSR), (b) the change rate of that effect (slope_RatioSR), (c) the mean effect of human activities on plant Shannon (RatioShannon), (d) its change rate (slope_RatioShannon), (e) the mean effect of human activities on plant Simpson (RatioSimpson), (f) its change rate (slope_RatioSimpson), (g) the mean effect of human activities on plant Pielou (RatioPielou), and (h) its change rate (slope_RatioPielou).
Figure 6: Spatial patterns of (a) the change rate in the ratio of actual to potential species richness (slope_RatioSR), (b) the change rate in the ratio of actual to potential Shannon (slope_RatioShannon), (c) the change rate in the ratio of actual to potential Simpson (slope_RatioSimpson), (d) the change rate in the ratio of actual to potential Pielou (slope_RatioPielou), and (e–h) the significances of slope_RatioSR, slope_RatioShannon, slope_RatioSimpson, and slope_RatioPielou, respectively.
Figure 7: Spatial distribution of the changes in the effects of human activities on (a) species richness, (b) Shannon, (c) Simpson, and (d) Pielou.
Figure A1: Spatial patterns of the rate of change in (a) annual temperature (slope_AT), (b) annual precipitation (slope_AP), (c) annual radiation (slope_ARad), and (d) maximum normalized difference vegetation index (slope_NDVImax).
Figure A2: Climate change scenes for the alpine grassland regions on the Tibetan Plateau in 2000–2020.
Figure A3: Correlations between (a) the rate of change in potential species richness (slope_SRp) and longitude, (b) the rate of change in actual species richness (slope_SRa) and longitude, (c) slope_SRp and latitude, (d) slope_SRa and latitude, (e) slope_SRp and elevation, and (f) slope_SRa and elevation.
Figure A4: Correlations between (a) the rate of change in potential Shannon (slope_Shannonp) and longitude, (b) the rate of change in actual Shannon (slope_Shannona) and longitude, (c) slope_Shannonp and latitude, (d) slope_Shannona and latitude, (e) slope_Shannonp and elevation, and (f) slope_Shannona and elevation.
Figure A5: Correlations between (a) the rate of change in potential Simpson (slope_Simpsonp) and longitude, (b) the rate of change in actual Simpson (slope_Simpsona) and longitude, (c) slope_Simpsonp and latitude, (d) slope_Simpsona and latitude, (e) slope_Simpsonp and elevation, and (f) slope_Simpsona and elevation.
Figure A6: Correlations between (a) the rate of change in potential Pielou (slope_Pieloup) and longitude, (b) the rate of change in actual Pielou (slope_Pieloua) and longitude, (c) slope_Pieloup and latitude, (d) slope_Pieloua and latitude, (e) slope_Pieloup and elevation, and (f) slope_Pieloua and elevation.
Figure A7: Correlations between (a) slope_SRp and mean annual temperature (MAT), (b) slope_SRa and MAT, (c) slope_SRp and mean annual precipitation (MAP), (d) slope_SRa and MAP, (e) slope_SRp and mean annual radiation (MARad), and (f) slope_SRa and MARad.
Figure A8: Correlations between (a) slope_Shannonp and mean annual temperature (MAT), (b) slope_Shannona and MAT, (c) slope_Shannonp and mean annual precipitation (MAP), (d) slope_Shannona and MAP, (e) slope_Shannonp and mean annual radiation (MARad), and (f) slope_Shannona and MARad.
Figure A9: Correlations between (a) slope_Simpsonp and mean annual temperature (MAT), (b) slope_Simpsona and MAT, (c) slope_Simpsonp and mean annual precipitation (MAP), (d) slope_Simpsona and MAP, (e) slope_Simpsonp and mean annual radiation (MARad), and (f) slope_Simpsona and MARad.
Figure A10: Correlations between (a) slope_Pieloup and mean annual temperature (MAT), (b) slope_Pieloua and MAT, (c) slope_Pieloup and mean annual precipitation (MAP), (d) slope_Pieloua and MAP, (e) slope_Pieloup and mean annual radiation (MARad), and (f) slope_Pieloua and MARad.
Figure A11: Correlations between (a) slope_SRp and the rate of change in annual temperature (ΔAT), (b) slope_SRa and ΔAT, (c) slope_SRp and the rate of change in annual precipitation (ΔAP), (d) slope_SRa and ΔAP, (e) slope_SRp and the rate of change in annual radiation (ΔARad), and (f) slope_SRa and ΔARad.
Figure A12: Correlations between (a) slope_Shannonp and the rate of change in annual temperature (ΔAT), (b) slope_Shannona and ΔAT, (c) slope_Shannonp and the rate of change in annual precipitation (ΔAP), (d) slope_Shannona and ΔAP, (e) slope_Shannonp and the rate of change in annual radiation (ΔARad), and (f) slope_Shannona and ΔARad.
Figure A13: Correlations between (a) slope_Simpsonp and the rate of change in annual temperature (ΔAT), (b) slope_Simpsona and ΔAT, (c) slope_Simpsonp and the rate of change in annual precipitation (ΔAP), (d) slope_Simpsona and ΔAP, (e) slope_Simpsonp and the rate of change in annual radiation (ΔARad), and (f) slope_Simpsona and ΔARad.
Figure A14: Correlations between (a) slope_Pieloup and the rate of change in annual temperature (ΔAT), (b) slope_Pieloua and ΔAT, (c) slope_Pieloup and the rate of change in annual precipitation (ΔAP), (d) slope_Pieloua and ΔAP, (e) slope_Pieloup and the rate of change in annual radiation (ΔARad), and (f) slope_Pieloua and ΔARad.
Figure A15: Relative contributions of (a) longitude, latitude, and elevation to potential species richness (SRp); (b) mean annual temperature (MAT), mean annual precipitation (MAP), and mean annual radiation (MARad) to SRp; (c) the change rates of annual temperature (slope_AT), annual precipitation (slope_AP), and annual radiation (slope_ARad) to SRp; (d–f) the same three groups of predictors to potential Shannon (Shannonp); (g–i) the same to potential Simpson (Simpsonp); and (j–l) the same to potential Pielou (Pieloup).
Figure A16: Relative contributions of (a) longitude, latitude, and elevation to actual species richness (SRa); (b) MAT, MAP, and MARad to SRa; (c) slope_AT, slope_AP, and slope_ARad to SRa; (d–f) the same three groups of predictors to actual Shannon (Shannona); (g–i) the same to actual Simpson (Simpsona); and (j–l) the same to actual Pielou (Pieloua).
Full article ">Figure A17
<p>Correlations (<b>a</b>) between the rate of change in the actual species richness (slope_SR<sub>a</sub>) and mean value of maximum normalized difference vegetation index (MNDVI<sub>max</sub>), (<b>b</b>) between slope_SR<sub>a</sub> and the rate of change in the maximum normalized difference vegetation index (slope_NDVI<sub>max</sub>), (<b>c</b>) between the rate of change in the actual Shannon (slope_Shannon<sub>a</sub>) and MNDVI<sub>max</sub>, (<b>d</b>) between slope_Shannon<sub>a</sub> and slope_NDVI<sub>max</sub>, (<b>e</b>) between the rate of change in the actual Simpson (slope_Simpson<sub>a</sub>) and MNDVI<sub>max</sub>, (<b>f</b>) between slope_Simpson<sub>a</sub> and slope_NDVI<sub>max</sub>, (<b>g</b>) between the rate of change in the actual Pielou (slope_Pielou<sub>a</sub>) and MNDVI<sub>max</sub> and (<b>h</b>) between slope_Pielou<sub>a</sub> and slope_NDVI<sub>max</sub>.</p>
Full article ">Figure A18
<p>Correlations (<b>a</b>) between the ratio of actual to potential species richness (Ratio<sub>SR</sub>) and longitude, (<b>b</b>) between the Ratio<sub>SR</sub> and latitude, (<b>c</b>) between the Ratio<sub>SR</sub> and elevation, (<b>d</b>) between the ratio of actual to potential Shannon (Ratio<sub>Shannon</sub>) and longitude, (<b>e</b>) between the Ratio<sub>Shannon</sub> and latitude, (<b>f</b>) between the Ratio<sub>Shannon</sub> and elevation, (<b>g</b>) between the ratio of actual to potential Simpson (Ratio<sub>Simpson</sub>) and longitude, (<b>h</b>) between the Ratio<sub>Simpson</sub> and latitude, (<b>i</b>) between the Ratio<sub>Simpson</sub> and elevation, (<b>j</b>) between the ratio of actual to potential Pielou (Ratio<sub>Pielou</sub>) and longitude, (<b>k</b>) between the Ratio<sub>Pielou</sub> and latitude and (<b>l</b>) between the Ratio<sub>Pielou</sub> and elevation.</p>
Full article ">Figure A19
<p>Correlations (<b>a</b>) between the ratio of actual to potential species richness (Ratio<sub>SR</sub>) and mean annual temperature (MAT), (<b>b</b>) between the Ratio<sub>SR</sub> and mean annual precipitation (MAP), (<b>c</b>) between the Ratio<sub>SR</sub> and mean annual radiation (MARad), (<b>d</b>) between the Ratio<sub>SR</sub> and mean maximum normalized difference vegetation index (MNDVI<sub>max</sub>), (<b>e</b>) between the ratio of actual to potential Shannon (Ratio<sub>Shannon</sub>) and MAT, (<b>f</b>) between the Ratio<sub>Shannon</sub> and MAP, (<b>g</b>) between the Ratio<sub>Shannon</sub> and MARad, (<b>h</b>) between the Ratio<sub>Shannon</sub> and MNDVI<sub>max</sub>, (<b>i</b>) between the ratio of actual to potential Simpson (Ratio<sub>Simpson</sub>) and MAT, (<b>j</b>) between the Ratio<sub>Simpson</sub> and MAP, (<b>k</b>) between the Ratio<sub>Simpson</sub> and MARad, (<b>l</b>) the Ratio<sub>Simpson</sub> and MNDVI<sub>max</sub>, (<b>m</b>) between the ratio of actual to potential Pielou (Ratio<sub>Pielou</sub>) and MAT, (<b>n</b>) between the Ratio<sub>Pielou</sub> and MAP, (<b>o</b>) between the Ratio<sub>Pielou</sub> and MARad and (<b>p</b>) between the Ratio<sub>Pielou</sub> and MNDVI<sub>max</sub>.</p>
Full article ">Figure A20
<p>Correlations (<b>a</b>) between the ratio of actual to potential species richness (Ratio<sub>SR</sub>) and the change rate of annual temperature (ΔAT), (<b>b</b>) between the Ratio<sub>SR</sub> and the change rate of annual precipitation (ΔAP), (<b>c</b>) between the Ratio<sub>SR</sub> and the change rate of annual radiation (ΔARad), (<b>d</b>) between the Ratio<sub>SR</sub> and the change rate of maximum normalized difference vegetation index (ΔNDVI<sub>max</sub>), (<b>e</b>) between the ratio of actual to potential Shannon (Ratio<sub>Shannon</sub>) and ΔAT, (<b>f</b>) between the Ratio<sub>Shannon</sub> and ΔAP, (<b>g</b>) between the Ratio<sub>Shannon</sub> and ΔARad, (<b>h</b>) between the Ratio<sub>Shannon</sub> and ΔNDVI<sub>max</sub>, (<b>i</b>) between the ratio of actual to potential Simpson (Ratio<sub>Simpson</sub>) and ΔAT, (<b>j</b>) between the Ratio<sub>Simpson</sub> and ΔAP, (<b>k</b>) between the Ratio<sub>Simpson</sub> and ΔARad, (<b>l</b>) the Ratio<sub>Simpson</sub> and ΔNDVI<sub>max</sub>, (<b>m</b>) between the ratio of actual to potential Pielou (Ratio<sub>Pielou</sub>) and ΔAT, (<b>n</b>) between the Ratio<sub>Pielou</sub> and ΔAP, (<b>o</b>) between the Ratio<sub>Pielou</sub> and ΔARad and (<b>p</b>) between the Ratio<sub>Pielou</sub> and ΔNDVI<sub>max</sub>.</p>
Full article ">Figure A21
<p>Relative contribution of (<b>a</b>) longitude, latitude and elevation to the ratio of actual to potential species richness (Ratio<sub>SR</sub>), (<b>b</b>) mean maximum normalized difference vegetation index (MNDVI<sub>max</sub>), mean annual temperature (MAT), mean annual precipitation (MAP) and mean annual radiation (MARad) to Ratio<sub>SR</sub>, (<b>c</b>) the change rate for maximum normalized difference vegetation index (slope_NDVI<sub>max</sub>), annual temperature (slope_AT), annual precipitation (slope_AP) and annual radiation (slope_ARad) to Ratio<sub>SR</sub>, (<b>d</b>) longitude, latitude and elevation to the ratio of actual to potential Shannon (Ratio<sub>Shannon</sub>), (<b>e</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to Ratio<sub>Shannon</sub>, (<b>f</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to Ratio<sub>Shannon</sub>, (<b>g</b>) longitude, latitude and elevation to the ratio of actual to potential Simpson (Ratio<sub>Simpson</sub>), (<b>h</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to Ratio<sub>Simpson</sub>, (<b>i</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to Ratio<sub>Simpson</sub>, (<b>j</b>) longitude, latitude and elevation to the ratio of actual to potential Pielou (Ratio<sub>Pielou</sub>), (<b>k</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to Ratio<sub>Pielou</sub> and (<b>l</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to Ratio<sub>Pielou</sub>.</p>
Full article ">Figure A22
<p>Correlations (<b>a</b>) between the change rate for the ratio of actual to potential species richness (slope_Ratio<sub>SR</sub>) and longitude, (<b>b</b>) between the slope_Ratio<sub>SR</sub> and latitude, (<b>c</b>) between the slope_Ratio<sub>SR</sub> and elevation, (<b>d</b>) between the change rate for the ratio of actual to potential Shannon (slope_Ratio<sub>Shannon</sub>) and longitude, (<b>e</b>) between the slope_Ratio<sub>Shannon</sub> and latitude, (<b>f</b>) between the slope_Ratio<sub>Shannon</sub> and elevation, (<b>g</b>) between the change rate for the ratio of actual to potential Simpson (slope_Ratio<sub>Simpson</sub>) and longitude, (<b>h</b>) between the slope_Ratio<sub>Simpson</sub> and latitude, (<b>i</b>) between the slope_Ratio<sub>Simpson</sub> and elevation, (<b>j</b>) between the change rate for the ratio of actual to potential Pielou (slope_Ratio<sub>Pielou</sub>) and longitude, (<b>k</b>) between the slope_Ratio<sub>Pielou</sub> and latitude and (<b>l</b>) between the slope_Ratio<sub>Pielou</sub> and elevation.</p>
Full article ">Figure A23
<p>Correlations (<b>a</b>) between the change rate for the ratio of actual to potential species richness (slope_Ratio<sub>SR</sub>) and mean annual temperature (MAT), (<b>b</b>) between the slope_Ratio<sub>SR</sub> and mean annual precipitation (MAP), (<b>c</b>) between the slope_Ratio<sub>SR</sub> and mean annual radiation (MARad), (<b>d</b>) between the slope_Ratio<sub>SR</sub> and mean maximum normalized difference vegetation index (MNDVI<sub>max</sub>), (<b>e</b>) between the change rate for the ratio of actual to potential Shannon (slope_Ratio<sub>Shannon</sub>) and MAT, (<b>f</b>) between the slope_Ratio<sub>Shannon</sub> and MAP, (<b>g</b>) between the slope_Ratio<sub>Shannon</sub> and MARad, (<b>h</b>) between the slope_Ratio<sub>Shannon</sub> and MNDVI<sub>max</sub>, (<b>i</b>) between the change rate for the ratio of actual to potential Simpson (slope_Ratio<sub>Simpson</sub>) and MAT, (<b>j</b>) between the slope_Ratio<sub>Simpson</sub> and MAP, (<b>k</b>) between the slope_Ratio<sub>Simpson</sub> and MARad, (<b>l</b>) the slope_Ratio<sub>Simpson</sub> and MNDVI<sub>max</sub>, (<b>m</b>) between the change rate for the ratio of actual to potential Pielou (slope_Ratio<sub>Pielou</sub>) and MAT, (<b>n</b>) between the slope_Ratio<sub>Pielou</sub> and MAP, (<b>o</b>) between the slope_Ratio<sub>Pielou</sub> and MARad and (<b>p</b>) between the slope_Ratio<sub>Pielou</sub> and MNDVI<sub>max</sub>.</p>
Full article ">Figure A24
<p>Correlations (<b>a</b>) between the change rate for the ratio of actual to potential species richness (slope_Ratio<sub>SR</sub>) and the change rate of annual temperature (ΔAT), (<b>b</b>) between the slope_Ratio<sub>SR</sub> and the change rate of annual precipitation (ΔAP), (<b>c</b>) between the slope_Ratio<sub>SR</sub> and the change rate of annual radiation (ΔARad), (<b>d</b>) between the slope_Ratio<sub>SR</sub> and the change rate of maximum normalized difference vegetation index (ΔNDVI<sub>max</sub>), (<b>e</b>) between the change rate for the ratio of actual to potential Shannon (slope_Ratio<sub>Shannon</sub>) and ΔAT, (<b>f</b>) between the slope_Ratio<sub>Shannon</sub> and ΔAP, (<b>g</b>) between the slope_Ratio<sub>Shannon</sub> and ΔARad, (<b>h</b>) between the slope_Ratio<sub>Shannon</sub> and ΔNDVI<sub>max</sub>, (<b>i</b>) between the change rate for the ratio of actual to potential Simpson (slope_Ratio<sub>Simpson</sub>) and ΔAT, (<b>j</b>) between the slope_Ratio<sub>Simpson</sub> and ΔAP, (<b>k</b>) between the slope_Ratio<sub>Simpson</sub> and ΔARad, (<b>l</b>) the slope_Ratio<sub>Simpson</sub> and ΔNDVI<sub>max</sub>, (<b>m</b>) between the change rate for the ratio of actual to potential Pielou (slope_Ratio<sub>Pielou</sub>) and ΔAT, (<b>n</b>) between the slope_Ratio<sub>Pielou</sub> and ΔAP, (<b>o</b>) between the slope_Ratio<sub>Pielou</sub> and ΔARad and (<b>p</b>) between the slope_Ratio<sub>Pielou</sub> and ΔNDVI<sub>max</sub>.</p>
Full article ">Figure A25
<p>Relative contribution of (<b>a</b>) longitude, latitude and elevation to the change rate for the ratio of actual to potential species richness (slope_Ratio<sub>SR</sub>), (<b>b</b>) mean maximum normalized difference vegetation index (MNDVI<sub>max</sub>), mean annual temperature (MAT), mean annual precipitation (MAP) and mean annual radiation (MARad) to slope_Ratio<sub>SR</sub>, (<b>c</b>) the change rate for maximum normalized difference vegetation index (slope_NDVI<sub>max</sub>), annual temperature (slope_AT), annual precipitation (slope_AP) and annual radiation (slope_ARad) to slope_Ratio<sub>SR</sub>, (<b>d</b>) longitude, latitude and elevation to the change rate for the ratio of actual to potential Shannon (slope_Ratio<sub>Shannon</sub>), (<b>e</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to slope_Ratio<sub>Shannon</sub>, (<b>f</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to slope_Ratio<sub>Shannon</sub>, (<b>g</b>) longitude, latitude and elevation to the change rate for the ratio of actual to potential Simpson (slope_Ratio<sub>Simpson</sub>), (<b>h</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to slope_Ratio<sub>Simpson</sub>, (<b>i</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to slope_Ratio<sub>Simpson</sub>, (<b>j</b>) longitude, latitude and elevation to the change rate for the ratio of actual to potential Pielou (slope_Ratio<sub>Pielou</sub>), (<b>k</b>) MNDVI<sub>max</sub>, MAT, MAP and MARad to slope_Ratio<sub>Pielou</sub> and (<b>l</b>) slope_NDVI<sub>max</sub>, slope_AT, slope_AP and slope_ARad to slope_Ratio<sub>Pielou</sub>.</p>
Full article ">
26 pages, 20017 KiB  
Article
An Extraction Method for Large Gradient Three-Dimensional Displacements of Mining Areas Using Single-Track InSAR, Boltzmann Function, and Subsidence Characteristics
by Kegui Jiang, Keming Yang, Yanhai Zhang, Yaxing Li, Tingting Li and Xiangtong Zhao
Remote Sens. 2023, 15(11), 2946; https://doi.org/10.3390/rs15112946 - 5 Jun 2023
Cited by 5 | Viewed by 1463
Abstract
This paper presents an extraction method for large gradient three-dimensional (3-D) displacements of mining areas using single-track interferometric synthetic aperture radar (InSAR), the Boltzmann function, and subsidence characteristics. It is mainly aimed at overcoming two limitations of surface deformation monitoring in mining areas with single-track InSAR technology. The first is that the rapid, large-gradient deformation of the mine surface usually leads to image decoherence, which makes it difficult to obtain correct deformation information. The second is that the surface deformation monitored by InSAR is only the one-dimensional line-of-sight (LOS) displacement, which makes it difficult to reflect the real 3-D displacements of the surface. First, the Boltzmann function prediction model (BPM) is introduced to assist InSAR phase unwrapping, so that the missing large-gradient deformation phase of InSAR is recovered. Then, the subsidence characteristics of mining horizontal (or near-horizontal) coal seams are used as prior knowledge for a theoretical derivation, and a 3-D displacement extraction model for coal seam mining with single-track InSAR is constructed. The feasibility of the method is verified by simulating, as InSAR observations, LOS displacements with random noise and the underestimation caused by large-gradient deformation. The results show that the root mean square errors (RMSE) of the 3-D displacements on the observation line calculated by the proposed method are 21.5 mm, 19.0 mm, and 32.9 mm for the three displacement components, respectively. Based on single-track Sentinel-1 images, the method was applied to the extraction of surface 3-D displacements in the Huaibei coal mine, and the experimental results show that the extracted 3-D displacements are in good agreement with those measured by the surface observation station. The proposed method can adapt to limited InSAR acquisitions and complex monitoring environments. Full article
(This article belongs to the Section AI Remote Sensing)
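The core geometric relationship behind the method (see Figure 2 below) is that the LOS observation is the projection of the 3-D displacement vector onto the radar's unit look vector; the paper then inverts this relation using mining subsidence characteristics as priors. The sketch below shows only the forward projection; the angle convention, function name, and sample values are our own illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def los_from_3d(d_east, d_north, d_up, incidence_deg, look_azimuth_deg):
    """Project a 3-D surface displacement onto the radar line of sight (LOS).

    Convention (assumed for this sketch): the unit look vector points from
    the ground to the satellite, and look_azimuth_deg is the azimuth of its
    horizontal projection, clockwise from north. A positive result means
    motion toward the satellite (range shortening).
    """
    theta = np.radians(incidence_deg)
    phi = np.radians(look_azimuth_deg)
    look = np.array([
        np.sin(theta) * np.sin(phi),  # east component of the look vector
        np.sin(theta) * np.cos(phi),  # north component
        np.cos(theta),                # up component
    ])
    return np.array([d_east, d_north, d_up]) @ look

# Hypothetical example: 50 mm of subsidence plus 10 mm of eastward motion,
# viewed at a 39 degree incidence angle (roughly Sentinel-1-like).
print(f"{los_from_3d(0.010, 0.0, -0.050, 39.0, 78.0) * 1000:.1f} mm")
```

Recovering all three components from this single equation is under-determined, which is exactly why the paper constrains the inversion with the subsidence characteristics of horizontal coal seams.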
Graphical abstract
Figure 1. Principle for predicting the deformation of surface point A(x,y) caused by underground coal mining.
Figure 2. Geometric projection relationship between the LOS displacement (DLOS) monitored by InSAR and the real 3-D displacements of the surface.
Figure 3. Vector relationship of the horizontal displacements of the symmetrical point on the ground.
Figure 4. Technology roadmap of the construction method.
Figure 5. Simulated displacement fields in the mining area: (a) Vertical displacement field; (b) North displacement field; (c) East displacement field; (d) LOS displacement field; (e) Observation noise; and (f) LOS displacement field with noise and limited maximum observation.
Figure 6. (a) Wrapped phase of the residual phase; (b) Unwrapped phase of the residual phase.
Figure 7. Extraction of 3-D displacement fields and accuracy evaluation by the construction method: (a,b) Extraction of the vertical displacement field and its absolute error on the observation line L1–L2; (c,d) Extraction of the strike displacement field and its absolute error on the observation line L3–L4; (e,f) Extraction of the inclination displacement field and its absolute error on the observation line L5–L6.
Figure 8. Geographic location of the Huaibei mining area (marked by the red circle in the lower left panel). The blue rectangle denotes the footprints of the collected Sentinel-1A images. The black rectangle denotes the area of Shuanglong Mining.
Figure 9. Cumulative differential interferometric phase during the period from 12 March 2021 to 10 August 2022, and layout of the working face and surface observation points. The black rectangle is the mining working face. The dots are the surface displacement observation stations arranged above the working face.
Figure 10. Unwrapping results of the interferometric phase in the study area: (a) Unwrapping results using the minimum cost flow method; (b) Unwrapping results using the proposed method.
Figure 11. 3-D displacements extracted by the proposed method: (a) Solved LOS displacement in Section 4.2; (b) Extracted vertical displacement; (c) Extracted northward displacement; (d) Extracted eastward displacement. The black rectangle is the mining working face.
Figure 12. Comparison between the vertical displacement solved using the proposed method and the results of the surface observation points.
Figure 13. Comparison between the horizontal displacement solved using the proposed method and the results of the surface observation points.
Figure 14. RMSE of the 3-D deformation calculated by each scheme over 100 experiments. The colored dots represent the RMSE calculated by the corresponding scheme, and the dotted line represents the average RMSE over the 100 experiments: (a) RMSE of the resolved vertical displacement; (b) RMSE of the resolved strike displacement; (c) RMSE of the resolved inclination displacement.
Figure 15. (a) The black rectangle represents the working face, and the yellow dots represent the simulated positions of leveling and 3-D laser measurements; (b–d) Absolute error of the calculated vertical displacement, strike displacement, and inclination displacement on line L1–L2.
20 pages, 106782 KiB  
Article
Improved Generalized IHS Based on Total Variation for Pansharpening
by Xuefeng Zhang, Xiaobing Dai, Xuemin Zhang, Yuchen Hu, Yingdong Kang and Guang Jin
Remote Sens. 2023, 15(11), 2945; https://doi.org/10.3390/rs15112945 - 5 Jun 2023
Cited by 3 | Viewed by 1692
Abstract
Pansharpening refers to the fusion of a panchromatic (PAN) and a multispectral (MS) image aimed at generating a high-quality outcome over the same area. This particular image fusion problem has been widely studied, but until recently, it has been challenging to balance the spatial and spectral fidelity in fused images. Spectral distortion is widespread in component substitution-based approaches due to the variation in the intensity distribution of spatial components. We addressed this issue with total variation optimization, proposing a novel GIHS-TV framework for pansharpening. The framework draws its high spatial fidelity from the GIHS scheme and implements it with a simpler variational expression. An improved L1-TV constraint on the new spatial–spectral information was introduced into the GIHS-TV framework, along with its fast implementation. The objective function was solved by the Iteratively Reweighted Norm (IRN) method. The experimental results on the “PAirMax” dataset clearly indicated that GIHS-TV can effectively reduce the spectral distortion in the process of component substitution. Our method achieved excellent results in both visual effects and evaluation metrics. Full article
(This article belongs to the Special Issue Machine Vision and Advanced Image Processing in Remote Sensing II)
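As background to the framework, component substitution in the generalized IHS (GIHS) family reduces to band-wise detail injection: each upsampled MS band receives the difference between the PAN image and an intensity component synthesized from the MS bands. Below is a minimal sketch of that baseline step only; the TV-regularized refinement and the IRN solver from the paper are not reproduced, and the uniform band weights are an assumption.

```python
import numpy as np

def gihs_fusion(ms_up, pan, weights=None):
    """Baseline GIHS detail injection (no TV refinement).

    ms_up  : (H, W, B) multispectral image upsampled to the PAN grid.
    pan    : (H, W) panchromatic image.
    weights: per-band weights of the intensity component; uniform if None
             (an assumption -- in practice they are often estimated by
             regressing PAN on the MS bands).
    """
    bands = ms_up.shape[2]
    if weights is None:
        weights = np.full(bands, 1.0 / bands)
    intensity = np.tensordot(ms_up, weights, axes=([2], [0]))  # (H, W)
    detail = pan - intensity           # spatial detail missing from the MS
    return ms_up + detail[..., None]   # inject the same detail in each band

# Hypothetical 4-band example on random data.
rng = np.random.default_rng(0)
fused = gihs_fusion(rng.random((64, 64, 4)), rng.random((64, 64)))
print(fused.shape)  # (64, 64, 4)
```

The spectral distortion the paper targets enters through the intensity component: when its distribution differs from that of the PAN image, the injected detail biases every band, which motivates constraining the fusion with a TV term.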
Graphical abstract
Figure 1. The spatial and spectral distortion.
Figure 2. The GIHS-TV framework.
Figure 3. The original and new spatial components.
Figure 4. The residuals with details in the IHS and GIHS-TV methods.
Figure 5. The evaluation metrics of fusion with different values of λ.
Figure 6. The visual effect of fusion with different values of λ.
Figure 7. Fusion results in the “GE_Lond_Urb” scene.
Figure 8. Fusion results in the “S7_NewY_Mix” scene.
Figure 9. Fusion results in the “W2_Miam_Mix” scene.
Figure 10. The local fused results in the “Pl_Hous_Urb” scene.
Figure 11. The local fused results in the “W4_Mexi_Urb” scene.
Figure 12. Evaluation metrics of fusion on the “PAirMax” dataset.
Figure 13. The visual effect of fusion at reduced resolution. The first row shows the original MS image; the other rows show the images fused by the IHS and GIHS-TV methods, respectively.
34 pages, 62588 KiB  
Technical Note
Use of ICEsat-2 and Sentinel-2 Open Data for the Derivation of Bathymetry in Shallow Waters: Case Studies in Sardinia and in the Venice Lagoon
by Massimo Bernardis, Roberto Nardini, Lorenza Apicella, Maurizio Demarte, Matteo Guideri, Bianca Federici, Alfonso Quarati and Monica De Martino
Remote Sens. 2023, 15(11), 2944; https://doi.org/10.3390/rs15112944 - 5 Jun 2023
Cited by 4 | Viewed by 2829
Abstract
Despite the high accuracy of conventional acoustic hydrographic systems, measurement of the seabed along coastal belts is still a complex problem due to the limitations arising from shallow water. Like traditional echo sounders, airborne LiDAR suffers from high application costs, low efficiency, and limited coverage. On the other hand, remote sensing offers a practical alternative for the extraction of depth information, providing fast, reproducible, low-cost mapping over large areas to optimize and minimize fieldwork. Satellite-derived bathymetry (SDB) techniques have proven to be a promising alternative to supply shallow-water bathymetry data. However, this methodology is still limited since it usually requires in situ observations as control points for multispectral imagery calibration and bathymetric validation. In this context, this paper illustrates the potential for bathymetric derivation conducted entirely from open satellite data, without relying on in situ data collected using traditional methods. The SDB was performed using multispectral images from Sentinel-2 and bathymetric data collected by NASA’s ICESat-2 on two areas of relevant interest. To assess the reliability of the outcomes, the bathymetries extracted from ICESat-2 and derived from Sentinel-2 were compared with the updated and reliable data from the BathyDataBase of the Italian Hydrographic Institute. Full article
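For context, band-ratio SDB models of the Stumpf type relate depth linearly to the log-ratio of two reflectance bands, and here the ICESat-2 bathymetric points take over the role usually played by in situ control points when fitting the two coefficients. A minimal calibration sketch follows; the synthetic reflectances, the scaling constant n, and the fake "truth" depths are illustrative assumptions.

```python
import numpy as np

def band_ratio(blue, green, n=1000.0):
    """Stumpf-style log-ratio predictor; n keeps both logarithms positive."""
    return np.log(n * blue) / np.log(n * green)

# Hypothetical calibration set: reflectances sampled at ICESat-2 point
# locations plus their laser-derived depths (synthetic numbers).
rng = np.random.default_rng(1)
blue = rng.uniform(0.02, 0.10, 200)
green = rng.uniform(0.02, 0.10, 200)
x = band_ratio(blue, green)
depth_is2 = 25.0 * (x - 1.0) + rng.normal(0.0, 0.3, 200)  # fake truth

# Linear calibration depth = m1 * x + c0, fitted by least squares.
m1, c0 = np.polyfit(x, depth_is2, 1)
sdb = m1 * x + c0

rmse = np.sqrt(np.mean((sdb - depth_is2) ** 2))
print(f"m1 = {m1:.2f}, c0 = {c0:.2f}, RMSE = {rmse:.2f} m")
```

In the paper the calibration is performed separately per seabed class and depth range (see the figures below); the single global fit here only shows the mechanics.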
Graphical abstract
Figure 1. Representation of the two areas of operation (AOOs): (a) Gulf of Congianus, (b) Venice Lagoon and the open sea outside it. (Picture generated using Google Earth, 5 February 2023).
Figure 2. Representation of S-2 images selected and cropped according to the areas of interest: (a) Gulf of Congianus and (b) Venice Lagoon.
Figure 3. Projection of soundings contained in the trajectories of ICESat-2 beams selected and cleaned for the study areas: (a) Gulf of Congianus and (b) Venice Lagoon.
Figure 4. MBES bathymetric surveys of (a) Venice Lagoon and (b) Gulf of Congianus. To the left of each image is the graduated scale reflecting the reference depth.
Figure 5. (a) Geographical distribution of tide gauges inside the Venice Lagoon and (b) the geographical division into two macro areas inside and outside the lagoon, with further subdivisions inside them.
Figure 6. Example of a profile obtained from the first phase of ICESat-2 data processing, with the photon dataset referenced to the ellipsoid WGS84 (blue) and the photons shifted and referenced to the geoid EGM-2008 (green).
Figure 7. The measured height (blue dots) relative to the geoid was transformed into the water column depth (fuchsia dots) relative to a zero level (yellow line), which represents the shift of the water surface (red line) as a reference level. Zoom-in on the 16–17 m along-track distance of the profile in Figure 6.
Figure 8. Bathymetry points (in purple) corrected by refraction (green dots) and their tide level correction (red dots) relative to a specific date and time, using local tide gauge measurements. The ICESat-2 data were acquired during a lower tide than the reference time, causing an upward shift.
Figure 9. Example of tide correction applied to the ICESat-2 and MBES data to reference them to the time of S-2 image acquisition.
Figure 10. The profile obtained from the ATL03_20200114165545_02900602_005_01_gt2r beam at the end of the Data automatic download and preparation phase and after referencing to the geoid EGM-2008. The diagram shows the profile of the emerged land, the waterline, and the seabed, which remains well distinct from the noise points, especially in the more superficial layers. The profile section in rectangle A is zoomed in Figure 11.
Figure 11. The bathymetric points (green dots) extracted automatically, and their resulting depths after the refraction and tide level correction (red dots) relative to a specific date and time, using local tide gauge measurements and local temperature and salinity data (Section A from Figure 10).
Figure 12. Result of the seabed classification process in the Gulf of Congianus. Seabed classes are sand (yellow), rock (grey), and vegetation and other seabed cover (green).
Figure 13. Calibration results, showing the relationship between the Blue/Green ratio (in green) or the Blue/Red ratio (in red) and the depth of the ICESat-2 calibration point set. (a) Sand, 0–5 m; (b) Rocks, 0–5 m; (c) Sand, 5–10 m; (d) Rocks, 5–10 m. Gulf of Congianus.
Figure 14. SDB validation: error scatter plots in different depth ranges, showing the relationship between the depth of the ICESat-2 bathymetric points and the estimated SDB. N is the number of ICESat-2 points used. (a) Sand, 0–5 m; (b) Rocks, 0–5 m; (c) Sand, 5–10 m; (d) Rocks, 5–10 m. Gulf of Congianus.
Figure 15. SDBs obtained down to 5 m depth using the Blue/Red ratio for sand and rocks and using the Blue/Green ratio in the 5–10 m range for sand. No bathymetries were derived for rocky areas in the 5–10 m range.
Figure 16. BIAS distribution for the Gulf of Congianus.
Figure 17. Profile obtained from beam ATL03_20200101053635_00840606_005_01_gt3l at the end of phase 1, Data automatic download and preparation, with a sub-section, and after referencing to the geoid EGM-2008. The main elements of this image are the profile of the hinterland of Venice, partially below the current sea level, on the left, the Venice Lagoon in the middle, and the open sea with the seabed on the right. The lagoon shows a complex structure of water layers. The profile sections in rectangles A and B are zoomed in Figure 18 and Figure 19.
Figure 18. (a) Automatically extracted bathymetric points (red dots). The highlighted points (red dots) are not representative of the seabed. (b) Manually extracted bathymetric points (green dots) and their resulting depths after refraction and tidal level correction (red dots) relative to a specific date and time, using local tide gauge measurements and local temperature and salinity data. Section A of Figure 17 relating to the lagoon area.
Figure 19. Automatically extracted bathymetric points (green dots) and their resulting depths after refraction and tide level correction (red dots) relative to a specific date and time, using local tide gauge measurements and local temperature and salinity data. Section B from the outer sea in Figure 17.
Figure 20. Result of the seabed classification process in the Venice Lagoon. Classes of the seabed are identified as sand in yellow areas and marine vegetation in green.
Figure 21. Results of the calibration phase showing the relationship between the Blue/Green ratio (green color) or the Blue/Red ratio (red color) and the depth of the ICESat-2 calibration point sets. (a) Lagoon; (b) Open sea, 0–5 m; (c) Open sea, 5–10 m. Venice Lagoon.
Figure 22. SDB validation: error scatter plots of the relationship between the depth of the ICESat-2 bathymetric points and the estimated depth (SDB). The black line is the regression line. (a) Lagoon; (b) Open sea, 0–5 m; (c) Open sea, 5–10 m. Venice Lagoon.
Figure 23. Sentinel-derived bathymetry (SDB) for the Venice Lagoon and the open sea area in front of Venice.
Figure 24. BIAS distribution for the areas inside and outside the Venice Lagoon.
Figure 25. Overlapping of ICESat-2 bathymetric points (yellow points) and MBES bathymetric data (the colored area from the range red–blue) and zoom-in of the coastal area with more ICESat-2 points. Sardinia.
Figure 26. Histogram of the differences in the depth values measured by ICESat-2 and MBES in the Gulf of Congianus.
Figure 27. ICESat-2 (red points) and MBES (area colored from red to blue) data. Zoom-in of two characteristic areas. Venice Lagoon.
Figure 28. Bar charts of the differences in the depth values measured with MBES surveys. The red dashed line represents the ±0.5 m range, the Total Vertical Uncertainty for the Order 1 Standard of the IHO-S44 Publication. (a) MBES-SDB, Gulf of Congianus, Sand; (b) MBES-SDB, Gulf of Congianus, Rocks; (c) MBES-SDB, Venice, Lagoon; (d) MBES-SDB, Venice, Sea.
Figure 29. SDB results in the Gulf of Congianus area after a 5 m depth filter (applied only to rocky seabed areas) for the range of acceptability of vertical uncertainty of IHO standards. On the left are the S-2 images and on the right are the SDB results. (a) Cugnana Gulf, the shallower part of the case study area; (b) a jagged coastal area from the Marinella and Aranci Gulfs.
Figure 30. SDB results from the Venice Lagoon area, after a 3.5 m depth filter is applied for the range of acceptability of vertical uncertainty of IHO standards.
Figure 31. SDB results from the Venice Lagoon area, after a 3.5 m depth filter is applied for the range of acceptability of vertical uncertainty of IHO standards. On the left are the S-2 images, and on the right are the SDB results: (a) the northern part of the Venice Lagoon, (b) the lagoon area near Venice Town, (c) the lagoon area near Malamocco, (d) the lagoon area near Chioggia Town.
18 pages, 18394 KiB  
Article
Predictive Mapping of Mediterranean Seagrasses-Exploring the Influence of Seafloor Light and Wave Energy on Their Fine-Scale Spatial Variability
by Elias Fakiris, Vasileios Giannakopoulos, Georgios Leftheriotis, Athanassios Dimas and George Papatheodorou
Remote Sens. 2023, 15(11), 2943; https://doi.org/10.3390/rs15112943 - 5 Jun 2023
Cited by 1 | Viewed by 2809
Abstract
Seagrasses are flowering plants, adapted to marine environments, that are highly diverse in the Mediterranean Sea and provide a variety of ecosystem services. It is commonly recognized that light availability sets the lower limit of seagrass bathymetric distribution, while the upper limit depends on the level of bottom disturbance by currents and waves. In this work, the detailed distribution of seagrass, obtained through geoacoustic habitat mapping and optical ground truthing, is correlated to wave energy and light on the seafloor of the Marine Protected Area of Laganas Bay, Zakynthos Island, Greece, where the seagrasses Posidonia oceanica and Cymodocea nodosa form extensive meadows. The mean wave energy on the seafloor was estimated through wave propagation modeling, and the photosynthetically active radiation through open-access satellite-derived light parameters, reduced to the seafloor using the detailed acquired bathymetry. A significant correlation of seagrass distribution with wave energy and light emerged, allowing fine-scale predictive seagrass mapping to be performed using a random forest classifier. The predicted distributions exhibited >80% overall accuracy for P. oceanica and >90% for C. nodosa, indicating that fine-scale seagrass predictive mapping in the Mediterranean can be performed robustly from bottom wave energy and light, especially when detailed bathymetric data exist to allow for accurate estimations. Full article
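Two modeling steps from the abstract are easy to make concrete: reducing satellite-derived PAR to the seafloor through a Beer–Lambert-type attenuation over the measured bathymetry, and feeding the resulting grid, together with depth and the wave-energy variables, to a random forest classifier. A minimal sketch with synthetic data; the attenuation coefficient Kd, the class coding, and all sample values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def par_at_seabed(par_surface, depth_m, kd=0.08):
    """Beer-Lambert-type attenuation of surface PAR to depth (Kd assumed)."""
    return par_surface * np.exp(-kd * depth_m)

# Hypothetical training grid: depth, wave-induced currents (Uwi), bottom
# orbital velocities (Ubr), and PAR reduced to the seabed (PARz).
rng = np.random.default_rng(2)
n = 1000
depth = rng.uniform(0.0, 40.0, n)
uwi = rng.uniform(0.0, 0.05, n)
ubr = rng.uniform(0.0, 0.8, n)
parz = par_at_seabed(45.0, depth)
X = np.column_stack([depth, uwi, ubr, parz])
y = rng.integers(0, 3, n)  # 0: other, 1: P. oceanica, 2: C. nodosa (fake labels)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(clf.predict_proba(X[:3]))  # per-class probabilities, as mapped in Fig. 7
```

The four predictor names mirror the environmental variables shown in Figure 3 below; on real data, the per-class probabilities drive the predicted distribution and probability maps.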
Figure 1. Study area map (National Marine Park of Zakynthos—NMPZ) overlaid by the acoustic habitat map and the towed camera transects with corresponding seafloor image examples.
Figure 2. The computational domain with the bathymetry (a) and the triangular discretization (b) used for the wave propagation numerical simulations. Open boundaries are marked with a red dashed line, while the limits of the mapped area (NMPZ) are marked with a black one.
Figure 3. Spatial representation of the four environmental parameters considered in this work: (a) Depth, (b) Wave-induced currents (Uwi), (c) Bottom orbital velocities (Ubr) and (d) PAR at seabed (PARz), superimposed on the P. oceanica (linear pattern) and C. nodosa (dotted pattern) distribution boundaries. Sub-images I, II, III refer to areas where P. oceanica boundaries remarkably follow local Ubr anomalies, while IV refers to a wedge-shaped area with sand ripples formed by wave currents.
Figure 4. PCA biplot as applied to the environmental variables considered in this work. PC scores are represented as dots with colors corresponding to bottom classes (dark green: P. oceanica, light green: C. nodosa and light grey: Other) and green lines to the variables’ loadings vectors (principal axes).
Figure 5. Violin plots of Depth (a), Uwi (b), Ubr (c) and PARz (d), for seagrass (P. oceanica or C. nodosa) and non-seagrass (other) seafloors. P. oceanica is further analyzed in its two sub-types, namely “patchy” and “continuous”.
Figure 6. Correspondence analysis between the seafloor classes (P. oceanica, C. nodosa and other) and the four environmental variables considered in this study. Variables were converted into ordinal ones, with Depth categorized into 5 m interval classes and Uwi, Ubr and PARz into 0.005 m/s, 0.1 m/s and 2 mol·photon/m²/day classes, respectively.
Figure 7. Comparison of the original and predicted classification maps as well as the probability distributions of P. oceanica and C. nodosa seagrasses in the NMPZ, as generated by the random forest classifier, trained either with a 20% split subset of the data or with ones along the TUC. On the top right, a table with the model validation metrics is provided, sorted by seagrass species and training set.
22 pages, 8069 KiB  
Article
Tree Species Classification in UAV Remote Sensing Images Based on Super-Resolution Reconstruction and Deep Learning
by Yingkang Huang, Xiaorong Wen, Yuanyun Gao, Yanli Zhang and Guozhong Lin
Remote Sens. 2023, 15(11), 2942; https://doi.org/10.3390/rs15112942 - 5 Jun 2023
Cited by 8 | Viewed by 3418
Abstract
We studied the use of self-attention mechanism networks (SAN) and convolutional neural networks (CNNs) for forest tree species classification using unmanned aerial vehicle (UAV) remote sensing imagery in Dongtai Forest Farm, Jiangsu Province, China. We trained and validated representative CNN models, such as ResNet and ConvNeXt, as well as SAN models incorporating Transformer architectures such as Swin Transformer and Vision Transformer (ViT). Our goal was to compare and evaluate the performance and accuracy of these networks when used in parallel. Due to various factors, such as noise, motion blur, and atmospheric scattering, the quality of low-altitude aerial images may be compromised, resulting in indistinct tree crown edges and deficient texture. To address these issues, we adopted Real-ESRGAN technology for image super-resolution reconstruction. Our results showed that super-resolution reconstruction of the image dataset improved classification accuracy for both the CNN and Transformer models. The final classification accuracies, validated by ResNet, ConvNeXt, ViT, and Swin Transformer, were 96.71%, 98.70%, 97.88%, and 98.59%, respectively, with corresponding improvements of 1.39%, 1.53%, 0.47%, and 1.18%. Our study highlights the potential benefits of Transformer and CNN models for forest tree species classification and the importance of addressing image quality degradation in low-altitude aerial images. Full article
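All four classifiers are standard backbones fine-tuned on the four crown classes (Bamboo, Ginkgo, Metasequoia, Poplar), with the Real-ESRGAN reconstruction applied to the image dataset beforehand. A minimal PyTorch sketch of adapting a pretrained ResNet-50 head is shown below; the learning rate, batch shape, and training-loop details are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Bamboo, Ginkgo, Metasequoia, Poplar

# Pretrained backbone with the classification head replaced.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed LR

# One illustrative training step on a dummy batch of 224x224 crown chips;
# real training would loop over the (super-resolved) canopy dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

The same fine-tuning recipe applies to the ConvNeXt, ViT, and Swin backbones, which is what makes the CNN-versus-Transformer comparison in the paper a controlled one.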
Graphical abstract
Figure 1. Study area: Dongtai Forest Farm. (a–c) UAV-RGB images of the main tree species for the three experimental areas, respectively.
Figure 2. Sample tree species: (a) Bamboo; (b) Ginkgo; (c) Metasequoia; (d) Poplar.
Figure 3. Schematic diagram of canopy sample data augmentation.
Figure 4. Real-ESRGAN second-order degradation model.
Figure 5. ESRGAN architecture. We used a pixel-unshuffle operation to diminish the spatial dimensions and re-arrange information to the channel dimension for scale factors of x2.
Figure 6. Comparison of original canopy images and super-resolution reconstructed canopy images.
Figure 7. Residual block structure.
Figure 8. A depiction of the ResNet-50 network architecture. ‘/2’ means stride is set to 2.
Figure 9. ConvNeXt-T Network Architecture.
Figure 10. ViT-B Network Architecture.
Figure 11. Swin-T Network Architecture.
Figure 12. Two Successive Swin Transformer Blocks.
Figure 13. (a–d) are plots of the accuracy and loss rates of the ResNet-50, ConvNeXt-T, ViT-B, and Swin-T models trained and validated on the original dataset, respectively.
Figure 14. (a–d) are the confusion matrix plots for the ResNet-50, ConvNeXt-T, ViT-B, and Swin-T models validated on the original dataset, respectively.
Figure 15. (a–d) depict the accuracy and loss rate of ResNet-50, ConvNeXt-T, ViT-B, and Swin-T models trained and validated on the super-resolution reconstructed dataset, respectively.
Figure 16. (a–d) depict the confusion matrix plots of ResNet-50, ConvNeXt-T, ViT-B, and Swin-T models validated on the super-resolution reconstructed dataset, respectively.
Figure 17. Tree species distribution map of the Dongtai Forestry Sample Site.
Figure 18. Validation accuracy obtained by training the model using the original dataset.
Figure 19. Comparison of the validation accuracy obtained by the model using the original dataset and the Real-ESRGAN processed dataset for training.
44 pages, 7087 KiB  
Review
Industry- and Academic-Based Trends in Pavement Roughness Inspection Technologies over the Past Five Decades: A Critical Review
by Ali Fares and Tarek Zayed
Remote Sens. 2023, 15(11), 2941; https://doi.org/10.3390/rs15112941 - 5 Jun 2023
Cited by 10 | Viewed by 2470
Abstract
Roughness is widely used as a primary measure of pavement condition. It is also the key indicator of the riding quality and serviceability of roads. The high demand for roughness data has bolstered the evolution of roughness measurement techniques. This study systematically investigated the various trends in pavement roughness measurement techniques within the industry and research community in the past five decades. In this study, the Scopus and TRID databases were utilized. In industry, it was revealed that laser inertial profilers prevailed over response-type methods that were popular until the 1990s. Three-dimensional triangulation is increasingly used in the automated systems developed and used by major vendors in the USA, Canada, and Australia. Among the research community, a boom of research focusing on roughness measurement has been evident in the past few years. The increasing interest in exploring new measurement methods has been fueled by crowdsourcing, the effort to develop cheaper techniques, and the growing demand for collecting roughness data by new industries. The use of crowdsourcing tools, unmanned aerial vehicles (UAVs), and synthetic aperture radar (SAR) images is expected to receive increasing attention from the research community. However, the use of 3D systems is likely to continue gaining momentum in the industry. Full article
(This article belongs to the Special Issue Road Detection, Monitoring and Maintenance Using Remotely Sensed Data)
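The common currency behind most of the surveyed technologies is the International Roughness Index (IRI), defined by simulating the "Golden Car" quarter-car model (Figure 1 below) over a measured profile at 80 km/h and accumulating the suspension stroke per unit distance. A compact sketch using the standard normalized Golden Car parameters; the synthetic profile and the simple integrator (with no settling transient handling) are our own assumptions.

```python
import numpy as np

# Golden Car parameters (normalized by the sprung mass), per the IRI spec.
K1, K2, C, MU = 653.0, 63.3, 6.0, 0.15
V = 80.0 / 3.6  # simulation speed, m/s

def iri(profile, dx):
    """Approximate IRI (m/km) of an elevation profile sampled every dx metres."""
    dt = dx / V
    zs = zu = zsd = zud = 0.0  # sprung/unsprung positions and velocities
    stroke = 0.0
    for y in profile:
        # Semi-implicit Euler on the coupled quarter-car ODEs; a crude
        # integrator that is adequate for a sketch.
        zsdd = K2 * (zu - zs) + C * (zud - zsd)
        zudd = (K1 * (y - zu) - K2 * (zu - zs) - C * (zud - zsd)) / MU
        zsd += zsdd * dt
        zud += zudd * dt
        zs += zsd * dt
        zu += zud * dt
        stroke += abs(zsd - zud) * dt  # accumulated suspension rate
    return 1000.0 * stroke / (len(profile) * dx)

# Hypothetical 200 m profile sampled at the standard 0.25 m interval.
rng = np.random.default_rng(3)
profile = np.cumsum(rng.normal(0.0, 0.002, 800))
print(f"IRI = {iri(profile, 0.25):.2f} m/km")
```

Both the response-type devices and the laser inertial profilers discussed below ultimately feed this same index, which is why the choice of measurement technology matters less for the definition than for the fidelity of the input profile.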
Figure 1. Quarter car model [24].
Figure 2. Research methodology.
Figure 3. Number of surveyed agencies in the seven syntheses used.
Figure 4. Periodic and cumulative number of publications.
Figure 5. Number of publications for the top affiliated countries, considering the corresponding author’s country.
Figure 6. Word cloud of the most frequently occurring author keywords.
Figure 7. Popularity of roughness data collection.
Figure 8. Popularity of roughness data collection technologies in industry practice.
Figure 9. Roughness evaluation using direct measurement equipment type [53].
Figure 10. Roughness evaluation using response-type methods (accelerometer-based) [54].
Figure 11. Numbers of roughness measurement equipment pieces owned by surveyed agencies, as reported in the 1986 synthesis [25].
Figure 12. Numbers of roughness measurement equipment pieces owned by surveyed agencies, as reported in the 1994 synthesis [27].
Figure 13. Popularity of major roughness data collection technologies in the research community.
Figure 14. Roughness measurement approaches used in articles published in study period IV.
Figure 15. Anticipated future trends in industry practice for pavement roughness measurement.
Figure 16. Anticipated future trends among the research community regarding pavement roughness measurement.
14 pages, 4444 KiB  
Technical Note
Assessing Transferability of Remote Sensing Pasture Estimates Using Multiple Machine Learning Algorithms and Evaluation Structures
by Hunter D. Smith, Jose C. B. Dubeux, Alina Zare and Chris H. Wilson
Remote Sens. 2023, 15(11), 2940; https://doi.org/10.3390/rs15112940 - 5 Jun 2023
Cited by 3 | Viewed by 1715
Abstract
Both the vastness of pasturelands and the value they contain—e.g., food security, ecosystem services—have resulted in increased scientific and industry efforts to remotely monitor them via satellite imagery and machine learning (ML). However, the transferability of these models is uncertain, as modelers commonly train and test on site-specific or homogenized—i.e., randomly partitioned—datasets and choose complex ML algorithms with increased potential to overfit a limited dataset. In this study, we evaluated the accuracy and transferability of remote sensing pasture models, using multiple ML algorithms and evaluation structures. Specifically, we predicted pasture above-ground biomass and nitrogen concentration from Sentinel-2 imagery. The implemented ML algorithms include principal components regression (PCR), partial least squares regression (PLSR), least absolute shrinkage and selection operator (LASSO), random forest (RF), support vector machine regression (SVR), and a gradient boosting model (GBM). The evaluation structures were determined using levels of spatial and temporal dissimilarity to partition the train and test datasets. Our results demonstrated a general decline in accuracy as evaluation structures increase in spatiotemporal dissimilarity. In addition, the more simplistic algorithms—PCR, PLSR, and LASSO—out-performed the more complex models RF, SVR, and GBM for the prediction of dissimilar evaluation structures. We conclude that multi-spectral satellite and pasture physiological variable datasets, such as the one presented in this study, contain spatiotemporal internal dependence, which makes the generalization of predictive models to new localities challenging, especially for complex ML algorithms. Further studies on this topic should include the assessment of model transferability by using dissimilar evaluation structures, and we expect generalization to improve for larger and denser datasets. Full article
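The four evaluation structures map directly onto how the train/test split is grouped: a random partition measures interpolation within sites and dates, while grouping by plot, date, or location forces extrapolation. A minimal sketch of that contrast using scikit-learn's KFold and GroupKFold; the synthetic features, response, group labels, and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 400
X = rng.random((n, 10))                            # e.g., S2 band features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, n)   # synthetic AGB response
plots = rng.integers(0, 20, n)                     # grouping variable

def cv_r2(model, splitter, groups=None):
    """Mean test R^2 across the folds of a given evaluation structure."""
    scores = []
    for tr, te in splitter.split(X, y, groups):
        model.fit(X[tr], y[tr])
        scores.append(r2_score(y[te], model.predict(X[te])))
    return float(np.mean(scores))

for name, mdl in [("PLSR", PLSRegression(n_components=3)), ("SVR", SVR())]:
    rand = cv_r2(mdl, KFold(5, shuffle=True, random_state=0))
    grouped = cv_r2(mdl, GroupKFold(5), groups=plots)
    print(f"{name}: random R2 = {rand:.2f}, by-plot R2 = {grouped:.2f}")
```

On real data of this kind, the gap between the random and grouped columns is a quick diagnostic of how much a model leans on spatiotemporal internal dependence rather than transferable structure.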
Graphical abstract
Figure 1. Aerial view of (a) BRU experimental plots 1–10, (b) BRU experimental plots 11–20, (c) NFREC paddocks. The yellow squares represent the 20 m S2 pixels contained within the experimental boundaries, indicated in red.
Figure 2. Comparison of the PLSR and SVR model predictions of AGB across the four evaluation structures: (a) random, (b) plot, (c) date, and (d) location.
Figure 3. Comparison of the LASSO and SVR model predictions of %N across the four evaluation structures: (a) random, (b) plot, (c) date, and (d) year.
Figure 4. Means and standard deviations (error bars) of model accuracies (R²) grouped by variable (color) and ML algorithm complexity (line type) and reported in order of increasing degree of extrapolation.
18 pages, 2512 KiB  
Article
An Extended Simultaneous Algebraic Reconstruction Technique for Imaging the Ionosphere Using GNSS Data and Its Preliminary Results
by Yuanliang Long, Xingliang Huo, Haojie Liu, Ying Li and Weihong Sun
Remote Sens. 2023, 15(11), 2939; https://doi.org/10.3390/rs15112939 - 5 Jun 2023
Viewed by 1598
Abstract
To generate high-quality reconstructions of ionospheric electron density (IED), we propose an extended simultaneous algebraic reconstruction technique (ESART). The ESART method distributes the discrepancy between the actual GNSS TEC and the calculated TEC among the voxels along each ray based on the contribution of the voxels to the GNSS TEC, rather than on the ratio of the length of each ray–voxel intersection to the sum of the lengths of all ray–voxel intersections, as adopted by conventional methods. The feasibility of the ESART method for reconstructing the IED under different levels of geomagnetic activity is addressed. Additionally, a preliminary experiment is performed using the reconstructed IED profiles and comparing them with ionosonde measurements, which provide direct observations of electron density. The root mean square errors (RMSE) and absolute errors of the ESART method, the simultaneous algebraic reconstruction technique (SART) method, and the International Reference Ionosphere (IRI) 2016 model are calculated to evaluate the effectiveness of the proposed method. Compared to the conventional SART method of ionospheric tomography and the IRI-2016 model, the reconstructed IED profiles obtained using the ESART method are in better agreement with the electron density obtained from the ionosondes, especially for the peak electron densities (NmF2). In addition, a case study of an intense geomagnetic storm on 17–19 March 2015 shows that the spatial and temporal features of storm-related ionospheric disturbances can be more clearly depicted using the ESART method than with the SART method. Full article
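Read this way, SART and ESART differ only in how each ray's TEC residual is shared among the voxels it crosses: SART shares it in proportion to intersection length, while ESART shares it in proportion to each voxel's current contribution to the modeled TEC. A toy sketch of one relaxation sweep, under our reading of the abstract; the dense intercept matrix, relaxation factor, and synthetic geometry are illustrative assumptions.

```python
import numpy as np

def art_sweep(A, b, x, relax=0.2, mode="esart"):
    """One relaxation sweep over all rays.

    A[i, j]: intersection length of ray i with voxel j.
    b[i]   : observed GNSS TEC along ray i.
    x      : current electron-density estimate per voxel.
    Either way, each ray's modeled TEC moves toward b[i] by the factor
    relax; only the distribution of the correction differs.
    """
    x = x.copy()
    for i in range(A.shape[0]):
        a = A[i]
        tec = a @ x
        if tec <= 0.0:
            continue
        if mode == "sart":
            w = a / a.sum()        # share the residual by length fractions
        else:
            w = (a * x) / tec      # share the residual by TEC contributions
        # Convert each voxel's share of the residual back to density.
        x += relax * (b[i] - tec) * w / np.maximum(a, 1e-12)
        x = np.maximum(x, 0.0)     # electron density is non-negative
    return x

# Tiny synthetic test: 30 rays through 10 voxels, consistent data.
rng = np.random.default_rng(5)
A = rng.uniform(0, 1, (30, 10)) * (rng.random((30, 10)) < 0.4)
x_true = rng.uniform(1.0, 5.0, 10)
b = A @ x_true
for mode in ("sart", "esart"):
    x = np.full(10, 3.0)           # background initialization
    for _ in range(50):
        x = art_sweep(A, b, x, mode=mode)
    print(mode, "RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
```

Note that the "esart" weighting makes the update multiplicative in the density, so voxels already contributing more electrons absorb more of the correction, which matches the abstract's motivation.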
Figures:
Figure 1: Distribution of the GNSS and ionosonde stations used in this work (black triangles: GNSS stations; red circles: ionosonde stations).
Figure 2: Variation in the Dst index on 17–19 March 2015 and 10–12 January 2019.
Figures 3 and 4: Comparison of electron density profiles over Beijing Station and Wuhan Station at 01:00, 04:00, 07:00, and 10:00 UT on 10, 11, and 12 January 2019 (red: ionosonde data; blue dotted: ESART; black dashed: SART; green: IRI-2016).
Figures 5 and 6: The same comparison over Beijing Station and Wuhan Station on 17, 18, and 19 March 2015.
Figure 7: IED maps at an altitude of 250 km obtained with the IRI-2016 model, the SART method, and the ESART method, and the corresponding GNSS TEC maps (in TECu) at 01:00–10:00 UT on 18 March 2015.
Figure 8: Latitude-altitude profiles of the IED along the 110°E meridian at 01:00–10:00 UT on 17–19 March 2015 (electron density in units of 10¹² el/m³).
29 pages, 23352 KiB  
Article
GNSS-Based Driver Assistance for Charging Electric City Buses: Implementation and Lessons Learned from Field Testing
by Iman Esfandiyar, Krzysztof Ćwian, Michał R. Nowicki and Piotr Skrzypczyński
Remote Sens. 2023, 15(11), 2938; https://doi.org/10.3390/rs15112938 - 5 Jun 2023
Viewed by 1792
Abstract
Modern public transportation in urban areas increasingly relies on high-capacity buses. At the same time, the share of electric vehicles is increasing to meet environmental standards. This introduces problems when charging these vehicles from chargers at bus stops, as untrained drivers often find it difficult to execute docking manoeuvres at the charger. A practical solution to this problem requires a suitable advanced driver-assistance system (ADAS), i.e., a system that automatises some of the tasks involved in driving a vehicle and makes them safer. In the considered case, the ADAS supports docking to the electric charging station, and thus it must solve two issues: precise positioning of the bus relative to the charger and motion planning in a constrained space. This paper addresses these issues by employing GNSS-based positioning and optimisation-based planning, resulting in an affordable ADAS for docking electric buses for recharging. We focus on the practical side of the system, showing how the necessary features were attained at a limited hardware and installation cost, and demonstrating an extensive evaluation of the fielded ADAS for a public transportation operator in the city of Poznań in Poland. Full article
(This article belongs to the Special Issue GNSS for Urban Transport Applications II)
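As a simple illustration of the dual-antenna heading principle used here (a moving-base receiver near the rear axle and a rover antenna 5 m ahead of it), the sketch below derives a heading from the baseline between the two antenna positions; the function name and coordinates are illustrative, not from the authors' code.

```python
# Heading from an RTK moving-base/rover antenna pair: the baseline vector
# between the two antenna positions points along the bus axis.
import math

def heading_deg(base_en, rover_en):
    """Heading clockwise from north, given (east, north) antenna positions."""
    d_e = rover_en[0] - base_en[0]
    d_n = rover_en[1] - base_en[1]
    return math.degrees(math.atan2(d_e, d_n)) % 360.0

# rover mounted 5 m ahead of the moving-base antenna (cf. Figure 2)
print(heading_deg((0.0, 0.0), (0.0, 5.0)))   # 0.0 -> pointing north
```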
Figures:
Figure 1: Hardware used in the ADAS: the onboard computer performs all calculations, the GNSS modules provide the position and heading of the bus, the CAN-USB adapter enables reading data from the bus sensors, the LTE module receives RTCM corrections from the base station over the network, and the LCD presents the system interface to the driver.
Figure 2: Mounting positions of the GNSS antennas on the bus roof. The antenna connected to the moving-base receiver is mounted 0.2 m ahead of the rear axle and provides the vehicle position; the antenna connected to the rover module is mounted 5 m in front of it and is used to calculate the bus heading.
Figure 3: Architecture of the developed system, implemented as ROS nodes: an NTRIP client (receives RTCM corrections via the LTE module), a GNSS driver (handles data from the u-blox F9P receivers), a CAN bridge (interfaces with the sensors mounted in the bus), a localisation node (calculates the pose of the bus relative to the charging station), a system activation node (launches motion planning when the bus approaches the depot), and a path planner and feedback controller (generate the reference path and guide the driver during the docking manoeuvre).
Figure 4: Recorded routes of the city bus plotted on OpenStreetMap, with RTK FIXED poses in blue, less accurate RTK FLOAT poses in red, and the least accurate standard GNSS poses in yellow: (A) Dworzec Zachodni–Kacza, (B) Garbary PKM–Strzeszyn, and (C) Garbary PKM–Os. Dębina.
Figure 5: Exemplary parts of the trajectories where the RTK FIXED mode was unavailable because of poor sky visibility caused by buildings and trees; (1) shows part of the trajectory from Figure 4B and (2) an example scene from Figure 4C.
Figure 6: Absolute Trajectory Error between the moving_base (reference) and rover positions, calculated for the part of the Garbary PKM–Os. Dębina route where the RTK FIXED mode was partially unavailable due to high buildings and trees near the road.
Figure 7: Position of the chargers at the Garbary PKM depot (A), with the green marker indicating the charger the developed system was set up for, and the GNSS localisation mode around the depot (B), including the approach path for which the most accurate RTK FIXED mode was available; arrows indicate the driving direction.
Figure 8: The electric urban bus, a Solaris Urbino 12 Electric, used for the experiments in this work.
Figure 9: Car-like robot kinematics used to describe the electric bus.
Figure 10: Bounding boxes generated to provide the planner with a safe, obstacle-free tunnel interpreted as positional inequality constraints; (A–C) show different locations subject to path planning, with the blue arrow denoting the initial bus configuration and the green one the charger configuration.
Figure 11: Block scheme of the connections between the control module and the interface subsystem.
Figure 12: Human-machine interface of the ADAS. The top bar indicates the path error, i.e., the displacement of the bus's guidance point from the reference path; the main bar provides steering guidance, with a solid blue line for the desired steering angle and a colour-filled area for the actual one (green for an acceptable error, shading to orange and red as the error grows); the vertical bar on the right indicates the remaining distance to the charger, with the pantograph exactly below the charger at zero. Panels (A–C) show successive states of a docking manoeuvre, from a large path error to an aligned approach that keeps the bus on the reference path and reaches the goal pose.
Figure 13: Reference and real path of the bus approaching the charging station, with the charger at (0, 0) (A), and the Absolute Trajectory Error between the reference and real trajectory over time (B), for sequence 42.
Figure 14: The same plots for sequence 12.
Figure 15: Average deviation between the planned and actual path versus the distance from the charger, calculated over all 50 sequences.
Figure 16: Visualisation of the docking manoeuvre trajectory when the driver approaches a different charger than assumed by the system.
18 pages, 10447 KiB  
Article
Deriving Agricultural Field Boundaries for Crop Management from Satellite Images Using Semantic Feature Pyramid Network
by Yang Xu, Xinyu Xue, Zhu Sun, Wei Gu, Longfei Cui, Yongkui Jin and Yubin Lan
Remote Sens. 2023, 15(11), 2937; https://doi.org/10.3390/rs15112937 - 5 Jun 2023
Cited by 4 | Viewed by 2698
Abstract
We propose a Semantic Feature Pyramid Network (FPN)-based algorithm to derive agricultural field boundaries and internal non-planting regions from satellite imagery. It is aimed at providing guidance not only for land use management, but more importantly for harvest or crop protection machinery planning. The semantic Convolutional Neural Network (CNN) FPN is first employed for pixel-wise classification of each remote sensing image, detecting agricultural parcels; a post-processing method is then developed to transform the attained pixel classification results into closed contours representing field boundaries and internal non-planting regions, including slender paths (walking or water) and obstacles (trees or electric poles). Three study sites with different plot sizes (0.11 ha, 1.39 ha, and 2.24 ha) are selected to validate the effectiveness of our algorithm, and its performance is compared with other semantic CNN-based algorithms (including U-Net, U-Net++, PSP-Net, and Link-Net). The test results show that crop acreage information, field boundaries, and internal non-planting areas can be determined with the proposed algorithm in different places. Regarding the number of boundaries applicable for machinery planning, the average and total crop planting area values generally remain closer to the reference ones when using the semantic FPN with post-processing than with the other methods. The post-processing methodology greatly decreases the number of inapplicable and redundant field boundaries produced by the different CNN models for path planning. In addition, the crop planting mode and scale (especially small-scale planting and small or blurred gaps between fields) both strongly affect boundary delineation and crop acreage determination. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
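The contour-based post-processing can be sketched with standard image tooling. Below is a minimal, illustrative version (not the authors' implementation) that closes small gaps in a binary parcel mask and separates outer field boundaries from internal non-planting regions via the contour hierarchy; the random mask and thresholds are placeholders.

```python
import cv2
import numpy as np

mask = (np.random.rand(512, 512) > 0.5).astype(np.uint8)  # stand-in for CNN output

# close small gaps so traced contours form clean closed boundaries
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# RETR_CCOMP yields a two-level hierarchy: outer contours and their holes
contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)
fields, holes = [], []
for cnt, (_, _, _, parent) in zip(contours, hierarchy[0]):
    if cv2.contourArea(cnt) < 50:        # drop tiny speckles
        continue
    (fields if parent == -1 else holes).append(cnt)  # outer vs. internal
print(len(fields), "field boundaries,", len(holes), "internal regions")
```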
Figures:
Figure 1: Location and satellite map imagery of the selected study and verification sites.
Figure 2: Structure of the adopted semantic FPN model.
Figure 3: Raw closures attained with the semantic FPN and the contour-finding method.
Figure 4: Example of field boundary and agricultural pattern delineation.
Figure 5: Examples of attained contours that are applicable, inapplicable, redundant, or missed for the path planning of crop-protection UAVs or harvest tractors.
Figure 6: Selected study places and reference boundary contours in the three sites.
Figures 7–9: Attained agricultural parcels, boundaries, and internal non-planting areas in sites 1, 2, and 3.
21 pages, 5382 KiB  
Article
MCPT: Mixed Convolutional Parallel Transformer for Polarimetric SAR Image Classification
by Wenke Wang, Jianlong Wang, Bibo Lu, Boyuan Liu, Yake Zhang and Chunyang Wang
Remote Sens. 2023, 15(11), 2936; https://doi.org/10.3390/rs15112936 - 5 Jun 2023
Cited by 4 | Viewed by 1951
Abstract
Vision transformers (ViT) require massive training data and a complex model, so they cannot be directly applied to polarimetric synthetic aperture radar (PolSAR) image classification tasks. Therefore, a mixed convolutional parallel transformer (MCPT) model based on ViT is proposed for fast PolSAR image classification. First, a mixed depthwise convolution tokenization is introduced. It replaces the learnable linear projection in the original ViT to obtain patch embeddings. This tokenization reduces computational and parameter complexity and extracts features of different receptive fields as input to the encoder. Furthermore, drawing on the idea that shallow networks have lower latency and are easier to optimize, a parallel encoder is implemented by pairing the same modules and recombining them to form parallel blocks, which decreases the network depth and computing power requirement. In addition, the original class embedding and position embedding are removed during tokenization, and a global average pooling layer is added after the encoder for category feature extraction. Finally, the experimental results on the AIRSAR Flevoland and RADARSAT-2 San Francisco datasets show that the proposed method achieves a significant improvement in training and prediction speed, while achieving overall accuracies of 97.9% and 96.77%, respectively. Full article
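A minimal PyTorch sketch of the mixed depthwise convolution tokenization idea follows: channels are split into groups, each group passes through a depthwise convolution with a different kernel size, and the strided, concatenated result is flattened into patch embeddings without a learnable linear projection. All dimensions, kernel sizes, and the stride are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MixConvTokenizer(nn.Module):
    def __init__(self, in_ch=9, embed_dim=64, kernel_sizes=(3, 5, 7, 9), stride=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, 1)           # channel expansion
        self.splits = embed_dim // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(self.splits, self.splits, k, stride=stride,
                      padding=k // 2, groups=self.splits)    # depthwise conv
            for k in kernel_sizes)

    def forward(self, x):
        x = self.proj(x)
        parts = torch.split(x, self.splits, dim=1)           # one chunk per kernel
        x = torch.cat([b(p) for b, p in zip(self.branches, parts)], dim=1)
        return x.flatten(2).transpose(1, 2)                  # (B, N_tokens, C)

tokens = MixConvTokenizer()(torch.randn(2, 9, 16, 16))
print(tokens.shape)   # torch.Size([2, 64, 64])
```

Because every branch sees a different kernel size, the resulting tokens mix several receptive fields, which is the property the abstract attributes to this tokenization.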
Figures:
Figure 1: Scheme of the proposed PolSAR image classification method.
Figure 2: Mixed depthwise convolution (MixConv).
Figure 3: Mixed depthwise convolution tokenization.
Figure 4: Vision Transformer model overview.
Figure 5: Structure of different encoders: (a) the original serial encoder block and (b) the proposed parallel encoder block.
Figure 6: Obtaining the output with (a) fully connected layers versus (b) a global average pooling layer.
Figure 7: AIRSAR Flevoland dataset and its color code: (a) Pauli RGB map, (b) ground truth map, (c) legend.
Figure 8: RADARSAT-2 San Francisco dataset and its color code: (a) Pauli RGB map, (b) ground truth map, (c) legend.
Figures 9 and 10: Whole-map prediction results on the AIRSAR Flevoland and RADARSAT-2 San Francisco datasets for (a) CV-FCN, (b) CV-MLPs, (c) CV-3D-CNN, (d) SViT, (e) PolSARFormer, and (f) MCPT.
Figure 11: Impact of data amount: (a) training data amounts of the comparison methods; (b) overall accuracy of the comparison methods on the two datasets using the same data amount as the proposed method; (c,d) training and prediction times of the comparison methods on the AIRSAR Flevoland and RADARSAT-2 San Francisco datasets under the same condition.
19 pages, 7923 KiB  
Article
Research and Evaluation on Dynamic Maintenance of an Elevation Datum Based on CORS Network Deformation
by Shenghao Liang, Chuanyin Zhang, Tao Jiang and Wei Wang
Remote Sens. 2023, 15(11), 2935; https://doi.org/10.3390/rs15112935 - 5 Jun 2023
Viewed by 1618
Abstract
This paper presents a method for dynamically maintaining a regional elevation datum using CORS stations as core nodes. By utilizing CORS station data and surface mass loading data (including land water storage, sea level, and atmospheric pressure), the normal height changes of each station can be determined and dynamically maintained. The validity of this method is verified using multiple leveling survey results from five CORS stations in Beijing's subsidence area between January 2012 and June 2021. The results show that it is necessary to derive and correct the height anomaly variation of CORS stations caused by surface mass loading using the remove-calculate-restore method and the Green's function integration method, as the influence of surface mass changes reaches the subcentimeter level. CORS stations with good observation quality achieve a mean accuracy of 2.7 mm in determining normal height changes. Such accuracy surpasses the requirements of second-class leveling surveys covering route lengths exceeding 1.35 km, as well as conforming/closed loop routes with distances greater than 0.46 km. By strategically selecting CORS stations with long-term continuous observations and high-quality data as core nodes within the elevation control network, dynamic maintenance of the regional elevation datum can be achieved based on CORS station data. Full article
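The geodetic relation underlying the method can be stated compactly: the normal height H equals the ellipsoidal height h minus the height anomaly ζ, so a station's normal height change combines the GNSS-observed ellipsoidal height change with the loading-induced height anomaly change. A minimal sketch with illustrative numbers:

```python
# dH = dh - dzeta: change in normal height from the GNSS-observed
# ellipsoidal height change and the loading-induced height anomaly change.
def normal_height_change(dh_gnss_mm, dzeta_loading_mm):
    """All quantities in millimetres."""
    return dh_gnss_mm - dzeta_loading_mm

# e.g., 30 mm of GNSS-observed subsidence combined with a +4 mm
# loading-induced height anomaly change
print(normal_height_change(-30.0, 4.0))   # -34.0 mm
```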
Figures:
Figure 1: Diagram of different kinds of elevations.
Figure 2: CORS station distribution map.
Figure 3: Flow chart of the CEEMDAN algorithm.
Figure 4: CEEMDAN decomposition results (in mm) for the CHAO (left) and XIJI (right) stations.
Figure 5: Time series results (in mm) of CHAO, CHPN, DSQI, NLSH, and XIJI (arranged from top to bottom).
Figure 6: Flow chart of the CORS height anomaly change estimation.
Figures 7–9: Reference height anomaly variations caused by global terrestrial water, atmospheric pressure, and sea level changes on the 15th day of each month in 2018 (triangles represent the CORS stations in Beijing).
Figures 10 and 11: Residual height anomaly variations caused by regional terrestrial water and atmospheric pressure changes on the 15th day of each month in 2018.
Figure 12: Height anomaly variations in Beijing caused by surface loading on the 15th day of each month in 2018.
Figure 13: Height anomaly variations due to surface mass load.
21 pages, 52577 KiB  
Article
Use of Remotely Piloted Aircraft System Multispectral Data to Evaluate the Effects of Prescribed Burnings on Three Macrohabitats of Pantanal, Brazil
by Harold E. Pineda Valles, Gustavo Manzon Nunes, Christian Niel Berlinck, Luiz Gustavo Gonçalves and Gabriel Henrique Pires de Mello Ribeiro
Remote Sens. 2023, 15(11), 2934; https://doi.org/10.3390/rs15112934 - 4 Jun 2023
Cited by 1 | Viewed by 2122
Abstract
The controlled use of fire to reduce combustible material in prescribed burning helps to prevent the occurrence of forest fires. In recent decades, these fires have mainly been caused by anthropogenic activities. The study area is located in the Pantanal biome. In 2020, the greatest drought in 60 years occurred in the Pantanal, and fire affected almost one third of the biome. The objective of this study is to evaluate the effect of prescribed burnings carried out in 2021 on three macrohabitats (M1: natural grassland flooded with a proliferation of Combretum spp., M2: natural grassland of seasonal swamps, and M3: natural grassland flooded with a proliferation of Vochysia divergens) inside the SESC Pantanal Private Natural Heritage Reserve. Multispectral and thermal data analyses were conducted with remotely piloted aircraft systems in 1 ha plots in three periods of the dry season, with early, mid, and late burning. The land use and land cover classification indicates that the predominant vegetation type in these areas is seasonally flooded grassland, with percentages above 73%, except in zone three, which has a more diverse composition and structure, with the presence of arboreal specimens of V. divergens Pohl. The pattern of the thermal range showed differentiation pre- and post-burning. The burned area index indicated that fire was more efficient in the first two macrohabitats because they are natural grasslands, reducing the grass species in the burnings. Early and mid prescribed burnings are a good option to reduce the continuous accumulation of dry forest biomass fuel and help to promote landscape heterogeneity. The use of multispectral sensor data with high spatial/spectral resolution can reveal the effects of fires at highly detailed scales for technical decision making. Full article
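The burned area index (BAI) referenced above has a compact, commonly used formulation that measures the spectral distance of each pixel to a reference charcoal signal (red reflectance near 0.1, NIR near 0.06). A minimal sketch, assuming reflectance arrays in [0, 1]; the sample values are illustrative:

```python
import numpy as np

def bai(red, nir):
    # spectral distance to a charcoal reference point; larger = more burned
    return 1.0 / ((0.1 - red) ** 2 + (0.06 - nir) ** 2)

red = np.array([0.12, 0.25])    # burned-like vs. vegetated pixel
nir = np.array([0.08, 0.45])
print(bai(red, nir))            # large values flag burned areas
```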
Figures:
Figure 1: Location of the study area.
Figure 2: Flowchart of the procedures performed in the study.
Figure 3: Natural cover within the analysis plots for each evaluated PB period.
Figure 4: Orthomosaics of the three PB periods of each evaluated macrohabitat.
Figures 5 and 6: Pre-PB and post-PB thermal band behavior and its spatial distribution.
Figure 7: Effect of burning on the study macrohabitats.
Figure 8: Fire severity (BAI) in each macrohabitat and evaluated PB period.
Figure 9: PB severity (BAI).
25 pages, 5643 KiB  
Article
Autonomous Multi-Floor Localization Based on Smartphone-Integrated Sensors and Pedestrian Indoor Network
by Chaoyang Shi, Wenxin Teng, Yi Zhang, Yue Yu, Liang Chen, Ruizhi Chen and Qingquan Li
Remote Sens. 2023, 15(11), 2933; https://doi.org/10.3390/rs15112933 - 4 Jun 2023
Cited by 3 | Viewed by 2060
Abstract
Autonomous localization without local wireless facilities has proven to be an efficient way of realizing location-based services in complex urban environments. The precision of current map-matching algorithms is limited by the accuracy of integrated sensor-based trajectory estimation and by how efficiently pedestrian motion information is combined with the pedestrian indoor network. This paper proposes an autonomous multi-floor localization framework based on smartphone-integrated sensors and pedestrian network matching (ML-ISNM). A robust data- and model-driven pedestrian trajectory estimator is proposed for accurate integrated sensor-based positioning under different handheld modes and disturbed environments. A bi-directional long short-term memory (Bi-LSTM) network is further applied for floor identification using extracted environmental features and pedestrian motion features, and is combined with the indoor network matching algorithm to acquire accurate location and floor observations. In the multi-source fusion procedure, an error ellipse-enhanced unscented Kalman filter is developed for the intelligent combination of the trajectory estimator, human motion constraints, and the extracted pedestrian network. Comprehensive experiments indicate that the presented ML-ISNM achieves autonomous and accurate multi-floor positioning in complex and large-scale urban buildings. The final evaluated average localization error was lower than 1.13 m without the assistance of wireless facilities or a navigation database. Full article
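As an illustration of the floor identification component, the sketch below shows a small Bi-LSTM classifier that maps a window of per-step feature vectors (e.g., barometric and motion features) to a floor label. The feature set, dimensions, and four-floor output are assumptions made for the example, not the paper's exact network.

```python
import torch
import torch.nn as nn

class FloorBiLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32, n_floors=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_floors)   # 2x for both directions

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # logits from the last time step

logits = FloorBiLSTM()(torch.randn(8, 50, 6))
print(logits.shape)                       # torch.Size([8, 4])
```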
Figures:
Figure 1: Structure of the developed ML-ISNM.
Figure 2: Deep-learning model for walking velocity estimation.
Figure 3: Presentation of the pedestrian network and error ellipse.
Figure 4: Ground-truth walking route of the public dataset.
Figure 5: Estimated trajectory of the inertial odometry.
Figure 6: Error comparison of the trajectory estimator and INS-PDR.
Figure 7: Accuracy indices of the four handheld modes.
Figure 8: Accuracy comparison of different models.
Figure 9: Multi-floor real-world indoor environment: (a) 6th floor, (b) 7th floor, (c) 8th floor, (d) 9th floor.
Figure 10: Extracted pedestrian indoor network.
Figure 11: (a) Two-dimensional and (b) three-dimensional trajectory comparison between TE and EE-UKF.
Figure 12: Positioning error comparison between TE and EE-UKF.
36 pages, 7030 KiB  
Review
Ground-Penetrating Radar and Electromagnetic Induction: Challenges and Opportunities in Agriculture
by Sashini Pathirana, Sébastien Lambot, Manokarajah Krishnapillai, Mumtaz Cheema, Christina Smeaton and Lakshman Galagedara
Remote Sens. 2023, 15(11), 2932; https://doi.org/10.3390/rs15112932 - 4 Jun 2023
Cited by 11 | Viewed by 6313
Abstract
Information on the spatiotemporal variability of soil properties and states within the agricultural landscape is vital to identify management zones supporting precision agriculture (PA). Ground-penetrating radar (GPR) and electromagnetic induction (EMI) techniques have been applied to assess soil properties, states, processes, and their spatiotemporal variability. This paper reviews the fundamental operating principles of GPR and EMI, their applications in soil studies, their advantages and disadvantages, and knowledge gaps, and identifies the difficulties in integrating the two techniques so that they complement each other in soil studies. Compared to traditional methods, GPR and EMI offer advantages such as non-destructive repeated measurements, high resolution, labor savings, and more extensive spatial coverage with geo-referenced data within agricultural landscapes. GPR has been widely used to estimate soil water content (SWC) and water dynamics, while EMI has broader applications, such as estimating SWC, soil salinity, bulk density, etc. Additionally, GPR can map soil horizons, the groundwater table, and other anomalies. Future GPR and EMI applications in soil studies should focus on the potential integration of the two techniques to overcome the intrinsic limitations of each and to enhance their applications in support of PA. Future advancements in PA can be strengthened by estimating many soil properties, states, and hydrological processes simultaneously to delineate management zones and calculate optimal inputs in the agricultural landscape. Full article
(This article belongs to the Section Environmental Remote Sensing)
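A worked example helps make the GPR-to-SWC chain concrete: the two-way travel time to a reflector at a known depth yields the wave velocity, hence an apparent dielectric permittivity, which Topp's empirical equation converts to volumetric water content. The sketch below illustrates this principle under those assumptions; it is not tied to any specific study in the review.

```python
C = 0.3  # speed of light in m/ns

def permittivity(twt_ns, depth_m):
    v = 2.0 * depth_m / twt_ns          # wave velocity in m/ns
    return (C / v) ** 2                 # apparent dielectric permittivity

def topp_swc(eps):
    # Topp et al. (1980) polynomial for volumetric water content (m3/m3)
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps**2 + 4.3e-6 * eps**3

eps = permittivity(twt_ns=12.0, depth_m=0.5)   # e.g., reflector at 0.5 m
print(round(eps, 1), round(topp_swc(eps), 3))  # ~13.0, ~0.242 m3/m3
```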
Figures:
Figure 1: Flow chart of the review methodology.
Figure 2: GPR surveys with (a) ground-coupled antennas and (b) air-coupled antennas.
Figure 3: Ray paths of GPR wave propagation in a two-layer soil with different dielectric permittivity values (modified from Huisman et al. [33]).
Figure 4: Data acquisition methods in GPR applications: (a) fixed offset, (b) common mid-point, (c) wide-angle reflection and refraction, (d) zero offset profiling, (e) multiple offset gathering, and (f) vertical reflection profiling (modified from Liu et al. [35]).
Figure 5: High-resolution soil moisture map obtained with drone-borne GPR and full-wave inversion (FWI) in an agricultural field in Belgium.
Figure 6: Current flow paths in the EMI technique (modified from De Carlo et al. [205]).
Figure 7: EMI surveys with (a) a multi-coil sensor (CMD-MINIEXPLORER) and (b) a multi-frequency sensor (GEM-2).
Figure 8: Integral depth and sampling volume variation of (a) multi-coil and (b) multi-frequency EMI sensors (modified from Keiswetter & Won [209]).
Figure 9: Spatial variability of apparent electrical conductivity measured with a multi-coil EMI sensor in vertical ((a) 0~0.5, (b) 0~1.0, (c) 0~1.8 m) and horizontal ((d) 0~0.25, (e) 0~0.5, (f) 0~0.9 m) dipole orientations.
Figure 10: Number of studies from 1995 to 2022 assessing soil water content, salinity, compaction, texture, and organic carbon using GPR and EMI, showing the dominance of GPR for SWC measurement and of EMI for salinity measurements.
Figure 11: Percentage of soil properties and states estimated using (a) GPR and (b) EMI from 1995 to 2022.
Figure 12: Term maps of the keyword networks of the last 15 years (2007-2022) for (a) GPR and (b) EMI.
28 pages, 15615 KiB  
Article
Retrieving Atmospheric Gas Profiles Using FY-3E/HIRAS-II Infrared Hyperspectral Data by Neural Network Approach
by Han Li, Mingjian Gu, Chunming Zhang, Mengzhen Xie, Tianhang Yang and Yong Hu
Remote Sens. 2023, 15(11), 2931; https://doi.org/10.3390/rs15112931 - 4 Jun 2023
Cited by 6 | Viewed by 1951
Abstract
The observed radiation data from the second-generation Hyperspectral Infrared Atmospheric Sounder (HIRAS-II) on the Fengyun-3E (FY-3E) satellite contain useful information on the vertical structure of the atmosphere, from which vertical profiles of atmospheric gas components, including ozone (O3), carbon monoxide (CO), and methane (CH4), can be distinguished and retrieved. This paper utilizes FY-3E/HIRAS-II observational data to optimize the channel set for each gas using the improved Optimal Sensitivity Profile (OSP) channel algorithm and establishes a typical convolutional neural network model (CNN) and a representative U-shaped network model (UNET) with deep and shallow feature links to retrieve atmospheric profiles of O3, CO, and CH4. We chose clear-sky data over India and its southern seas in December 2021 and January 2022, with reanalysis data from the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5) and the European Centre for Medium-Range Weather Forecasts Atmospheric Composition Reanalysis v4 (EAC4) serving as reference values. The retrieval outcomes were then compared against advanced numerical forecast models, including the Whole Atmosphere Community Climate Model (WACCM) and the Global Forecast System (GFS), and against satellite products from the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI). Experimental results show that the generalization ability and retrieval accuracy of the CNN are slightly higher than those of UNET. For the O3 profile retrieval, the mean percentage error (MPE) over all layers for the CNN and UNET relative to ERA5 was less than 8%, while the root-mean-square error (RMSE) was below 1.5 × 10⁻⁷ kg/kg; for the CH4 profile retrieval, the MPE over all layers relative to EAC4 was less than 0.7%, while the RMSE was below 1.5 × 10⁻⁸ kg/kg. The O3 and CH4 retrievals showed a significant improvement over the forecast data and satellite products at most pressure levels. For the CO profile retrieval, the MPE over all layers relative to EAC4 was less than 11%, while the RMSE was below 4 × 10⁻⁸ kg/kg; the CO retrieval error was higher than that of the forecast data at pressure levels of 200~500 hPa and lower than that of similar satellite products at most pressure levels. The experiments indicate that the neural network method effectively retrieves atmospheric gas profiles from infrared hyperspectral data, with good accuracy and retrieval speed. Full article
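The two metrics quoted above are straightforward to compute per pressure level against the reanalysis reference. A minimal sketch, assuming MPE is taken as the mean absolute percentage deviation (one plausible reading of the abstract) and using illustrative array shapes:

```python
import numpy as np

def mpe(pred, ref):            # mean percentage error per level, in %
    return np.mean(np.abs(pred - ref) / ref, axis=0) * 100.0

def rmse(pred, ref):           # root-mean-square error per level
    return np.sqrt(np.mean((pred - ref) ** 2, axis=0))

pred = np.random.rand(100, 37) * 1e-7          # (samples, pressure levels), kg/kg
ref = np.random.rand(100, 37) * 1e-7 + 1e-8
print(mpe(pred, ref).shape, rmse(pred, ref).shape)   # (37,) (37,)
```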
Figures:
Figure 1: Retrieval system for gas profiles from FY-3E/HIRAS-II.
Figure 2: Comparison of spectral brightness temperature before and after apodization.
Figure 3: Channel sensitivity analysis for FY-3E/HIRAS-II.
Figure 4: Preferred channels for the atmospheric component gases: (a) O3, (b) CO, (c) CH4.
Figure 5: CNN model.
Figure 6: UNET network model.
Figure 7: Distribution of clear-sky samples in the area: (a) training samples from 21 December 2021 to 9 January 2022; (b) testing samples from 10 to 18 January 2022.
Figures 8–10: Scatter plots of the model retrievals on the validation and test sets for O3 (0~1000 hPa), CO (0~700 hPa), and CH4 (0~1000 hPa).
Figures 11–13: O3 profiles from the different datasets and MPE/RMSE comparisons of the retrieval results with the forecast data and similar satellite products.
Figures 14–16: The same plots for CO.
Figures 17–19: The same plots for CH4.
27 pages, 31248 KiB  
Article
A Triangular Grid Filter Method Based on the Slope Filter
by Chuanli Kang, Zitao Lin, Siyi Wu, Yiling Lan, Chongming Geng and Sai Zhang
Remote Sens. 2023, 15(11), 2930; https://doi.org/10.3390/rs15112930 - 4 Jun 2023
Cited by 4 | Viewed by 1667
Abstract
High-precision ground point cloud data has a wide range of applications in various fields, and the separation of ground points from non-ground points is a crucial preprocessing step. Therefore, designing an efficient, accurate, and stable ground extraction algorithm is highly significant for improving the processing efficiency and analysis accuracy of point cloud data. The study area of this article was a park in Guilin, Guangxi, China, and the point cloud was obtained using a UAV platform. To improve the stability and accuracy of the filtering algorithm, this article proposes a triangular grid filter based on the Slope Filter, which finds violation points through the spatial relationships among the points of the triangulation network, improves KD-Tree-based Euclidean Clustering, and applies it to non-ground point extraction. This method is accurate and stable and achieves the separation of ground points from non-ground points. Firstly, the Slope Filter is used to remove some non-ground points and reduce the error of taking ground points as non-ground points. Secondly, a triangular grid is established based on the triangular relationships between the points, violation triangles are determined through the grid, and the corresponding violation points are found within the violation triangles. Thirdly, regular points are extracted according to the three-point collinearity method, and these points are used to extract regular landmarks by KD-Tree-based Euclidean Clustering and the Convex Hull Algorithm. Finally, dispersed points and irregular landmarks are removed by the Clustering Algorithm. To confirm the superiority of this algorithm, this article compares the filtering effects of various algorithms on the study area and filters the 15 data samples provided by the ISPRS, obtaining an average error of 3.46%. The results show that the algorithm presented in this article has high processing efficiency and accuracy, which can significantly improve the processing efficiency of point cloud data in practical applications. Full article
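To make the first stage concrete, below is a minimal sketch of a basic slope filter: a point is flagged as non-ground when the slope from some lower neighbour within a horizontal search radius exceeds a threshold. The radius and slope threshold are illustrative parameters, not the authors' values.

```python
import numpy as np
from scipy.spatial import cKDTree

def slope_filter(points, radius=2.0, max_slope=0.3):
    """points: (N, 3) array; returns a boolean mask of likely ground points."""
    tree = cKDTree(points[:, :2])
    ground = np.ones(len(points), dtype=bool)
    for i, neighbours in enumerate(tree.query_ball_point(points[:, :2], radius)):
        for j in neighbours:
            dz = points[i, 2] - points[j, 2]
            dxy = np.hypot(*(points[i, :2] - points[j, :2]))
            if dxy > 0 and dz / dxy > max_slope:
                ground[i] = False        # sits too steeply above a neighbour
                break
    return ground

pts = np.random.rand(1000, 3) * [50, 50, 2]
print(slope_filter(pts).sum(), "points kept as ground")
```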
Figures:
Figure 1: The algorithm flowchart.
Figure 2: How to obtain an elevation grid.
Figure 3: How to obtain a slope grid.
Figure 4: How to obtain the attribute grid.
Figure 5: A triangular grid.
Figure 6: The Euclidean Clustering.
Figure 7: The Euclidean Clustering process.
Figure 8: The whole study point cloud.
Figure 9: The study area's point cloud.
Figure 10: The violation points.
Figure 11: The regular violation point.
Figure 12: The clustering result (different colors represent different point sets after clustering).
Figure 13: The boundaries of the clustered point sets (colored by cluster).
Figure 14: The violation points of the second round.
Figure 15: The study area after the triangular grid filter (blue: non-ground points; red: ground points; red box: ineffective area).
Figure 16: The study area after the Slope Filter with a high threshold.
Figure 17: The study area after the filtering process combining the two methods (blue: non-ground points; red: ground points; red box: the same area after improvement).
Figures 18–20: The EMD Filter, the SMRF Filter, and the Progressive Triangular Mesh Filter (red boxes mark areas where non-ground points were retained as ground points).
Figures 21 and 22: The Slope Filter and the Cloth Simulation Filter (red boxes: non-ground points retained as ground points; blue boxes: ground points removed as non-ground points).
Figure 23: Combining the Slope Filter and the triangular grid filter (the red box marks the improved filtering effect).
Figures 24–38: Samples 1-1, 1-2, 2-1 to 2-4, 3-1, 4-1, 4-2, 5-1 to 5-4, 6-1, and 7-1 after filtering. In each figure, panel (a) gives an overview (red: original non-ground points; blue: ground points; white: points retained as ground after filtering), and panel (b) highlights the filtering errors (red: Type II errors; blue: Type I errors; white: correct points).
Figure 39: The error of the different methods on each sample.
Figures 40 and 41: Surface reconstruction for samples 3-1 and 5-4: surface models built from (a) the original point cloud, (b) the original ground points, and (c) the filtered ground points.
20 pages, 3203 KiB  
Article
Polarimetric Range Extended Target Detection via Adaptive Range Weighted Feature Extraction
by Mingchen Yuan, Liang Zhang, Yanhua Wang and Chang Han
Remote Sens. 2023, 15(11), 2929; https://doi.org/10.3390/rs15112929 - 4 Jun 2023
Cited by 3 | Viewed by 1581
Abstract
In static ground target detection, polarimetric high-resolution radar can distinguish the target from strong ground clutter by reducing the clutter power in each range cell and providing additional polarimetric features. Since the energy of a target is split over several range cells, the resulting detection problem is called polarimetric range extended target (RET) detection, in which all target scattering centers should be considered. In this paper, we propose a novel polarimetric RET detection method via adaptive range weighted feature extraction. Specifically, polarimetric features of the range cells are extracted, and a pretrained attention-mechanism-based module is used to adaptively calculate range cell weights, which are used to accumulate the range cell features into detection statistics. Both amplitude and polarimetric features are considered when calculating the weights. This method makes the most of the polarization information and improves the accumulation effect, thus increasing the discrimination between targets and clutter. The effectiveness of the proposed method is verified against both popular energy-domain detection methods and existing feature-domain detection methods, and the results show that our method exhibits superior detection performance. Moreover, we further analyze our method on different target models and different clutter distributions to show that it is suitable for different types of targets and clutter. Full article
(This article belongs to the Special Issue Advances in Radar Systems for Target Detection and Tracking)
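The core computation the abstract describes — scoring each range cell with a pretrained attention module and accumulating the per-cell features with the resulting weights — can be sketched compactly. The sketch below is a minimal illustration under our own assumptions (a fixed feature dimension and a simple linear scorer standing in for the trained attention-mechanism-based network); it is not the paper's implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def range_weighted_feature(cell_features, w_score):
    """Accumulate per-range-cell features into one detection statistic.

    cell_features : (n_cells, n_feat) amplitude + polarimetric features
    w_score       : (n_feat,) scoring vector standing in for the trained
                    attention module -- an assumption, not the paper's network
    """
    scores = cell_features @ w_score          # one relevance score per range cell
    weights = softmax(scores)                 # normalized adaptive weights
    return weights @ cell_features, weights   # weighted accumulation over cells

# Toy usage: 16 range cells, 4 features each.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 4))
stat, w = range_weighted_feature(feats, np.ones(4))
```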
Show Figures

Figure 1. The structure of the AMN.
Figure 2. Overall structure of the proposed detection method. The symbol ⊗ denotes accumulating the features of range cells. (a) Range cell feature extraction module: extracts multiple polarimetric features for each range cell. (b) Adaptive range cell weight calculation module: adaptively calculates the range cell weights used for adaptive range weighted feature extraction. (c) Supervision weight generation module: generates supervision weights from known data, used to pretrain module (b). (d) Classification module: a feature-domain classifier decides whether a target is present.
Figure 3. Structure of the adaptive range cell weight calculation module and the process of adaptive range weighted feature extraction. The symbol ⊗ denotes matrix product.
Figure 4. Framework of the supervision weight generation module. In W, 1 marks a range cell as high contribution and 0 as low contribution.
Figure 5. Generated HRRP data with different SCRs. (a) SCR = −5 dB; (b) SCR = 0 dB; (c) SCR = 5 dB; (d) SCR = 10 dB.
Figure 6. Detection probability curves compared with other detection methods.
Figure 7. Detection probability curves of three feature accumulation methods.
Figure 8. HRRPs of clutter and targets with different SCRs, and heat maps of the adaptive weights. (a) Clutter; (b) SCR = −5 dB; (c) SCR = 0 dB; (d) SCR = 5 dB; (e) colorbar. In the heat maps, the range cell weights are normalized and the color settings are identical across maps; colder colors correspond to lower weights and warmer colors to higher weights.
Figure 9. MMD curves of three feature accumulation methods.
Figure 10. Detection probability curves of three methods using different dimensional information.
Figure 11. Targets with different energy distributions. (a) Concentrated energy distribution; (b) dispersed energy distribution.
Figure 12. Detection performance for targets with different energy distributions. (a) Concentrated energy distribution; (b) dispersed energy distribution.
Figure 13. Detection probability curves for different range resolutions.
Figure 14. Detection probability curves. (a) Under clutter B; (b) when the training and testing clutter distributions differ.
26 pages, 10301 KiB  
Article
Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance
by Xiangqing Zhang, Yan Feng, Shun Zhang, Nan Wang, Shaohui Mei and Mingyi He
Remote Sens. 2023, 15(11), 2928; https://doi.org/10.3390/rs15112928 - 4 Jun 2023
Cited by 6 | Viewed by 2163
Abstract
Detecting sparse, small, lost persons that occupy only a few pixels in high-resolution aerial images remains an important and difficult mission, in which accurate monitoring and intelligent co-rescue play a vital role in search and rescue (SaR) systems. However, many problems remain unsolved in existing remote-vision-based SaR systems, such as the shortage of person samples in SaR scenarios and the low tolerance of small objects to bounding-box error. To address these issues, a copy-paste mechanism (ISCP) with semi-supervised object detection (SSOD) via instance segmentation and maximum mean discrepancy (MMD) distance is proposed, which can provide highly robust, multi-task, and efficient aerial person detection for a prototype SaR system. Specifically, numerous pseudo-labels are obtained by accurately segmenting the instances of synthetic ISCP samples to obtain their boundaries. The SSOD trainer then uses soft weights to balance the prediction entropy of the loss function between the ground truth and unreliable labels. Moreover, a novel evaluation metric, MMD, for anchor-based detectors is proposed to elegantly compute the IoU of the bounding boxes. Extensive experiments and ablation studies on Heridal and optimized public datasets demonstrate that our approach is effective and achieves state-of-the-art person detection performance in aerial images. Full article
(This article belongs to the Special Issue Active Learning Methods for Remote Sensing Data Processing)
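Maximum mean discrepancy itself has a standard kernel form, even though the abstract does not spell out how it is adapted to bounding boxes here. Below is a minimal sketch of the biased Gaussian-kernel MMD² estimate between two point sets — for example, points sampled from a predicted box and a ground-truth box, which is an illustrative choice, not the paper's exact formulation.

```python
import numpy as np

def mmd2_gaussian(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between samples X (n,d) and Y (m,d)
    with Gaussian kernel k(a,b) = exp(-||a-b||^2 / (2 sigma^2))."""
    def kmean(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2)).mean()
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return kmean(X, X) + kmean(Y, Y) - 2 * kmean(X, Y)

# Toy usage: points sampled inside a predicted and a ground-truth box.
rng = np.random.default_rng(0)
pred = rng.uniform([0, 0], [10, 5], size=(200, 2))
gt = rng.uniform([1, 0], [11, 5], size=(200, 2))
print(mmd2_gaussian(pred, gt, sigma=2.0))  # small value = similar boxes
```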
Show Figures

Figure 1. General framework of the proposed method, comprising four separate modules: OIM generates rich and diverse samples with ISCP; SSTM improves the generalization ability of the model by iteratively training on authenticity labels; MMD is an optimized bounding-box loss evaluation method that replaces IoU in the detection performance metrics; all modules are embedded into the detector. The tactics employed in the training and detection phases differ slightly.
Figure 2. MMD-based detectors with transfer learning. Input data sources include files, pictures, video streams, and cameras. The MMD evaluation method is either incorporated as a distinct module or integrated directly into the loss function of the anchor-based detector.
Figure 3. Synthetic data generation with instance segmentation on the OML.
Figure 4. Anchor-based detector with a decoupled head to support instance segmentation. YOLOv5 contributes the CBS, CSPX_X, SPPF, and CBR modules, whereas BiSeNet adds the ARM and FFM modules, mainly to embed the BiSeNet network structure in the detector's neck.
Figure 5. Pseudo-label co-training and model ensemble learning.
Figure 6. Instance comparison across the MS-COCO, VisDrone, Heridal, TinyPerson, AFO, and VHTA datasets. (a) Distribution of samples in various SaR scenes; (b) ground-truth statistics for the person category; (c) center-point [x, y] statistics for the person category; (d) aspect-ratio [width, height] statistics for the person category.
Figure 7. Comparison of MMD and CIoU training curves for models of various depths on the Heridal_ForestPerson dataset. (a–d) Object detection metrics: precision, recall, mAP_0.5, and mAP_0.5:0.95. Solid curves show MMD results and dashed curves CIoU results for the different depth models; 'x', 'l', 'm', 's', and 'n' denote the backbone depths of the pretrained models. All benchmarks were tested in the YOLOv5 framework.
Figure 8. Training curves on the Heridal_ForestPerson dataset with object implantation and semi-supervised learning. (a–d) Precision, recall, mAP_0.5, and mAP_0.5:0.95. The red, green, and blue curves show YOLOv5, YOLOv5 + copy-paste with instance segmentation, and YOLOv5 + copy-paste with instance segmentation + semi-supervised learning, respectively; line styles distinguish object implantation strategies. All results were obtained with the 's' pretrained model.
Figure 9. The six mistake categories. Red boxes mark false positive (FP) detections, green boxes true positive (TP) detections, and blue boxes ground truth (GT). The orange highlight at the bottom of each subplot gives the IoU with GT.
Figure 10. Summary of errors on the Heridal_ForestPerson dataset. (a–f) Errors of methods Nos. 3, 4, 7, 8, 11, and 12 from Table 5.
Figure 11. Detection results of the baseline detectors (first four rows) and the proposed detector (fifth row) on our VHTA dataset; the results are partially enlarged for clarity.
20 pages, 7987 KiB  
Article
Multi-Class Double-Transformation Network for SAR Image Registration
by Xiaozheng Deng, Shasha Mao, Jinyuan Yang, Shiming Lu, Shuiping Gou, Youming Zhou and Licheng Jiao
Remote Sens. 2023, 15(11), 2927; https://doi.org/10.3390/rs15112927 - 4 Jun 2023
Cited by 2 | Viewed by 1909
Abstract
In SAR image registration, most existing methods treat registration as a two-class problem and construct pair training samples for training the deep model. However, it is difficult to obtain a large number of given matched points directly from SAR images as training samples. Motivated by this, we propose a multi-class double-transformation network for SAR image registration based on the Swin Transformer. Unlike existing methods, the proposed method directly treats each key point as an independent category and constructs a multi-class classification model for SAR image registration. Then, based on the key points from the reference and sensed images, respectively, a double-transformation network with two branches is designed to search for matched-point pairs. In particular, to weaken the inherent diversity between the two SAR images, key points from one image are transformed to the other image, and the transformed image is used as the basic image to capture sub-images corresponding to all key points as the training and testing samples. Moreover, a precise-matching module is designed to increase the reliability of the obtained matched points by eliminating inconsistent matched-point pairs given by the two branches. Finally, a series of experiments shows that the proposed method achieves higher registration performance than existing methods. Full article
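The precise-matching step — keeping only matched-point pairs on which the two branches agree — is in essence a mutual-consistency filter. A minimal sketch, assuming each branch's output is summarized as a mapping from a key point to its predicted match (the real branches are Swin-Transformer classifiers, not lookup tables):

```python
def precise_matching(rs_matches, sr_matches):
    """Keep only pairs confirmed by both branches.

    rs_matches : dict, reference key point -> matched sensed point
                 (R-S branch prediction)
    sr_matches : dict, sensed key point -> matched reference point
                 (S-R branch prediction)
    Returns the mutually consistent (reference, sensed) pairs.
    """
    return [(r, s) for r, s in rs_matches.items()
            if sr_matches.get(s) == r]

# Toy usage: pair (1, 'a') is confirmed by both branches; (2, 'b') is not.
print(precise_matching({1: 'a', 2: 'b'}, {'a': 1, 'b': 3}))  # [(1, 'a')]
```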
Show Figures

Figure 1. The framework of the proposed method.
Figure 2. A visual example of eight near-points within k pixels around a key point from the sensed image, where k = 5 and the predictions are given by the R-S branch (Net_R).
Figure 3. Reference and sensed images of the Wuhan data; 400 × 400 pixels, 10 m resolution.
Figure 4. Reference and sensed images of the Australia-Yamba data; 650 × 350 pixels.
Figure 5. Reference and sensed images of the YellowR1 data; 700 × 700 pixels, 8 m resolution.
Figure 6. Reference and sensed images of the YellowR2 data; 1000 × 1000 pixels, 8 m resolution.
Figure 7. Registration CB-map for the Wuhan data (400 × 400).
Figure 8. Registration CB-map for the YellowR1 data (700 × 700).
Figure 9. Registration CB-map for the Yamba data (350 × 650).
Figure 10. Registration CB-map for the YellowR2 data (1000 × 1000).
Figure 11. Comparison of the sub-images corresponding to matched points obtained without and with the precise-matching module. For each point, the left sub-image corresponds to the point itself, the top-right sub-image to its match without the precise-matching module, and the bottom-right sub-image (red box) to the match with the precise-matching module.
Figure 12. Comparison of the proposed double-transformation network with the two single branches (the R-S branch and the S-R branch).
18 pages, 8141 KiB  
Article
Spatial Population Distribution Data Disaggregation Based on SDGSAT-1 Nighttime Light and Land Use Data Using Guilin, China, as an Example
by Can Liu, Yu Chen, Yongming Wei and Fang Chen
Remote Sens. 2023, 15(11), 2926; https://doi.org/10.3390/rs15112926 - 3 Jun 2023
Cited by 5 | Viewed by 3046
Abstract
A high-resolution population distribution map is crucial for numerous applications such as urban planning, disaster management, public health, and resource allocation, and it plays a pivotal role in evaluating and making decisions towards the UN Sustainable Development Goals (SDGs). Although there are many population products derived from remote sensing nighttime light (NTL) and other auxiliary data, they are limited by the coarse spatial resolution of the NTL data, so their spatial resolution cannot meet the requirements of some applications. To address this limitation, this study employs the nighttime light data provided by the SDGSAT-1 satellite, which has a spatial resolution of 10 m, together with land use data as auxiliary data, to disaggregate the WorldPop population distribution data (100 m resolution) to a high resolution of 10 m. A case study conducted in Guilin, China, using the multi-class weighted dasymetric mapping method shows that the total error during the disaggregation is 0.63%, with an R² of 0.99 across the 146 towns in the study area. In comparison to the WorldPop data, the information entropy and spatial frequency of the result increase by 345% and 1142%, respectively, demonstrating the effectiveness of this approach for studying population distributions at high spatial resolution. Full article
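The heart of weighted dasymetric mapping is redistributing each coarse cell's population over its fine cells in proportion to a weight built from the auxiliary layers, so that the coarse-cell total is preserved. A minimal sketch for one 100 m cell split into 10 m cells, assuming illustrative land-use class weights; the study's actual weights and two-round procedure are not reproduced here.

```python
import numpy as np

def disaggregate(coarse_pop, landuse, ntl, class_weight):
    """Split one coarse cell's population across its fine cells.

    coarse_pop   : total population of the coarse (100 m) cell
    landuse      : (n,) land use class id of each fine (10 m) cell
    ntl          : (n,) nighttime light radiance of each fine cell
    class_weight : dict, class id -> relative weight (illustrative values)
    """
    w = np.array([class_weight.get(c, 0.0) for c in landuse]) * ntl
    if w.sum() == 0:                     # no populated class: spread uniformly
        return np.full(len(landuse), coarse_pop / len(landuse))
    return coarse_pop * w / w.sum()      # coarse total is preserved exactly

# Toy usage: 5 fine cells; class 1 = residential, class 0 = water (uninhabited).
pop = disaggregate(120.0, [1, 1, 0, 1, 0],
                   np.array([3.0, 1.0, 5.0, 2.0, 0.0]), {1: 1.0, 0: 0.0})
print(pop, pop.sum())   # fine-cell populations sum back to 120
```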
Show Figures

Figure 1. The location of Guilin and an SDGSAT-1 NTL RGB true-color image. (GX: Guangxi Zhuang Autonomous Region, China; GL: Guilin City; XF: Xiufeng District; DC: Duocai District; XS: Xiangshan District; QX: Qixing District; YS: Yanshan District; Linggui District; YS: Yangshuo County; LC: Lingchuan County; QZ: Quanzhou County; XA: Xing'an County; YF: Yongfu County; GY: Guanyang County; LS: Longsheng Various Nationalities Autonomous County; ZY: Ziyuan County; PL: Pingle County; GC: Gongcheng Yao Autonomous County; LC: Lipu City).
Figure 2. WorldPop population data of Guilin, China, in 2018 (upper: 1 km spatial resolution; lower: 100 m spatial resolution).
Figure 3. Guilin SDGSAT-1 NTL data in 2021 (upper: true-color image; lower: panchromatic band image).
Figure 4. Guilin land use data (overlay of the EULUC-China data, FROM-GLC10 data, and China Road Network). The numbers in the legend represent different land use types; see Table 3 for the correspondences.
Figure 5. Land use data validation. (a,b) Urban areas; (c,d) rural areas; (e,f) urban–rural fringe areas. The color legend matches Figure 4.
Figure 6. Multi-class weighted dasymetric mapping flow diagram. (a) Real population spatial distribution; (1) uniform population grids; (b) real population distribution within each land use type; (2) population grids after incorporating land use data; (c) NTL data; (3) population grids after incorporating NTL data.
Figure 7. Guilin population grid data produced by multi-class weighted dasymetric mapping (upper: first-round result; lower: second-round result).
Figure 8. Analysis of the WorldPop data and the disaggregation result.
Figure 9. Population grid data comparison. (a) WorldPop 1 km grid data; (b) WorldPop 100 m data; (c) first-round disaggregation result; (d) second-round disaggregation result.
Figure 10. Variation trends of the IE and SF values for the four population grids (a: WorldPop 1 km; b: WorldPop 100 m; c: first-round result; d: second-round result).
18 pages, 9067 KiB  
Article
Validation of FY-4A Temperature Profiles by Radiosonde Observations in Taklimakan Desert in China
by Yufen Ma, Juanjuan Liu, Ali Mamtimin, Ailiyaer Aihaiti and Lan Xu
Remote Sens. 2023, 15(11), 2925; https://doi.org/10.3390/rs15112925 - 3 Jun 2023
Cited by 4 | Viewed by 1689
Abstract
The atmospheric temperature profiles (ATPs) retrieved by the Geostationary Interferometric Infrared Sounder (GIIRS) onboard the FY-4A satellite (GIIRS/FY-4A) can effectively fill the gap left by the scarce conventional sounding data in the Taklimakan Desert (TD), the second largest desert in the world, with an area of 330,000 square kilometers. In this study, we take the experimental radiosonde observations (RAOB) from one RAOB station in the hinterland of the TD and seven conventional radiosondes in the oasis region around the desert as the true values and analyze the bias distribution characteristics of GIIRS/FY-4A ATPs with quality control (QC) flags 0 or 1 for this region. In addition, a bias comparison is made between GIIRS/FY-4A ATPs and the ATPs of the fifth-generation ECMWF atmospheric reanalysis of the global climate (ERA5). The results show that (1) missing measurements in GIIRS/FY-4A ATPs are most frequent in the near-surface layer, accounting for more than 80% of all the retrieved grid points, and the average total proportion of GIIRS/FY-4A ATPs with QC flags 0 or 1 is about 33.06%. (2) The root mean square error (RMSE) of GIIRS/FY-4A ATPs is less than 3 K, smaller than that of ERA5 ATPs, whose RMSE can exceed 10 K in the desert hinterland. The absolute mean biases of GIIRS/FY-4A ATPs and ERA5 ATPs are smaller than 3 K and 2 K, respectively, the former being slightly larger. The correlation coefficients of GIIRS/FY-4A ATPs with ERA5 ATPs and with RAOB ATPs are higher than 0.98 and 0.99, respectively, while the correlation between GIIRS/FY-4A ATPs and RAOB ATPs remains inferior to that between ERA5 ATPs and RAOB ATPs. (3) The overall atmospheric temperature retrieved by GIIRS/FY-4A is, on average, 0.08 K higher than the RAOB temperature, while the overall ERA5 temperature is 0.13 K lower, indicating that a temperature profile obtained by integrating GIIRS/FY-4A and ERA5 ATPs may be much closer to the RAOB ATPs. (4) The probability density of the GIIRS/FY-4A ATP biases in the TD region generally follows a Gaussian distribution, so the ATPs can be effectively assimilated in 3-D variational assimilation modules. The probability density distributions of the GIIRS/FY-4A ATP biases in the desert hinterland and in the oasis differ little. However, thanks to the fusion analysis of the relatively rich multi-source conventional observations from the oasis stations, the probability density of the ERA5 ATP biases at the oasis stations is closer to a Gaussian distribution than that of the GIIRS/FY-4A ATPs. In the desert hinterland, where conventional observations are scarce, the probability density distributions of the ERA5 and GIIRS/FY-4A ATP biases are alike. Therefore, GIIRS/FY-4A can contribute to a more accurate estimation of ERA5 ATPs in the TD region. Full article
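The validation statistics quoted above (mean bias, RMSE, and correlation coefficient) are standard; below is a minimal sketch of computing them for one station, assuming the retrieved and radiosonde temperatures have already been collocated on common pressure levels.

```python
import numpy as np

def validate_profile(t_retrieved, t_raob):
    """Mean bias, RMSE, and Pearson correlation of a retrieved
    temperature profile against collocated radiosonde truth (K)."""
    t_retrieved, t_raob = np.asarray(t_retrieved), np.asarray(t_raob)
    diff = t_retrieved - t_raob
    mb = diff.mean()                       # > 0: retrieval warmer than RAOB
    rmse = np.sqrt((diff ** 2).mean())
    corr = np.corrcoef(t_retrieved, t_raob)[0, 1]
    return mb, rmse, corr

# Toy usage with a five-level profile.
print(validate_profile([220.1, 240.5, 260.2, 275.0, 290.3],
                       [219.8, 240.9, 259.7, 275.6, 290.0]))
```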
Show Figures

Figure 1. Spatial distribution of all GIIRS/FY-4A ATPs (a) and of those with QC flags equal to 0 or 1 (b) during 08:00–09:40 UTC on 31 July 2021.
Figure 2. The monthly averaged percentage of GIIRS/FY-4A ATPs in July 2021 (black line) with different QC flags, out of all grid points in and around the TD. The percentages of retrievals with QC_Flag = 3, 2, 1, and 0 are shown by the blue, cyan, green, and red lines, respectively.
Figure 3. Spatial distribution of the RAOB stations in and around the Taklimakan Desert.
Figure 4. Monthly averaged atmospheric temperature profiles from RAOB for the eight stations in the TD in July 2021. The black, blue, and red lines indicate the mean, minimum, and maximum, respectively; the station name is given in the upper-right corner of each subplot.
Figure 5. As Figure 4, but for ERA5.
Figure 6. As Figure 4, but for GIIRS/FY-4A.
Figure 7. The MB (a,c) and RMSE (b,d) of ERA5 ATPs (a,b) and GIIRS/FY-4A ATPs (c,d).
Figure 8. Scatterplots of ERA5 ATPs against RAOB ATPs. The station name is given in the upper-left corner and the correlation coefficient (Corr) in the lower-right corner of each subplot; blue hollow circles are paired temperatures, and the red solid line y = x serves as the reference.
Figure 9. As Figure 8, but for GIIRS/FY-4A ATPs against RAOB ATPs.
Figure 10. QQ plots of ERA5 ATPs against RAOB ATPs at each selected station. The station name is given in the upper-left corner of each subplot; the blue plus-shaped point marks the median of both temperature sets, and the red solid line y = x serves as the reference.
Figure 11. As Figure 10, but for GIIRS/FY-4A ATPs against RAOB ATPs.
Figure 12. Bias PDFs of ERA5 ATPs and GIIRS/FY-4A ATPs at each station for the pressure layer from 0 hPa to 1000 hPa. The station name is given in the upper-left corner of each subplot; red dashed lines show ERA5 PDFs and blue solid lines GIIRS/FY-4A PDFs.
Figure 13. Bias PDFs of ERA5 ATPs and GIIRS/FY-4A ATPs in different layers over all eight stations: (a) 0–1000 hPa; (b) 0–200 hPa; (c) 200–700 hPa; (d) 700–1000 hPa. Red dashed lines show ERA5 PDFs and blue solid lines GIIRS/FY-4A PDFs.
25 pages, 22522 KiB  
Article
Three-Dimensional Modelling of Past and Present Shahjahanabad through Multi-Temporal Remotely Sensed Data
by Vaibhav Rajan, Mila Koeva, Monika Kuffer, Andre Da Silva Mano and Shubham Mishra
Remote Sens. 2023, 15(11), 2924; https://doi.org/10.3390/rs15112924 - 3 Jun 2023
Cited by 1 | Viewed by 2693
Abstract
Cultural heritage is under tremendous pressure in the rapidly growing and transforming cities of the global south. Historic cities and towns often face the dilemma of preserving old monuments while responding to the pressure of adapting themselves to a modern lifestyle, which often results in the loss of cultural heritage. Indian cities such as Delhi possess a rich legacy of tangible heritage, traditions, and arts, which are reflected in their present urban form. Creating temporal 3D models of such cities not only provides a platform with which one can experience the past, but also helps to understand, examine, and improve their present deteriorating state. However, gaining access to historical data to support the development of city-scale 3D models is a challenge. While data gaps can be bridged by combining multiple data sources, this process also presents considerable technical challenges. This paper provides a framework to generate LoD-2 (level-of-detail) 3D models of the present (the 2020s) and the past (the 1970s) of a heritage mosque surrounded by a dense and complex urban settlement in Shahjahanabad (Old Delhi) by combining multiple VHR (very high resolution) satellite images: Pleiades and WorldView-1 and -3 for the present, and declassified HEXAGON KH-9 spy images for the past. A chronological sequence of steps is used to extract the DSMs and DTMs that provide the base for the 3D models. The models are rendered, and the past and present are visualized through graphics and videos. The results reveal an average increase of 80% in the heights of the built structures around the main monument (mosque), leading to a loss of visibility of this central mosque. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research)
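The reported 80% average height increase is the kind of statistic that follows directly from the two epochs' surface models: subtracting each epoch's DTM from its DSM yields building heights, whose masked means can then be compared. A minimal sketch under our own assumptions (co-registered rasters and a boolean building mask; all names are illustrative, not the paper's workflow):

```python
import numpy as np

def mean_height_change(dsm_old, dtm_old, dsm_new, dtm_new, mask):
    """Percent change in mean built-structure height between two epochs.

    dsm_*, dtm_* : co-registered elevation rasters (same shape, metres)
    mask         : boolean raster, True where built structures stand
    """
    ndsm_old = (dsm_old - dtm_old)[mask]   # 1970s building heights (nDSM)
    ndsm_new = (dsm_new - dtm_new)[mask]   # 2020s building heights (nDSM)
    return 100.0 * (ndsm_new.mean() - ndsm_old.mean()) / ndsm_old.mean()

# Toy usage on a 2x2 patch where heights doubled (a +100% change).
m = np.array([[True, True], [True, False]])
print(mean_height_change(np.full((2, 2), 8.0), np.zeros((2, 2)),
                         np.full((2, 2), 16.0), np.zeros((2, 2)), m))
```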
Show Figures

Graphical abstract
Figure 1. Different levels of detail used to visualize urban structures under CityGML; source: [17].
Figure 2. Present-day Jama Masjid, highlighting the 300 m radius around it chosen for city modelling; source: Pleiades.
Figure 3. The evolution of the space around Jama Masjid from the 1880s to the present; source: ASI archive.
Figure 4. Pipeline for generating the present (2020s) and past (1970s) models.
Figure 5. Locations of the buildings selected for collecting heights. Numbers mark individual buildings; letters mark groups of structures, shown enlarged on the left.
Figure 6. Pleiades tri-stereo and WorldView-1 stereo pairs for DSM generation.
Figure 7. The process of generating a DSM through stereo-based matching.
Figure 8. Network architecture of ResDepth, by Corinne Stucker and Konrad Schindler [16].
Figure 9. (a) Conventionally obtained DSM; (b) output from ResDepth: the model fails to refine building structures, but the streets are widened and much of the noise is removed.
Figure 10. Very dense point cloud created from images collected using Google Earth Pro.
Figure 11. Shahjahanabad in 1977, captured by the KH-9 satellite. (a) Shahjahanabad cropped from the full strip; (b) area of interest at 70 cm GSD.
Figure 12. Cleaned film strips obtained after stitching and removing the borders.
Figure 13. The steps involved in generating the camera models and bundle adjustment for the KH-9 images.
Figure 14. Visual comparison of three DSMs of the Jama Masjid mosque. The three domes and the minarets are visible in the combined DSM but were not detected in the other two.
Figure 15. Output DSM generated from all five images combined, after rasterizing the point cloud.
Figure 16. Output DSM generated from the stereo pairs of the KH-9 images.
Figure 17. Ground control points selected for accuracy assessment (numbered in the right image); left: KH-9 image; right: extracted DTM of the 1970s.
Figure 18. The method used to assess the accuracy of the 1970s raster against the 2020s raster.
Figure 19. Final form of the rendered present (2020s) city model.
Figure 20. Final form of the rendered past (1970s) city model.
Figure 21. Bar graph showing the changes in building counts from the 1970s to the 2020s.
Figure 22. Classified height maps of the 2020s (a) and the 1970s (b), showing the vertical expansion of the area; source: ArcGIS Pro.
Figure 23. Visibility comparison of point A in the two models.
Figure 24. Visibility comparison of point B in the two models.
Figure 25. Visibility comparison of point C in the two models.
Figure 26. Visibility comparison of point D in the two models.
Figure 27. Results from drone-shot videos: (a) point cloud from SfM; (b) obtained 3D model.
Figure 28. Examples of rare photographs from the late 19th century, colorized through the DeOldify noGAN method [28].
20 pages, 5362 KiB  
Article
Tracking of Multiple Static and Dynamic Targets for 4D Automotive Millimeter-Wave Radar Point Cloud in Urban Environments
by Bin Tan, Zhixiong Ma, Xichan Zhu, Sen Li, Lianqing Zheng, Libo Huang and Jie Bai
Remote Sens. 2023, 15(11), 2923; https://doi.org/10.3390/rs15112923 - 3 Jun 2023
Cited by 6 | Viewed by 3112
Abstract
This paper presents a target tracking algorithm based on 4D millimeter-wave radar point cloud information for autonomous driving applications, which addresses the limitations of traditional 2 + 1D radar systems by using higher-resolution target point clouds that enable more accurate motion state estimation and provide target contour information. The proposed algorithm starts by estimating the ego vehicle's velocity from the radial velocities of the millimeter-wave radar point cloud. Different clustering proposals are then obtained using a density-based clustering method, and association regions of the targets are derived from these proposals. Binary Bayesian filtering is then used to determine whether targets are dynamic or static based on their distribution characteristics. For dynamic targets, Kalman filtering estimates and updates the target state using trajectory and velocity information, while for static targets, the rolling ball method estimates and updates the target's contour boundary. Unassociated measurements have their contours estimated and their trajectories initialized, and unassociated trajectory targets are selectively retained or deleted. The effectiveness of the proposed method is verified using real data. Overall, the proposed algorithm has the potential to improve the accuracy and reliability of target tracking in autonomous driving, providing more comprehensive motion state and target contour information for better decision making. Full article
(This article belongs to the Special Issue Radar Remote Sensing for Applications in Intelligent Transportation)
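Estimating the ego velocity from the radial velocities of stationary scatterers is a linear least-squares problem: a static detection at azimuth θ measures v_r ≈ −(v_x cos θ + v_y sin θ). Below is a minimal sketch of that fit under the common planar, noise-only assumption; the paper's full pipeline (clustering, binary Bayesian dynamic/static classification, Kalman filtering, rolling-ball contours) is not reproduced here.

```python
import numpy as np

def ego_velocity(azimuth, v_radial):
    """Least-squares ego velocity (vx, vy) from static detections.

    azimuth  : (n,) azimuth angles of the detections, radians
    v_radial : (n,) measured radial velocities; a stationary scatterer
               satisfies v_r = -(vx*cos(az) + vy*sin(az))
    """
    A = -np.column_stack([np.cos(azimuth), np.sin(azimuth)])
    v, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return v  # (vx, vy) in the radar frame

# Toy usage: ego motion (10, 1) m/s observed through 50 static points.
rng = np.random.default_rng(0)
az = rng.uniform(-np.pi / 3, np.pi / 3, 50)
vr = -(10.0 * np.cos(az) + 1.0 * np.sin(az)) + rng.normal(0, 0.1, 50)
print(ego_velocity(az, vr))  # ~ [10.0, 1.0]
```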
Show Figures

Figure 1. Measurements of the same target at adjacent moments. (a–c) 3D views of the target point cloud at moments t − 2, t − 1, and t; (d–f) the corresponding top views.
Figure 2. 4D millimeter-wave radar point cloud tracking framework.
Figure 3. Module for the calculation of the target measurement.
Figure 4. Data acquisition platform, including 4D radar, lidar, and camera sensors.
Figure 5. Relationship between velocity estimation and angle at a distance of 5 m and an object rotation angle of 10 degrees, for different radial velocity measurement errors.
Figure 6. As Figure 5, at a distance of 20 m and a rotation angle of 10 degrees.
Figure 7. As Figure 5, at a distance of 5 m and a rotation angle of 40 degrees.
Figure 8. As Figure 5, at a distance of 20 m and a rotation angle of 40 degrees.
Figure 9. As Figure 5, at a distance of 20 m and a rotation angle of 70 degrees.
Figure 10. As Figure 5, at a distance of 40 m and a rotation angle of 70 degrees.
Figure 11. Method for estimating dynamic targets at different angles, and the relationship between probability and angle changes.
Figure 12. Rectangle formed by the estimated rotation angle and the true rotation angle.
Figure 13. 4D millimeter-wave radar point cloud and target tracking results for a single vehicle; green boxes mark dynamic targets, red boxes static targets, and blue boxes the ground-truth boxes of dynamic targets.
Figure 14. As Figure 13, for a single vehicle including an incorrect dynamic detection.
Figure 15. As Figure 13, for multiple objects.
Figure 16. Performance curves of different indicators for dynamic targets in a tracking scenario.