Search Results (806)

Search Parameters:
Keywords = radar imagery

27 pages, 25812 KiB  
Article
Forecasting Flood Inundation in U.S. Flood-Prone Regions Through a Data-Driven Approach (FIER): Using VIIRS Water Fractions and the National Water Model
by Amirhossein Rostami, Chi-Hung Chang, Hyongki Lee, Hung-Hsien Wan, Tien Le Thuy Du, Kel N. Markert, Gustavious P. Williams, E. James Nelson, Sanmei Li, William Straka III, Sean Helfrich and Angelica L. Gutierrez
Remote Sens. 2024, 16(23), 4357; https://doi.org/10.3390/rs16234357 - 22 Nov 2024
Viewed by 440
Abstract
Floods, among the costliest and most frequent natural hazards, are expected to worsen in the U.S. due to climate change. Real-time forecasting of flood inundation is extremely important for proactive decision-making to reduce damage. However, traditional forecasting methods face challenges in implementation and scalability due to computational burdens and data availability issues. Current forecasting services in the U.S. largely rely on hydrodynamic modeling, limited to river reaches near in situ gauges and requiring extensive data for model setup and calibration. Here, we have successfully adapted the Forecasting Inundation Extents using REOF (FIER) analysis framework to produce forecasted water fraction maps in two U.S. flood-prone regions, the Red River of the North Basin and the Upper Mississippi Alluvial Plain, utilizing Visible Infrared Imaging Radiometer Suite (VIIRS) optical imagery and the National Water Model. Compared against historical VIIRS imagery for the same dates, FIER 1- to 8-day medium-range pseudo-forecasts show that about 70–80% of pixels exhibit absolute errors of less than 30%. Although FIER was originally developed using Synthetic Aperture Radar (SAR) images, this study demonstrates its versatility and effectiveness in flood forecasting through its successful adaptation to optical VIIRS imagery, which provides a daily water fraction product and thus more historical observations to use as FIER inputs during peak flood times, particularly in regions where flooding commonly happens over a short period rather than following a broad seasonal pattern. Full article
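The bias correction sketched in Figure 5 below is empirical quantile mapping: each forecast value is assigned its quantile within the model's historical CDF and then mapped to the observed value at the same quantile. Here is a minimal NumPy sketch of that idea, with synthetic arrays standing in for the FIER-synthesized and VIIRS-observed histories (an illustration, not the authors' implementation):

```python
import numpy as np

def quantile_map(forecast, hist_model, hist_obs):
    """Empirical quantile mapping: map each forecast value to the observed
    value occupying the same quantile in the two historical CDFs."""
    # Quantile (empirical CDF value) of each forecast within the model history
    q = np.searchsorted(np.sort(hist_model), forecast) / len(hist_model)
    # Invert the observed CDF at those quantiles
    return np.quantile(hist_obs, np.clip(q, 0.0, 1.0))

# Synthetic stand-ins for FIER-synthesized and VIIRS-observed water fractions
rng = np.random.default_rng(0)
hist_model = np.clip(rng.normal(0.35, 0.15, 1000), 0, 1)  # biased model history
hist_obs = np.clip(rng.normal(0.30, 0.12, 1000), 0, 1)    # observed history
forecast = np.array([0.2, 0.5, 0.8])
print(quantile_map(forecast, hist_model, hist_obs))
```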
Figures:

Figure 1: (top) The USGS in situ streamflow data (blue line, cumecs: m³/s) from 2017 to 2020 at gauges located in (a) Drayton, North Dakota, along the Red River of the North mainstem, and (b) New Madrid, Missouri, along the Mississippi River mainstem. The green triangles mark all the epochs when Sentinel-1 images were acquired, while the orange dots mark the epochs of the VIIRS images used in this study. (bottom) The corresponding amount of data with less than 5% cloud coverage within each of the 10% USGS in situ streamflow percentile groups.
Figure 2: (Left column) JRC historical maximum inundation extents and permanent water from 1984 to 2022 [42], and (right column) the USGS NLCD 2021 cultivated croplands [44] in (a) RRNB and (b) UMAP. The white dots show the locations of the USGS in situ gauges used in this study.
Figure 3: Pie charts of the top five classes in the USDA CDL for 2021 and 2022 in the (a) RRNB and (b) UMAP, showing that the most dominant crops are spring wheat and soybeans, respectively.
Figure 4: Flowchart of the FIER process, which largely consists of framework construction and forecasting. The dashed arrow indicates the synthesis of RSMs and forecasted RTPCs.
Figure 5: Flowchart (left) and schematic view (right) of the quantile mapping process employed to correct the biases in FIER water fraction forecasts. The blue boxes in the flowchart represent historical water fraction data (FIER-synthesized and VIIRS-observed) and their respective CDFs. The red boxes represent forecasted water fraction data and the corresponding extracted quantiles.
Figure 6: The extracted streamflow-related (a) RSMs, (b) RTPCs along with USGS in situ streamflow data, and (c) neural network regression models for FIER water fraction forecasting in the RRNB.
Figure 7: The extracted streamflow-related (a) RSMs, (b) RTPCs along with USGS in situ streamflow data, and (c) neural network regression models for FIER water fraction forecasting in the UMAP.
Figure 8: The extracted streamflow-related (a) RSMs, (b) RTPCs along with USGS in situ streamflow data, and (c) neural network regression models for FIER Sentinel-1 inundation extent forecasting in the RRNB.
Figure 9: The extracted streamflow-related (a) RSMs, (b) RTPCs along with USGS in situ streamflow data, and (c) neural network regression models for FIER Sentinel-1 inundation extent forecasting in the UMAP.
Figure 10: Cumulative percentages of pixels in different ranges of AEs in the RRNB and UMAP over (a) all pixels and (b) pixels with high water fractions (>80%).
Figure 11: Water fractions on the peak-flood dates in 2022 and 2023 in the RRNB: (a) historical observation, where white pixels are clouds, (b) FIER pseudo-nowcast, and (c) 8-day FIER medium-range pseudo-forecast.
Figure 12: Water fractions on the peak-flood dates in 2021, 2022, and 2023 in the UMAP: (a) historical observation, where white pixels are clouds, (b) FIER pseudo-nowcast, and (c) 8-day FIER medium-range pseudo-forecast.
Figure 13: Examples of averaged FIER medium-range water fraction pseudo-forecasts over the next 1 to 8 days in the 2022 spring wheat fields in the RRNB, which could have been generated on (a) 2022-05-02, (b) 2022-05-03, or (c) 2022-05-04, before the peak flood on 2022-05-05 in the planting period.
Figure 14: Examples of averaged FIER medium-range water fraction pseudo-forecasts over the next 1 to 8 days in the 2022 soybean fields in the UMAP, which could have been generated on (a) 2022-05-11, (b) 2022-05-12, or (c) 2022-05-13, before the peak flood on 2022-05-14 in the planting period.

27 pages, 33223 KiB  
Article
Synergistic Coupling of Multi-Source Remote Sensing Data for Sandy Land Detection and Multi-Indicator Integrated Evaluation
by Junjun Wu, Yi Li, Bo Zhong, Yan Zhang, Qinhuo Liu, Xiaoliang Shi, Changyuan Ji, Shanlong Wu, Bin Sun, Changlong Li and Aixia Yang
Remote Sens. 2024, 16(22), 4322; https://doi.org/10.3390/rs16224322 - 19 Nov 2024
Viewed by 346
Abstract
Accurate and timely extraction and evaluation of sandy land are essential for ecological environmental protection, and such research is urgently needed to support the Land Degradation Neutrality target of the Sustainable Development Goals (SDGs). This study used Sentinel-1 Synthetic Aperture Radar (SAR) data and Landsat 8 OLI multispectral data as the main data sources. Combining the rich spectral information of optical data with the penetrating advantage of radar data, a feature-level fusion method was employed to unveil the intrinsic nature of vegetative cover and accurately identify sandy land. Simultaneously, leveraging the results obtained from training with measured data, a comprehensive desertification assessment model was proposed that combines multiple indicators to achieve a thorough evaluation of sandy land. The results showed that the feature-level fusion method achieved an overall accuracy of 86.31% in sandy land detection in Gansu Province, China. The integrated multi-indicator model C22_C/FVC, the ratio of the correlation texture feature of VH to vegetation cover, classifies sandy land into three categories: pixels with C22_C/FVC less than 2.2 are fixed sandy land, values between 2.2 and 5.2 indicate semi-fixed sandy land, and values greater than 5.2 indicate shifting sandy land. Shifting and semi-fixed sandy land are the predominant types in Gansu Province, covering 85,100 and 87,100 square kilometers, respectively, while fixed sandy land covers the least area, 51,800 square kilometers. The method presented in this paper is robust for the detection and evaluation of sandy land from satellite imagery and can potentially be applied to high-resolution, large-scale detection and evaluation of sandy land. Full article
(This article belongs to the Section Ecological Remote Sensing)
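The classification rule reported in the abstract is a simple per-pixel thresholding of the C22_C/FVC ratio. A minimal NumPy sketch of that rule, assuming the VH correlation texture (C22_C) and fractional vegetation cover (FVC) rasters have already been derived:

```python
import numpy as np

def classify_sandy_land(c22_corr, fvc, eps=1e-6):
    """Classify sandy land from the C22_C/FVC ratio using the paper's
    thresholds: <2.2 fixed, 2.2-5.2 semi-fixed, >5.2 shifting."""
    ratio = c22_corr / np.maximum(fvc, eps)        # avoid division by zero
    classes = np.full(ratio.shape, 1, dtype=np.uint8)   # 1 = fixed (<2.2)
    classes[(ratio >= 2.2) & (ratio <= 5.2)] = 2        # 2 = semi-fixed
    classes[ratio > 5.2] = 3                            # 3 = shifting
    return classes

# Toy rasters standing in for real Sentinel-1 texture / Landsat FVC layers
c22 = np.array([[0.5, 1.2], [2.0, 3.1]])
fvc = np.array([[0.4, 0.3], [0.5, 0.4]])
print(classify_sandy_land(c22, fvc))   # -> [[1 2] [2 3]]
```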
Figures:

Figure 1: Administrative map of Gansu Province.
Figure 2: Distribution of sampling plots.
Figure 3: Technical flowchart.
Figure 4: Distribution of vegetation cover in southern Gansu Province.
Figure 5: Detection results of sandy land. (a) Landsat 8 OLI image in the test area of Gansu Province; (b) spectral reflectance curves of different objects from the Landsat 8 OLI image; (c) sandy land detection based on Landsat 8 OLI; (d) sandy land detection based on Sentinel-1; (e) sandy land detection based on GS fusion; (f) sandy land detection based on PCA fusion; (g) sandy land detection based on HSV fusion; (h) sandy land detection based on feature-level fusion.
Figure 6: Detection of sandy land in Gansu Province.
Figure 7: The 25 indicators generated from both spectral and radar data.
Figure 8: The distribution of samples with different types of sandy land in different optical indicators: (a) NDVI; (b) MSAVI; (c) FVC; (d) EVI; (e) Albedo; (f) BSI; (g) LST_Median; (h) LST_Mean; (i) LST_Max.
Figure 9: The distribution of samples with different types of sandy land in texture features of C11: (a) C11; (b) C11_Contrast; (c) C11_Correlation; (d) C11_Dissimilarity; (e) C11_Energy; (f) C11_Entropy; (g) C11_Homogeneity; (h) C11_Mean.
Figure 10: The distribution of samples with different types of sandy land in texture features of C22: (a) C22; (b) C22_Contrast; (c) C22_Correlation; (d) C22_Dissimilarity; (e) C22_Energy; (f) C22_Entropy; (g) C22_Homogeneity; (h) C22_Mean.
Figure 11: Distribution of samples in C22_C/FVC.
Figure 12: Evaluation of sandy land in Gansu.

17 pages, 8145 KiB  
Article
Integrated Anti-Aliasing and Fully Shared Convolution for Small-Ship Detection in Synthetic Aperture Radar (SAR) Images
by Manman He, Junya Liu, Zhen Yang and Zhijian Yin
Electronics 2024, 13(22), 4540; https://doi.org/10.3390/electronics13224540 - 19 Nov 2024
Viewed by 353
Abstract
Synthetic Aperture Radar (SAR) imaging plays a vital role in maritime surveillance, yet the detection of small vessels poses a significant challenge when employing conventional Constant False Alarm Rate (CFAR) techniques, primarily due to limitations in resolution and the presence of clutter. Deep learning (DL) offers a promising alternative, yet it still struggles with identifying small targets in complex SAR backgrounds because of feature ambiguity and noise. To address these challenges, our team has developed the AFSC network, which combines anti-aliasing techniques with fully shared convolutional layers to improve the detection of small targets in SAR imagery. The network is composed of three key components: the Backbone Feature Extraction Module (BFEM) for initial feature extraction, the Neck Feature Fusion Module (NFFM) for consolidating features, and the Head Detection Module (HDM) for final object detection. The BFEM serves as the principal feature extraction stage, with a primary emphasis on extracting features of small targets. The NFFM integrates an anti-aliasing element designed to accentuate the feature details of small objects throughout the fusion procedure, and the HDM adopts a new fully shared convolution strategy to make the model more lightweight. Our approach shows better speed and accuracy for detecting small targets in SAR imagery than other leading methods on the SSDD dataset, attaining a mean Average Precision (AP) of 69.3% and a specific AP for small targets (APS) of 66.5%. Furthermore, the network's robustness was confirmed using the HRSID dataset. Full article
(This article belongs to the Special Issue Advances in AI Technology for Remote Sensing Image Processing)
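The abstract does not spell out the anti-aliasing element, but a common way to realize anti-aliasing in detection networks is blur-then-subsample (BlurPool-style) downsampling, which low-pass filters a feature map before stride-2 subsampling so small-target detail is not corrupted by aliasing. A hedged PyTorch sketch of that generic technique, not necessarily the authors' exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Blur-then-subsample: apply a fixed low-pass (binomial) filter before
    stride-2 downsampling to suppress aliasing (after Zhang, 2019)."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)
        k = k / k.sum()                            # normalized 3x3 blur kernel
        # One copy of the kernel per channel for a depthwise convolution
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).contiguous())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

x = torch.randn(1, 64, 80, 80)          # a feature map entering the neck
print(BlurPool2d(64)(x).shape)          # -> torch.Size([1, 64, 40, 40])
```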
Figures:

Figure 1: Architecture of the proposed network AFSC, including BFEM, NFFM, and HDM.
Figure 2: Architecture of the proposed sub-module PC.
Figure 3: Architecture of the proposed module HDM.
Figure 4: Details of the bounding-box dimensions in both datasets. For each dataset, the top-left graph shows the number of training samples per category; the top-right shows the size and number of boxes; the lower-left depicts the location of each object's center relative to the entire image; and the lower-right shows each object's height-to-width ratio relative to the entire image.
Figure 5: Visual results on the SSDD dataset. Red bounding boxes indicate the actual positions, blue boxes the predicted locations, yellow boxes missed detections, and the orange box a false detection. Rows 1 to 9 show Ground Truth, Faster R-CNN, MobileNet, SSD, RetinaNet, YOLOv7, YOLOv8, YOLOv10, and AFSC.
Figure 6: Curves of the experimental results, with the x-axis indicating the number of epochs and the y-axis the corresponding quantitative results.
Figure 7: Confusion matrices for the base model and the AFSC model. Based on the actual and predicted categories from the classification model, each matrix organizes the dataset's records so that rows correspond to the true categories and columns to the model's predicted categories.

24 pages, 2680 KiB  
Review
Remote Sensing Techniques for Assessing Snow Avalanche Formation Factors and Building Hazard Monitoring Systems
by Natalya Denissova, Serik Nurakynov, Olga Petrova, Daniker Chepashev, Gulzhan Daumova and Alena Yelisseyeva
Atmosphere 2024, 15(11), 1343; https://doi.org/10.3390/atmos15111343 - 9 Nov 2024
Viewed by 691
Abstract
Snow avalanches, one of the most severe natural hazards in mountainous regions, pose significant risks to human lives, infrastructure, and ecosystems. As climate change accelerates shifts in snowfall and temperature patterns, it is increasingly important to improve our ability to monitor and predict avalanches. This review explores the use of remote sensing technologies in understanding key geomorphological, geobotanical, and meteorological factors that contribute to avalanche formation. The primary objective is to assess how remote sensing can enhance avalanche risk assessment and monitoring systems. A systematic literature review was conducted, focusing on studies published between 2010 and 2025. The analysis involved screening relevant studies on remote sensing, avalanche dynamics, and data processing techniques. Key data sources included satellite platforms such as Sentinel-1, Sentinel-2, TerraSAR-X, and Landsat-8, combined with machine learning, data fusion, and change detection algorithms to process and interpret the data. The review found that remote sensing significantly improves avalanche monitoring by providing continuous, large-scale coverage of snowpack stability and terrain features. Optical and radar imagery enable the detection of crucial parameters like snow cover, slope, and vegetation that influence avalanche risks. However, challenges such as limitations in spatial and temporal resolution and real-time monitoring were identified. Emerging technologies, including microsatellites and hyperspectral imaging, offer potential solutions to these issues. The practical implications of these findings underscore the importance of integrating remote sensing data with ground-based observations for more robust avalanche forecasting. Enhanced real-time monitoring and data fusion techniques will improve disaster management, allowing for quicker response times and more effective policymaking to mitigate risks in avalanche-prone regions. Full article
Figures:

Figure 1: Flow chart of the literature search strategy.
Figure 2: Geographic distribution of study areas where relevant literature was found.
Figure 3: Number of publications per year.
Figure 4: Word cloud illustrating the frequency of terms in the titles of the reviewed articles.
Figure 5: Clustered co-occurrence map of the most relevant terms from the titles of the compiled articles.

16 pages, 32403 KiB  
Article
Integrated Analysis of Rockfalls and Floods in the Jiului Gorge, Romania: Impacts on Road and Rail Traffic
by Marian Puie and Bogdan-Andrei Mihai
Appl. Sci. 2024, 14(22), 10270; https://doi.org/10.3390/app142210270 - 8 Nov 2024
Viewed by 664
Abstract
This study examines the impact of rockfalls and floods on road and rail traffic in the Jiului Gorge, Romania, a critical transportation corridor. Using Sentinel-1 radar imagery processed through ESA SNAP and ArcGIS Pro, alongside traffic detection facilitated by YOLO models, we assessed susceptibility to both rockfalls and floods. The primary aim was to enhance the safety of road and rail users by providing accurate hazard mapping. Our study covers the area from Bumbești-Jiu to Petroșani, traversing the Southern Carpathians. The results demonstrate the utility of integrating remote sensing with machine learning to improve hazard management and inform more effective traffic planning. These findings contribute to safer, more resilient infrastructure in areas vulnerable to natural hazards. Full article
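Running a pretrained YOLO detector over monitoring imagery takes only a few lines with the Ultralytics API. The sketch below illustrates the inference step only; the weight file and image name are placeholders, and the authors' actual training data and classes are not reproduced here:

```python
# Minimal inference sketch with the Ultralytics YOLO API (not the authors'
# exact setup); "yolov9c.pt" and the image path are placeholders.
from ultralytics import YOLO

model = YOLO("yolov9c.pt")                            # pretrained YOLOv9 weights
results = model("gorge_camera_frame.jpg", conf=0.25)  # single-image inference
for r in results:
    for box, cls, score in zip(r.boxes.xyxy, r.boxes.cls, r.boxes.conf):
        # class name, confidence, and [x1, y1, x2, y2] pixel coordinates
        print(model.names[int(cls)], score.item(), box.tolist())
```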
Figures:

Figure 1: Geographical location of the study area.
Figure 2: Sentinel-1 GRD images of the study area from the descending orbit (left, 20 January 2023) and the ascending orbit (middle, 20 January 2023), the RGB interferogram, and the processing software workflow.
Figure 3: ESA SNAP software workflow image samples for rockfall detection from the Sentinel-1 SLC product.
Figure 4: Flood map of the Jiului Gorge region, illustrating the extent and severity of flood events based on Sentinel-1 GRD images.
Figure 5: Rockfall map displaying incidents along National Road 66 and surrounding slopes for specified dates.
Figure 6: Rockfall susceptibility map showing areas highly susceptible to rockfall, with a notable prevalence in the upper section of the gorge.
Figure 7: Rockfall susceptibility map combined with affected areas from radar images, highlighting the upper part of the gorge with high susceptibility.
Figure 8: Flood susceptibility map combining DEM-derived slope classifications, land cover types, rainfall, and proximity to water bodies, showing higher susceptibility in wider parts of the gorge.
Figure 9: Flood susceptibility map combined with radar-detected flood areas, illustrating increased susceptibility in the central and southern parts of the gorge.
Figure 10: Train detection and recognition using YOLO models, illustrating detection from a significant distance with reduced visibility.
Figure 11: Road traffic element detection with greater precision due to closer camera proximity.
Figure 12: Training results from the YOLOv9 model, showcasing the classes obtained after training.
Figure 13: Detection results including several classes, highlighting various rockfall types.
Figure 14: Detection results focusing on a single class, illustrating detailed rockfall identification.

23 pages, 6153 KiB  
Article
An Enhanced Shuffle Attention with Context Decoupling Head with Wise IoU Loss for SAR Ship Detection
by Yunshan Tang, Yue Zhang, Jiarong Xiao, Yue Cao and Zhongjun Yu
Remote Sens. 2024, 16(22), 4128; https://doi.org/10.3390/rs16224128 - 5 Nov 2024
Viewed by 569
Abstract
Synthetic Aperture Radar (SAR) imagery is widely utilized in military and civilian applications. Recent deep learning advancements have led to improved ship detection algorithms, enhancing accuracy and speed over traditional Constant False-Alarm Rate (CFAR) methods. However, challenges remain with complex backgrounds and multi-scale ship targets amidst significant interference. This paper introduces a novel method that features a context-based decoupled head, leveraging positioning and semantic information, and incorporates shuffle attention to enhance feature map interpretation. Additionally, we propose a new loss function with a dynamic non-monotonic focus mechanism to tackle these issues. Experimental results on the HRSID and SAR-Ship-Dataset demonstrate that our approach significantly improves detection performance over the original YOLOv5 algorithm and other existing methods. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
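Shuffle attention groups the channels, applies lightweight channel and spatial attention per group, and then mixes information across groups with a channel-shuffle operation. The sketch below shows just that shuffle step, a reshape-transpose-reshape over the channel axis; it is a generic building block, not the paper's full module:

```python
import torch

def channel_shuffle(x, groups):
    """Channel shuffle (as in ShuffleNet / shuffle attention): interleave
    channels across groups so grouped branches exchange information."""
    b, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # (B, g, C/g, H, W) -> swap group and channel axes -> flatten back
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(b, c, h, w))

x = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten().tolist())
# -> [0, 4, 1, 5, 2, 6, 3, 7]: channels from the two groups interleaved
```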
Figures:

Figure 1: Several typical examples of situations with small vessel targets and an inshore background.
Figure 2: Overview of the proposed method's structure. The network uses the YOLOv5 backbone and a PAN neck; the shuffle attention module and Context Decoupled Head added in this paper appear in the Attention Module and Context Decoupled Head parts of the figure.
Figure 3: The structure of the shuffle attention process.
Figure 4: Semantic Context Encoding (SCE).
Figure 5: Detail Preserving Encoding (DPE).
Figure 6: Comparison of detection performance for SAR ship targets across algorithms: column (a) shows the ground truth (GT), (b) the YOLOX algorithm, (c) the YOLOv5 baseline, and (d) the proposed approach. Green boxes mark GT targets and red boxes mark detected targets.
Figure 7: Test results in complex scenarios. The first row shows high-noise conditions, where (a,c) are the ground truth and (b,d) the corresponding test results; the second row presents dense, small-target situations, with (e,g) as the ground truth and (f,h) the corresponding results; the third row illustrates complex multi-scale scenarios, where (i,k) are the ground truth and (j,l) the corresponding results. Green and red boxes mark GT and detected targets, while yellow circles mark missed or incorrect detections.

20 pages, 22822 KiB  
Article
Monitoring Aeolian Erosion from Surface Coal Mines in the Mongolian Gobi Using InSAR Time Series Analysis
by Jungrack Kim, Bayasgalan Amgalan and Amanjol Bulkhbai
Remote Sens. 2024, 16(21), 4111; https://doi.org/10.3390/rs16214111 - 3 Nov 2024
Viewed by 1015
Abstract
Surface mining in the southeastern Gobi Desert has significant environmental impacts, primarily due to the creation of large coal piles that are highly susceptible to aeolian processes. Using spaceborne remote sensing and numerical simulations, we investigated erosional processes and their environmental impacts. Our primary tool was Interferometric Synthetic Aperture Radar (InSAR) data from Sentinel-1 imagery collected between 2017 and 2022. We analyzed these data using phase angle information from the Small Baseline InSAR time series framework. The time series analyses revealed intensive aeolian erosion in the coal piles, represented as thin deformation patterns along the potential pathways of aerodynamic transportation. Further analysis of multispectral data, combined with correlations between wind patterns and trajectory simulations, highlighted the detrimental impact of coal dust on the surrounding environment and the mechanism of aeolian erosion. The lack of mitigation measures, such as water spray, appeared to exacerbate erosion and dust generation. This study demonstrates the feasibility of using publicly available remote sensing data to monitor coal mining activities and their environmental hazards. Our findings contribute to a better understanding of coal dust generation processes in surface mining operations as well as the aeolian erosion mechanism in desert environments. Full article
(This article belongs to the Special Issue Remote Sensing and Geophysics Methods for Geomorphology Research)
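Decomposing ascending and descending LOS velocities into vertical and horizontal (east-west) components, as in Figures 5 and 7 below, amounts to solving a small linear system per pixel once the imaging geometry is known. A hedged NumPy sketch with illustrative incidence and heading angles; sign conventions differ between processors, and the north component is neglected here because it is poorly constrained by near-polar orbits:

```python
import numpy as np

def los_unit_vector(incidence_deg, heading_deg):
    """East, north, up components of the LOS unit vector for a right-looking
    SAR, under one common sign convention (conventions vary by processor)."""
    inc, head = np.radians(incidence_deg), np.radians(heading_deg)
    return np.array([-np.sin(inc) * np.cos(head),   # east
                     np.sin(inc) * np.sin(head),    # north
                     np.cos(inc)])                  # up

# Illustrative Sentinel-1-like ascending and descending geometries
asc = los_unit_vector(39.0, -12.0)
dsc = los_unit_vector(39.0, -168.0)

# Two LOS observations -> solve for (east, up), neglecting north motion
A = np.array([[asc[0], asc[2]],
              [dsc[0], dsc[2]]])
d_los = np.array([-0.020, 0.005])        # LOS velocities in m/yr (toy values)
east, up = np.linalg.solve(A, d_los)
print(f"east = {east:.4f} m/yr, up = {up:.4f} m/yr")
```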
Figures:

Graphical abstract

Figure 1: (a) Location of study areas, (b) geographical and topographic context of the Ail Bayan and Tavan Tolgoi coal mines, (c) surrounding hydrological contexts (source: https://eic.mn/, accessed on 1 July 2024), (d) coal production and transportation at Tavan Tolgoi (43.625N, 105.474E), and (e) coal dust generation at Ail Bayan (43.717N, 108.946E) (images taken in March 2019). Note that the transportation vehicles in Tavan Tolgoi are well confined so as not to produce coal dust.
Figure 2: Acquisition times and connection graphs of the employed ascending/descending Sentinel-1 InSAR pairs over Ail Bayan (a,b) and Tavan Tolgoi (c–e). The phase coherences of InSAR pairs are always higher than 0.7. In Ail Bayan, the ascending-mode InSAR observations were interpolated into the descending-mode time domain for decomposition. Similarly, over Tavan Tolgoi, the ascending- and descending-mode observations in path 62 were interpolated into the descending-mode time domain of path 135. While the thresholds for perpendicular and temporal baselines were set to 150 m and 25 days, respectively, some InSAR pairs exceeding these thresholds were included to enhance interferometric coverage.
Figure 3: Processing workflow for InSAR time series data, including integration with other satellite and spatial datasets.
Figure 4: (a) Google map showing the details of Ail Bayan, (b) topography from the Copernicus 30 m DEM, (c) ascending LOS velocity, and (d) descending LOS deformation velocity.
Figure 5: (a) Decomposed horizontal velocity and (b) vertical velocity in Ail Bayan. The overlaid average wind velocities were extracted using GEE and interpolated to a 1 km resolution from the original 11.132 km ERA5-Land data using kriging.
Figure 6: (a) Google map showing the details of Tavan Tolgoi, (b) topography from the Copernicus 30 m DEM, (c) ascending LOS velocity, (d) descending LOS deformation velocity of the path 135 coverage, and (e) descending LOS deformation velocity of the path 62 coverage.
Figure 7: Decomposed velocities in Tavan Tolgoi: (a) horizontal velocity and (b) vertical velocity. The wind directions are similar to those in the Ail Bayan area, blowing from west to east.
Figure 8: The behavior of seven RoIs along with mean wind velocities in different modes: (a) ascending mode, (b) descending mode, (c) decomposed horizontal deformation velocities, and (d) decomposed vertical deformation velocities.
Figure 9: Correlation maps between InSAR deformation velocities and average wind velocities for the corresponding periods in (a) ascending mode, (b) descending mode, (c) the horizontal component of decomposed InSAR velocities, and (d) the vertical component of decomposed InSAR velocities.
Figure 10: Spectral signature analyses using Sentinel-2 time series images on (a) 23 March 2018, (b) 17 April 2018, (c) 2 May 2018, and (d) 22 May 2018. A lower SID value indicates greater spectral similarity. (e) Visual-band view of the Sentinel-2 image (2 May 2018); (f) spectral signatures in the Down1 and Down3/Up3 areas representing the coal mine and major FMP aeolian sites.
Figure 11: Wind factors influencing coal mine dust generation: (a) monthly wind velocity at an altitude of 10 m, (b) friction velocities for different 10 m wind velocities and roughness lengths, (c) trajectory simulations originating from the coal mine during the sand dust season from March to May 2018, and (d) trajectory simulations during the summer season of 2018.
Figure 12: Average NMDI maps for (a) Ail Bayan from 24 September 2017 to 14 August 2018, (b) the same region from 23 January 2017 to 9 May 2022, (c) the Tavan Tolgoi region from 11 September 2017 to 10 March 2018, and (d) the same region from 4 January 2017 to 26 May 2022.
Figure 13: Environmental consequences of coal mine dust generation: (a) HYSPLIT trajectory simulations originating from the coal mine on 8 March 2018 and (b) on 18 April 2018, using an ensemble HYSPLIT model with 150-hour forward-trajectory options. (c) Ground photos from the Down3 area in Ail Bayan showing soil and vegetation contaminated by blown FMP (images taken in March 2019).

19 pages, 21263 KiB  
Article
Interferometric Synthetic Aperture Radar Phase Linking with Level 2 Coregistered Single Look Complexes: Enhancing Infrastructure Monitoring Accuracy at Algeciras Port
by Jaime Sánchez-Fernández, Alfredo Fernández-Landa, Álvaro Hernández Cabezudo and Rafael Molina Sánchez
Remote Sens. 2024, 16(21), 3966; https://doi.org/10.3390/rs16213966 - 25 Oct 2024
Viewed by 528
Abstract
This paper presents an advanced workflow for processing radar imagery stacks using Persistent Scatterer and Distributed Scatterer Interferometry (PSDS) to enhance spatial coherence and improve displacement detection accuracy. The workflow leverages Level 2 Coregistered Single Look Complex (L2-CSLC) images generated by the open-source COMPASS (Coregistered Multi-temporal Sar SLC) framework in combination with the Combined eigenvalue maximum likelihood Phase Linking (CPL) approach implemented in MiaplPy. Starting the analysis directly from Level 2 products offers a significant advantage to end users, since these products are pre-geocoded and ready for immediate analysis. The open-source nature of the workflow and the use of L2-CSLC products also streamline the processing pipeline, making it easier to distribute directly to users for practical applications in monitoring infrastructure stability in dynamic environments. The ISCE3-MiaplPy workflow is compared against ISCE2-MiaplPy and the European Ground Motion Service (EGMS) to assess its performance in detecting infrastructure deformation in dynamic environments such as the Algeciras port. The results indicate that ISCE3-MiaplPy delivers denser measurements, albeit with increased noise, compared to its counterparts. This higher resolution enables a more detailed understanding of infrastructure stability and surface dynamics, which is critical for environments subject to ongoing human activity or natural forces. Full article
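Phase linking estimates one self-consistent phase per acquisition from the stack's complex covariance matrix. The simplest variant takes the leading eigenvector of that matrix; the sketch below shows this EVD flavor on a toy stack, as a stand-in for MiaplPy's more sophisticated CPL estimator:

```python
import numpy as np

def evd_phase_linking(C):
    """Estimate a consistent wrapped-phase series from a complex Hermitian
    coherence/covariance matrix C (N x N) via its leading eigenvector -
    the simple eigendecomposition (EVD) variant of phase linking."""
    w, v = np.linalg.eigh(C)                    # eigenvalues in ascending order
    lead = v[:, -1]                             # eigenvector of largest eigenvalue
    return np.angle(lead * np.conj(lead[0]))    # reference to first acquisition

# Toy 8-epoch stack: a linear deformation phase history plus noise
rng = np.random.default_rng(1)
true_phase = np.linspace(0.0, 2.0, 8)           # radians
z = np.exp(1j * true_phase)
C = np.outer(z, z.conj()) + 0.1 * rng.normal(size=(8, 8))
C = (C + C.conj().T) / 2                        # keep the matrix Hermitian
print(np.round(evd_phase_linking(C), 3))        # approximately true_phase
```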
Figures:

Figure 1: (a) Three swaths from the interferometric wide swath mode ascending Track 74. Algeciras port is located in IW1, and burst t074-157011-iw1 (blue rectangle) was processed. (b) AOI processed with the PSDS software.
Figure 2: Proposed workflow schema.
Figure 3: Coregistered SLC timing corrections for the whole burst on 24 March 2020: (a) slant-range geometrical Doppler, (b) azimuth bistatic delay, (c) azimuth FM rate mismatch, (d) slant-range solid Earth tides, (e) azimuth-time solid Earth tides, (f) line-of-sight ionospheric delay, (g) wet LOS troposphere, (h) dry LOS troposphere.
Figure 4: InSAR network selection. (a) Mask of connected components before (purple) and after (yellow) IFG selection, (b) number of connected components, (c) number of IFGs not connected per pixel, (d) number of unconnected pixels per IFG, with discarded IFGs shown in yellow, (e) the selected IFG network.
Figure 5: (a) Temporal coherence, (b) mean amplitude, (c) scatterer type, (d) amplitude dispersion.
Figure 6: (a) EGMS velocity for the Algeciras port. (b) The same area processed using CSLCs and phase linking. (c) The same area processed using ISCE2 and geocoding after phase linking. The reference point used for processing is highlighted in white in (b,c).
Figure 7: (a) Histograms of the velocity in ISCE3-MiaplPy, ISCE2-MiaplPy, and EGMS over the AOI. (b) Histogram of velocity differences between ISCE3 and EGMS. (c) Histogram of velocity differences between ISCE2 and ISCE3.
Figure 8: (a) Comparison of a group of time series over the EVOS Terminal in ISCE3-MiaplPy and EGMS. (b) Measurement points over the area from EGMS, colored by velocity. (c) The same for ISCE3-MiaplPy.
Figure 9: (a) Comparison of a group of time series over Isla Verde Exterior in ISCE3-MiaplPy and EGMS. (b) Measurement points over the area from EGMS, colored by velocity. (c) The same for ISCE3-MiaplPy.

18 pages, 7440 KiB  
Article
A Novel Method for the Estimation of Sea Surface Wind Speed from SAR Imagery
by Zahra Jafari, Pradeep Bobby, Ebrahim Karami and Rocky Taylor
J. Mar. Sci. Eng. 2024, 12(10), 1881; https://doi.org/10.3390/jmse12101881 - 20 Oct 2024
Viewed by 765
Abstract
Wind is one of the important environmental factors influencing marine target detection, as it is the source of sea clutter and also affects target motion and drift. Accurate estimation of wind speed is crucial for developing an efficient machine learning (ML) model for target detection; for example, high wind speeds make it more likely that clutter is mistakenly detected as a marine target. This paper presents a novel approach for the estimation of sea surface wind speed (SSWS) and direction from satellite imagery using ML algorithms. Unlike existing methods, the proposed technique does not require wind direction information or normalized radar cross-section (NRCS) values and can therefore be used for a wide range of satellite images when the initial calibrated data are not available. In the proposed method, we extract features from co-polarized (HH) and cross-polarized (HV) satellite images and then fuse advanced regression techniques for SSWS estimation. A comparison between the proposed model and three well-known C-band models (CMODs)—CMOD-IFR2, CMOD5N, and CMOD7—further indicates the superior performance of the proposed model, which achieved the lowest Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) on the RCM dataset: 0.97 m/s and 0.62 m/s for calibrated images, and 1.37 m/s and 0.97 m/s for uncalibrated images, respectively. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Marine Environmental Monitoring)
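The core recipe, extracting patch statistics from HH/HV imagery and regressing wind speed with an ML model, can be sketched in a few lines of scikit-learn. The features and synthetic backscatter below are purely illustrative stand-ins for the paper's feature set and RCM data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

def patch_features(hh, hv):
    """Illustrative patch statistics from HH/HV intensity (not the paper's
    exact feature set): means, variances, and a cross-pol ratio."""
    return [hh.mean(), hh.var(), hv.mean(), hv.var(),
            hv.mean() / (hh.mean() + 1e-9)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(300):                        # toy dataset of 300 image patches
    wind = rng.uniform(2, 20)               # "true" wind speed in m/s
    hh = rng.gamma(2.0, 0.010 * wind, (32, 32))   # synthetic backscatter
    hv = rng.gamma(2.0, 0.004 * wind, (32, 32))
    X.append(patch_features(hh, hv))
    y.append(wind)

X, y = np.array(X), np.array(y)
model = GradientBoostingRegressor().fit(X[:240], y[:240])
pred = model.predict(X[240:])
rmse = mean_squared_error(y[240:], pred) ** 0.5
print(f"RMSE={rmse:.2f} m/s, MAE={mean_absolute_error(y[240:], pred):.2f} m/s")
```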
Figures:

Figure 1: Distribution of wind direction and wind speed.
Figure 2: NRCS vs. incidence angle for different wind speeds and directions using the CMOD5N and CMOD7 functions.
Figure 3: Scatter plots of real versus calculated wind speed using the (a) CMOD5, (b) CMOD-IFR, and (c) CMOD7 models with HH polarization.
Figure 4: Scatter plots of real versus calculated wind speed using the (a) CMOD5, (b) CMOD-IFR, and (c) CMOD7 models after compensation for polarization.
Figure 5: Distribution of intensities for HH and HV polarizations at high and low wind speeds.
Figure 6: Block diagram of the proposed system.
Figure 7: Effect of the despeckling filter on an RCM image.
Figure 8: Histogram of the introduced feature extracted from calibrated data, with orange representing low wind, green mid wind, and purple high wind.
Figure 9: Histogram of the introduced feature extracted from uncalibrated data, with orange representing low wind, green mid wind, and purple high wind.
Figure 10: Comparisons of retrieved SSWS using concatenated models with different features from the calibrated RCM dataset.
Figure 11: Comparisons of retrieved SSWS using concatenated models with different features from the uncalibrated RCM dataset.
Figure 12: The closest region where both RCM data and buoy station data are available.
Figure 13: ERA5 vs. buoy wind speeds for the south of Greenland across all seasons in 2023.
Figure 14: Testing the proposed model in the south of Greenland using buoy wind speed data.

34 pages, 8862 KiB  
Article
A Novel Detection Transformer Framework for Ship Detection in Synthetic Aperture Radar Imagery Using Advanced Feature Fusion and Polarimetric Techniques
by Mahmoud Ahmed, Naser El-Sheimy and Henry Leung
Remote Sens. 2024, 16(20), 3877; https://doi.org/10.3390/rs16203877 - 18 Oct 2024
Viewed by 909
Abstract
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle with accurately detecting smaller targets as well as adapting to varying environmental conditions. These methods, relying on either intensity values or single-target characteristics, often fail to enhance the signal-to-clutter ratio (SCR) and are prone to false detections due to environmental factors. To address these issues, a novel framework is introduced that leverages the detection transformer (DETR) model along with advanced feature fusion techniques to enhance ship detection. This feature enhancement DETR (FEDETR) module manages clutter and improves feature extraction through preprocessing techniques such as filtering, denoising, and applying maximum and median pooling with various kernel sizes. Furthermore, it combines metrics like the line spread function (LSF), peak signal-to-noise ratio (PSNR), and F1 score to predict optimal pooling configurations and thus enhance edge sharpness, image fidelity, and detection accuracy. Complementing this, the weighted feature fusion (WFF) module integrates polarimetric SAR (PolSAR) methods such as Pauli decomposition, coherence matrix analysis, and feature volume and helix scattering (Fvh) components decomposition, along with FEDETR attention maps, to provide detailed radar scattering insights that enhance ship response characterization. Finally, by integrating wave polarization properties, the ability to distinguish and characterize targets is augmented, thereby improving SCR and facilitating the detection of weakly scattered targets in SAR imagery. Overall, this new framework significantly boosts DETR’s performance, offering a robust solution for maritime surveillance and security. Full article
(This article belongs to the Special Issue Target Detection with Fully-Polarized Radar)
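One of FEDETR's ingredients is scoring pooling configurations with image-fidelity metrics such as PSNR. A minimal sketch of that scoring loop, using same-size max/median filters as stand-ins for the pooling operations (so the images stay comparable) and a random chip in place of real SAR data:

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((128, 128))                 # stand-in SAR chip in [0, 1]
for k in (3, 5, 7, 9):                       # kernel sizes from the paper
    for name, filt in (("max", maximum_filter), ("median", median_filter)):
        # Higher PSNR = the filtered chip stays closer to the original
        print(f"{name} pooling, k={k}: {psnr(img, filt(img, size=k)):.2f} dB")
```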
Figures:

Graphical abstract

Figure 1: Flowchart of the proposed ship detection in SAR imagery.
Figure 2: CNN preprocessing model.
Figure 3: DETR pipeline overview [52].
Figure 4: Performance of FEDETR for two images from the test datasets SSDD and SAR Ship, including Gaofen-3 (a1–a8) and Sentinel-1 (b1–b8) images with different polarizations and resolutions. Ground truths, detection results, false detections, and missed detections are indicated with green, red, yellow, and blue boxes, respectively.
Figure 5: Experimental results for ship detection in SAR images across four distinct regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground truth images; (b–e) detection results for DETR using VV and VH (DETR_VV, DETR_VH) and FEDETR using VV and VH (FEDETR_VV, FEDETR_VH) polarizations, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes.
Figure 6: Experimental results for ship detection in SAR images across four regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground truth images; (b,c) predicted results from FEDETR with optimal pooling and kernel size and from the WFF method, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes.
Figure 7: Correlation matrices analyzing the relationship between kernel size, LSF, and PSNR for (a) max pooling and (b) median pooling on the SSDD and SAR Ship datasets, validating the effectiveness of the FEDETR module.
Figure 8: LSF of images with different pooling types and kernel sizes. Panels (a1–a4) depict LSF images after max pooling and panels (a5–a8) after median pooling with kernel sizes 3, 5, 7, and 9, respectively, for Gaofen-3 HH images from the SAR Ship dataset. Panels (b1–b4) illustrate LSF images after max pooling and panels (b5–b8) after median pooling for images from the SSDD dataset.
Figure 9: Backscattering intensity in VV and VH polarizations and ship presence across four regions. (a1,a2) Backscattering intensity in VV and VH polarizations for Onshore1 and (a3,a4) for ships in Onshore1; (b1,b2) and (b3,b4) the same for Onshore2; (c1,c2) and (c3,c4) the same for Offshore1; (d1,d2) and (d3,d4) the same for Offshore2. In each subfigure, the x-axis represents pixel intensity and the y-axis frequency.
Figure 10: LSF and PSNR comparisons for onshore and offshore areas (Onshore1 (a,b), Onshore2 (c,d), Offshore1 (e,f), Offshore2 (g,h)) using VV and VH polarization with median and max pooling.
Figure 11: Visual comparison of max and median pooling with different kernel sizes on onshore and offshore SAR imagery for VV and VH polarizations: (a1,a2) Onshore1 VV (max kernel size 3; median kernel size 3); (a3,a4) Onshore1 VV (median kernel size 5); (b1,b2) Onshore2 VV (max kernel size 3); (b3,b4) Onshore2 VH (median kernel size 5); (c1,c2) Offshore1 VV (max kernel size 7; median kernel size 7); (c3,c4) Offshore1 VH (max kernel size 3; median kernel size 3); (d1,d2) Offshore2 VV (max kernel size 5; median kernel size 5); (d3,d4) Offshore2 VH (max kernel size 5; median kernel size 5).
Figure 12: Experimental results for ship detection in SAR images across four regions: (a) Onshore1, (b) Onshore2, (c) Offshore1, and (d) Offshore2, illustrating the effectiveness of the Pauli decomposition method in reducing noise and distinguishing ships from the background. Ships are marked in pink, while noise clutter is shown in green.
Figure 13: Signal-to-clutter ratio (SCR) comparisons for different polarizations across various scenarios. VV polarization is in blue, VH polarization in orange, and Fvh in green.
Figure 14: Otsu's thresholding on four regions for Pauli and Fvh images: (a1–a4) thresholding for Onshore1, Onshore2, Offshore1, and Offshore2 for Pauli images; (b1–b4) thresholding for the same regions for Fvh images.
Figure 15: Visualization of FEDETR attention maps, Pauli decomposition, Fvh feature maps, and WFF results for Onshore1 (a1–a4), Onshore2 (b1–b4), Offshore1 (c1–c4), and Offshore2 (d1–d4).

32 pages, 25887 KiB  
Review
Deep-Learning for Change Detection Using Multi-Modal Fusion of Remote Sensing Images: A Review
by Souad Saidi, Soufiane Idbraim, Younes Karmoude, Antoine Masse and Manuel Arbelo
Remote Sens. 2024, 16(20), 3852; https://doi.org/10.3390/rs16203852 - 17 Oct 2024
Cited by 1 | Viewed by 2214
Abstract
Remote sensing images provide a valuable way to observe the Earth’s surface and identify objects from a satellite or airborne perspective. Researchers can gain a more comprehensive understanding of the Earth’s surface by using a variety of heterogeneous data sources, including multispectral, hyperspectral, radar, and multitemporal imagery. This abundance of different information over a specified area offers an opportunity to significantly improve change detection tasks by merging or fusing these sources. This review explores the application of deep learning for change detection in remote sensing imagery, encompassing both homogeneous and heterogeneous scenes. It delves into publicly available datasets specifically designed for this task, analyzes selected deep learning models employed for change detection, and explores current challenges and trends in the field, concluding with a look towards potential future developments. Full article
Figures:

Graphical abstract

Figure 1: PRISMA flow diagram.
Figure 2: Year-wise publications from 2017 to 2024.
Figure 3: Global distribution of publications.
Figure 4: Feature extraction strategies: (a) early fusion; (b) late fusion; (c) multiple fusion.
Figure 5: Structures of models: (a) single-stream network; (b) general Siamese network structure; (c) double-stream UNet.
Figure 6: Structures of super-resolution change detection methods.

19 pages, 5207 KiB  
Article
Enhancing the Precision of Forest Growing Stock Volume in the Estonian National Forest Inventory with Different Predictive Techniques and Remote Sensing Data
by Temitope Olaoluwa Omoniyi and Allan Sims
Remote Sens. 2024, 16(20), 3794; https://doi.org/10.3390/rs16203794 - 12 Oct 2024
Viewed by 699
Abstract
Estimating forest growing stock volume (GSV) is crucial for forest growth and resource management, as it reflects forest productivity. National measurements are laborious and costly; however, integrating satellite data such as optical, Synthetic Aperture Radar (SAR), and airborne laser scanning (ALS) with National Forest Inventory (NFI) data and machine learning (ML) methods has transformed forest management. In this study, random forest (RF), support vector regression (SVR), and Extreme Gradient Boosting (XGBoost) were used to predict GSV using Estonian NFI data, Sentinel-2 imagery, and ALS point cloud data. Four variable combinations were tested: CO1 (vegetation indices and LiDAR), CO2 (vegetation indices and individual band reflectance), CO3 (LiDAR and individual band reflectance), and CO4 (a combination of vegetation indices, individual band reflectance, and LiDAR). Across Estonia’s geographical regions, RF consistently delivered the best performance. In the northwest (NW), the RF model achieved the best performance with the CO3 combination, having an R2 of 0.63 and an RMSE of 125.39 m3/plot. In the southwest (SW), the RF model also performed exceptionally well, achieving an R2 of 0.73 and an RMSE of 128.86 m3/plot with the CO4 variable combination. In the northeast (NE), the RF model outperformed other ML models, achieving an R2 of 0.64 and an RMSE of 133.77 m3/plot under the CO4 combination. Finally, in the southeast (SE) region, the best performance was achieved with the CO4 combination, yielding an R2 of 0.70 and an RMSE of 21,120.72 m3/plot. These results underscore RF’s precision in predicting GSV across diverse environments, though refining variable selection and improving tree species data could further enhance accuracy. Full article
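The modeling step, fitting a random forest to plot-level GSV against satellite and ALS predictors and reporting R² and RMSE on held-out plots, looks roughly like the scikit-learn sketch below; the predictors and the synthetic relationship are illustrative, not the Estonian NFI data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 500                                    # toy NFI plots
ndvi = rng.uniform(0.2, 0.9, n)            # Sentinel-2 vegetation index
height_p95 = rng.uniform(5, 30, n)         # ALS 95th-percentile height, m
band_red = rng.uniform(0.02, 0.2, n)       # Sentinel-2 red reflectance
# Synthetic GSV driven mainly by canopy height and greenness
gsv = 12 * height_p95 + 150 * ndvi + rng.normal(0, 25, n)

X = np.column_stack([ndvi, height_p95, band_red])   # a "CO4-like" combination
X_tr, X_te, y_tr, y_te = train_test_split(X, gsv, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}, "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.1f} m3/plot")
```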
Figures:

Figure 1: (a) Cluster network of the Estonian NFI permanent and temporary plots (2018–2022); (b) cartogram of the elevation model of the land cover.
Figure 2: Methodology flowchart for this study.
Figure 3: Scatter plots of observed vs. predicted GSV values for the validation plots using the best predictive model. The symbols * and ** represent the CO3 and CO4 combinations, respectively. (a), (b), (c), and (d) denote the random forest-based models for the northwest, southwest, northeast, and southeast regions, respectively.
Figure A1: Variable importance plots using the best predictive model, where (a), (b), (c), and (d) denote the random forest-based models for the northwest, southwest, northeast, and southeast regions, respectively.

21 pages, 6225 KiB  
Article
3D Surface Velocity Field Inferred from SAR Interferometry: Cerro Prieto Step-Over, Mexico, Case Study
by Ignacio F. Garcia-Meza, J. Alejandro González-Ortega, Olga Sarychikhina, Eric J. Fielding and Sergey Samsonov
Remote Sens. 2024, 16(20), 3788; https://doi.org/10.3390/rs16203788 - 12 Oct 2024
Viewed by 1388
Abstract
The Cerro Prieto basin, a tectonically active pull-apart basin, hosts significant geothermal resources currently being exploited in the Cerro Prieto Geothermal Field (CPGF). Consequently, natural tectonic processes and anthropogenic activities both contribute to three-dimensional surface displacements in this pull-apart basin. Here, we obtained the Cerro Prieto Step-Over 3D surface velocity field (3DSVF) by applying a weighted least-squares inversion to geometrically quasi-orthogonal airborne UAVSAR and spaceborne RADARSAT-2 and Sentinel 1A Synthetic Aperture Radar (SAR) imagery collected from 2012 to 2016. The 3DSVF shows a vertical rate of 150 mm/yr and a horizontal rate of 40 mm/yr; for the first time in the CPGF, the north displacement component is recovered using only Interferometric SAR (InSAR) time series. The 3DSVF was integrated with and validated against ground-based measurements, including continuous GPS time series and precise leveling data. Correlating the findings with recent geothermal energy production revealed a slowdown in the subsidence rate that aligns with the CPGF's annual vapor production.
(This article belongs to the Special Issue Advanced Remote Sensing Technology in Geodesy, Surveying and Mapping)
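The core of the 3D decomposition is that each SAR geometry observes only the projection of the true east/north/up motion onto its line of sight, so with three or more quasi-orthogonal geometries the 3D velocity vector can be recovered per pixel by weighted least squares. A minimal numpy sketch of that inversion follows; the unit-vector projections, LOS rates, and uncertainties are made-up placeholders, not values from the paper.

```python
import numpy as np

# Rows: one SAR geometry each (e.g., ascending, descending, UAVSAR east/west).
# Columns: projection of (east, north, up) motion onto the radar line of sight.
G = np.array([
    [-0.61,  0.10, 0.79],   # ascending pass (illustrative values)
    [ 0.60,  0.11, 0.79],   # descending pass
    [-0.05, -0.70, 0.71],   # airborne east segment
    [ 0.04,  0.71, 0.70],   # airborne west segment
])
d = np.array([-95.0, -110.0, -80.0, -120.0])   # observed LOS rates, mm/yr
sigma = np.array([5.0, 5.0, 8.0, 8.0])         # per-geometry 1-sigma, mm/yr

W = np.diag(1.0 / sigma**2)                    # inverse-variance weights
# Weighted least-squares solution: v = (G^T W G)^-1 G^T W d,
# solved here in whitened form for numerical stability.
v, *_ = np.linalg.lstsq(np.sqrt(W) @ G, np.sqrt(W) @ d, rcond=None)
cov = np.linalg.inv(G.T @ W @ G)               # formal covariance of v

print("east, north, up (mm/yr):", np.round(v, 1))
print("1-sigma (mm/yr):", np.round(np.sqrt(np.diag(cov)), 1))
```

Because the north component projects only weakly onto near-polar-orbit LOS directions, the quasi-orthogonal airborne passes are what make the north term of this inversion well conditioned.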
Figures
Figure 1. Study area. (a) Location of the region described in (b). (b) The red square indicates the CPSO area described in (c). Footprints of the Sentinel 1A ascending and descending SAR images are denoted by large squares (black and dark blue), and footprints of the UAVSAR east and west passes by gray rectangles. Black arrows denote each sensor's line-of-sight direction. Red lines are the main fault systems. (c) Gray contours show the subsidence displacement rate (cm/yr) from leveling measurements (2012–2015) surveyed by the Mexican Institute of Water Technology (IMTA). Recent earthquakes (ML > 5) that occurred before the study period are denoted by red stars: 1. ML 5.4, May 2006; 2. ML 5.3, September 2009; 3. ML 6.0, December 2009. Main tectonic faults are indicated by continuous red lines, and large-scale tectonic motion is shown by black arrows. The CPGF area is marked by dashed gray lines. Abbreviations: NA = North American plate; PA = Pacific plate; EZ = exploitation zone of the CPGF; RZ = recharge zone; CPF = Cerro Prieto Fault; GF = Guerrero Fault; HF = Hidalgo Fault; LF = L Fault; IMF = Imperial Fault; MF = Morelia Fault; SF = Saltillo Fault; SF' = Saltillo Fault continuation; CPV = Cerro Prieto Volcano. Vector data of the CPGF limits were taken from [19].
Figure 2. Timeframe of imagery from the spaceborne Sentinel 1A/RADARSAT-2 and airborne UAVSAR missions. Black dots are the UAVSAR acquisition times; gray boxes indicate temporal matching between the sensors used for 3D decomposition. Date format: YYYY/MM/DD. Abbreviations: SLC = Single Look Complex; T166 = Sentinel ascending orbital pass; T173 = Sentinel descending orbital pass; MF1 = RADARSAT-2 ascending orbital pass; MF4N = RADARSAT-2 descending orbital pass; 08514S3 = segment #3 of flight line 08514; 26515S2 = segment #2 of flight line 26515.
Figure 3. Method flowchart for InSAR data processing. ECMWF refers to the European Centre for Medium-Range Weather Forecasts global model; a jackknife test was used for the uncertainty estimates. The workflow follows the implemented processing software.
Figure 4. Synthetic data for the components of the 3D displacement vector of the CPGF [17]. (a) Synthetic data. (b) Calculated 3D surface displacement data. In (a,b), colors denote the vertical displacement and red vectors represent the horizontal (east–north) displacements. (c) Differences between the synthetic and calculated 3D surface displacement data. (d) Flowchart for validating the 3D inversion code with synthetic data. LOSdisp = line-of-sight displacement; WLSS = weighted least squares solution.
Figure 5. Maps of average LOS displacement rate (mm/yr). (a,b) RADARSAT-2 ascending and descending orbital passes, respectively; the stable reference point used in (a,b) is located northwest of the Cerro Prieto basin, outside the map's data frame [21]. (c,d) UAVSAR east and west flight segments, respectively; the red flag shows the location of the reference point. Maps cover 2.8 years, and areas with a coherence value below 0.27 are masked. The color palette corresponds to the LOS displacement rate, and black arrows denote the sensors' line-of-sight directions. Main faults are denoted by continuous red lines. Abbreviations: Ifg = interferogram; LOS = line of sight; CPF = Cerro Prieto Fault; IMF = Imperial Fault; MF = Morelia Fault; SF = Saltillo Fault; SF' = Saltillo Fault continuation [30]; CPV = Cerro Prieto Volcano.
Figure 6. Maps of average LOS displacement rate (mm/yr). (a,b) Sentinel 1A ascending and descending orbital passes, respectively. (c,d) UAVSAR east and west flight segments, respectively. Maps (a–d) cover one year, and areas with a coherence value below 0.2 are masked. In (a), the orange squares mark specific points in the exploitation (EZ) and recharge (RZ) zones. The red flag shows the location of the reference point. Notation as in Figure 5.
Figure 7. Maps of displacement vector components, derived from the combined RADARSAT-2 and UAVSAR datasets for February 2012–November 2014. (a) Vertical displacement rate; negative values indicate subsidence. (b) North displacement rate; negative values indicate southward movement. (c) East displacement rate; negative values indicate westward movement. Areas of low coherence (<0.27) are masked out. In (a), the orange squares mark specific points in the exploitation (EZ) and recharge (RZ) zones; black dots a and b indicate the locations of the MBIG and NVLX GPS sites, respectively. ADEW stands for Ascending/Descending/East/West, the flight direction combination of the different SAR geometries. Notation as in Figure 5.
Figure 8. Maps of displacement vector components, derived from the combined Sentinel 1A and UAVSAR datasets for April 2015–April 2016. (a) Vertical displacement rate; negative values indicate subsidence. (b) North displacement rate; negative values indicate southward movement. (c) East displacement rate; negative values indicate westward movement. Areas of low coherence (<0.2) and high error (>20 mm/yr) are masked out. In (a), the orange squares mark the exploitation (EZ) and recharge (RZ) zones, the green inverted triangle marks the Ejido Nuevo León location, and black dots a and b indicate the locations of the MBIG and NVLX GPS sites, respectively. ADEW stands for Ascending/Descending/East/West, the flight direction combination of the different SAR geometries. Notation as in Figure 5.
Figure 9. Contour maps of vertical displacement rate (cm/yr) from (a) leveling measurements (2012–2015) and (b) the 3D displacement vector decomposition (2012–2014); (c) contour map of the difference (residual) between the leveling (IMTA) and InSAR vertical displacement rates. Blue triangles are benchmarks used for interpolation and contouring. Contours are every 1 cm in (a,b) and every 0.4 cm in (c). In (a,b), RP is the reference area centered at the "10037" benchmark location, represented by a black triangle. Tectonic faults are shown as continuous red lines. The red flag in (b,c) shows the location of the InSAR reference point. InSAR stands for interferometric synthetic aperture radar. Notation as in Figure 5.
Figure 10. (a) Total RMSE and difference histogram of the vertical displacement rate (cm/yr); (b) correlation coefficient between the leveling data (2012–2015) and the vertical InSAR rates (February 2012–November 2014).
Figure 11. (a) The vertical displacement rate (mm/yr) obtained here vs. other works [13,19,67] in the exploitation and recharge zones of the CPSO; see Figures 7a and 8a for the zones' locations. (b) Production and injection wells (number of wells) vs. total electricity generated in the CPGF. In (a), continuous and dashed lines represent locations in the exploitation and recharge zones, respectively; entries in parentheses denote the works of other authors and are color-coded for clarity.
22 pages, 4305 KiB  
Article
LiOSR-SAR: Lightweight Open-Set Recognizer for SAR Imageries
by Jie Yang, Jihong Gu, Jingyu Xin, Zhou Cong and Dazhi Ding
Remote Sens. 2024, 16(19), 3741; https://doi.org/10.3390/rs16193741 - 9 Oct 2024
Viewed by 769
Abstract
Open-set recognition (OSR) from synthetic aperture radar (SAR) imagery plays a crucial role in maritime and terrestrial monitoring. Nevertheless, many deep learning-based SAR classifiers struggle with unknown targets outside the training dataset, leading to a dilemma: a large model is difficult to deploy, while a smaller one sacrifices accuracy. To address this challenge, the novel lightweight recognizer "LiOSR-SAR" is proposed for OSR in SAR imagery. It incorporates the compact attribute focusing (CAF) and open-prediction (OP) modules, which jointly provide a lightweight structure and high accuracy. To validate LiOSR-SAR, "fast image simulation using bidirectional shooting and bouncing ray (FIS-BSBR)" is used to construct the corresponding dataset; it significantly enhances target detail, enabling more accurate recognition. Extensive experiments show that LiOSR-SAR achieves remarkable recognition accuracies of 97.9% and 94.1% while maintaining a compact model size of 7.5 MB, demonstrating its practicality and efficiency.
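The OP module's three steps (build a feature space, score similarity, decide) follow the general pattern of prototype-based open-set classification. The sketch below illustrates only that generic pattern with cosine similarity and a fixed rejection threshold; it is not the paper's actual scoring rule, and the threshold and random features are invented for demonstration.

```python
import numpy as np

def build_prototypes(features, labels):
    """Step 1: build the feature space as one mean embedding per known class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_open_set(x, prototypes, threshold=0.85):
    """Steps 2-3: cosine similarity to every prototype; reject if all scores are low."""
    scores = {c: np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p))
              for c, p in prototypes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 64))       # stand-in for backbone embeddings
train_labels = rng.integers(0, 5, size=100)    # five known classes
protos = build_prototypes(train_feats, train_labels)
print(classify_open_set(rng.normal(size=64), protos))  # likely "unknown"
```

The rejection threshold is what separates open-set from closed-set classification: without it, every unknown target would be forced into the nearest known class.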
Figures
Graphical abstract
Figure 1. (a) Existing approach using ResNet-18. (b) Proposed approach, illustrating the overall network architecture of LiOSR-SAR.
Figure 2. Workflow of the CAF module, where ⊕ denotes element-wise addition.
Figure 3. Workflow of the OP module. (a) Step 1: establish the feature space. (b) Step 2: measure similarity scores in the feature space. (c) Step 3: make category determinations.
Figure 4. BSBR schematic.
Figure 5. Comparison of 2D ISAR images of a ship model under four polarization modes. (a) RDA [46]. (b) FIS-SBR [47]. (c) FIS-BSBR.
Figure 6. Samples from the MSTAR dataset. (a) BRDM-2; (b) BTR-60; (c) BTR-70; (d) T-72; (e) ZSU23-4; (f) ZIL-131; (g) D7; (h) BMP-2; (i) T-62; (j) 2S1.
Figure 7. Training loss and training accuracy curves for the different methods. (a) FIS-SBR: (a-1) loss–epoch curves; (a-2) accuracy–epoch curves. (b) FIS-BSBR: (b-1) loss–epoch curves; (b-2) accuracy–epoch curves. (c) MSTAR: (c-1) loss–epoch curves; (c-2) accuracy–epoch curves.
Figure 8. Modulating learning rates through various cosine cycles.
Figure 9. Training loss for different learning rates on each dataset. (a) FIS-SBR: (a-1) loss–epoch curve; (a-2) loss–epoch curve over epochs 40–50. (b) FIS-BSBR: (b-1) loss–epoch curve; (b-2) loss–epoch curve over epochs 40–50. (c) MSTAR: (c-1) loss–epoch curve; (c-2) loss–epoch curve over epochs 40–50.
Figure 10. Training accuracy for different learning rates: accuracy–epoch curves on (a) FIS-SBR, (b) FIS-BSBR, and (c) MSTAR.
21 pages, 4121 KiB  
Article
Design of an Integrated System for Spaceborne SAR Imaging and Data Transmission
by Qixing Wang, Peng Gao, Zhuochen Xie and Jinpei Yu
Sensors 2024, 24(19), 6375; https://doi.org/10.3390/s24196375 - 1 Oct 2024
Viewed by 578
Abstract
In response to the conflicting demands of real-time satellite communication and high-resolution synthetic aperture radar (SAR) imaging, we propose a method that aligns the data transmission rate with the imaging data volume, balancing SAR performance with the requirements of real-time data transmission. To let mobile user terminals access real-time SAR imagery of their surroundings without depending on large traditional ground data-transmission stations, we developed an application system based on filter bank multicarrier offset quadrature amplitude modulation (FBMC-OQAM). To address interference between SAR signal transmission and reception, we developed a signal sequence that interleaves spaceborne SAR echo reception with data transmission and reception. This enables the SAR and data transmission signals to share the same frequency band, radio-frequency transmission chain, and antenna, creating an integrated sensing and communication system. Simulation experiments showed that, compared with an equal power allocation across subcarriers, the echo image signal-to-noise ratio (SNR) improved by 2.79 dB and the data transmission rate increased by 24.075 Mbps.
(This article belongs to the Special Issue 6G Space-Air-Ground Communication Networks and Key Technologies)
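The reported gains over equal power allocation come from distributing power unevenly across subcarriers according to channel quality. The paper's own Algorithms 1 and 2 are not reproduced in this listing, so the sketch below uses classic water-filling only as a familiar stand-in for that kind of non-uniform subcarrier allocation; the channel gains and power budget are invented.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Allocate p_k = max(0, mu - noise/g_k) subject to sum(p_k) = total_power."""
    inv = noise / gains                      # per-subcarrier "floor" levels
    order = np.argsort(inv)                  # best subcarriers first
    inv_sorted = inv[order]
    p = np.zeros_like(gains)
    for k in range(len(gains), 0, -1):       # try k active subcarriers
        mu = (total_power + inv_sorted[:k].sum()) / k   # candidate water level
        if mu > inv_sorted[k - 1]:           # all k allocations stay positive
            p[order[:k]] = mu - inv_sorted[:k]
            break
    return p

gains = np.array([2.5, 1.8, 0.9, 0.4, 0.1])  # illustrative channel gains
p = water_filling(gains, total_power=10.0)
rate = np.sum(np.log2(1 + gains * p))        # Shannon sum rate, bit/s/Hz
print(np.round(p, 2), f"sum rate = {rate:.2f} bit/s/Hz")
```

Strong subcarriers receive more power and the weakest may receive none, which is why a channel-aware allocation can raise both the echo SNR and the achievable data rate relative to an equal split.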
Figures
Figure 1. Schematic diagram of the application scenario for the integrated spaceborne SAR and data transmission system.
Figure 2. SAR echo reception and data transmission STAR sequence.
Figure 3. Integration network for spaceborne SAR and data transmission.
Figure 4. BER over Rayleigh fading and AWGN channels across various signal-to-noise ratio levels.
Figure 5. Convergence performance of the objective function under different values of η. (a) Algorithm 1. (b) Algorithm 2.
Figure 6. Convergence performance of the objective function under different values of κ. (a) Algorithm 1. (b) Algorithm 2.
Figure 7. Spectral efficiency versus total power. (a) η = 6 dB scheme. (b) η = 3 dB scheme.
Figure 8. Subcarrier power allocation for the integrated spaceborne SAR and data transmission system.
Figure 9. Algorithm performance for the integrated spaceborne SAR and data transmission system.