Search Results (452)

Search Parameters:
Keywords = PolSAR

28 pages, 11323 KiB  
Article
Polarimetric SAR Ship Detection Using Context Aggregation Network Enhanced by Local and Edge Component Characteristics
by Canbin Hu, Hongyun Chen, Xiaokun Sun and Fei Ma
Remote Sens. 2025, 17(4), 568; https://doi.org/10.3390/rs17040568 - 7 Feb 2025
Abstract
Polarimetric decomposition methods are widely used in polarimetric synthetic aperture radar (SAR) data processing for extracting the scattering characteristics of targets. However, polarimetric SAR ship detection methods still face challenges. Traditional constant false alarm rate (CFAR) detectors suffer from sea-clutter modeling and parameter estimation problems and struggle to adapt to complex backgrounds. In addition, neural network-based detection methods mostly rely on single-channel scattering information and fail to fully exploit the polarization properties and physical scattering laws of ships. To address these issues, this study constructed two novel characteristics specifically designed to describe ship scattering: a helix-scattering enhanced (HSE) local component and a multi-scattering intensity difference (MSID) edge component. Based on the differences between the scattering components of ships, this paper designs a context aggregation network enhanced by local and edge component characteristics to fully utilize the scattering information of polarimetric SAR data. With the powerful feature extraction capability of a convolutional neural network, the proposed method significantly enhances the distinction between ships and the sea. Further analysis shows that HSE captures structural information about the target, MSID increases ship–sea separation capability, and the HV channel retains more detailed information. Compared with other decomposition models, the proposed characteristic combination performs well in complex backgrounds and distinguishes ships from the sea more effectively. The experimental results show that the proposed method achieves a detection precision of 93.6% and a recall of 91.5% on a fully polarimetric SAR dataset, outperforming other popular network algorithms and verifying the reasonableness and superiority of the method.
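For readers unfamiliar with the CFAR baseline this paper contrasts against, a minimal cell-averaging CFAR sketch on a single-channel intensity image might look as follows. This is not the authors' code; the window sizes, false-alarm rate, and the exponential-clutter threshold assumption are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=4, train=12, pfa=1e-5):
    """Cell-averaging CFAR on a SAR intensity image (illustrative sketch).

    The clutter level at each pixel is estimated from a training ring
    (outer window minus guard window); assuming exponentially distributed
    clutter, the threshold multiplier follows from the desired Pfa.
    """
    outer = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    n_outer, n_inner = outer ** 2, inner ** 2
    n_train = n_outer - n_inner

    # Window means -> window sums, then subtract to get the training-ring mean.
    sum_outer = uniform_filter(intensity, size=outer) * n_outer
    sum_inner = uniform_filter(intensity, size=inner) * n_inner
    clutter_mean = (sum_outer - sum_inner) / n_train

    # Threshold multiplier for exponential clutter: alpha = N * (Pfa^(-1/N) - 1).
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return intensity > alpha * clutter_mean

# Usage (array name is a placeholder): detections = ca_cfar(hv_intensity)
# where hv_intensity is a 2-D numpy array of HV-channel intensity values.
```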
Figures:
Figure 1. Ship scattering characteristics in four-component decomposition.
Figure 2. Enhancement comparison before (a) and after (b) the difference.
Figure 3. Structural diagram of the context aggregation network based on local and edge component feature enhancement.
Figure 4. Scattering structure feature extraction network.
Figure 5. Detailed view of the DCNblock module.
Figure 6. Structure of the CAM.
Figure 7. Low-level feature guided balanced fusion network for PolSAR.
Figure 8. Comparison of extracted characteristics from RADARSAT-2 data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 9. Comparison of extracted characteristics from AIRSAR data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 10. Comparison of extracted characteristics from UAVSAR data. (a1,a2) Pauli pseudocolor maps; (b1,b2) HSE; (c1,c2) MSID; (d1,d2) HH; and (e1,e2) HV.
Figure 11. 3D scatter plots of ship and sea characteristics. (a) Pauli pseudocolor map; (b) Pauli decomposition 3D scatter plot; (c) Freeman–Durden decomposition 3D scatter plot; and (d) proposed characteristics 3D scatter plot.
Figure 12. Distribution of target pixel sizes.
Figure 13. Comparison of ship detection results under different polarimetric characteristic combinations. Green rectangles indicate the ground truth, red rectangles the detected results, blue circles the false alarms, and orange circles the missed detections. (a) Ground truth; (b) Pauli components; (c) Freeman–Durden components; (d) proposed method.
Figure 14. Comparison of feature maps under different backbone networks. (a) Pauli image; (b) feature map from a backbone built with traditional convolutional blocks; (c) feature map from the proposed backbone employing deformable convolutional blocks.
Figure 15. Comparison of ship detection results under different network modules. Annotation colors as in Figure 13. (a) Ground truth; (b) CAM only; (c) DCNblock only; (d) both DCNblock and CAM.
Figure 16. Comparison of vessel detection results under different networks. Annotation colors as in Figure 13. (a) Ground truth; (b) RetinaNet; (c) CenterNet; (d) Faster-RCNN; (e) YOLOv5; (f) YOLOv8; (g) MobileNet; (h) proposed method.
24 pages, 9871 KiB  
Article
AIR-POLSAR-CR1.0: A Benchmark Dataset for Cloud Removal in High-Resolution Optical Remote Sensing Images with Fully Polarized SAR
by Yuxi Wang, Wenjuan Zhang, Jie Pan, Wen Jiang, Fangyan Yuan, Bo Zhang, Xijuan Yue and Bing Zhang
Remote Sens. 2025, 17(2), 275; https://doi.org/10.3390/rs17020275 - 14 Jan 2025
Viewed by 424
Abstract
Owing to their all-time, all-weather imaging capability, synthetic aperture radar (SAR) data have become an important input for optical image restoration, and various SAR-optical cloud removal datasets have been proposed. Currently, multi-source cloud removal datasets are typically built from single-polarization or dual-polarization backscatter SAR feature images, lacking a comprehensive description of target scattering information and polarization characteristics. This paper constructs a high-resolution remote sensing dataset, AIR-POLSAR-CR1.0, based on optical images, backscatter feature images, and polarization feature images derived from fully polarimetric synthetic aperture radar (PolSAR) data. The dataset has been manually annotated to provide a foundation for subsequent analyses and processing. Finally, this study analyzes the performance of typical cloud removal deep learning algorithms on the proposed standard dataset across different categories and cloud coverage levels, serving as baseline results for this benchmark. The results of the ablation experiment also demonstrate the effectiveness of the PolSAR data. In summary, AIR-POLSAR-CR1.0 fills the gap in polarization feature images and demonstrates good adaptability for the development of deep learning algorithms.
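Cloud-removal baselines of this kind are usually scored against the cloud-free reference with pixel-fidelity metrics. A minimal PSNR helper is sketched below; the function and array names are placeholders, not the dataset's evaluation code.

```python
import numpy as np

def psnr(restored, reference, data_range=1.0):
    """Peak signal-to-noise ratio between a cloud-removed image and the
    cloud-free reference, both scaled to [0, data_range]."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Usage (array names are placeholders): score = psnr(pred_rgb, gt_rgb)
```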
Figures:
Graphical abstract.
Figure 1. Examples of the AIR-POLSAR-CR1.0 dataset.
Figure 2. The basic flowchart of the AIR-POLSAR-CR1.0 dataset construction.
Figure 3. Flight platform and measuring equipment. (a) MA60 aircraft. (b) The wide-field digital aerial camera. (c) The multidimensional SAR system.
Figure 4. Examples of the main categories of the AIR-POLSAR-CR1.0 dataset.
Figure 5. Examples of the different cloud coverage of the AIR-POLSAR-CR1.0 dataset.
Figure 6. Experimental results of the Pix2pix model on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) Pix2pix with TBBCFSAR. (e) Pix2pix with BCFSAR. (f) Pix2pix with PFSAR. (g) Pix2pix with PolSAR. (h) Ground truth.
Figure 7. Experimental results of the SAR-Opt-cGAN model on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) SAR-Opt-cGAN without PolSAR. (e) SAR-Opt-cGAN with TBBCFSAR. (f) SAR-Opt-cGAN with BCFSAR. (g) SAR-Opt-cGAN with PFSAR. (h) SAR-Opt-cGAN with PolSAR. (i) Ground truth.
Figure 8. Experimental results of the DSen2-CR model on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) DSen2-CR without PolSAR. (e) DSen2-CR with TBBCFSAR. (f) DSen2-CR with BCFSAR. (g) DSen2-CR with PFSAR. (h) DSen2-CR with PolSAR. (i) Ground truth.
Figure 9. Experimental results of the GLF-CR model on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) GLF-CR without PolSAR. (e) GLF-CR with TBBCFSAR. (f) GLF-CR with BCFSAR. (g) GLF-CR with PFSAR. (h) GLF-CR with PolSAR. (i) Ground truth.
Figure 10. Experimental results of the USSRN-CR model on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) USSRN-CR without PolSAR. (e) DSen2-CR with TBBCFSAR. (f) USSRN-CR with BCFSAR. (g) USSRN-CR with PFSAR. (h) USSRN-CR with PolSAR. (i) Ground truth.
Figure 11. Experimental results of the baseline models on the AIR-POLSAR-CR1.0 dataset. (a) Simulated cloudy images. (b) BCFSAR images. (c) PFSAR images. (d) Pix2pix. (e) SAR-Opt-cGAN. (f) DSen2-CR. (g) GLF-CR. (h) USSRN-CR. (i) Ground truth.
20 pages, 6779 KiB  
Article
Studying Forest Species Classification Methods by Combining PolSAR and Vegetation Spectral Indices
by Hongbo Zhu, Weidong Song, Bing Zhang, Ergaojie Lu, Jiguang Dai, Wei Zhao and Zhongchao Hu
Forests 2025, 16(1), 15; https://doi.org/10.3390/f16010015 - 25 Dec 2024
Viewed by 558
Abstract
Tree species are important factors affecting the carbon sequestration capacity of forests and the stability of ecosystems, but trees are widely distributed spatially and located in complex environments, and large-scale regional tree species classification models for remote sensing imagery are lacking. Therefore, many studies aim to solve this problem by combining multivariate remote sensing data and proposing machine learning models for forest tree species classification. However, satellite-based laser systems struggle to meet the needs of regional forest species classification because of their footprint sampling method, and SAR data limit the accuracy of species classification because of information mixing in the backscatter coefficients. In this work, we combined Sentinel-1 and Sentinel-2 data to construct a machine learning tree classification model based on optical features, vegetation spectral features, and PolSAR polarimetric observation features, and propose a forest tree classification feature selection method based on the Hilbert–Huang transform to address the problem of mixed surface information in SAR data. The PSO-RF method was used to classify forest species, including four temperate broadleaf species, namely, aspen (Populus L.), maple (Acer), peach tree (Prunus persica), and apricot tree (Prunus armeniaca L.), and two coniferous species, namely, Chinese pine (Pinus tabuliformis Carrière) and Mongolian pine (Pinus sylvestris var. mongolica Litv.). Experiments were conducted using two Sentinel-1 images, four Sentinel-2 images, and 550 measured forest survey sample data points for the forested area of Fuxin District, Liaoning Province, China. The results show that the fusion model constructed in this study has high accuracy, with a Kappa coefficient of 0.94 and an overall classification accuracy of 95.1%. This study also shows that PolSAR data can play an important role in forest tree species classification. Moreover, applying the Hilbert–Huang transform to PolSAR data can suppress, to a certain extent, feature information that interferes with the perceived vertical structure of forests, and its role in forest species classification, combined with PolSAR, should not be ignored.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
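A bare-bones version of the classification step might look like the sketch below: a Random Forest trained on a stacked feature vector of optical bands, vegetation spectral indices, and PolSAR features, scored with overall accuracy and Kappa. The synthetic arrays stand in for the surveyed plots, and the fixed hyperparameters replace the paper's PSO tuning; this is an assumption-laden illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Placeholder data: 550 plots x 40 stacked features (optical + vegetation
# indices + PolSAR), with six species labels (both arrays are synthetic).
rng = np.random.default_rng(0)
X = rng.random((550, 40))
y = rng.integers(0, 6, 550)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# In the paper the RF hyperparameters are tuned with particle swarm
# optimization (PSO); fixed values are used here purely for illustration.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("OA:", accuracy_score(y_test, pred))
print("Kappa:", cohen_kappa_score(y_test, pred))
```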
Figures:
Figure 1. Geographic location map of the Fuxin area.
Figure 2. The structure of the multi-source remote sensing forest species classification methods.
Figure 3. Random forest importance ranking chart.
Figure 4. Distribution of forest species in the Fuxin region in 2021, determined based on multi-source remote sensing forest species classification methods.
Figure 5. Map of localized forest species distribution in the study area. (a) Forest species distribution in the southwestern part of the study area. (b) Forest species distribution in the northeastern part of the study area.
Figure 6. Results of feature ablation experiments. (a) Producer accuracy of the PolSAR feature ablation experiment; (b) user accuracy of the PolSAR feature ablation experiment; (c) producer accuracy of the optical feature ablation experiment; (d) user accuracy of the optical feature ablation experiment; (e) producer accuracy of the vegetation spectral feature ablation experiment; (f) user accuracy of the vegetation spectral feature ablation experiment; (g) overall accuracies of the three feature ablation experiments.
Figure 7. Hilbert–Huang transform results for (a) C11, (b) C22, (c) alpha, (d) anisotropy, and (e) entropy.
21 pages, 13076 KiB  
Article
A Framework for High-Spatiotemporal-Resolution Soil Moisture Retrieval in China Using Multi-Source Remote Sensing Data
by Zhuangzhuang Feng, Xingming Zheng, Xiaofeng Li, Chunmei Wang, Jinfeng Song, Lei Li, Tianhao Guo and Jia Zheng
Land 2024, 13(12), 2189; https://doi.org/10.3390/land13122189 - 15 Dec 2024
Viewed by 998
Abstract
High-spatiotemporal-resolution and accurate soil moisture (SM) data are crucial for investigating climate, hydrology, and agriculture. Existing SM products do not yet meet the demands for high spatiotemporal resolution. The objective of this study is to develop and evaluate a retrieval framework to derive SM estimates with high spatial (100 m) and temporal (<3 days) resolution that can be used on a national scale in China. Therefore, this study integrates multi-source data, including optical remote sensing (RS) data from Sentinel-2 and Landsat-7/8/9, synthetic aperture radar (SAR) data from Sentinel-1, and auxiliary data. Four machine learning and deep learning algorithms are applied: Random Forest Regression (RFR), Extreme Gradient Boosting (XGBoost), Long Short-Term Memory (LSTM) networks, and Ensemble Learning (EL). The integrated framework (IF) considers three feature scenarios (SC1: optical RS + auxiliary data, SC2: SAR + auxiliary data, SC3: optical RS + SAR + auxiliary data), encompassing a total of 33 features. The results are as follows: (1) The correlation coefficients (r) between auxiliary data (such as sand fraction, r = −0.48; silt fraction, r = 0.47; and evapotranspiration, r = −0.42), SAR features (such as the VV-pol backscatter coefficient σ⁰VV, r = 0.47), and optical RS features (such as Shortwave Infrared Band 2 (SWIR2) reflectance from Sentinel-2 and Landsat-7/8/9, r = −0.39) with observed SM are significant, indicating that multi-source data provide complementary information for SM monitoring. (2) Compared to XGBoost and LSTM, RFR and EL demonstrate superior overall performance and are the preferred models for SM prediction; their R² for the training and test sets exceed 0.969 and 0.743, respectively, and their ubRMSE are below 0.022 and 0.063 m³/m³, respectively. (3) The SM prediction accuracy is highest for the scenario of optical + SAR + auxiliary data, followed by SAR + auxiliary data, and finally optical + auxiliary data. (4) With increasing Normalized Difference Vegetation Index (NDVI) and SM values, the trained models exhibit a general decrease in prediction performance and accuracy. (5) In 2021 and 2022, without considering cloud cover, the IF theoretically achieved an SM revisit time of 1–3 days across 95.01% and 96.53% of China's area, respectively. However, SC1 achieved a 1–3 day revisit time over 60.73% of China's area in 2021 and 69.36% in 2022, while the area covered by SC2 and SC3 at this revisit time accounted for less than 1% of China's total area. This study validates the effectiveness of combining multi-source RS data with auxiliary data in large-scale SM monitoring and provides new methods for improving SM retrieval accuracy and spatiotemporal coverage.
(This article belongs to the Section Land – Observation and Monitoring)
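A compact sketch of the retrieval and scoring step is given below: a Random Forest regressor on a 33-feature stack, evaluated with R² and the unbiased RMSE (ubRMSE) used throughout the abstract. The synthetic data and hyperparameters are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

def ubrmse(pred, obs):
    """Unbiased RMSE: the RMSE remaining after removing the mean bias,
    a common skill metric for soil-moisture retrievals."""
    bias = np.mean(pred - obs)
    return np.sqrt(np.mean((pred - obs - bias) ** 2))

# Placeholder feature matrix standing in for the 33 optical, SAR, and
# auxiliary features, and a synthetic in situ SM target (m^3/m^3).
rng = np.random.default_rng(0)
X = rng.random((1200, 33))
y = 0.05 + 0.4 * X[:, 0] + 0.02 * rng.standard_normal(1200)

rfr = RandomForestRegressor(n_estimators=500, random_state=0)
rfr.fit(X[:900], y[:900])
pred = rfr.predict(X[900:])
print("R2:", r2_score(y[900:], pred), "ubRMSE:", ubrmse(pred, y[900:]))
```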
Figures:
Figure 1. The spatial distribution of the 17 SONTE-China sites within the study area.
Figure 2. A framework for estimating SM based on multi-source RS data. *** first priority, ** second priority, * third priority.
Figure 3. The training (top) and test (bottom) results of four models from IF at SONTE-China (17 sites). The red dotted line is the trend line; the gray dotted line is the error line at 0.06 m³/m³.
Figure 4. The training results of four models at SONTE-China (17 sites); trend and error lines as in Figure 3.
Figure 5. The test results of four models at SONTE-China (17 sites); trend and error lines as in Figure 3.
Figure 6. The time series of estimated and observed SM from three scenarios at the NQ, JYT, and MQ sites. The blue solid line is the observed SM at 0–5 cm; the green solid line is the daily NDVI; the red, green, and purple squares are the estimated SM for SC1, SC2, and SC3, respectively; the blue bars indicate daily precipitation; the red dashed vertical lines separate the training and test sets.
Figure 7. Revisit time of SC1, SC2, SC3, and IF for monitoring SM in China (2021). (a) SC1: optical RS + auxiliary data only; (b) SC2: SAR + auxiliary data only; (c) SC3: optical RS + SAR + auxiliary data; (d) IF: combined SC3, SC2, and SC1 scenarios.
Figure 8. Revisit time of SC1, SC2, SC3, and IF for monitoring SM in China (2022); panels as in Figure 7.
Figure 9. Training (top) and test (bottom) results of three categories using the RFR based on the SC3 dataset at SONTE-China (17 sites); trend and error lines as in Figure 3.
Figure 10. Performance of different models under various NDVI categories in the training set (left) and test set (right). The colored dot lines represent R², and the bar charts represent ubRMSE.
Figure 11. Performance of different models under various SM categories in the training set (left) and test set (right). The bar charts represent ubRMSE, and the red dot line represents the average ubRMSE.
Figure 12. Revisit time distribution for multi-source RS monitoring of SM under different scenarios (2021–2022).
31 pages, 2960 KiB  
Review
A Survey on Deep Learning for Few-Shot PolSAR Image Classification
by Ningwei Wang, Weiqiang Jin, Haixia Bi, Chen Xu and Jinghuai Gao
Remote Sens. 2024, 16(24), 4632; https://doi.org/10.3390/rs16244632 - 11 Dec 2024
Cited by 1 | Viewed by 794
Abstract
Few-shot classification of polarimetric synthetic aperture radar (PolSAR) images is a challenging task due to the scarcity of labeled data and the complex scattering properties of PolSAR data. Traditional deep learning models often suffer from overfitting and catastrophic forgetting in such settings. Recent advancements have explored innovative approaches, including data augmentation, transfer learning, meta-learning, and multimodal fusion, to address these limitations. Data augmentation methods enhance the diversity of training samples, with advanced techniques like generative adversarial networks (GANs) generating realistic synthetic data that reflect PolSAR's polarimetric characteristics. Transfer learning leverages pre-trained models and domain adaptation techniques to improve classification across diverse conditions with minimal labeled samples. Meta-learning enhances model adaptability by learning generalizable representations from limited data. Multimodal methods integrate complementary data sources, such as optical imagery, to enrich feature representation. This survey provides a comprehensive review of these strategies, focusing on their advantages, limitations, and potential applications in PolSAR classification. We also identify key trends, such as the increasing role of hybrid models combining multiple paradigms and the growing emphasis on explainability and domain-specific customization. By synthesizing state-of-the-art (SOTA) approaches, this survey offers insights into future directions for advancing few-shot PolSAR classification.
(This article belongs to the Special Issue SAR and Multisource Remote Sensing: Challenges and Innovations)
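As a concrete flavor of the meta-learning branch surveyed here, the sketch below runs one prototypical-network-style few-shot episode: class prototypes are the mean support embeddings, and queries are assigned to the nearest prototype. It is a generic baseline, not any specific surveyed method; the embeddings are random placeholders for features a PolSAR backbone would produce.

```python
import numpy as np

def prototypical_predict(support_feats, support_labels, query_feats):
    """One few-shot episode in the prototypical-network style: compute class
    prototypes from support embeddings and classify each query by the
    nearest prototype (Euclidean distance)."""
    classes = np.unique(support_labels)
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    d = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# Toy 3-way 5-shot episode with 16-D embeddings (random placeholders).
rng = np.random.default_rng(0)
support = rng.normal(size=(15, 16))
labels = np.repeat(np.arange(3), 5)
queries = rng.normal(size=(9, 16))
print(prototypical_predict(support, labels, queries))
```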
Figures:
Graphical abstract.
Figure 1. PolSAR image classification process.
Figure 2. Knowledge graph of key concepts in few-shot PolSAR image classification. Nodes represent key concepts, methods, or techniques; edges indicate the relationships or dependencies between them. Node colors correspond to methodological categories: orange for data augmentation-based methods, green for transfer learning-based methods, blue for meta-learning-based methods, and pink for multimodal-based methods. The graph was constructed from literature analysis and keyword extraction, with relationships derived from established dependencies in the field.
Figure 3. Overview of the GAN-based PolSAR data augmentation pipeline, illustrating the flow from raw PolSAR input to the final loss optimization stage for both the generator and discriminator.
Figure 4. Overview of the self-supervised learning framework for PolSAR image classification.
Figure 5. Data division in meta-learning-based few-shot PolSAR classification, showing the training and testing stages with support and query sets for each task.
Figure 6. The multimodal feature extraction and fusion process. Coherency matrices and target decomposition features (Pauli, Freeman, H/A/α) are separately processed through feature extraction pipelines to capture complementary spatial, polarimetric, and semantic information. The final loss L_{i,j} integrates these features to optimize classification performance.
28 pages, 7695 KiB  
Article
MAPM: PolSAR Image Classification with Masked Autoencoder Based on Position Prediction and Memory Tokens
by Jianlong Wang, Yingying Li, Dou Quan, Beibei Hou, Zhensong Wang, Haifeng Sima and Junding Sun
Remote Sens. 2024, 16(22), 4280; https://doi.org/10.3390/rs16224280 - 17 Nov 2024
Viewed by 825
Abstract
Deep learning methods have shown significant advantages in polarimetric synthetic aperture radar (PolSAR) image classification. However, their performance relies on large amounts of labeled data. To alleviate this problem, this paper proposes a PolSAR image classification method with a Masked Autoencoder based on Position prediction and Memory tokens (MAPM). First, MAPM designs a transformer-based Masked Autoencoder (MAE) for pre-training, which boosts feature learning and improves classification results when labeled samples are limited. Secondly, since the transformer is relatively insensitive to the order of the input tokens, a position prediction strategy is introduced in the encoder part of the MAE; it can effectively capture subtle differences and discriminate complex, blurry boundaries in PolSAR images. In the fine-tuning stage, the addition of learnable memory tokens further improves classification performance. In addition, an L1 loss is used for MAE optimization to enhance the robustness of the model to outliers in PolSAR data. Experimental results show the effectiveness and advantages of the proposed MAPM in PolSAR image classification; specifically, MAPM achieves gains of about 1% in classification accuracy over existing methods.
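To make the masking-plus-L1-reconstruction idea concrete, here is a minimal MAE-style pre-training step with toy linear encoder/decoder networks. The patch size, mask ratio, dimensions, and data are illustrative assumptions and do not reproduce MAPM's architecture (in particular, the position prediction and memory tokens are omitted).

```python
import torch
import torch.nn as nn

B, N, D, H = 8, 64, 27, 128          # batch, patches, patch dim, latent dim
mask_ratio = 0.75
n_keep = int(N * (1 - mask_ratio))

encoder = nn.Sequential(nn.Linear(D, H), nn.GELU(), nn.Linear(H, H))
decoder = nn.Sequential(nn.Linear(H, H), nn.GELU(), nn.Linear(H, D))
mask_token = nn.Parameter(torch.zeros(H))

patches = torch.randn(B, N, D)       # patchified PolSAR samples (placeholder data)
perm = torch.rand(B, N).argsort(dim=1)
keep, masked = perm[:, :n_keep], perm[:, n_keep:]

# Encode only the visible patches.
visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
latent = encoder(visible)

# Scatter encoded visible tokens back to their positions; masked positions
# are filled with a learnable mask token, then everything is decoded.
full = mask_token.expand(B, N, H).clone()
full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, H), latent)
recon = decoder(full)

# L1 reconstruction loss on the masked positions only (robust to outliers).
target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, D))
pred = torch.gather(recon, 1, masked.unsqueeze(-1).expand(-1, -1, D))
loss = nn.L1Loss()(pred, target)
loss.backward()
```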
Figures:
Figure 1. The scheme of the proposed PolSAR image classification method.
Figure 2. Structural diagram of the MAE.
Figure 3. Structural diagram of MP3.
Figure 4. Structural diagram of MAPP.
Figure 5. The transformer model with memory.
Figure 6. AIRSAR Flevoland dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 7. RADARSAT-2 San Francisco Bay dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 8. ESAR Oberpfaffenhofen dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 9. RADARSAT-2 Netherlands dataset and color code. (a) Pauli-RGB map. (b) Ground-truth map. (c) Legend.
Figure 10. Predicted images for the ground truth of the AIRSAR Flevoland dataset. (a) AE; (b) CAE; (c) VAE; (d) CCT; (e) MCPT; (f) MAPM.
Figure 11. Predicted images of the AIRSAR Flevoland dataset. (a) AE; (b) CAE; (c) VAE; (d) CCT; (e) MCPT; (f) MAPM.
Figure 12. Predicted images of the RADARSAT-2 San Francisco Bay dataset. (a) AE; (b) CAE; (c) VAE; (d) CCT; (e) MCPT; (f) MAPM.
Figure 13. Predicted images of the ESAR Oberpfaffenhofen dataset. (a) AE; (b) CAE; (c) VAE; (d) CCT; (e) MCPT; (f) MAPM.
Figure 14. Predicted images of the RADARSAT-2 Netherlands dataset. (a) AE; (b) CAE; (c) VAE; (d) CCT; (e) MCPT; (f) MAPM.
Figure 15. Impact of the amount of training data on OA during the fine-tuning phase.
Figure 16. Impact of masking ratio on OA during the fine-tuning phase.
Figure 17. Comparison results of L1 loss and MSE loss.
Figure 18. Training time with memory tokens and baseline model.
Figure 19. Result of the model generalization study. (a) Pauli-RGB image of the AIRSAR San Francisco dataset. (b) Predicted image of the AIRSAR San Francisco dataset using the model trained on the RADARSAT-2 San Francisco dataset. (c) Legend of the RADARSAT-2 San Francisco dataset.
21 pages, 6345 KiB  
Article
Integration of Optical and Synthetic Aperture Radar Data with Different Synthetic Aperture Radar Image Processing Techniques and Development Stages to Improve Soybean Yield Prediction
by Isabella A. Cunha, Gustavo M. M. Baptista, Victor Hugo R. Prudente, Derlei D. Melo and Lucas R. Amaral
Agriculture 2024, 14(11), 2032; https://doi.org/10.3390/agriculture14112032 - 12 Nov 2024
Cited by 1 | Viewed by 1003
Abstract
Predicting crop yield throughout its development cycle is crucial for planning storage, processing, and distribution. Optical remote sensing has been used for yield prediction but has limitations, such as cloud interference and only capturing canopy-level data. Synthetic Aperture Radar (SAR) complements optical data by capturing information even in cloudy conditions and providing additional plant insights. This study aimed to explore the correlation of SAR variables with soybean yield at different crop stages, testing whether SAR data enhance predictions compared with optical data alone. Data from three growing seasons were collected from an area of 106 hectares, using eight SAR variables (Alpha, Entropy, DPSVI, RFDI, Pol, RVI, VH, and VV) and four speckle noise filters. The Random Forest algorithm was applied, combining SAR variables with the EVI optical index. Although none of the SAR variables showed strong correlations with yield (r < |0.35|), predictions improved when SAR data were included. The best performance was achieved using DPSVI with the Boxcar filter, combined with EVI during the maturation stage (EVI: RMSE = 0.43, 0.49, and 0.60 for each season, respectively; EVI + DPSVI: RMSE = 0.39, 0.49, and 0.42). Despite improving predictions, the computational demands of SAR processing must be considered, especially when optical data are limited due to cloud cover.
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
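A toy version of the modeling step might stack EVI with SAR-derived variables and fit a Random Forest regressor, as sketched below. The dual-pol RVI formula (4·VH/(VV+VH) in linear power units) is one commonly used approximation and may not match the paper's exact variable definitions; all data here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def rvi_dual_pol(vv, vh):
    """A commonly used dual-pol radar vegetation index, RVI = 4*VH/(VV + VH),
    with backscatter in linear power units (an assumption)."""
    return 4.0 * vh / (vv + vh)

# Placeholder per-point samples standing in for the yield data points.
rng = np.random.default_rng(0)
vv = rng.uniform(0.01, 0.30, 500)
vh = rng.uniform(0.005, 0.10, 500)
evi = rng.uniform(0.2, 0.9, 500)
yield_t_ha = 1.5 + 3.0 * evi + rng.normal(0, 0.3, 500)   # synthetic yield (t/ha)

X = np.column_stack([evi, rvi_dual_pol(vv, vh), vv, vh])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:400], yield_t_ha[:400])
rmse = np.sqrt(mean_squared_error(yield_t_ha[400:], rf.predict(X[400:])))
print("RMSE (t/ha):", rmse)
```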
Figures:
Figure 1. Experimental area with field boundaries marked in red and soybean yield data points in each harvest.
Figure 2. Temporal profiles of SAR data in VV and VH backscatter coefficients (a) and optical data considering EVI (b). The red circle marks the image dates selected based on the EVI.
Figure 3. SAR data workflow for obtaining (a) backscatter coefficients and (b) polarimetric decomposition.
Figure 4. Prediction scenarios performed. Input data corresponding to each tested scenario (in red): (a) all stages and SAR variables together; (b) stages separately and all SAR variables together; (c) the stage that previously performed best, with the variables separated.
Figure 5. Spearman correlation coefficient between SAR data and soybean yield, including harvest, growth stages, speckle noise reduction filters, and SAR variables. Significant correlations at 5%.
Figure 6. R² and RMSE values of predictions for each harvest individually with all stages of image collection, using only optical data (EVI) compared to using optical data together with all SAR variables.
Figure 7. DPSVI index map for distinct growth stages and soybean harvests. The area highlighted in black shows the difference in cultivar in harvest 3.
Figure 8. Percentage difference in R² of predictions with EVI plus SAR variables in models using each stage individually compared to all stages combined.
Figure 9. Percentage difference in RMSE of predictions with EVI plus SAR variables in models using each stage individually compared to all stages combined.
Figure 10. R² values for predictions using all growth stages with only optical data and with optical data in conjunction with all SAR variables.
Figure 11. RMSE values for predictions using all growth stages with only optical data and with optical data in conjunction with all SAR variables.
Figure 12. R² values obtained for Stage 3 using scenarios with separate SAR variables in conjunction with EVI, compared to all SAR variables combined with EVI and to optical data only (EVI).
Figure 13. Visual comparison between actual yield maps and predicted yield using DPSVI in conjunction with EVI for Stage 3, using the Boxcar filter. The actual yield data were interpolated using ordinary kriging. The error maps represent the difference between the actual and predicted maps, showing positive and negative variations.
22 pages, 16745 KiB  
Article
Unsupervised PolSAR Image Classification Based on Superpixel Pseudo-Labels and a Similarity-Matching Network
by Lei Wang, Lingmu Peng, Rong Gui, Hanyu Hong and Shenghui Zhu
Remote Sens. 2024, 16(21), 4119; https://doi.org/10.3390/rs16214119 - 4 Nov 2024
Viewed by 1076
Abstract
Supervised polarimetric synthetic aperture radar (PolSAR) image classification demands a large amount of precisely labeled data, which are difficult to obtain. Therefore, many unsupervised methods have been proposed for PolSAR image classification. The classification maps of unsupervised methods contain many high-confidence samples; these samples, which are often ignored, can be used as supervisory information to improve classification performance on PolSAR images. This study proposes a new unsupervised PolSAR image classification framework that combines high-confidence superpixel pseudo-labeled samples with semi-supervised classification methods. The experiments indicated that this framework achieves higher effectiveness in unsupervised PolSAR image classification. First, superpixel segmentation was performed on PolSAR images, and the geometric centers of the superpixels were generated. Second, the classification maps of rotation-domain deep mutual information (RDDMI), an unsupervised PolSAR image classification method, were used as the pseudo-labels of the superpixel center points. Finally, the unlabeled samples and the high-confidence pseudo-labeled samples were used to train a strong semi-supervised method, similarity matching (SimMatch). Experiments on three real PolSAR datasets illustrated that, compared with RDDMI, the accuracy of the proposed method increased by 1.70%, 0.99%, and 0.80%. The proposed framework provides significant performance improvements and is an efficient method for improving unsupervised PolSAR image classification.
(This article belongs to the Special Issue SAR in Big Data Era III)
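The pseudo-label harvesting step can be sketched as follows: segment the Pauli image into superpixels and read an unsupervised classification map at each superpixel's geometric center. The array names, number of superpixels, and random placeholder data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
pauli_rgb = rng.random((256, 256, 3))            # placeholder Pauli pseudo-color image
unsup_map = rng.integers(0, 5, (256, 256))       # placeholder unsupervised class map (e.g., RDDMI)

segments = slic(pauli_rgb, n_segments=800, compactness=10, start_label=0)

centers, pseudo_labels = [], []
for sp in np.unique(segments):
    rows, cols = np.nonzero(segments == sp)
    r, c = int(rows.mean()), int(cols.mean())    # geometric center of the superpixel
    centers.append((r, c))
    pseudo_labels.append(int(unsup_map[r, c]))

# The (center, pseudo_label) pairs - optionally filtered to keep only
# high-confidence ones - would then supervise a semi-supervised classifier
# such as SimMatch.
print(len(centers), "pseudo-labeled samples")
```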
Figures:
Figure 1. The five parts of the framework. The Wide ResNet model adopts the classic wide residual networks (WRNs) [37]. The backbone extracts useful features from the input data to obtain an embedding vector. L_s, L_u, and L_in represent the supervised loss, the unsupervised loss, and the similarity distribution, respectively.
Figure 2. The pseudo-label generation structure of SimMatch. SimMatch generates semantic and instance pseudo-labels using weakly augmented views and calculates semantic and instance similarities through class centers. These two similarities are then propagated to each other using expansion and aggregation to obtain better pseudo-labels.
Figure 3. The propagation of pseudo-label information, as in the example in the red box. If the semantic and instance similarities differ, the histogram becomes flatter; if they are similar, the resulting histogram becomes sharper.
Figure 4. RS-2 Flevoland dataset. (a) Pauli pseudo-color image. (b) Ground-truth map.
Figure 5. RS-2 Wuhan dataset. (a) Pauli pseudo-color image. (b) Ground-truth map. (c) An optical image of ROI_1. (d) An optical image of ROI_2.
Figure 6. AIRSAR Flevoland dataset. (a) Pauli pseudo-color image. (b) Ground-truth map.
Figure 7. Classification results on the RS-2 Flevoland dataset. The black boxes show that SP-SIM produces finer classification results than RDDMI. (a) Ground-truth map. (b) Wishart. (c) RDDMI. (d) SP-SIM.
Figure 8. Classification results on the RS-2 Wuhan dataset. (a) Ground-truth map. (b) Wishart. (c) RDDMI. (d) SP-SIM.
Figure 9. Similar backscattering properties on the AIRSAR Flevoland dataset. (a) Four similar backscattering properties. (b) Water. (c) Bare soil. (d) Lucerne. (e) Rape seed.
Figure 10. Classification results on the AIRSAR Flevoland dataset. (a) Ground-truth map. (b) Wishart. (c) RDDMI. (d) SP-SIM.
16 pages, 17232 KiB  
Article
MSMTRIU-Net: Deep Learning-Based Method for Identifying Rice Cultivation Areas Using Multi-Source and Multi-Temporal Remote Sensing Images
by Manlin Wang, Xiaoshuang Ma, Taotao Zheng and Ziqi Su
Sensors 2024, 24(21), 6915; https://doi.org/10.3390/s24216915 - 28 Oct 2024
Viewed by 755
Abstract
Identifying rice cultivation areas in a timely and accurate manner holds great significance for understanding the overall distribution pattern of rice and formulating agricultural policies. Remote sensing observation provides a convenient means to monitor the distribution of rice cultivation areas on a large scale. Many studies use single-source or single-temporal remote sensing images, which makes it hard to exploit the information on rice contained in different image types and different growth stages, leading to unsatisfactory identification results. This paper presents a rice cultivation area identification method based on a deep learning model using multi-source and multi-temporal remote sensing images. Specifically, a U-Net-based model is employed to identify the rice planting areas using both the Landsat-8 optical dataset and the Sentinel-1 polarimetric synthetic aperture radar (PolSAR) dataset; to take full account of the spectral reflectance traits and polarimetric scattering traits of rice in different periods, multiple image features from multi-temporal Landsat-8 and Sentinel-1 images are fed into the network to train the model. The experimental results on China's Sanjiang Plain demonstrate the high classification precision of the proposed Multi-Source and Multi-Temporal Rice Identification U-Net (MSMTRIU-NET) and show that inputting more information from multi-source and multi-temporal images into the network indeed improves classification performance; further, the classification map exhibits greater continuity, and the demarcations between rice cultivation regions and the surrounding environment reflect reality more accurately.
(This article belongs to the Section Remote Sensors)
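The input assembly described above amounts to stacking features from several optical and SAR acquisition dates along the channel axis and sizing the network's first convolution accordingly. A minimal sketch is shown below; band counts, dates, and image size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder feature cubes: three Landsat-8 dates (6 bands each) and three
# Sentinel-1 dates (VV, VH, plus H/A/Alpha decomposition bands).
optical_dates = [np.random.rand(6, 512, 512) for _ in range(3)]
sar_dates = [np.random.rand(5, 512, 512) for _ in range(3)]

stack = np.concatenate(optical_dates + sar_dates, axis=0)   # (channels, H, W)
x = torch.from_numpy(stack).float().unsqueeze(0)            # add batch dimension

in_channels = x.shape[1]
first_conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)  # U-Net entry block
print(first_conv(x).shape)   # torch.Size([1, 64, 512, 512])
```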
Figures:
Figure 1. The map of the study area.
Figure 2. Photographs of the different phenological stages of rice: (a) seedling stage; (b) tillering stage; (c) heading stage; (d) maturity stage.
Figure 3. Multi-source images of the study area: (a) Landsat-8 image; (b) Sentinel-1 image.
Figure 4. Polarimetric decomposition images of the rice cultivation areas: (a) original Sentinel-1 images; (b) corresponding polarimetric entropy images; (c) corresponding polarimetric anisotropy images; (d) corresponding polarimetric alpha angle images.
Figure 5. Architecture of the MSMTRIU-NET model.
Figure 6. The composition of the multi-source and multi-temporal datasets.
Figure 7. Six types of datasets for comparison: (a) single-temporal optical dataset on 3 July; (b) single-temporal SAR dataset on 3 July; (c) multi-temporal optical dataset; (d) multi-temporal SAR dataset; (e) single-temporal optical and SAR dataset; (f) multi-source and multi-temporal dataset.
Figure 8. The MSMTRIU-NET's classification maps for various datasets: (a) Landsat-8 optical images; (b) Sentinel-1 SAR images; (c) label images; (d) single-temporal optical images; (e) single-temporal SAR images; (f) multi-temporal optical images; (g) multi-temporal SAR imagery; (h) single-temporal optical and SAR images; (i) multi-source and multi-temporal images.
Figure 9. A sensitivity analysis of the H/A/Alpha features in SAR data: (a) label image; (b) classification results excluding the H/A/Alpha features; (c) classification results including the H/A/Alpha features.
Figure 10. Classification maps of the various methods: (a) Landsat-8 optical images; (b) Sentinel-1 SAR images; (c) label images; (d) SVM; (e) RF; (f) FCN; (g) DeepLabv3+; (h) MSMTRIU-NET.
Figure 11. Classification results using different convolutional kernel sizes: (a) label image; (b) 7 × 7 kernel; (c) 5 × 5 kernel; (d) 3 × 3 kernel.
Figure 12. Classification maps using datasets from different stages: (a) Landsat-8 optical images; (b) Sentinel-1 SAR images; (c) label images; (d) seeding stage; (e) tillering stage; (f) heading stage; (g) maturity stage.
34 pages, 8862 KiB  
Article
A Novel Detection Transformer Framework for Ship Detection in Synthetic Aperture Radar Imagery Using Advanced Feature Fusion and Polarimetric Techniques
by Mahmoud Ahmed, Naser El-Sheimy and Henry Leung
Remote Sens. 2024, 16(20), 3877; https://doi.org/10.3390/rs16203877 - 18 Oct 2024
Cited by 2 | Viewed by 1535
Abstract
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle to accurately detect smaller targets and to adapt to varying environmental conditions. These methods, relying on either intensity values or single-target characteristics, often fail to enhance the signal-to-clutter ratio (SCR) and are prone to false detections caused by environmental factors. To address these issues, a novel framework is introduced that leverages the detection transformer (DETR) model along with advanced feature fusion techniques to enhance ship detection. The feature enhancement DETR (FEDETR) module manages clutter and improves feature extraction through preprocessing techniques such as filtering, denoising, and applying maximum and median pooling with various kernel sizes. Furthermore, it combines metrics such as the line spread function (LSF), peak signal-to-noise ratio (PSNR), and F1 score to predict optimal pooling configurations and thus enhance edge sharpness, image fidelity, and detection accuracy. Complementing this, the weighted feature fusion (WFF) module integrates polarimetric SAR (PolSAR) methods such as Pauli decomposition, coherence matrix analysis, and volume and helix scattering (Fvh) component decomposition, along with FEDETR attention maps, to provide detailed radar scattering insights that enhance ship response characterization. Finally, by integrating wave polarization properties, the ability to distinguish and characterize targets is augmented, thereby improving the SCR and facilitating the detection of weakly scattered targets in SAR imagery. Overall, this new framework significantly boosts DETR's performance, offering a robust solution for maritime surveillance and security.
(This article belongs to the Special Issue Target Detection with Fully-Polarized Radar)
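The pooling-configuration search in the preprocessing stage can be illustrated roughly as below: apply max and median pooling with several kernel sizes and score each result against the original chip with PSNR. This sketch approximates the pooling operations with same-size sliding-window filters, omits the LSF and F1 criteria the paper also uses, and runs on a random placeholder image.

```python
import numpy as np
from scipy.ndimage import maximum_filter, median_filter

def psnr(a, b, data_range=1.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

chip = np.random.rand(256, 256)   # placeholder SAR image chip

best = None
for name, op in (("max", maximum_filter), ("median", median_filter)):
    for k in (3, 5, 7, 9):
        score = psnr(op(chip, size=k), chip)
        if best is None or score > best[0]:
            best = (score, name, k)

print("best pooling by PSNR:", best)
```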
Figures:
Graphical abstract.
Figure 1. Flowchart of the proposed ship detection in SAR imagery.
Figure 2. CNN preprocessing model.
Figure 3. DETR pipeline overview [52].
Figure 4. Performance of FEDETR for two images from the test datasets SSDD and SAR Ship, including Gaofen-3 (a1–a8) and Sentinel-1 images (b1–b8) with different polarizations and resolutions. Ground truths, detection results, false detections, and missed detections are indicated with green, red, yellow, and blue boxes, respectively.
Figure 5. Experimental results for ship detection in SAR images across four distinct regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground-truth images; (b–e) detection results for DETR using VV and VH (DETR_VV, DETR_VH) and FEDETR using VV and VH (FEDETR_VV, FEDETR_VH) polarizations, respectively. Annotation colors as in Figure 4.
Figure 6. Experimental results for ship detection in SAR images across four regions: Onshore1, Onshore2, Offshore1, and Offshore2. (a) Ground-truth images; (b,c) predicted results from FEDETR with optimal pooling and kernel size and from the WFF method, respectively. Annotation colors as in Figure 4.
Figure 7. Correlation matrix analyzing the relationship between kernel size, LSF, and PSNR for max pooling (a) and median pooling (b) on the SSD and SAR Ship datasets, validating the effectiveness of the FEDETR module.
Figure 8. LSF of images with different pooling types and kernel sizes. Panels (a1–a4) show LSF images after max pooling and (a5–a8) after median pooling with kernel sizes 3, 5, 7, and 9 for Gaofen-3 HH images from the SAR Ship dataset; panels (b1–b4) show LSF images after max pooling and (b5–b8) after median pooling for images from the SSD dataset.
Figure 9. Backscattering intensity in VV and VH polarizations and ship presence across four regions. (a1,a2) VV and VH backscattering intensity for Onshore1; (a3,a4) backscattering intensity for ships in Onshore1; (b1,b2) VV and VH for Onshore2; (b3,b4) ships in Onshore2; (c1,c2) VV and VH for Offshore1; (c3,c4) ships in Offshore1; (d1,d2) VV and VH for Offshore2; (d3,d4) ships in Offshore2. In each subfigure, the x-axis represents pixel intensity and the y-axis represents frequency.
Figure 10. LSF and PSNR comparisons for onshore and offshore areas (Onshore1 (a,b), Onshore2 (c,d), Offshore1 (e,f), Offshore2 (g,h)) using VV and VH polarization with median and max pooling.
Figure 11. Visual comparison of max and median pooling with different kernel sizes on onshore and offshore SAR imagery for VV and VH polarizations: (a1,a2) Onshore1 VV (max kernel size 3; median kernel size 3); (a3,a4) Onshore1 VV (median kernel size 5); (b1,b2) Onshore2 VV (max kernel size 3); (b3,b4) Onshore2 VH (median kernel size 5); (c1,c2) Offshore1 VV (max kernel size 7; median kernel size 7); (c3,c4) Offshore1 VH (max kernel size 3; median kernel size 3); (d1,d2) Offshore2 VV (max kernel size 5; median kernel size 5); (d3,d4) Offshore2 VH (max kernel size 5; median kernel size 5).
Figure 12. Experimental results for ship detection in SAR images across four regions: (a) Onshore1, (b) Onshore2, (c) Offshore1, and (d) Offshore2, illustrating the effectiveness of the Pauli decomposition method in reducing noise and distinguishing ships from the background. Ships are marked in pink; noise clutter is shown in green.
Figure 13. Signal-to-clutter ratio (SCR) comparisons for different polarizations across various scenarios. VV polarization is shown in blue, VH polarization in orange, and Fvh in green.
Figure 14. Otsu's thresholding on four regions for Pauli and Fvh images: (a1–a4) thresholding of Pauli images for Onshore1, Onshore2, Offshore1, and Offshore2; (b1–b4) thresholding of Fvh images for the same regions.
Figure 15. Visualization of FEDETR attention maps, Pauli decomposition, Fvh feature maps, and WFF results for Onshore1 (a1–a4), Onshore2 (b1–b4), Offshore1 (c1–c4), and Offshore2 (d1–d4).
22 pages, 14974 KiB  
Article
Adapting CuSUM Algorithm for Site-Specific Forest Conditions to Detect Tropical Deforestation
by Anam Sabir, Unmesh Khati, Marco Lavalle and Hari Shanker Srivastava
Remote Sens. 2024, 16(20), 3871; https://doi.org/10.3390/rs16203871 - 18 Oct 2024
Viewed by 1156
Abstract
Forest degradation is a major issue in ecosystem monitoring, and to take remedial measures, it is important to detect, map, and quantify forest losses. Synthetic Aperture Radar (SAR) time-series data have the potential to detect forest loss. However, their sensitivity is influenced by the ecoregion, forest type, and site conditions. In this work, we assessed the accuracy of open-source C-band time-series data from Sentinel-1 SAR for detecting deforestation across forests in Africa, South Asia, and Southeast Asia. The statistical Cumulative Sums of Change (CuSUM) algorithm was applied to determine the point of change in the time-series data. The algorithm’s robustness was assessed for different forest site conditions, SAR polarizations, and resolutions, and under varying moisture conditions. We observed that the change detection algorithm was affected by site and forest-management activities, and also by precipitation. The forest type and eco-region affected the detection performance, which varied for the co- and cross-pol backscattering components. The cross-pol channel showed better deforested-region delineation with fewer spurious detections. The results for Kalimantan showed better accuracy at a 100 m spatial resolution, with a 25.1% increase in the average Kappa coefficient for the VH polarization channel in comparison with the 25 m spatial resolution. To avoid false detections due to the high impact of soil moisture in the case of Haldwani, a seasonal analysis was carried out based on dry and wet seasons. For the seasonal analysis, the cross-pol channel showed good accuracy, with an average Kappa coefficient of 0.85 at the 25 m spatial resolution. This work was carried out in support of the upcoming NISAR mission. The datasets were repackaged to the NISAR-like HDF5 format, and processing was carried out with methods similar to the NISAR ATBDs. Full article
(This article belongs to the Special Issue NISAR Global Observations for Ecosystem Science and Applications)
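As a rough illustration of the CuSUM-style change-point detection described in the abstract above, the sketch below applies a residual cumulative-sum statistic with a permutation bootstrap to a single pixel's backscatter time series. The function name, bootstrap settings, and synthetic VH series are illustrative assumptions; the authors' adaptive thresholding and NISAR-format processing chain are not reproduced here.

```python
import numpy as np

def cusum_change_point(ts, n_boot=500, conf=0.95, seed=0):
    """Locate the most likely change point in a 1-D backscatter series (dB).

    Cumulative sums of deviations from the series mean are computed; the index
    of the maximum |S| is the candidate change point, and a permutation
    bootstrap estimates how unusual the observed CuSUM excursion is.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(ts, dtype=float)
    s = np.cumsum(x - x.mean())                # residual CuSUM
    s_range = s.max() - s.min()                # size of the excursion
    change_idx = int(np.argmax(np.abs(s)))     # candidate change point

    exceed = 0
    for _ in range(n_boot):
        sb = np.cumsum(rng.permutation(x) - x.mean())
        if sb.max() - sb.min() < s_range:
            exceed += 1
    return change_idx, exceed / n_boot >= conf

# Synthetic VH series with a 4 dB drop after time step 30 (simulated clearing).
rng = np.random.default_rng(1)
ts = np.r_[np.full(30, -15.0), np.full(20, -19.0)] + rng.normal(0, 0.5, 50)
print(cusum_change_point(ts))  # expected: an index near 29, and True
```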
Show Figures

Figure 1

Figure 1
<p>Study area map showing three forest sites covered in this study: (<b>A</b>) Kalimantan forests in Indonesia, (<b>B</b>) Haldwani forests in India, and (<b>C</b>) forests near Libreville in Mondah. The polygons represent the ground truth (change area) for the years 2017 to 2022 for Kalimantan and Haldwani, and for 2017 to 2019 for Mondah.</p>
Full article ">Figure 2
<p>Proposed workflow for change detection and validation.</p>
Full article ">Figure 3
<p>Backscatter time-series plots for Kalimantan for urban (red), deforested (orange), and unchanged forest (green) areas at the 25 m spatial resolution.</p>
Full article ">Figure 4
<p>Backscatter time-series plots at the 25 m spatial resolution: (<b>A</b>) Kalimantan—unchanged area, (<b>B</b>) Kalimantan—deforested area, (<b>C</b>) Haldwani—unchanged area, and (<b>D</b>) Haldwani—deforested. The red and green curves show the VV and VH backscatter, respectively. The blue vertical dotted line represents the date of felling.</p>
Full article ">Figure 5
<p>Case: Logging of 375 ha and 830 ha forest regions in Kalimantan in the years 2018 and 2022. Images show the detected deforested area with the VV and VH polarizations at the 25 m and 100 m spatial resolutions for 2018 (<b>A</b>–<b>D</b>) and 2022 (<b>E</b>–<b>H</b>).</p>
Full article ">Figure 6
<p>Case: Logging of a 63 ha forest region in Haldwani. (<b>a</b>,<b>b</b>) show the PlanetScope true color images before and after the logging, respectively. (<b>c</b>,<b>d</b>) show the SAR VH backscatter of the same area, pre- and post-felling, respectively. (<b>e</b>) shows the change maps generated with VH backscatter at the 25 m and 100 m spatial resolutions. Colors show the date (YYYYMMDD) of change, as marked by the algorithm.</p>
Full article ">Figure 7
<p>Kalimantan—VH backscatter and SWI plot: (<b>A</b>) deforested area at the 25 m spatial resolution, (<b>B</b>) unchanged area at the 25 m spatial resolution, (<b>C</b>) deforested area at the 100 m spatial resolution, and (<b>D</b>) unchanged area at the 100 m spatial resolution. The grey region shows the duration of the understory flooding. Haldwani—VH backscatter and SWI plot: (<b>E</b>) deforested area at the 25 m spatial resolution, (<b>F</b>) unchanged area at the 25 m spatial resolution, (<b>G</b>) deforested area at the 100 m spatial resolution, and (<b>H</b>) unchanged area at the 100 m spatial resolution. The blue dotted line shows the date of felling.</p>
Full article ">Figure 8
<p>Change maps showing the deforested areas in Brazil in the year 2020. The change map on the left was generated using the CuSUM algorithm with adaptive thresholding. The change map on the right was Hansen’s change map for 2020.</p>
Full article ">Figure 9
<p>Change maps showing the deforested areas in Congo in the year 2018. The change map on the left was generated using the CuSUM algorithm with adaptive thresholding. The change map on the right was Hansen’s change map for 2018.</p>
Full article ">Figure 10
<p>Change map for Kalimantan forests for years 2017–2022 with S-1 VH polarization. The base map used was the Environmental Systems Research Institute (ESRI) topographic map.</p>
Full article ">Figure 11
<p>Change map for dry seasons for the Haldwani forests for the years 2017–2022 with S-1 VH polarization. The base map used was the ESRI topographic map.</p>
Full article ">Figure 12
<p>Change maps for the Mondah forests for the years 2017–2022 with S-1 VH polarization. The base map used was the ESRI topographic map.</p>
Full article ">
20 pages, 4410 KiB  
Article
Implementation of an Immunoassay Based on the MVA-T7pol-Expression System for Rapid Identification of Immunogenic SARS-CoV-2 Antigens: A Proof-of-Concept Study
by Satendra Kumar, Liangliang Nan, Georgia Kalodimou, Sylvia Jany, Astrid Freudenstein, Christine Brandmüller, Katharina Müller, Philipp Girl, Rosina Ehmann, Wolfgang Guggemos, Michael Seilmaier, Clemens-Martin Wendtner, Asisa Volz, Gerd Sutter, Robert Fux and Alina Tscherne
Int. J. Mol. Sci. 2024, 25(20), 10898; https://doi.org/10.3390/ijms252010898 - 10 Oct 2024
Viewed by 1044
Abstract
The emergence of hitherto unknown viral pathogens presents a great challenge for researchers to develop effective therapeutics and vaccines within a short time to avoid an uncontrolled global spread, as seen during the coronavirus disease 2019 (COVID-19) pandemic. Therefore, rapid and simple methods [...] Read more.
The emergence of hitherto unknown viral pathogens presents a great challenge for researchers to develop effective therapeutics and vaccines within a short time to avoid an uncontrolled global spread, as seen during the coronavirus disease 2019 (COVID-19) pandemic. Therefore, rapid and simple methods to identify immunogenic antigens as potential therapeutic targets are urgently needed for better pandemic preparedness. To address this problem, we chose the well-characterized Modified Vaccinia virus Ankara (MVA)-T7pol expression system to establish a workflow to identify immunogens when a new pathogen emerges, generate candidate vaccines, and test their immunogenicity in an animal model. By using this system, we detected severe acute respiratory syndrome (SARS) coronavirus 2 (SARS-CoV-2) nucleoprotein (N)- and spike (S)-specific antibodies in COVID-19 patient sera, which is in line with the current literature and our observations from previous immunogenicity studies. Furthermore, we detected antibodies directed against the SARS-CoV-2-membrane (M) and -ORF3a proteins in COVID-19 patient sera and aimed to generate recombinant MVA candidate vaccines expressing either the M or ORF3a protein. When testing our candidate vaccines in a prime-boost immunization regimen in humanized HLA-A2.1-/HLA-DR1-transgenic H-2 class I-/class II-knockout mice, we were able to demonstrate M- and ORF3a-specific cellular and humoral immune responses. Hence, the established workflow using the MVA-T7pol expression system represents a rapid and efficient tool to identify potential immunogenic antigens and provides a basis for the future development of candidate vaccines. Full article
(This article belongs to the Special Issue Viral Infection and Virology Methods)
Show Figures

Figure 1

Figure 1
<p>Schematic representation of the MVA-T7pol expression system. The T7 polymerase gene was placed under the control of the vaccinia virus early/late promoter p7.5 and was inserted into MVA deletion site II, as described previously [<a href="#B29-ijms-25-10898" class="html-bibr">29</a>]. The SARS-CoV-2 gene sequences of N<sub>HA</sub>, E<sub>HA</sub>, M<sub>HA</sub>, ORF3a<sub>HA</sub>, ORF6<sub>HA</sub>, ORF7a<sub>HA</sub>, and ORF8<sub>HA</sub> were inserted into the vector plasmid pOS6 [<a href="#B29-ijms-25-10898" class="html-bibr">29</a>] or pTM3 [<a href="#B34-ijms-25-10898" class="html-bibr">34</a>], and expression was placed under transcriptional control of the T7 promoter. The T7-RNA polymerase, which is expressed by recombinant MVA-T7pol during its replication cycle, allows for a transient expression of the SARS-CoV-2 antigens in the cytoplasm of infected cells that are co-transfected with the plasmids pOS6-N<sub>HA</sub>, pOS6-ORF3a<sub>HA</sub>, pTM3-M<sub>HA</sub>, pOS6-ORF8<sub>HA</sub>, pOS6-ORF7a<sub>HA</sub>, pOS6-E<sub>HA</sub>, or pOS-ORF6<sub>HA</sub>. Of note, the target SARS-CoV-2 gene sequences are not inserted into the MVA-T7pol genome. I–VI: major deletion sites of MVA-T7pol. Created with BioRender.com.</p>
Full article ">Figure 2
<p>Identification of SARS-CoV-2 proteins expressed by the MVA-T7pol system. To detect the targeted SARS-CoV-2 proteins, CEF cells were infected with recombinant MVA-T7pol at an MOI of 10 and transfected with the vector plasmids pOS6 [<a href="#B29-ijms-25-10898" class="html-bibr">29</a>] or pTM3 [<a href="#B34-ijms-25-10898" class="html-bibr">34</a>] containing the encoding sequences of the targeted SARS-CoV-2 proteins that were placed under the T7 promoter. To express SARS-CoV-2 spike protein, CEF cells were infected with recombinant MVA-S<sub>HA</sub> at an MOI of 10. Proteins were separated by SDS-PAGE and analyzed with an antibody directed against the HA-tag (<b>a</b>) or by using human serum (<b>b</b>,<b>c</b>). Non-infected cells (Mock) and cells infected with MVA-T7pol served as controls. Lane 1: N<sub>HA</sub> (47 kDa); lane 2: E<sub>HA</sub> (10 kDa); lane 3: M<sub>HA</sub> (26 kDa); lane 4: S<sub>HA</sub> (190 kDa + 90 kDa) [<a href="#B24-ijms-25-10898" class="html-bibr">24</a>]; lane 5: ORF3a<sub>HA</sub> (32 kDa); lane 6: ORF6<sub>HA</sub> (8 kDa); lane 7: ORF7a<sub>HA</sub> (14 kDa); lane 8: ORF8<sub>HA</sub> (15 kDa); lane 9: MVA-T7pol; lane 10: non-infected cells (Mock). Red arrow: N protein; black arrow: S protein; blue arrow: ORF3a protein; green arrow: M protein.</p>
Full article ">Figure 3
<p>Virological characterization of MVA-SARS-CoV-2-M (MVA-M) and MVA-SARS-CoV-2-ORF3a (MVA-ORF3a). (<b>a</b>,<b>b</b>) Schematic diagram of the MVA genome with the major deletion sites I to VI. (<b>a</b>) The encoding sequence of the full-length SARS-CoV-2 membrane protein (M) was inserted into the vector plasmid pLW73 [<a href="#B45-ijms-25-10898" class="html-bibr">45</a>] (pLW73-M). Expression of SARS-CoV-2-M was controlled by the VACV-specific promoter PmH5 [<a href="#B46-ijms-25-10898" class="html-bibr">46</a>] and was inserted via homologous recombination between MVA DNA sequences (flank-1, flank-2) adjacent to the intergenomic region between the open reading frames (ORF) of the essential viral genes, <span class="html-italic">MVA069R</span> and <span class="html-italic">MVA070L</span>, and copies cloned in the MVA vector plasmid pLW73-M. Repetitive sequences served to remove the marker gene GFP by intergenomic homologous recombination (marker gene deletion) to generate MVA-M. (<b>b</b>) The deletion III site was targeted to insert the gene sequence encoding SARS-CoV-2-ORF3a under the transcriptional control of the VACV promoter PmH5 [<a href="#B46-ijms-25-10898" class="html-bibr">46</a>]. Repetitive sequences served to remove the marker gene mCherry by intragenomic homologous recombination (marker gene deletion) to generate MVA-ORF3a. (<b>c</b>,<b>d</b>) Genetic integrity of MVA-M and MVA-ORF3a. PCR analysis of genomic viral DNA confirmed stable insertion of the SARS-CoV-2-M sequence into the intergenomic region between <span class="html-italic">069R</span> and <span class="html-italic">070L</span> of the MVA genome and of the SARS-CoV-2-ORF3a sequence into deletion III of the MVA genome. (<b>e</b>,<b>f</b>) Multiple-step growth analysis of recombinant MVA-M, MVA-ORF3a, and non-recombinant MVA (MVA). Recombinant viruses and non-recombinant MVA (MVA) amplified in CEF cells but failed to efficiently grow in human HaCaT cells.</p>
Full article ">Figure 4
<p>Synthesis of membrane (M) and ORF3a proteins in MVA-M- and MVA-ORF3a-infected cells. (<b>a</b>,<b>b</b>) CEF cells were infected at an MOI of 5, and cell lysates were collected at 0, 4, 8, 24, and 48 h post infection (hpi). Polypeptides in the cell lysates were separated by SDS-PAGE and analyzed with antibodies against the M and ORF3a proteins. (<b>c</b>,<b>d</b>) Vero cells were infected at an MOI of 0.5 with MVA-M or MVA-ORF3a and fixed with paraformaldehyde after 16 hpi. Permeabilized cells were probed with antibodies against the M and ORF3a proteins. Polyclonal goat anti-mouse secondary antibody was used for M-specific fluorescent staining (green), and polyclonal goat anti-rabbit secondary antibody was used for ORF3a-specific fluorescent staining (green). Cell nuclei were counterstained with DAPI (blue). Scale bar: 50 μm.</p>
Full article ">Figure 5
<p>Activation of SARS-CoV-2-M- and SARS-CoV-2-ORF3a-specific CD8 T cells after vaccination with MVA-M and MVA-ORF3a. Groups of <span class="html-italic">HLA-A2.1-/HLA-DR1-transgenic H-2 class I-/class II-knockout</span> mice (n = 6–10) were immunized with 10<sup>7</sup> PFU of MVA-M or MVA-ORF3a via the i.m. route. Mice immunized with non-recombinant MVA (MVA) served as controls. Splenocytes were collected and prepared at day 35 after prime immunization (14 days after booster immunization). Splenocytes were either stimulated with SARS-CoV-2-ORF3a- or SARS-CoV-2-M-specific peptides and were measured by IFN-γ ELISPOT assay (<b>a</b>,<b>e</b>,<b>f</b>) and intracellular cytokine staining (ICS) plus FACS analysis (<b>b</b>–<b>d</b>). (<b>a</b>,<b>e</b>,<b>f</b>) IFN-γ spot-forming colonies (SFC) measured by ELISPOT assay. (<b>b</b>,<b>c</b>) IFN-γ-producing CD8 T cells measured by FACS analysis. Graphs show the mean frequency and absolute number of IFN-γ+ CD8 T cells. (<b>d</b>) Cytokine profile of ORF3a<sub>82-90</sub>-specific CD8 T cells. Graph shows the mean frequency of IFN-γ<sup>−</sup>TNF-α<sup>+</sup>, IFN-γ<sup>+</sup>TNF-α<sup>+</sup>, and IFN-γ<sup>+</sup>TNF-α<sup>−</sup> cells within the cytokine-positive CD8 T-cell compartment. Bars represent the mean + standard error of the mean (SEM). Differences between groups were analyzed by unpaired, two-tailed <span class="html-italic">t</span>-test: * <span class="html-italic">p</span> &lt; 0.05; ** <span class="html-italic">p</span> &lt; 0.01; ns, not significant.</p>
Full article ">Figure 6
<p>Antigen-specific humoral immunity induced by MVA-ORF3a and MVA-M. (<b>a</b>,<b>b</b>) Groups of <span class="html-italic">HLA-A2.1-/HLA-DR1-transgenic H-2 class I-/class II-</span>knockout mice were immunized with 10<sup>7</sup> PFU of MVA-M and MVA-ORF3a via the i.m. route. Mice immunized with non-recombinant MVA (MVA) served as controls. Serum samples were collected 18 days after the prime immunization (prime) and 14 days after the booster immunization (prime-boost). Sera were analyzed for (<b>a</b>) ORF3a- and (<b>b</b>) M-specific IgG by ELISA. Dashed lines represent the limits of detection (LOD). Differences between groups were analyzed by unpaired, two-tailed <span class="html-italic">t</span>-test: ** <span class="html-italic">p</span> &lt; 0.01; *** <span class="html-italic">p</span> &lt; 0.001; ns, not significant.</p>
Full article ">
32 pages, 15160 KiB  
Article
Analyzing Temporal Characteristics of Winter Catch Crops Using Sentinel-1 Time Series
by Shanmugapriya Selvaraj, Damian Bargiel, Abdelaziz Htitiou and Heike Gerighausen
Remote Sens. 2024, 16(19), 3737; https://doi.org/10.3390/rs16193737 - 8 Oct 2024
Cited by 1 | Viewed by 1031
Abstract
Catch crops are intermediate crops sown between two main crop cycles. Their adoption into the cropping system has increased considerably in the last years due to its numerous benefits, in particular its potential in carbon fixation and preventing nitrogen leaching during winter. The [...] Read more.
Catch crops are intermediate crops sown between two main crop cycles. Their adoption into the cropping system has increased considerably in recent years due to their numerous benefits, in particular their potential for carbon fixation and for preventing nitrogen leaching during winter. The growth period of catch crops in Germany is often marked by dense cloud cover, which limits land surface monitoring through optical remote sensing. In such conditions, synthetic aperture radar (SAR) emerges as a viable option. Despite the known advantages of SAR, the understanding of the temporal behavior of radar parameters in relation to catch crops remains largely unexplored. Hence, in this study, we exploited the dense time series of Sentinel-1 data within the Copernicus Space Component to study the temporal characteristics of catch crops over a test site in the center of Germany. Radar parameters such as VV, VH, and VH/VV backscatter, dpRVI (dual-pol Radar Vegetation Index), and VV coherence were extracted, and temporal profiles were interpreted for catch crops and preceding main crops along with in situ, temperature, and precipitation data. Additionally, we examined the temporal profiles of winter main crops (winter oilseed rape and winter cereals) that are grown in parallel with the catch crop growing cycle. Based on the analyzed temporal patterns, we defined 22 descriptive features from VV, VH, VH/VV, and dpRVI, which are specific to catch crop identification. Then, we conducted a Kruskal–Wallis test on the extracted parameters, both crop-wise and group-wise, to assess the significance of statistical differences among different catch crop groups. Our results reveal that there exists a unique temporal pattern for catch crops compared to main crops, and each of these extracted parameters possesses a different sensitivity to catch crops. Parameters VV and VH are sensitive to phenological stages and crop structure. On the other hand, VH/VV and dpRVI were found to be highly sensitive to crop biomass. Coherence can be used to detect the sowing and harvest events. The preceding main crop analysis reveals that winter wheat and winter barley are the two dominant main crops grown before catch crops. Moreover, winter main crops (winter oilseed rape, winter cereals) cultivated during the catch crop cycle can be distinguished by exploiting the observed sowing window differences. The extracted descriptive features provide information about the sowing, harvest, vigor, biomass, and early/late die-off nature specific to catch crop types. In the Kruskal–Wallis test, the observed high H-statistics and low p-values for several predictors indicate significant variability at the 0.001 level. Furthermore, Dunn’s post hoc test among catch crop group pairs highlights the substantial differences between the cold-sensitive and legume groups (p < 0.001). Full article
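For readers who want to experiment with the dual-pol Radar Vegetation Index (dpRVI) analyzed in this article, the sketch below implements one commonly used formulation, computed from a spatially averaged 2 × 2 covariance matrix as one minus the degree of polarization times the dominant normalized eigenvalue. It is only a hedged sketch: whether this exact formulation, averaging window, and preprocessing match the authors' chain is an assumption, and the covariance values in the example are arbitrary.

```python
import numpy as np

def dprvi(c11, c12, c22):
    """dpRVI from the elements of an averaged dual-pol covariance matrix
    C2 = [[c11, c12], [conj(c12), c22]] (c11, c22 real; c12 complex).

    Common formulation: dpRVI = 1 - m * beta, with m the 2-D degree of
    polarization and beta the dominant normalized eigenvalue of C2.
    """
    span = c11 + c22
    det = c11 * c22 - np.abs(c12) ** 2
    m = np.sqrt(1.0 - 4.0 * det / span ** 2)               # degree of polarization
    lam1 = 0.5 * (span + np.sqrt(span ** 2 - 4.0 * det))   # dominant eigenvalue
    beta = lam1 / span
    return 1.0 - m * beta

# Arbitrary illustrative values; denser vegetation tends toward higher dpRVI.
print(dprvi(c11=0.08, c12=0.01 + 0.005j, c22=0.02))
```

The crop-wise and group-wise significance testing described above could likewise be reproduced with scipy.stats.kruskal on the per-field predictor values, although the statistical software actually used by the authors is not stated in this listing.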
Show Figures

Figure 1

Figure 1
<p>Location of study area and sample points collected on five different dates during the year 2021 and 2022. The extents of Sentinel-1 relative orbits tiles 177 and 44 are shown in the location map.</p>
Full article ">Figure 2
<p>Some of the catch crop fields encountered during field survey: (<b>a</b>) mustard, (<b>b</b>) oilseed radish, (<b>c</b>) phacelia, (<b>d</b>) clover, (<b>e</b>) niger, and (<b>f</b>) green mixture.</p>
Full article ">Figure 3
<p>Phenological stages of different crops in the study area and corresponding Sentinel-1 data acquisitions in the years 2021 (blue) and 2022 (red).</p>
Full article ">Figure 4
<p>Example of a real mustard catch crop field where the entire profile of (<b>a</b>) VV backscatter, (<b>b</b>) VH backscatter, (<b>c</b>) VH/VV backscatter, and (<b>d</b>) dpRVI is divided into three phases based on detected peak and minimum values.</p>
Full article ">Figure 5
<p>Temporal VV backscatter profile of main crops followed by different catch crop categories: (<b>a</b>) cold-tolerant, (<b>b</b>) cold-sensitive, (<b>c</b>) legumes for the years 2021 (<b>left</b>) and 2022 (<b>right</b>). The vertical dashed lines (black color) indicate the harvest of the main crop.</p>
Full article ">Figure 6
<p>Temporal VH backscatter profile of main crops followed by different catch crop categories: (<b>a</b>) cold-tolerant, (<b>b</b>) cold-sensitive, (<b>c</b>) legumes for the years 2021 (<b>left</b>) and 2022 (<b>right</b>). The vertical dashed lines (black color) indicate the harvest of the main crop.</p>
Full article ">Figure 7
<p>Temporal VH/VV backscatter profile of main crops followed by different catch crop categories: (<b>a</b>) cold-tolerant, (<b>b</b>) cold-sensitive, (<b>c</b>) legumes for the years 2021 (<b>left</b>) and 2022 (<b>right</b>). The vertical dashed lines (black color) indicate the harvest of the main crop.</p>
Full article ">Figure 8
<p>Temporal dpRVI profile of main crops followed by different catch crop categories: (<b>a</b>) cold-tolerant, (<b>b</b>) cold-sensitive, (<b>c</b>) legumes for the years 2021 (<b>left</b>) and 2022 (<b>right</b>). The vertical dashed lines (black color) indicate the harvest of the main crop.</p>
Full article ">Figure 9
<p>Temporal VV coherence profile of main crops followed by different catch crop categories: (<b>a</b>) cold-tolerant, (<b>b</b>) cold-sensitive, (<b>c</b>) legumes for the years 2021 (<b>left</b>) and 2022 (<b>right</b>). The vertical dashed lines (black color) indicate the harvest of the main crop.</p>
Full article ">Figure 10
<p>Comparison of mean VV backscatter profiles: (<b>a</b>) winter oilseed rape, (<b>b</b>) winter cereals, (<b>c</b>) fallow/catch crop. The dashed lines (black) indicate the sowing time.</p>
Full article ">Figure 11
<p>Comparison of VH backscatter profiles: (<b>a</b>) winter oilseed rape, (<b>b</b>) winter cereals, (<b>c</b>) fallow/catch crop. The dashed lines (black) indicate the sowing time.</p>
Full article ">Figure 12
<p>Comparison of VH/VV backscatter profiles: (<b>a</b>) winter oilseed rape, (<b>b</b>) winter cereals, (<b>c</b>) fallow/catch crop. The dashed lines (black) indicate the sowing time.</p>
Full article ">Figure 13
<p>Comparison of dpRVI profiles: (<b>a</b>) winter oilseed rape, (<b>b</b>) winter cereals, (<b>c</b>) fallow/catch crop. The dotted lines (black) indicate the sowing time.</p>
Full article ">Figure 14
<p>Box plot depicting the different predictors extracted from dpRVI time series for different catch crop types.</p>
Full article ">Figure 15
<p>Box plot depicting the different predictors extracted from VV, VH and VH/VV backscatter time series for different catch crop types.</p>
Full article ">Figure A1
<p>Kruskal–Wallis H and <span class="html-italic">p</span>-value statistics for each predictor based on the individual crop-wise test. *, **, and *** indicate the 0.05, 0.01, and 0.001 levels of significance, respectively.</p>
Full article ">
21 pages, 13186 KiB  
Article
Ship Contour Extraction from Polarimetric SAR Images Based on Polarization Modulation
by Guoqing Wu, Shengbin Luo Wang, Yibin Liu, Ping Wang and Yongzhen Li
Remote Sens. 2024, 16(19), 3669; https://doi.org/10.3390/rs16193669 - 1 Oct 2024
Viewed by 929
Abstract
Ship contour extraction is vital for extracting the geometric features of ships, providing comprehensive information essential for ship recognition. The main factors affecting the contour extraction performance are speckle noise and amplitude inhomogeneity, which can lead to over-segmentation and missed detection of ship [...] Read more.
Ship contour extraction is vital for extracting the geometric features of ships, providing comprehensive information essential for ship recognition. The main factors affecting contour extraction performance are speckle noise and amplitude inhomogeneity, which can lead to over-segmentation and missed detection of ship edges. Polarimetric synthetic aperture radar (PolSAR) images contain rich target scattering information. Under different transmitting and receiving polarizations, the amplitude and phase of pixels can differ, which provides the potential to meet the uniformity requirement. This paper proposes a novel ship contour extraction framework for PolSAR images based on polarization modulation. Firstly, the image is partitioned into the foreground and background using a super-pixel unsupervised clustering approach. Subsequently, an optimization criterion for target amplitude modulation to achieve uniformity is designed. Finally, the ship’s contour is extracted from the optimized image using an edge-detection operator and an adaptive edge extraction algorithm. Based on the contour, the geometric features of ships are extracted. Moreover, a PolSAR ship contour extraction dataset is established using Gaofen-3 PolSAR images, combined with expert knowledge and automatic identification system (AIS) data. With this dataset, we compare the accuracy of contour extraction and geometric feature estimation with that of state-of-the-art methods. The average errors of the extracted length and width are reduced to 20.09 m and 8.96 m, respectively. The results demonstrate that the proposed method performs well in both accuracy and precision. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
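To make the geometric-feature step at the end of this abstract concrete, the sketch below fits an ellipse to a binary ship mask and reads off length, width, and orientation. It is a generic contour-to-ellipse example built on OpenCV's findContours and fitEllipse, not the paper's polarization-modulation or adaptive edge-extraction pipeline; the pixel spacing and the synthetic mask are illustrative assumptions.

```python
import cv2
import numpy as np

def ship_geometry(mask, pixel_spacing_m=1.0):
    """Return (length_m, width_m, orientation_deg) for the largest region in a
    binary ship mask, via an ellipse fit to its outer contour."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    ship = max(contours, key=cv2.contourArea)   # keep the largest blob
    if len(ship) < 5:                           # fitEllipse needs >= 5 points
        return None
    _, (ax1, ax2), angle = cv2.fitEllipse(ship)
    major, minor = max(ax1, ax2), min(ax1, ax2)
    return major * pixel_spacing_m, minor * pixel_spacing_m, angle

# Synthetic 60 x 20 pixel "ship" rotated by 35 degrees, with 2.25 m pixels.
mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (30, 10), 35, 0, 360, 1, -1)
print(ship_geometry(mask, pixel_spacing_m=2.25))  # roughly (135 m, 45 m, angle in deg)
```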
Show Figures

Figure 1

Figure 1
<p>The ship segmentation result of the SSDD dataset: (<b>a</b>) Ground truth of PSeg No. 421. (<b>b</b>) Segmentation result. (<b>c</b>) Ground truth of PSeg No. 328. (<b>d</b>) Segmentation result.</p>
Full article ">Figure 2
<p>The procedure of amplitude approximation method.</p>
Full article ">Figure 3
<p>The Gaofen-3 ship chips and polarization modulation results for a single target: (<b>a</b>) 2D image of HH. (<b>b</b>) 3D image of HH. (<b>c</b>) 2D image of optimization on the target area. (<b>d</b>) 3D image of optimization on the target area. (<b>e</b>) 2D image of joint optimization. (<b>f</b>) 3D image of joint optimization. (<b>g</b>) 2D image of amplitude approximation. (<b>h</b>) 3D image of amplitude approximation.</p>
Full article ">Figure 4
<p>The Gaofen-3 ship chips and polarization modulation results for multiple targets: (<b>a</b>) 2D image of HH. (<b>b</b>) 3D image of HH. (<b>c</b>) 2D image of optimization on the target area. (<b>d</b>) 3D image of optimization on the target area. (<b>e</b>) 2D image of joint optimization. (<b>f</b>) 3D image of joint optimization. (<b>g</b>) 2D image of amplitude approximation. (<b>h</b>) 3D image of amplitude approximation.</p>
Full article ">Figure 5
<p>The procedure of contour extraction algorithm of PolSAR images.</p>
Full article ">Figure 6
<p>Superpixel segmentation results and Superpixel-based foreground–background classification results: (<b>a</b>) Superpixel segmentation results; (<b>b</b>) Superpixel-based foreground–background classification results; (<b>c</b>) Amplitude distribution of foreground and background superpixels.</p>
Full article ">Figure 7
<p>Schematic diagram of dual threshold polarization modulation.</p>
Full article ">Figure 8
<p>Edge strength extracted by ROEWA operator before and after image enhancement: (<b>a</b>) Edge strength of original image (HH); (<b>b</b>) Edge strength of optimized image.</p>
Full article ">Figure 9
<p>Flowchart of adaptive contour extraction method.</p>
Full article ">Figure 10
<p>Contour extraction method of adaptive clustering: (<b>a</b>) Edge strength of original image; (<b>b</b>) NMS result; (<b>c</b>) Clustering result (k = 2); (<b>d</b>) Clustering result (k = 3); (<b>e</b>) Strong edge; (<b>f</b>) Final contour.</p>
Full article ">Figure 11
<p>The result of ellipse fitting and schematic of ellipse parameters: (<b>a</b>) The ellipse fitting result; (<b>b</b>) The parameters of ellipse.</p>
Full article ">Figure 12
<p>The optical images of the selected dataset and the PolSAR images with labels: (<b>a</b>) The optical image of data No. 1; (<b>b</b>) The optical image of data No. 2; (<b>c</b>) The optical image of data No. 3; (<b>d</b>) The labeled image of data No. 1; (<b>e</b>) The labeled image of data No. 2; (<b>f</b>) The labeled image of data No. 3.</p>
Full article ">Figure 13
<p>Contour extraction results of single-target PolSAR images: (<b>a</b>–<b>d</b>) are the intensity images of HH, HV, VV, and the polarization modulation result, respectively. (<b>e</b>–<b>h</b>) are the edge-strength maps of HH, HV, VV, and polarization modulation, respectively. (<b>i</b>–<b>l</b>) are the contour results of HH, HV, VV, and polarization modulation, respectively.</p>
Full article ">Figure 14
<p>Contour extraction results of multi-target PolSAR images: (<b>a</b>–<b>d</b>) are the intensity images of HH, HV, VV, and the polarization modulation result, respectively. (<b>e</b>–<b>h</b>) are the edge-strength maps of HH, HV, VV, and polarization modulation, respectively. (<b>i</b>–<b>l</b>) are the contour results of HH, HV, VV, and polarization modulation, respectively.</p>
Full article ">Figure 15
<p>Detection results at different IoU thresholds.</p>
Full article ">Figure 16
<p>The results of ship contour and ellipse fitting with different images: (<b>a</b>–<b>e</b>) are the fitting results of HH, HV, VV, SPAN, and polarization modulation images, respectively.</p>
Full article ">Figure 17
<p>Ship size extraction results: (<b>a</b>–<b>c</b>) are the extraction results of length, width, and orientation, respectively.</p>
Full article ">
16 pages, 5920 KiB  
Article
Pixel-Level Decision Fusion for Land Cover Classification Using PolSAR Data and Local Pattern Differences
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2024, 13(19), 3846; https://doi.org/10.3390/electronics13193846 - 28 Sep 2024
Viewed by 590
Abstract
Combining various viewpoints to produce coherent and cohesive results requires decision fusion. These methodologies are essential for synthesizing data from multiple sensors in remote sensing classification in order to make conclusive decisions. Using fully polarimetric Synthetic Aperture Radar (PolSAR) imagery, our study combines [...] Read more.
Combining various viewpoints to produce coherent and cohesive results requires decision fusion. These methodologies are essential for synthesizing data from multiple sensors in remote sensing classification in order to make conclusive decisions. Using fully polarimetric Synthetic Aperture Radar (PolSAR) imagery, our study combines the benefits of both decompositions for land cover detection by extracting Pauli’s and Krogager’s decomposition components. The Local Pattern Differences (LPD) method was employed on every decomposition component for pixel-level texture feature extraction. These extracted features were utilized to train three independent classifiers. Ultimately, these findings were handled as independent decisions for each land cover type and were fused using a decision fusion rule to produce complete and enhanced classification results. As part of our approach, the most appropriate classifiers and decision rules were selected after a thorough examination, together with the mathematical foundations required for effective decision fusion. Incorporating qualitative and quantitative information into the decision fusion process ensures robust and reliable classification results. The innovation of our approach lies in the dual use of decomposition methods and the application of a simple but effective decision fusion strategy. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
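The pixel-level fusion step described in this abstract can be illustrated with the simplest possible decision rule, a per-pixel majority vote over the label maps produced by the three classifiers. This is a hedged sketch only: the paper's actual fusion rule and classifier set may differ, and the class encoding in the toy example (sea and urban as 0 and 1) is an assumption made for illustration.

```python
import numpy as np

def majority_vote_fusion(label_maps, classes):
    """Fuse per-pixel decisions from several classifiers by majority vote.

    label_maps : list of 2-D integer label arrays of identical shape.
    classes    : 1-D array of possible labels; ties resolve to the first class.
    """
    stack = np.stack(label_maps)                                   # (n_clf, H, W)
    votes = np.stack([(stack == c).sum(axis=0) for c in classes])  # (n_cls, H, W)
    return np.asarray(classes)[votes.argmax(axis=0)]

# Toy example: three 2 x 2 decision maps over two classes (0 = sea, 1 = urban).
maps = [np.array([[0, 1], [1, 1]]),
        np.array([[0, 0], [1, 0]]),
        np.array([[0, 1], [1, 1]])]
print(majority_vote_fusion(maps, np.array([0, 1])))  # [[0 1]
                                                     #  [1 1]]
```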
Show Figures

Figure 1

Figure 1
<p>Study area: the broader area of Vancouver. Map data ©2024: Google, Landsat/Copernicus.</p>
Full article ">Figure 2
<p>Correction of geometric distortions in the ALOS ascending image: (<b>a</b>) amplitude of the original image, (<b>b</b>) amplitude of the calibrated image, (<b>c</b>) Pauli component, (<b>d</b>) Krogager component, (<b>e</b>) georeferenced Pauli component, and (<b>f</b>) georeferenced Krogager component.</p>
Full article ">Figure 3
<p>RGB representation of our study area: (<b>a</b>) Krogager’s scattering components and (<b>b</b>) Pauli’s scattering components.</p>
Full article ">Figure 4
<p>Illustration of the quantization process for a 5 × 5 pixel window. The intensity of each neighboring pixel (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>g</mi> </mrow> <mrow> <mi>i</mi> </mrow> </msub> </mrow> </semantics></math>) is compared with that of the central pixel (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>g</mi> </mrow> <mrow> <mi>c</mi> </mrow> </msub> </mrow> </semantics></math>) to detect the local patterns. This procedure is then repeated for all pixels of the study area.</p>
Full article ">Figure 5
<p>Windows used for classification in our study area: (<b>a</b>) Krogager and (<b>b</b>) Pauli.</p>
Full article ">Figure 6
<p>Clusters of datasets: (<b>a</b>) training dataset, (<b>b</b>) testing dataset. Blue spots: sea, red spots: urban, yellow spots: crops, and green spots: forest.</p>
Full article ">