Search Results (438)

Search Parameters:
Keywords = PolSAR

16 pages, 5920 KiB  
Article
Pixel-Level Decision Fusion for Land Cover Classification Using PolSAR Data and Local Pattern Differences
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2024, 13(19), 3846; https://doi.org/10.3390/electronics13193846 - 28 Sep 2024
Viewed by 236
Abstract
Combining various viewpoints to produce coherent and cohesive results requires decision fusion. Such methodologies are essential for synthesizing data from multiple sensors in remote sensing classification in order to reach conclusive decisions. Using fully polarimetric Synthetic Aperture Radar (PolSAR) imagery, our study combines the benefits of two target decomposition approaches by extracting Pauli's and Krogager's decomposition components. The Local Pattern Differences (LPD) method was applied to every decomposition component for pixel-level texture feature extraction, and the extracted features were used to train three independent classifiers. Their outputs were then treated as independent decisions for each land cover type and fused with a decision fusion rule to produce complete and enhanced classification results. After a thorough examination, the most appropriate classifiers and decision rules were selected, together with the mathematical foundations required for effective decision fusion. Incorporating qualitative and quantitative information into the fusion process ensures robust and reliable classification results. The innovation of our approach lies in the dual use of decomposition methods and the application of a simple but effective decision fusion strategy.
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
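The abstract's pipeline — per-pixel Local Pattern Difference texture features from each decomposition component, independent classifiers, and a fusion rule over their decisions — can be illustrated with a short sketch. The window statistic, the majority-vote fusion rule, and all function names below are illustrative assumptions, not the authors' exact LPD formulation or fusion rule.

```python
import numpy as np

def local_pattern_difference(component, win=5):
    """Pixel-level texture feature from one decomposition component.

    Each neighbouring intensity g_i in a win x win window is compared with the
    central intensity g_c; the mean absolute difference is used here as a
    simple stand-in statistic (the paper's exact LPD statistic may differ).
    """
    pad = win // 2
    padded = np.pad(component, pad, mode="reflect")
    rows, cols = component.shape
    feat = np.zeros((rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + win, j:j + win]
            g_c = window[pad, pad]
            feat[i, j] = np.abs(window - g_c).mean()
    return feat

def fuse_decisions(label_maps):
    """Pixel-level decision fusion: majority vote over the integer label maps
    produced by the independent classifiers (one simple fusion rule among many)."""
    stacked = np.stack(label_maps, axis=0)          # shape (n_classifiers, H, W)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)
```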
Figures:
Figure 1. Study area: the broader area of Vancouver. Map data ©2024: Google, Landsat/Copernicus.
Figure 2. Correction of geometric distortions in the ALOS ascending image: (a) amplitude of the original image, (b) amplitude of the calibrated image, (c) Pauli component, (d) Krogager component, (e) georeferenced Pauli component, and (f) georeferenced Krogager components.
Figure 3. RGB representation of the study area: (a) Krogager's scattering components and (b) Pauli's scattering components.
Figure 4. Illustration of the quantization process for a 5 × 5 pixel window. Each neighboring pixel's intensity (g_i) is compared with the central pixel's intensity (g_c) to detect local patterns; the procedure is repeated for every pixel of the study area.
Figure 5. Windows used for classification in the study area: (a) Krogager and (b) Pauli.
Figure 6. Clusters of datasets: (a) training dataset, (b) testing dataset. Blue spots: sea; red spots: urban; yellow spots: crops; green spots: forest.
24 pages, 1677 KiB  
Article
CPINet: Towards A Novel Cross-Polarimetric Interaction Network for Dual-Polarized SAR Ship Classification
by Jinglu He, Ruiting Sun, Yingying Kong, Wenlong Chang, Chenglu Sun, Gaige Chen, Yinghua Li, Zhe Meng and Fuping Wang
Remote Sens. 2024, 16(18), 3479; https://doi.org/10.3390/rs16183479 - 19 Sep 2024
Viewed by 481
Abstract
With the rapid development of the modern world, it is imperative to achieve effective and efficient monitoring for territories of interest, especially for the broad ocean area. For surveillance of ship targets at sea, a common and powerful approach is to take advantage of satellite synthetic aperture radar (SAR) systems. Currently, using satellite SAR images for ship classification is a challenging issue due to complex sea situations and the imaging variances of ships. Fortunately, the emergence of advanced satellite SAR sensors has shed much light on the SAR ship automatic target recognition (ATR) task, e.g., utilizing dual-polarization (dual-pol) information to boost the performance of SAR ship classification. Therefore, in this paper, we develop a novel cross-polarimetric interaction network (CPINet) to explore the abundant polarization information of dual-pol SAR images with the help of deep learning strategies, leading to an effective solution for high-performance ship classification. First, we establish a novel multiscale deep feature extraction framework to fully mine the characteristics of dual-pol SAR images in a coarse-to-fine manner. Second, to further leverage the complementary information of dual-pol SAR images, we propose a mixed-order squeeze–excitation (MO-SE) attention mechanism, in which the first- and second-order statistics of the deep features from one single-polarized SAR image are extracted to guide the learning of the other polarization. Then, the intermediate multiscale fused and MO-SE augmented dual-polarized deep feature maps are respectively aggregated by the factorized bilinear coding (FBC) pooling method. Meanwhile, the last multiscale fused deep feature maps for each single-polarized SAR image are also individually aggregated by the FBC. Finally, four kinds of highly discriminative deep representations are obtained for loss computation and category prediction. For better network training, the gradient normalization (GradNorm) method for multitask networks is extended to adaptively balance the contribution of each loss component. Extensive experiments on the three- and five-category dual-pol SAR ship classification datasets collected from the open and free OpenSARShip database demonstrate the superiority and robustness of CPINet compared with state-of-the-art methods for the dual-polarized SAR ship classification task.
(This article belongs to the Special Issue SAR in Big Data Era III)
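As a rough illustration of the mixed-order squeeze–excitation (MO-SE) idea described above, the sketch below re-weights one polarization's feature channels using first- and second-order statistics squeezed from the other polarization. It is a minimal PyTorch sketch under assumptions about the excitation layout; the class name MOSEAttention, the reduction ratio, and the layer arrangement are hypothetical and the paper's actual block may differ.

```python
import torch
import torch.nn as nn

class MOSEAttention(nn.Module):
    """Hypothetical mixed-order squeeze-excitation (MO-SE) style block.

    First-order (channel mean) and second-order (channel variance) statistics of
    the guiding polarization's feature map are squeezed, passed through a small
    excitation MLP, and used to re-weight the other polarization's channels.
    """

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat_guide, feat_target):
        # first- and second-order channel statistics of the guiding branch
        mu = feat_guide.mean(dim=(2, 3))                                   # (B, C)
        var = (feat_guide - mu[:, :, None, None]).pow(2).mean(dim=(2, 3))  # (B, C)
        weights = self.excite(torch.cat([mu, var], dim=1))                 # (B, C)
        return feat_target * weights[:, :, None, None]
```

With dual-pol feature maps of shape (B, 64, H, W), one branch would be re-weighted as, e.g., vh_aug = MOSEAttention(64)(feat_vv, feat_vh); how such a block is wired into the multiscale framework follows the paper, not this sketch.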
Figures:
Figure 1. Overall architecture of the proposed CPINet.
Figure 2. Structure of the MSDFF module as shown in our previous work [30]; Conv, DS, US, and C indicate the convolution block, downsampling, upsampling, and concatenation, respectively.
Figure 3. Structure of the MO-SE attention augmentation module.
Figure 4. Diagram of the FBC pooling method.
Figure 5. Illustration of the five-category SAR ship samples in (a) VH polarization and (b) VV polarization, as also shown in [26,30]. From left to right are ship samples from the tanker, container ship, bulk carrier, general cargo, and cargo categories, respectively.
Figure 6. Parameter setting analysis of hyperparameters (a) k and (b) λ.
Figure 7. Training dynamics of the GradNorm method for (a) weight magnitude, (b) loss value, and (c) loss ratio.
Figure 8. Confusion matrices for (a) the three-category classification task and (b) the five-category classification task. The green entries indicate the TPs for each category.
15 pages, 6660 KiB  
Article
Forest Canopy Height Estimation Combining Dual-Polarization PolSAR and Spaceborne LiDAR Data
by Yao Tong, Zhiwei Liu, Haiqiang Fu, Jianjun Zhu, Rong Zhao, Yanzhou Xie, Huacan Hu, Nan Li and Shujuan Fu
Forests 2024, 15(9), 1654; https://doi.org/10.3390/f15091654 - 19 Sep 2024
Viewed by 545
Abstract
Forest canopy height data are fundamental parameters of forest structure and are critical for understanding terrestrial carbon stock, global carbon cycle dynamics and forest productivity. To address the limitations of retrieving forest canopy height using conventional PolInSAR-based methods, we proposed a method to estimate forest height by combining single-temporal polarimetric synthetic aperture radar (PolSAR) images with sparse spaceborne LiDAR (forest height) measurements. The core idea of our method is that the volume scattering energy observed during radar acquisition varies with forest canopy height. Specifically, our methodology begins by employing a semi-empirical inversion model directly derived from the random volume over ground (RVoG) formulation to establish the relationship between forest canopy height, volume scattering energy and wave extinction. Subsequently, PolSAR decomposition techniques are used to extract canopy volume scattering energy. Additionally, machine learning is employed to generate a spatially continuous extinction coefficient product, utilizing sparse LiDAR samples for assistance. Finally, with the derived inversion model and the resulting model parameters (i.e., volume scattering power and extinction coefficient), forest canopy height can be estimated. The performance of the proposed forest height inversion method is illustrated with L-band NASA/JPL UAVSAR data from the AfriSAR campaign over the Gabon Lope National Park, together with airborne LiDAR data. Validated against the high-accuracy airborne LiDAR data, the forest canopy height obtained from the proposed approach exhibited high accuracy (R2 = 0.92, RMSE = 6.09 m). The results demonstrate the potential and merit of the synergistic combination of PolSAR (volume scattering power) and sparse LiDAR (forest height) measurements for forest height estimation. Additionally, our approach achieves good performance in forest height estimation, with accuracy comparable to that of the multi-baseline PolInSAR-based inversion method (RMSE = 5.80 m) and surpassing traditional PolSAR-based methods with an accuracy of 10.86 m. Given the simplicity and efficiency of the proposed method, it has the potential for large-scale forest height estimation applications when only single-temporal dual-polarization acquisitions are available.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
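The semi-empirical RVoG-style relation between canopy height, volume scattering power, and extinction can be sketched as a monotonic forward model that is inverted numerically once the decomposition supplies the volume power and the machine-learning regressor supplies the extinction coefficient. The functional form, the fixed incidence angle, and the function names below are assumptions for illustration only, not the exact model of the paper.

```python
import numpy as np

def volume_power_model(height, sigma, theta=np.deg2rad(35.0)):
    """Illustrative RVoG-style forward model: volume power proportional to the
    integral of the exponential extinction profile exp(2*sigma*z/cos(theta))
    over the canopy 0..height (up to a calibration constant)."""
    k = 2.0 * sigma / np.cos(theta)
    return (np.exp(k * height) - 1.0) / k

def invert_height(p_vol, sigma, h_max=60.0, tol=1e-3):
    """Bisection inversion of the monotonic height-to-power relation for one
    pixel, given the volume scattering power from the decomposition and the
    extinction coefficient predicted by the machine-learning regressor."""
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume_power_model(mid, sigma) < p_vol:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```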
Figures:
Figure 1. A flowchart of the methodology proposed for the estimation of forest canopy height.
Figure 2. The geolocation of the study area: (a) optical imagery; (b) the digital elevation model (DEM) of the study area. The orange rectangles in (a,b) indicate the coverage of the airborne PolSAR data.
Figure 3. Datasets: (a) multi-looked and geocoded SAR image in Pauli basis color combination; (b) ICESat-2 ATL08 sampling points; (c) LVIS forest height.
Figure 4. (a) Volume scattering power; (b) extinction coefficient.
Figure 5. Importance ranking of each variable in the extinction coefficient estimation model.
Figure 6. (a) Forest height map derived by the proposed method; (b) validation plot of the forest height inversion, where the color transition from blue to red indicates an increasing density of points.
Figure 7. (a) Forest height derived via the PolSAR inversion method in [18]; (b) scatterplot of the validation results.
30 pages, 11567 KiB  
Article
Gini Coefficient-Based Feature Learning for Unsupervised Cross-Domain Classification with Compact Polarimetric SAR Data
by Xianyu Guo, Junjun Yin, Kun Li and Jian Yang
Agriculture 2024, 14(9), 1511; https://doi.org/10.3390/agriculture14091511 - 3 Sep 2024
Viewed by 548
Abstract
Remote sensing image classification usually needs many labeled samples so that the target nature can be fully described. For synthetic aperture radar (SAR) images, variations of the target scattering always happen to some extent due to the imaging geometry, weather conditions, and system parameters. Therefore, labeled samples in one image may not be suitable to represent the same target in other images. The domain distribution shift between different images reduces the reusability of the labeled samples. Thus, exploring cross-domain interpretation methods is of great potential for SAR images to improve the reuse rate of existing labels from historical images. In this study, an unsupervised cross-domain classification method is proposed that utilizes the Gini coefficient to rank the robust and stable polarimetric features in both the source and target domains (GFRST) such that unsupervised domain adaptation (UDA) can be achieved. This method selects the optimal features from both the source and target domains to alleviate the domain distribution shift. Both fully polarimetric (FP) and compact polarimetric (CP) SAR features are explored for cross-domain terrain type classification. Specifically, the CP mode refers to the hybrid dual-pol mode with an arbitrary transmitting ellipse wave. This is the first attempt in the open literature to investigate the representing abilities of different CP modes for cross-domain terrain classification. Experiments are conducted from four aspects to demonstrate the performance of CP modes for cross-data, cross-scene, and cross-crop type classification. Results show that the GFRST-UDA method yields a classification accuracy 2% to 12% higher than traditional UDA methods. The degree of scene similarity has a certain impact on the accuracy of cross-domain crop classification. It was also found that when both the FP and circular CP SAR data are used, stable, promising results can be achieved.
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
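A minimal sketch of the Gini-coefficient feature ranking idea is given below: the Gini coefficient is computed per feature in both domains, and only features ranked highly in both are retained before applying a standard UDA method. The ranking direction (higher Gini preferred), the top-k intersection rule, and the function names are illustrative assumptions rather than the paper's exact GFRST procedure.

```python
import numpy as np

def gini_coefficient(values):
    """Gini coefficient of a non-negative 1D sample (0 = perfectly uniform,
    values approaching 1 = highly concentrated)."""
    v = np.sort(np.asarray(values, dtype=np.float64))
    n = v.size
    return 2.0 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1.0) / n

def select_stable_features(source_feats, target_feats, top_k=10):
    """Rank features by Gini coefficient in both domains and keep those ranked
    in the top_k of both (an illustrative reading of the GFRST idea), before
    handing the reduced feature sets to a standard UDA method."""
    n_feat = source_feats.shape[1]
    src_order = np.argsort([-gini_coefficient(source_feats[:, i]) for i in range(n_feat)])
    tgt_order = np.argsort([-gini_coefficient(target_feats[:, i]) for i in range(n_feat)])
    return sorted(set(src_order[:top_k]) & set(tgt_order[:top_k]))
```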
Figures:
Figure 1. The Pauli decomposition of FP SAR data: (a–d) SAR images of San Francisco from four radar satellites (RADARSAT-2, ALOS-1, ALOS-2, and GF-3), respectively; (e) a GF-3 SAR image of Qingdao; (f) a RADARSAT-2 SAR image of Jiangsu; (g) a RADARSAT-2 SAR image of the Yellow River.
Figure 2. Field investigation pictures of five kinds of ground objects in Jiangsu: (a) T-H; (b) D-J; (c) urban; (d) shoal; (e) water.
Figure 3. Field investigation pictures of three kinds of ground objects in the Yellow River area: (a) wheat; (b) water; (c) urban.
Figure 4. Flow chart of the methodology.
Figure 5. Feature importance ranking for the source and target domains for CP SAR data with circular polarization transmitting.
Figure 6. The overall accuracy of cross-domain classification from SAR satellites with different band channels based on the UDA method for CP SAR data with circular polarization transmitting: (a) SA result; (b) TCA result; (c) JDA result; (d) CORAL result; (e) BDA result; (f) GFK result; (g) MEDA result; (h) MEAN result.
Figure 7. The overall accuracy of cross-domain classification for CP SAR data with circular polarization transmitting: (a) mean accuracy of cross-domain image classification; (b) mean accuracy of cross-domain image classification across different SAR frequency bands. All CP-UDA: cross-domain classification based on all CP features. GFRS-UDA: cross-domain classification based on Gini coefficient feature ranking only in the source domain. GFRST-UDA: cross-domain classification based on the proposed method. Supervision: supervised classification based on the K-Nearest Neighbor classifier.
Figure 8. Cross-domain image (ALOS1-Sanf→GF3-Sanf) classification maps for CP SAR data with circular polarization transmitting: (a) SA result; (b) GFRS-SA result; (c) GFRST-SA result; (d) Supervision result.
Figure 9. Scatter plots of source and target domain feature alignment: (a–d) UJDA scatter plots; (a1–d1) GFRS-UJDA scatter plots; (a2–d2) GFRST-UJDA scatter plots. (a,a1,a2) Scatter plots of the source domain; (b,b1,b2) scatter plots of the aligned source domain; (c,c1,c2) scatter plots of the aligned target domain; (d,d1,d2) scatter plots of the target domain.
Figure 10. Histograms of the dispersion coefficient of the source and target domains and of the aligned source and target domains: (a–c) histograms of the dispersion coefficient based on the UJDA, GFRS-UJDA, and GFRST-UJDA methods, respectively.
Figure 11. The overall accuracy statistics of cross-domain image classification for CP SAR data with circular polarization transmitting.
Figure 12. Cross-domain image classification results for CP SAR data with circular polarization transmitting: (a–d) (GF3-Qingdao→RS2-Sanf) and (a1–d1) (ALOS2-Sanf→GF3-Qingdao) are cross-domain image classification maps based on the SA, GFRS-SA, GFRST-SA, and supervision classification methods, respectively.
Figure 13. Scatter plots of source and target domain feature alignment: (a–d) JDA scatter plots; (a1–d1) GFRST-JDA scatter plots. (a,a1) Scatter plots of the source domain; (b,b1) scatter plots of the aligned source domain; (c,c1) scatter plots of the aligned target domain; (d,d1) scatter plots of the target domain.
Figure 14. The overall accuracy of cross-domain classification based on the UDA and GFRST-UDA methods for the FP + GCP SAR data: (a–d) overall accuracy for the FP + CP SAR (θ = π/4, χ = π/4), FP + CP SAR (θ = π/4, χ = π/6), FP + CP SAR (θ = π/4, χ = π/8), and FP + CP SAR (θ = π/4, χ = 0) data, respectively.
Figure 15. Cross-domain image classification results for the FP + CP SAR (θ = π/4, χ = π/4) data: (a–h) SA results; (a1–h1) GFRST-SA results. The results in (a–h) and (a1–h1) correspond to eight cross-domain pair classification maps, respectively.
Figure 16. The overall accuracy statistics of cross-domain classification for the FP + CP SAR and the FP SAR data, respectively: (a) UDA result; (b) GFRST-UDA result.
Figure 17. The overall accuracy of cross-domain classification based on the UDA and GFRST-UDA methods for the GCP SAR data: (a–d) overall accuracy for the FP + CP SAR (θ = π/4, χ = π/4), FP + CP SAR (θ = π/4, χ = π/6), FP + CP SAR (θ = π/4, χ = π/8), and FP + CP SAR (θ = π/4, χ = 0) data, respectively.
Figure 18. Cross-domain image classification maps for the FP + CP SAR (θ = π/4, χ = π/4), FP + CP SAR (θ = π/4, χ = π/6), FP + CP SAR (θ = π/4, χ = π/8), FP + CP SAR (θ = π/4, χ = 0), and FP SAR data based on the GFRST-SA method, respectively: (a–e), (a1–e1), (a2–e2), and (a3–e3) are the cross-domain image (GF3 Qingdao, RS2 Sanf, RS2 Jiangsu T-H, and RS2 Jiangsu D-J→RS2 Yellow River) results, respectively.
Figure 19. The overall accuracy of cross-domain classification for the FP + CP SAR and the FP SAR data, respectively: (a) UDA result; (b) GFRST-UDA result.
21 pages, 5624 KiB  
Article
A Multi-Baseline Forest Height Estimation Method Combining Analytic and Geometric Expression of the RVoG Model
by Bing Zhang, Hongbo Zhu, Weidong Song, Jianjun Zhu, Jiguang Dai, Jichao Zhang and Chengjin Li
Forests 2024, 15(9), 1496; https://doi.org/10.3390/f15091496 - 27 Aug 2024
Viewed by 410
Abstract
As an important parameter of forest biomass, forest height is of great significance for the calculation of forest carbon stock and the study of the carbon cycle in large-scale regions. Current forest height inversion methods using multi-baseline P-band polarimetric interferometric synthetic aperture radar (PolInSAR) data mainly select the best baseline for inversion. However, selecting only the optimal baseline leaves the remaining abundant observations unused. To solve this problem, we propose a multi-baseline forest height inversion method combining analytic and geometric expressions of the random volume over ground (RVoG) model, which retains the advantages of optimal-baseline selection while exploiting multi-baseline information. For each pixel, an optimal baseline is selected according to the geometric structure of the coherence region, and the functional model for forest height inversion is established from the RVoG model's analytic expression; the remaining baseline observations are transformed into constraint conditions according to the RVoG model's geometric expression and are thereby also involved in the inversion. PolInSAR data were used to validate the proposed multi-baseline forest height inversion method. The results show that the accuracy of the forest height inversion with the proposed algorithm in a coniferous forest area and a tropical rainforest area was improved by 17% and 39%, respectively. The method proposed in this paper provides a multi-baseline PolInSAR forest height inversion scheme for exploring regional high-precision forest height distribution and is applicable to large-scale, high-precision forest height inversion tasks.
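To make the multi-baseline idea concrete, the sketch below uses the standard RVoG analytic volume-coherence expression and a weighted misfit over all baselines, with the per-baseline weights standing in for the paper's optimal-baseline selection and geometric constraints. It is a simplified, hedged illustration under assumed parameter values; the constrained formulation in the paper differs in detail.

```python
import numpy as np

def rvog_volume_coherence(height, sigma, kz, theta=np.deg2rad(35.0)):
    """Standard RVoG analytic expression for the volume-only complex coherence,
    given canopy height, mean extinction sigma, and vertical wavenumber kz."""
    p1 = 2.0 * sigma / np.cos(theta)
    p2 = p1 + 1j * kz
    return (p1 / p2) * (np.exp(p2 * height) - 1.0) / (np.exp(p1 * height) - 1.0)

def invert_height_multibaseline(obs_coherences, kz_list, weights, sigma=0.2,
                                heights=np.linspace(1.0, 40.0, 400)):
    """Grid-search the height minimising a weighted coherence misfit over all
    baselines; the per-baseline weights stand in for the paper's optimal-baseline
    selection and geometric constraints."""
    costs = []
    for h in heights:
        cost = 0.0
        for g_obs, kz, w in zip(obs_coherences, kz_list, weights):
            cost += w * abs(g_obs - rvog_volume_coherence(h, sigma, kz)) ** 2
        costs.append(cost)
    return heights[int(np.argmin(costs))]
```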
Figures:
Figure 1. RVoG model coherence loci: (a) ideal-state RVoG model coherence loci; (b) ideal-state RVoG model multi-baseline coherence loci; (c) real-state RVoG model multi-baseline coherence loci.
Figure 2. Normalized schematic of the ground-to-volume scattering amplitude ratio: (a) schematic in the theoretical state; (b) schematic in the natural state.
Figure 3. Schematic diagram of the dual-baseline forest height inversion algorithm.
Figure 4. Schematic diagram of the multi-baseline forest height inversion algorithm.
Figure 5. Geometric constraint construction method based on the RVoG model: (a) schematic in the theoretical state; (b) schematic in the natural state.
Figure 6. Schematic diagram of the conversion of surface phases to surface elevations; superscripts 1, 2, and q in the figure indicate baselines.
Figure 7. (a) Forest height inversion results of the traditional multi-baseline inversion algorithm; (b) forest height inversion results of the algorithm proposed in this paper; (c) LiDAR forest height products covering part of the Krycklan coniferous forest region.
Figure 8. Cross-validation plots of the forest heights inverted by the two different algorithms against the LiDAR forest height products covering part of the Mabounie rainforest region: (a) results of the traditional multi-baseline inversion algorithm; (b) results of the algorithm proposed in this paper.
Figure 9. (a) Forest height inversion results of the traditional multi-baseline algorithm; (b) forest height inversion results of the algorithm proposed in this paper; (c) LiDAR forest height products covering part of the Krycklan coniferous forest region.
Figure 10. Cross-validation plots of the forest heights inverted by the two different algorithms against the LiDAR forest height products covering part of the Krycklan coniferous forest region: (a) results of the traditional multi-baseline inversion algorithm; (b) results of the algorithm proposed in this paper.
25 pages, 94594 KiB  
Article
Harbor Detection in Polarimetric SAR Images Based on Context Features and Reflection Symmetry
by Chun Liu, Jie Gao, Shichong Liu, Chao Li, Yongchao Cheng, Yi Luo and Jian Yang
Remote Sens. 2024, 16(16), 3079; https://doi.org/10.3390/rs16163079 - 21 Aug 2024
Viewed by 528
Abstract
The detection of harbors presents difficulties related to their diverse sizes, varying morphology and scattering, and complex backgrounds. To avoid the extraction of unstable geometric features, in this paper, we propose an unsupervised harbor detection method for polarimetric SAR images using context features and polarimetric reflection symmetry. First, the image is segmented into three region types, i.e., low-scattering water regions, strong-scattering urban regions, and other regions, based on a multi-region Markov random field (MRF) segmentation method. Second, by leveraging the fact that harbors are bordered by water on one side and by a large number of buildings on the other, the coastal narrow-band area is extracted from the low-scattering regions, and the harbor regions of interest (ROIs) are determined by extracting the strong-scattering regions within the narrow-band area. Finally, by using the scattering reflection asymmetry of harbor buildings, harbors are identified based on the global threshold segmentation of the horizontal, vertical, and circular co- and cross-polarization correlation powers of the extracted ROIs. The effectiveness of the proposed method was validated with experiments on RADARSAT-2 quad-polarization images of Zhanjiang, Fuzhou, Lingshui, and Dalian, China; San Francisco, USA; and Singapore. The proposed method had high detection rates and low false detection rates in the complex coastal environment scenarios studied, far outperforming the traditional spatial harbor detection method considered for comparison.
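A compact sketch of the detection chain — coastal narrow-band extraction around the water mask, ROI selection from strong-scattering regions inside the band, and reflection-symmetry screening via the co-/cross-polarization correlation power — is given below. The morphological narrow-band construction, the per-ROI mean test, and all function names are assumptions made for illustration; the paper's MRF segmentation and exact global thresholding are not reproduced here.

```python
import numpy as np
from scipy.ndimage import binary_dilation, label, uniform_filter

def coastal_narrow_band(water_mask, radius=20):
    """Narrow land strip adjacent to water: dilate the water mask and keep only
    the newly covered land pixels (a stand-in for the coastline-based extraction)."""
    dilated = binary_dilation(water_mask, iterations=radius)
    return dilated & ~water_mask

def copol_xpol_correlation_power(s_co, s_cross, win=5):
    """Spatially averaged |<S_co * conj(S_cross)>|: close to zero for
    reflection-symmetric natural clutter, larger over harbor buildings."""
    prod = s_co * np.conj(s_cross)
    return np.hypot(uniform_filter(prod.real, win), uniform_filter(prod.imag, win))

def detect_harbor_rois(strong_scatter_mask, narrow_band, asym_power, threshold):
    """Keep strong-scattering connected regions inside the coastal band whose
    mean correlation power exceeds a global threshold."""
    rois, n_regions = label(strong_scatter_mask & narrow_band)
    return [rois == k for k in range(1, n_regions + 1)
            if asym_power[rois == k].mean() > threshold]
```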
Figures:
Figure 1. Diagram of harbor features: (a) different harbors; (b) Port of Capri, Italy; (c) harbor structure.
Figure 2. Flowchart of the proposed method: (a) algorithm details; (b) algorithm illustration, where (1) shows the Pauli pseudo-color image, (2) and (3) show the results of water and urban region extraction using Markov random field (MRF) segmentation, (4) and (5) show the results of region of interest (ROI) extraction, in which the white band is the extracted coastal narrow-band region and the red boxes are the extracted ROIs, and (6) shows the result of ROI detection, in which the detected harbors are marked with red boxes.
Figure 3. An example of a RADARSAT-2 harbor image (Dalian) affected by echo sidelobes: (a) the whole Pauli pseudo-color image, where the Dalian port is marked with a blue box; (b) the Dalian port area in (a).
Figure 4. Diagram of coastal narrow-band area extraction.
Figure 5. The distribution of the co- and cross-polarization correlation coefficients (PCCs) on different bases in an image of San Francisco acquired with RADARSAT-2: (a) Pauli pseudo-color image; (b) the ground truth, where some urban, vegetation, and ocean regions are marked in red, green, and blue, respectively; (c) the pseudo-color image of the three PCCs in (7), where the red, green, and blue channels correspond to the horizontal, vertical, and circular PCCs, respectively; (d–f) the horizontal, vertical, and circular PCC histograms, respectively, of the urban, vegetation, and water regions.
Figure 6. Pauli pseudo-color images (a1–f1) and ground truth (a2–f2) of the experimental data: (a1,a2) Zhanjiang, China; (b1,b2) Fuzhou, China; (c1,c2) San Francisco, USA; (d1,d2) Singapore; (e1,e2) Lingshui, China; (f1,f2) Dalian, China.
Figure 7. Illustration of the proposed harbor detection method applied to an image of Singapore: (a) Pauli pseudo-color image and ground truth of harbors, marked with green boxes; (b) segmentation result after abnormal water area processing; (c) water extraction result; (d) coastal narrow-band area; (e) result of the strong-scattering areas of interest in the narrow-band area; (f) result of harbor ROIs, marked with red boxes; (g) harbor ROIs in the Pauli pseudo-color image marked with red boxes and incorrectly detected targets marked with white numbered rectangles; (h) pseudo-color image generated from the horizontal, vertical, and circular co- and cross-polarization correlation powers; (i) final harbor detection results, marked with red boxes.
Figure 8. Results of the proposed method in different scenarios, where the colored boxes have the same meaning as in Figure 7. (ai–fi) Experimental results for image i in Table 1. (a1–a6) Pauli pseudo-color images and ground truth of harbors. (b1–b6) Results of water and urban region extraction. (c1–c6) Results of the coastal narrow-band area determined according to the coastline. (d1–d6) Results of harbor ROIs determined according to strong-scattering areas in the narrow-band zone. (e1–e6) Results of harbor ROIs in the Pauli pseudo-color images. (f1–f6) Harbor regions detected based on co- and cross-polarization correlation powers, where detected harbors are marked with red boxes and white numbers 1–3 denote false alarm targets caused by sea-crossing bridges.
Figure 9. Results of the proposed method for the image of Zhanjiang with different narrow-band radii. (ai–ci) Experimental results for radius rad i + 1. (a1–a5) Coastal narrow-band regions. (b1–b5) Results of the harbor ROIs extracted. (c1–c5) Results of the harbors detected, where correctly detected targets are marked in blue and false alarms in red.
Figure 10. Results of the proposed method for the image of Zhanjiang with different false alarm rates, where correctly detected targets are marked in blue and false alarms in red. (a–e) Detection results for false alarm rates (Pfa) of 0.01, 0.02, 0.05, 0.08, and 0.1, respectively.
Figure 11. Results of the proposed method for the image of Zhanjiang with different area ratio thresholds, where correctly detected targets are marked in blue and false alarms in red. (a–e) Detection results for threshold values (ρ) of 0.3, 0.4, 0.5, 0.6, and 0.7, respectively.
Figure 12. Horizontal, vertical, and circular co- and cross-polarization correlation power histograms for water, vegetation, and artificial structures in the harbor area. (a) Histogram of the horizontal polarization correlation powers of the three areas (Harbor_urban for buildings, Harbor_veg for vegetation, and Harbor_water for water). (b) Histogram of the vertical polarization correlation powers. (c) Histogram of the circular polarization correlation powers. (d) Histogram of the three different polarization correlation powers in the harbor area.
Figure 13. Harbor detection results for images 1–6 in Table 1 obtained using the jetty scanning method (comparison method), where the detected ROIs and harbors are all marked with red boxes. (ai–ei) Results of the comparison method for image i. (a1–a6) Results of water extraction. (b1–b6) Results based on scanning of two pairs of orthogonally directed jetties. (c1–c6) Harbor areas detected by merging scanned jetties by distance. (d1–d6) Detected harbor areas on Pauli pseudo-color versions of the multi-look images. (e1–e6) Results on Pauli pseudo-color versions of the original images.
23 pages, 2216 KiB  
Article
Complex-Valued 2D-3D Hybrid Convolutional Neural Network with Attention Mechanism for PolSAR Image Classification
by Wenmei Li, Hao Xia, Jiadong Zhang, Yu Wang, Yan Jia and Yuhong He
Remote Sens. 2024, 16(16), 2908; https://doi.org/10.3390/rs16162908 - 9 Aug 2024
Cited by 1 | Viewed by 866
Abstract
The recently introduced complex-valued convolutional neural network (CV-CNN) has shown considerable advancements for polarimetric synthetic aperture radar (PolSAR) image classification by effectively incorporating both magnitude and phase information. However, a solitary 2D or 3D CNN encounters challenges such as insufficient extraction of features along the scattering channel dimension or an excessive number of computational parameters. Moreover, these networks treat all information as equally important by default, consuming considerable resources to process irrelevant information. To address these issues, this study presents a new hybrid CV-CNN with an attention mechanism (CV-2D/3D-CNN-AM) to classify PolSAR ground objects, possessing both excellent computational efficiency and feature extraction capability. In the proposed framework, multi-level discriminative features are extracted from preprocessed data through hybrid networks in the complex domain, along with a special attention block to filter feature importance in both the spatial and channel dimensions. Experiments performed on three PolSAR datasets demonstrate the superiority of the present approach over existing ones. Furthermore, ablation experiments confirm the validity of each module, highlighting the model's robustness and effectiveness.
(This article belongs to the Special Issue Advances in Synthetic Aperture Radar Data Processing and Application)
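The complex-valued convolution at the heart of CV-CNNs is commonly realized with two real-valued convolutions applied to the real and imaginary parts, following the product rule of complex numbers. The PyTorch sketch below illustrates this general principle only; the module name and layer configuration are assumptions, not the paper's exact CV-2D/3D-CNN-AM building block.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex-valued convolution realized with two real convolutions:
    (x_r + i x_i) * (w_r + i w_i) = (x_r*w_r - x_i*w_i) + i(x_r*w_i + x_i*w_r)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, x_real, x_imag):
        real = self.conv_r(x_real) - self.conv_i(x_imag)
        imag = self.conv_r(x_imag) + self.conv_i(x_real)
        return real, imag
```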
Figures:
Figure 1. The operational domain of different attention modules: (a) channel attention; (b) spatial attention; (c) channel and spatial attention. C represents the channel domain; H and W represent the spatial domain.
Figure 2. Architecture of the proposed CV-2D/3D-CNN-AM for PolSAR image classification.
Figure 3. Comparison between 2D convolution and 3D convolution: (a) 2D convolution; (b) 3D convolution.
Figure 4. The implementation of complex-valued convolution.
Figure 5. The improved attention block.
Figure 6. Flevoland dataset: (a) Pauli-RGB image; (b) ground truth; (c) category legend.
Figure 7. San Francisco dataset: (a) Pauli-RGB image; (b) ground truth; (c) category legend.
Figure 8. Oberpfaffenhofen dataset: (a) Pauli-RGB image; (b) ground truth; (c) category legend.
Figure 9. Classification maps of the Flevoland dataset: (a) ground truth; (b) SVM; (c) CV-MLP; (d) CV-2D-CNN; (e) CV-3D-CNN; (f) CV-FCN; (g) CCN-WT; (h) PolSF; (i) CV-2D/3D-CNN-AM.
Figure 10. Loss and accuracy curves of the proposed method for training and validation data on the Flevoland dataset.
Figure 11. Classification maps of the San Francisco dataset: (a) ground truth; (b) SVM; (c) CV-MLP; (d) CV-2D-CNN; (e) CV-3D-CNN; (f) CV-FCN; (g) CCN-WT; (h) PolSF; (i) CV-2D/3D-CNN-AM.
Figure 12. Loss and accuracy curves of the proposed method for training and validation data on the San Francisco dataset.
Figure 13. Classification maps of the Oberpfaffenhofen dataset: (a) ground truth; (b) SVM; (c) CV-MLP; (d) CV-2D-CNN; (e) CV-3D-CNN; (f) CV-FCN; (g) CCN-WT; (h) PolSF; (i) CV-2D/3D-CNN-AM.
Figure 14. Loss and accuracy curves of the proposed method for training and validation data on the Oberpfaffenhofen dataset.
25 pages, 8439 KiB  
Article
On Unsupervised Multiclass Change Detection Using Dual-Polarimetric SAR Data
by Minhwa Kim, Seung-Jae Lee and Sang-Eun Park
Remote Sens. 2024, 16(15), 2858; https://doi.org/10.3390/rs16152858 - 5 Aug 2024
Viewed by 617
Abstract
Change detection using SAR data has been an active topic in various applications. Because conventional change detection methods identify signal changes in single-pol radar observations, they cannot separately detect different kinds of change associated with different ground parameters. In this study, we investigated the comprehensive use of dual-pol parameters and propose a novel dual-pol-based change detection framework utilizing different dual-pol scatter-type indicators. To optimize the exploitation of dual-pol change information, we present a two-step processing strategy that divides the multiclass change detection process into a binary detection step, which identifies the presence of changes, and a classification step, which distinguishes the types of change. In the detection stage, each dual-pol parameter is considered an independent information source. Assuming potential conflict between dual-pol parameters, a disjunctive combination of the detection results from the different dual-pol parameters is applied to obtain the final detection result. In the classification step, an unsupervised change classification strategy is proposed based on the change direction and magnitude of the dual-pol parameters within the change class. Experimental results exhibited significantly improved detectability across a wide change spectrum compared with previous dual-pol-based change detection approaches. They also demonstrated the possibility of distinguishing different semantic changes without in situ ground data.
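The two-step strategy can be sketched as (1) a disjunctive (logical-OR) fusion of per-parameter binary detections and (2) a change-direction/magnitude description of each dual-pol parameter pair in log-ratio space, as the abstract outlines. The thresholds, the choice of parameter pair, and the function names below are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def detect_changes(parameter_ratios, thresholds):
    """Binary detection step: per-parameter log-ratio thresholding followed by a
    disjunctive (logical-OR) combination of the individual detection masks."""
    masks = [np.abs(np.log(ratio)) > t for ratio, t in zip(parameter_ratios, thresholds)]
    return np.logical_or.reduce(masks)

def change_direction_magnitude(pair_t1, pair_t2):
    """Classification step inputs: change magnitude r and direction theta of a
    dual-pol parameter pair (e.g. span and RVI) in log-ratio space."""
    dx = np.log(pair_t2[0]) - np.log(pair_t1[0])
    dy = np.log(pair_t2[1]) - np.log(pair_t1[1])
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```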
Show Figures

Figure 1

Figure 1
<p>The workflow of proposed approaches for multiclass change detection using the bi-temporal dual-pol SAR data.</p>
Full article ">Figure 2
<p>Histogram of the change direction <math display="inline"><semantics> <mrow> <mi>θ</mi> </mrow> </semantics></math> of selected dual-pol parameter pairs. (<b>a</b>) Dual-pol intensities (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> </mrow> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> </mrow> </msub> </mrow> </semantics></math>), (<b>b</b>) overall intensity (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </msub> </mrow> </semantics></math>) and depolarization (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </msub> </mrow> </semantics></math>), and (<b>c</b>) overall intensity (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </msub> </mrow> </semantics></math>) and coherence (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>ρ</mi> </mrow> </msub> </mrow> </semantics></math>) for the <span class="html-italic">h</span>-pol transmission case.</p>
Full article ">Figure 3
<p>Histogram of the change magnitude <math display="inline"><semantics> <mrow> <mi>r</mi> </mrow> </semantics></math> of dual-pol parameter pairs. (<b>a</b>) Overall intensity (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </msub> </mrow> </semantics></math>) and depolarization (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </msub> </mrow> </semantics></math>) and (<b>b</b>) overall intensity (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </msub> </mrow> </semantics></math>) and coherence (<math display="inline"><semantics> <mrow> <msub> <mrow> <mi>R</mi> </mrow> <mrow> <mi>ρ</mi> </mrow> </msub> </mrow> </semantics></math>) for the <span class="html-italic">h</span>-pol transmission case.</p>
Full article ">Figure 4
<p>Illustration of the multiclass change classification.</p>
Full article ">Figure 5
<p>Dual-pol SAR images for experimental validation. Color composite of the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math> configuration (red: <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>; green: <math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math>; blue: <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> <mo>/</mo> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math>), acquired on (<b>a</b>) 19 July 2013 and (<b>b</b>) 19 February 2016. Color composite of the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math> configuration (red: <math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math>; green: <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>; blue: <math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> <mo>/</mo> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>), acquired on (<b>c</b>) 19 July 2013 and (<b>d</b>) 19 February 2016.</p>
Full article ">Figure 6
<p>Land cover classification results from (<b>a</b>) the Landsat 8 image on 30 July 2013 and (<b>b</b>) Sentinel-2 image on 15 February 2016. (<b>c</b>) The manually generated reference change map for the study area.</p>
Full article ">Figure 7
<p>Change detection results (back: changed; white: unchanged) for the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math> configuration obtained from the proposed method using four different combinations of dual-pol parameter sets: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>}</mo> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>ρ</mi> <mo>}</mo> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>R</mi> <mi>V</mi> <mi>I</mi> <mo>}</mo> </mrow> </semantics></math>; and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>4</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>ρ</mi> <mo>,</mo> <mo> </mo> <mi>R</mi> <mi>V</mi> <mi>I</mi> <mo>}</mo> </mrow> </semantics></math>. The change detection results derived from the (<b>e</b>) WL [<a href="#B16-remotesensing-16-02858" class="html-bibr">16</a>] and (<b>f</b>) PVA [<a href="#B18-remotesensing-16-02858" class="html-bibr">18</a>] methods. (<b>g</b>) The reference binary change map.</p>
Full article ">Figure 8
<p>Change detection results for the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math> configuration obtained from the proposed method using four different combinations of dual-pol parameter sets: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>}</mo> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>2</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>ρ</mi> <mo>}</mo> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>R</mi> <mi>V</mi> <mi>I</mi> <mo>}</mo> </mrow> </semantics></math>; and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>S</mi> </mrow> <mrow> <mn>4</mn> </mrow> </msub> <mo>=</mo> <mo>{</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>11</mn> </mrow> </msub> <mo>,</mo> <msub> <mrow> <mi>C</mi> </mrow> <mrow> <mn>22</mn> </mrow> </msub> <mo>,</mo> <mi>ρ</mi> <mo>,</mo> <mi>R</mi> <mi>V</mi> <mi>I</mi> <mo>}</mo> </mrow> </semantics></math>. The change detection results derived from the (<b>e</b>) WL and (<b>f</b>) PVA methods. (<b>g</b>) The reference binary change map.</p>
Full article ">Figure 9
<p>Multiclass change classification results for the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math> configuration derived from the (<b>a</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ρ</mi> </mrow> </semantics></math>} and (<b>b</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </semantics></math>} parameter pairs. The change classification results derived from the (<b>c</b>) WL and (<b>d</b>) PVA methods.</p>
Full article ">Figure 10
<p>Sankey diagrams between SAR change classes of the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math> configuration and the reference change map. SAR change classes were obtained by the (<b>a</b>) WL and (<b>b</b>) PVA methods in the previous study, and the proposed method with (<b>c</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ρ</mi> </mrow> </semantics></math>} and (<b>d</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </semantics></math>} parameter pairs.</p>
Full article ">Figure 11
<p>Multiclass change classification results for <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math> configuration derived from the (<b>a</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ρ</mi> </mrow> </semantics></math>} and (<b>b</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </semantics></math>} parameter pairs. The change classification results derived from the (<b>c</b>) WL and (<b>d</b>) PVA methods.</p>
Full article ">Figure 12
<p>Sankey diagrams between SAR change classes of the <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math> configuration and the reference change map. SAR change classes obtained by the (<b>a</b>) WL and (<b>b</b>) PVA methods in the previous study and the proposed method with (<b>c</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ρ</mi> </mrow> </semantics></math>} and (<b>d</b>) {<math display="inline"><semantics> <mrow> <mi>s</mi> <mi>p</mi> <mi>a</mi> <mi>n</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>V</mi> <mi>I</mi> </mrow> </semantics></math>} parameter pairs.</p>
Full article ">Figure 13
<p>(<b>a</b>) The composition of the SAR change classes for the reference land cover changes related to the increase in scattering intensity; (<b>b</b>) the correlation coefficient between each land cover change composed of SAR-based change classes. (<b>c</b>) The composition of the SAR change classes for the reference land cover changes related to the decrease in scattering intensity; (<b>d</b>) the correlation coefficient between each land cover change composed of SAR-based change classes.</p>
Full article ">Figure 14
<p>(<b>a</b>) The composition of the semantic change classes for the SAR change classes; (<b>b</b>) the correlation coefficient between each SAR-based change class composed of semantic change classes.</p>
Full article ">Figure 15
<p>Two-dimensional histograms of the ratio between the changes of the polarimetric parameter derived from the quad-pol SAR (<math display="inline"><semantics> <mrow> <msup> <mrow> <mi>R</mi> </mrow> <mrow> <mi>q</mi> <mi>u</mi> <mi>a</mi> <mi>d</mi> </mrow> </msup> </mrow> </semantics></math>) and that from each dual-pol data in the (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>h</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>h</mi> </mrow> </semantics></math> configuration (<math display="inline"><semantics> <mrow> <msup> <mrow> <mi>R</mi> </mrow> <mrow> <mi>h</mi> <mi>h</mi> <mo>–</mo> <mi>v</mi> <mi>h</mi> </mrow> </msup> </mrow> </semantics></math>) and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>h</mi> <mi>v</mi> </mrow> </semantics></math>-<math display="inline"><semantics> <mrow> <mi>v</mi> <mi>v</mi> </mrow> </semantics></math> configuration (<math display="inline"><semantics> <mrow> <msup> <mrow> <mi>R</mi> </mrow> <mrow> <mi>h</mi> <mi>v</mi> <mo>–</mo> <mi>v</mi> <mi>v</mi> </mrow> </msup> </mrow> </semantics></math>). The blue dashed lines represent the case where the dual-pol and quad-pol parameters are identical.</p>
Full article ">
27 pages, 8943 KiB  
Article
How Phenology Shapes Crop-Specific Sentinel-1 PolSAR Features and InSAR Coherence across Multiple Years and Orbits
by Johannes Löw, Steven Hill, Insa Otte, Michael Thiel, Tobias Ullmann and Christopher Conrad
Remote Sens. 2024, 16(15), 2791; https://doi.org/10.3390/rs16152791 - 30 Jul 2024
Viewed by 671
Abstract
Spatial information about plant health and productivity are essential when assessing the progress towards Sustainable Development Goals such as life on land and zero hunger. Plant health and productivity are strongly linked to a plant’s phenological progress. Remote sensing, and since the launch [...] Read more.
Spatial information about plant health and productivity are essential when assessing the progress towards Sustainable Development Goals such as life on land and zero hunger. Plant health and productivity are strongly linked to a plant’s phenological progress. Remote sensing, and since the launch of Sentinel-1 (S1), specifically, radar-based frameworks have been studied for the purpose of monitoring phenological development. This study produces insights into how crop phenology shapes S1 signatures of PolSAR features and InSAR coherence of wheat, canola, sugar beet, and potato across multiple years and orbits. Hereby, differently smoothed time series and a baseline of growing degree days are stacked to estimate the patterns of occurrence of extreme values and break points. These patterns are then linked to in situ observations of phenological developments. The comparison of patterns across multiple orbits and years reveals that a single optimized fit hampers the tracking capacities of an entire season monitoring framework, as does the sole reliance on extreme values. VV and VH backscatter intensities outperform all other features, but certain combinations of phenological stage and crop type are better covered by a complementary set of PolSAR features and coherence. With regard to PolSAR features, alpha and entropy can be replaced by the cross-polarization ratio for tracking certain stages. Moreover, a range of moderate incidence angles is better suited for monitoring crop phenology. Also, wheat and canola are favored by a late afternoon overpass. In sum, this study provides insights into phenological developments at the landscape level that can be of further use when investigating spatial and temporal variations within the landscape. Full article
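The abstract describes smoothing S1 time series with a range of spans and locating extreme values and break points that are then related to growing degree days. The following sketch illustrates that idea on a synthetic backscatter series; the LOESS spans, the synthetic curve, and the simple slope-based break-point proxy are illustrative assumptions rather than the authors' processing chain.

```python
# Minimal sketch: smooth a crop backscatter time series with several LOESS spans and
# locate extrema plus crude break-point candidates. Synthetic data and thresholds are
# assumptions for illustration only.
import numpy as np
from scipy.signal import argrelextrema
import statsmodels.api as sm

rng = np.random.default_rng(0)
doy = np.arange(1, 366, 6)                            # roughly S1 revisit, day of year
signal = -14 + 6 * np.exp(-((doy - 160) / 45) ** 2)   # idealised VV curve (dB)
obs = signal + rng.normal(0, 0.8, doy.size)           # speckle-like noise

spans = np.arange(0.05, 0.55, 0.05)                   # span range used in the study
for span in spans:
    smooth = sm.nonparametric.lowess(obs, doy, frac=span, return_sorted=False)
    maxima = argrelextrema(smooth, np.greater)[0]
    minima = argrelextrema(smooth, np.less)[0]
    slope = np.gradient(smooth, doy)
    breaks = np.sort(np.argsort(np.abs(np.gradient(slope, doy)))[-3:])  # crude proxy
    print(f"span={span:.2f}  maxima DOY={doy[maxima]}  minima DOY={doy[minima]}  "
          f"break candidates DOY={doy[breaks]}")
```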
(This article belongs to the Special Issue Cropland Phenology Monitoring Based on Cloud-Computing Platforms)
Show Figures

Figure 1

Figure 1
<p>Map of InVeKoS data 2020 for DEMMIN and the selected crops: winter wheat, sugar beet, canola, and potato. Top right corner: extent of the AOI in Mecklenburg Western Pomerania. Center right: extent in relation to footprint of relative orbits.</p>
Full article ">Figure 2
<p>Essential steps of the analysis per orbit and year separated by field and landscape level.</p>
Full article ">Figure 3
<p>Schematic depiction of estimating the temporal density of TSM occurrence (TSM occurrence plot) at the field scale. The dimensions of the analysis encompass five years (2017–2021), three relative orbits (146, 168, 95), and seven S1 features. The smoothing span ranges from 0.05 to 0.5 in steps of 0.05, resulting in eleven (n = 11) time series per field.</p>
Full article ">Figure 4
<p>Exemplary yearly VV backscatter signatures for each crop type with the locations of their extrema. Signatures were smoothed by LOESS with a span of 0.2.</p>
Full article ">Figure 5
<p>Schematic depiction of the analyses at the landscape level containing the pattern extraction and the derivation of trackable stages. This was applied for time series originating from different years and/or orbits of the same crop type and S1 feature to enable the comparison of their respective TSM distributions. These comparisons allow for the derivation of common phenological patterns across years and orbits for each crop type and S1 feature.</p>
Full article ">Figure 6
<p>Orbit-specific patterns of major signal changes at landscape level tracked by break points according to day of year (DOY; <span class="html-italic">x</span>-axis) and artificial growing degree day (GDDsim) values (<span class="html-italic">y</span>-axis) in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at landscape level from 2018 and 2020. Temporal uncertainties around BBCH stadia are marked by grey areas. Exemplary illustration for fields of winter wheat.</p>
Full article ">Figure 7
<p>Year-wise count of S1 features producing break points (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level which is overlaid by the GDD values of BBCH in situ observations (colored areas).</p>
Full article ">Figure 8
<p>Orbit, stage, and crop-specific offsets of break points at landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.</p>
Full article ">Figure 9
<p>Orbit, stage, and crop-specific offsets of maxima at landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.</p>
Full article ">Figure 10
<p>Orbit, stage, and crop-specific offsets of minima at landscape level in days, displaying their mean deviation from in situ observations and temporal variance (standard deviation) by crop type and BBCH stage, containing only tracked events that were labeled reliable by the threshold approach.</p>
Full article ">Figure A1
<p>Orbit-specific patterns of major signal changes at landscape level tracked by break points according to day of year (DOY; <span class="html-italic">x</span>-axis) and artificial growing degree day (GDDsim) values (<span class="html-italic">y</span>-axis) in relation to the corresponding five-year mean GDD value of BBCH stadia observed by DWD at landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.</p>
Full article ">Figure A2
<p>Orbit-specific patterns of major signal changes by maxima by day of year (DOY; <span class="html-italic">x</span>-axis) and growing degree day (GDDsim) values (<span class="html-italic">y</span>-axis) in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.</p>
Full article ">Figure A3
<p>Year-wise count of S1 features producing maxima (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level which is overlaid by the GDD values of BBCH in situ observations (colored areas).</p>
Full article ">Figure A4
<p>Orbit-specific patterns of major signal changes by minima by day of year (DOY; <span class="html-italic">x</span>-axis) and growing degree day (GDDsim) values (<span class="html-italic">y</span>-axis) in relation to the corresponding five-year mean GDDsim value of BBCH stadia observed by DWD at landscape level from 2017 to 2021. Exemplary illustration for fields of winter wheat.</p>
Full article ">Figure A5
<p>Year-wise count of S1 features producing minima (Y.) that closely track phenological stages by crop type and by their respective distribution of GDD values (GD.) at the landscape level which is overlaid by the GDD values of BBCH in situ observations (colored areas).</p>
Full article ">
20 pages, 14550 KiB  
Article
Monitoring Cover Crop Biomass in Southern Brazil Using Combined PlanetScope and Sentinel-1 SAR Data
by Fábio Marcelo Breunig, Ricardo Dalagnol, Lênio Soares Galvão, Polyanna da Conceição Bispo, Qing Liu, Elias Fernando Berra, William Gaida, Veraldo Liesenberg and Tony Vinicius Moreira Sampaio
Remote Sens. 2024, 16(15), 2686; https://doi.org/10.3390/rs16152686 - 23 Jul 2024
Cited by 1 | Viewed by 820
Abstract
Precision agriculture integrates multiple sensors and data types to support farmers with informed decision-making tools throughout crop cycles. This study evaluated Aboveground Biomass (AGB) estimates of Rye using attributes derived from PlanetScope (PS) optical, Sentinel-1 Synthetic Aperture Radar (SAR), and hybrid (optical plus [...] Read more.
Precision agriculture integrates multiple sensors and data types to support farmers with informed decision-making tools throughout crop cycles. This study evaluated Aboveground Biomass (AGB) estimates of Rye using attributes derived from PlanetScope (PS) optical, Sentinel-1 Synthetic Aperture Radar (SAR), and hybrid (optical plus SAR) datasets. Optical attributes encompassed surface reflectance from PS’s blue, green, red, and near-infrared (NIR) bands, alongside the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). Sentinel-1 SAR attributes included the C-band Synthetic Aperture Radar Ground Range Detected, VV and VH polarizations, and both Ratio and Polarization (Pol) indices. Ground reference AGB data for Rye (Secale cereale L.) were collected from 50 samples and four dates at a farm located in southern Brazil, aligning with image acquisition dates. Multiple linear regression models were trained and validated. AGB was estimated based on individual (optical PS or Sentinel-1 SAR) and combined datasets (optical plus SAR). This process was repeated 100 times, and variable importance was extracted. Results revealed improved Rye AGB estimates with integrated optical and SAR data. Optical vegetation indices displayed higher correlation coefficients (r) for AGB estimation (r = +0.67 for both EVI and NDVI) compared to SAR attributes like VV, Ratio, and polarization (r ranging from −0.52 to −0.58). However, the hybrid regression model enhanced AGB estimation (R2 = 0.62, p < 0.01), reducing RMSE to 579 kg·ha−1. Using only optical or SAR data yielded R2 values of 0.51 and 0.42, respectively (p < 0.01). In the hybrid model, the most important predictors were VV, NIR, blue, and EVI. Spatial distribution analysis of predicted Rye AGB unveiled agricultural zones associated with varying biomass throughout the cover crop development. Our findings underscored the complementarity of optical with SAR data to enhance AGB estimates of cover crops, offering valuable insights for agricultural zoning to support soil and cash crop management. Full article
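The abstract reports multiple linear regression models trained on optical, SAR, and hybrid feature sets, with the process repeated 100 times. The snippet below sketches that workflow with stand-in data; the feature names only mirror those mentioned in the study, and the simulated AGB values are placeholders rather than the field measurements.

```python
# Sketch of the individual vs. hybrid multiple linear regression comparison; random
# stand-in data replace the 50 samples x 4 dates of field AGB measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 200                                   # 50 samples x 4 dates
X_opt = rng.normal(size=(n, 6))           # blue, green, red, NIR, NDVI, EVI
X_sar = rng.normal(size=(n, 4))           # VV, VH, ratio, polarization
agb = 1500 + 400 * X_opt[:, 5] + 300 * X_sar[:, 0] + rng.normal(0, 300, n)

def evaluate(X, y, runs=100):
    scores = []
    for seed in range(runs):
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
        pred = LinearRegression().fit(Xtr, ytr).predict(Xte)
        scores.append((r2_score(yte, pred),
                       np.sqrt(mean_squared_error(yte, pred))))
    return np.mean(scores, axis=0)

for name, X in [("optical", X_opt), ("SAR", X_sar),
                ("hybrid", np.hstack([X_opt, X_sar]))]:
    r2, rmse = evaluate(X, agb)
    print(f"{name:8s}  mean R2={r2:.2f}  mean RMSE={rmse:.0f} kg/ha")
```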
(This article belongs to the Special Issue Advancements in Remote Sensing for Sustainable Agriculture)
Show Figures

Figure 1

Figure 1
<p>Location of the study area, cultivated with Rye, in southern Brazil (Vila Morena farm). A total of 50 samples were systematically distributed across every half-hectare. Throughout the experiment, seven field campaigns were conducted in 2017. Three-dimensional perspectives of the UAV RGB dense cloud are shown for the early and late growing season. The UAV-derived DEM is also depicted.</p>
Full article ">Figure 2
<p>The timeline illustrates the field data campaigns conducted for Rye Aboveground Biomass (AGB) measurements, positioned at the bottom of the figure (blue). PlanetScope (black) and Sentinel-1 SAR (red) data acquired during the 2017 cover crop winter cycle are depicted at the top and middle of the figure, respectively. The hatched area indicates the matching periods of satellite data acquisition adopted for analysis.</p>
Full article ">Figure 3
<p>Relationships between field measurements of Aboveground Biomass (AGB) of Rye, gathered from seven campaigns in 2017 (represented by symbols in the green curve ± standard deviation), and reanalysis data of daily rainfall (depicted by blue columns) and mean temperature (illustrated by the red line). At the Vila Morena farm, there was a notable surge in cover crop AGB following significant rainfall in mid-August, coupled with the general rise in temperature transitioning from local winter to spring. The sowing date is also indicated for reference.</p>
Full article ">Figure 4
<p>Per-sample point values variations in (<b>a</b>) field-measured Aboveground Biomass (AGB) of Rye and in the reflectance of the (<b>b</b>) blue, (<b>c</b>) green, (<b>d</b>) red, and (<b>e</b>) near-infrared (NIR) bands of PlanetScope. Results for the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) are shown in (<b>f</b>,<b>g</b>), respectively. All results are shown across the four dates coinciding with the availability of both PS and Sentinel-1 SAR images.</p>
Full article ">Figure 5
<p>Per-sample point values variations in (<b>a</b>) field-measured Aboveground Biomass (AGB) of Rye and in the Sentinel-1 SAR attributes (<b>b</b>) VV, (<b>c</b>) VH, (<b>d</b>) ratio, and (<b>e</b>) polarization. All results are shown across the four dates coinciding with the availability of both PS and Sentinel-1 SAR images.</p>
Full article ">Figure 6
<p>Pearson’s correlation matrix for the relationships between field-measured Aboveground Biomass (AGB; kg·ha<sup>−1</sup>) of Rye on the four dates (<span class="html-italic">n</span> = 200 samples), PlanetScope optical attributes, and Sentinel-1 SAR metrics. Data distribution is shown by histograms. Statistical significance levels are indicated by asterisks: * (0.05), ** (0.01), and *** (0.001).</p>
Full article ">Figure 7
<p>Relative root mean square error (RMSE in %) to estimate Aboveground Biomass (AGB) of Rye using Sentinel-1 SAR attributes, PS optical metrics, and the combination of both sets of variables.</p>
Full article ">Figure 8
<p>Variable importance per dataset, with values indicating the frequency (%) at which each variable was selected in the best model using a stepwise procedure across 100 simulations.</p>
Full article ">Figure 9
<p>Predicted versus observed Aboveground Biomass (AGB) of Rye for the multiple linear regression model combining PlanetScope (PS) optical attributes with Sentinel-1 SAR metrics. The results were derived using the validation dataset.</p>
Full article ">Figure 10
<p>Aboveground Biomass (AGB) map of Rye, derived from the combined optical-SAR multiple regression model, for the four coincident satellite data acquisition dates: (<b>a</b>) 21 July 2017; (<b>b</b>) 4 August 2017; (<b>c</b>) 18 August 2017 and; (<b>d</b>) 26 August 2017. Differences in the spatial occurrence of the predicted AGB are discussed in the text. Hatched areas correspond to areas with AGB that have less than the median value for the corresponding date. In (<b>d</b>), the stripe in blue corresponds to a portion of the farm submitted to chemical treatment.</p>
Full article ">Figure 11
<p>UAV true-color composites for (<b>a</b>) the early stage of Rye development on 10 August 2017 and (<b>b</b>) the late stage of maximum biomass development on 31 August 2017. RGB channels correspond to UAV bands centered at 660 nm, 550 nm, and 450 nm, respectively (X3 camera). The magenta rectangle refers to the location in <a href="#remotesensing-16-02686-f001" class="html-fig">Figure 1</a>.</p>
Full article ">Figure 12
<p>Difference map of Aboveground Biomass (AGB) estimates of 18 August 2017 (<a href="#remotesensing-16-02686-f010" class="html-fig">Figure 10</a>c) and 26 August 2017 (<a href="#remotesensing-16-02686-f010" class="html-fig">Figure 10</a>d). Reddish tones indicate AGB increase and blue tones indicate AGB decrease. White areas indicate low AGB variation (±100 kg·ha<sup>−1</sup>). The background is a Google satellite true color composite image.</p>
Full article ">
26 pages, 6691 KiB  
Article
Calibration of SAR Polarimetric Images by Covariance Matching Estimation Technique with Initial Search
by Jingke Liu, Lin Liu and Xiaojie Zhou
Remote Sens. 2024, 16(13), 2400; https://doi.org/10.3390/rs16132400 - 29 Jun 2024
Viewed by 740
Abstract
To date, various methods have been proposed for calibrating polarimetric synthetic aperture radar (SAR) using distributed targets. Some studies have utilized the covariance matching estimation technique (Comet) for SAR data calibration. However, practical applications have revealed issues stemming from ill-conditioned problems due to [...] Read more.
To date, various methods have been proposed for calibrating polarimetric synthetic aperture radar (SAR) using distributed targets. Some studies have utilized the covariance matching estimation technique (Comet) for SAR data calibration. However, practical applications have revealed issues stemming from ill-conditioned problems due to the analytical solution in the iterative process. To tackle this challenge, an improved method called Comet IS is introduced. First, we introduce an outlier detection mechanism based on the Quegan algorithm’s results. Next, we incorporate an initial search approach based on the interior point method for recalibration. With the outlier detection mechanism in place, the algorithm can recalibrate iteratively until the results are correct. Simulation experiments reveal that the improved algorithm outperforms the original one. Furthermore, we compare the improved method with the Quegan and Ainsworth algorithms, demonstrating its superior performance in calibration. We also validate the method’s advancement using real data and corner reflectors. Compared with the other two algorithms, the improvement in crosstalk isolation and channel imbalance is significant. This research provides a more reliable and effective approach for polarimetric SAR calibration, which is important for enhancing SAR imaging quality. Full article
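The abstract combines covariance matching with an initial search before iterative refinement. The toy example below illustrates only the covariance-matching idea on a simplified, real-valued 2x2 distortion model; the model, grid, bounds, and the choice of SciPy's 'trust-constr' solver (an interior-point/trust-region method) are assumptions standing in for the full polarimetric formulation in the paper.

```python
# Toy covariance-matching sketch: estimate crosstalk (u, v) and channel powers by
# minimising the Frobenius mismatch between an observed covariance and the distorted
# model covariance, after a coarse initial grid search. Not the paper's model.
import numpy as np
from scipy.optimize import minimize

def model_cov(params):
    u, v, s1, s2 = params
    D = np.array([[1.0, u], [v, 1.0]])        # simplified real-valued distortion
    return D @ np.diag([s1, s2]) @ D.T

true = np.array([0.05, -0.03, 1.0, 0.4])      # crosstalk and channel powers
rng = np.random.default_rng(2)
C_obs = model_cov(true) + rng.normal(0, 1e-3, (2, 2))

def loss(params):
    return np.linalg.norm(C_obs - model_cov(params), ord="fro") ** 2

# coarse initial search before the local solver (the "IS" idea, greatly simplified)
grid = [(u, v, 1.0, 0.5) for u in np.linspace(-0.1, 0.1, 5)
                         for v in np.linspace(-0.1, 0.1, 5)]
x0 = min(grid, key=loss)
res = minimize(loss, x0, method="trust-constr",
               bounds=[(-0.2, 0.2), (-0.2, 0.2), (1e-3, 10), (1e-3, 10)])
print("estimated (u, v, s1, s2):", np.round(res.x, 3))
```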
Show Figures

Figure 1

Figure 1
<p>Iterative parameter estimation procedure. (<b>a</b>) The ideal calibration process; (<b>b</b>) the actual calibration process, where <math display="inline"><semantics> <mrow> <mi>θ</mi> </mrow> </semantics></math> represents the rotation angle induced by the introduction of the imaginary part.</p>
Full article ">Figure 2
<p>Parameter estimation error and loss function values. (<b>a</b>) R by Ainsworth algorithm in dB; (<b>b</b>) R by Quegan algorithm in dB; (<b>c</b>) R by Comet algorithm in dB; (<b>d</b>) the value of loss function in dB.</p>
Full article ">Figure 3
<p>Minimum target values. The blue line represents the observed minimum target values. The green line represents the estimated minimum target values.</p>
Full article ">Figure 4
<p>Classification results. (<b>a</b>) Real classification; (<b>b</b>) predicted classification. The figure solely illustrates the relationship between the classification results and the chosen two-dimensional features. The abscissa and ordinate respectively represent the first and third dimensions of the feature vectors to characterize the distribution of five-dimensional features.</p>
Full article ">Figure 5
<p>R with Comet IS in dB.</p>
Full article ">Figure 6
<p>Measured information graph. (<b>a</b>) Optical images of the test area; (<b>b</b>) polarimetric SAR image of the corner reflector calibration site.</p>
Full article ">Figure 7
<p>Crosstalk amplitude. (<b>a</b>) Ainsworth; (<b>b</b>) Quegan; (<b>c</b>) Comet IS.</p>
Full article ">Figure 7 Cont.
<p>Crosstalk amplitude. (<b>a</b>) Ainsworth; (<b>b</b>) Quegan; (<b>c</b>) Comet IS.</p>
Full article ">Figure 8
<p>SAR image (Pauli). (<b>a</b>) Raw SAR image; (<b>b</b>) image processed by Ainsworth algorithm; (<b>c</b>) image processed by Quegan algorithm; (<b>d</b>) image processed by Comet IS.</p>
Full article ">Figure 9
<p>Magnified section of SAR image. (<b>a</b>) Raw image; (<b>b</b>) image calibrated by Ainsworth algorithm; (<b>c</b>) image calibrated by Quegan algorithm; (<b>d</b>) image calibrated by Comet IS algorithm.</p>
Full article ">Figure 10
<p>Signatures of corner reflector RH9. (<b>a</b>−<b>d</b>) The x-pol signatures. (<b>e</b>−<b>h</b>) The co-pol signatures. (<b>a</b>,<b>e</b>) Raw data; (<b>b</b>,<b>f</b>) data calibrated by Ainsworth algorithm; (<b>c</b>,<b>g</b>) data calibrated by Quegan algorithm; (<b>d</b>,<b>h</b>) data calibrated by Comet IS algorithm.</p>
Full article ">
18 pages, 31707 KiB  
Article
IceGCN: An Interactive Sea Ice Classification Pipeline for SAR Imagery Based on Graph Convolutional Network
by Mingzhe Jiang, Xinwei Chen, Linlin Xu and David A. Clausi
Remote Sens. 2024, 16(13), 2301; https://doi.org/10.3390/rs16132301 - 24 Jun 2024
Cited by 1 | Viewed by 694
Abstract
Monitoring sea ice in the Arctic region is crucial for polar maritime activities. The Canadian Ice Service (CIS) wants to augment its manual interpretation with machine learning-based approaches due to the increasing data volume received from newly launched synthetic aperture radar (SAR) satellites. [...] Read more.
Monitoring sea ice in the Arctic region is crucial for polar maritime activities. The Canadian Ice Service (CIS) wants to augment its manual interpretation with machine learning-based approaches due to the increasing data volume received from newly launched synthetic aperture radar (SAR) satellites. However, fully supervised machine learning models require large training datasets, which are usually limited in the sea ice classification field. To address this issue, we propose a semi-supervised interactive system to classify sea ice in dual-pol RADARSAT-2 imagery using limited training samples. First, the SAR image is oversegmented into homogeneous regions. Then, a graph is constructed based on the segmentation results, and the feature set of each node is characterized by a convolutional neural network. Finally, a graph convolutional network (GCN) is employed to classify the whole graph using limited labeled nodes automatically. The proposed method is evaluated on a published dataset. Compared with referenced algorithms, this new method outperforms in both qualitative and quantitative aspects. Full article
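The abstract describes building a graph over homogeneous regions, characterizing each node with CNN features, and classifying the whole graph with a GCN from limited labels. The sketch below shows only the graph propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), on random placeholder features and adjacency; it is not the IceGCN implementation, and the real pipeline derives node features from a CNN on SAR patches.

```python
# Minimal numpy sketch of two-layer GCN propagation over a superpixel (region) graph.
# Adjacency, node features, and weights are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_feat, n_hidden, n_classes = 8, 16, 8, 4

A = (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)   # toy region adjacency
A = np.maximum(A, A.T)                                      # make the graph undirected
np.fill_diagonal(A, 0.0)                                    # self-loops added below
H = rng.normal(size=(n_nodes, n_feat))                      # CNN-style node features

def normalize_adj(A):
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    d = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d[:, None] * d[None, :]                  # D^-1/2 (A + I) D^-1/2

A_norm = normalize_adj(A)
W1 = rng.normal(scale=0.1, size=(n_feat, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

H1 = np.maximum(A_norm @ H @ W1, 0.0)                       # first GCN layer + ReLU
logits = A_norm @ H1 @ W2                                   # second GCN layer (scores)
print("predicted ice class per region:", logits.argmax(axis=1))
```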
(This article belongs to the Special Issue Recent Advances in Sea Ice Research Using Satellite Data)
Show Figures

Figure 1

Figure 1
<p>Workflow diagram of the proposed IceGCN.</p>
Full article ">Figure 2
<p>Architecture of feature extraction module in IceGCN.</p>
Full article ">Figure 3
<p>Location of the Beaufort Sea. Footprints of the 18 RADARSAT-2 scenes used in this work are shown in red.</p>
Full article ">Figure 4
<p>Different stages of development of ice in patches cropped from HH and HV polarized scenes in the same location. Open water in HH (<b>a</b>) and HV (<b>b</b>). Young ice in HH (<b>c</b>) and HV (<b>d</b>). First-year ice in HH (<b>e</b>) and HV (<b>f</b>). Multi-year ice in HH (<b>g</b>) and HV (<b>h</b>).</p>
Full article ">Figure 5
<p>Classification results for the scene obtained on 18 April 2010 (scene ID: 20100418). (<b>a</b>) HH polarization, (<b>b</b>) HV polarization, (<b>c</b>) Ice chart, (<b>d</b>) RF, (<b>e</b>) IRGS-RF, (<b>f</b>) ResNet, (<b>g</b>) IRGS-ResNet, (<b>h</b>) IceGCN.</p>
Full article ">Figure 6
<p>Classification results for the scene obtained on 24 May 2010 (scene ID: 20100524). (<b>a</b>) HH polarization, (<b>b</b>) HV polarization, (<b>c</b>) Ice chart, (<b>d</b>) RF, (<b>e</b>) IRGS-RF, (<b>f</b>) ResNet, (<b>g</b>) IRGS-ResNet, and (<b>h</b>) IceGCN.</p>
Full article ">Figure 7
<p>Classification results for the scene obtained on 27 October 2010 (scene ID: 20101027). (<b>a</b>) HH polarization, (<b>b</b>) HV polarization, (<b>c</b>) Ice chart, (<b>d</b>) RF, (<b>e</b>) IRGS-RF, (<b>f</b>) ResNet, (<b>g</b>) IRGS-ResNet, and (<b>h</b>) IceGCN.</p>
Full article ">Figure 8
<p>Classification accuracy of IceGCN on Dataset-2 with different training sample ratios.</p>
Full article ">
30 pages, 12064 KiB  
Article
Inversion of Forest Aboveground Biomass in Regions with Complex Terrain Based on PolSAR Data and a Machine Learning Model: Radiometric Terrain Correction Assessment
by Yonghui Nie, Rula Sa, Sergey Chumachenko, Yifan Hu, Youzhu Wang and Wenyi Fan
Remote Sens. 2024, 16(12), 2229; https://doi.org/10.3390/rs16122229 - 19 Jun 2024
Viewed by 615
Abstract
The accurate estimation of forest aboveground biomass (AGB) in areas with complex terrain is very important for quantifying the carbon sequestration capacity of forest ecosystems and studying the regional or global carbon cycle. In our previous research, we proposed the radiometric terrain correction [...] Read more.
The accurate estimation of forest aboveground biomass (AGB) in areas with complex terrain is very important for quantifying the carbon sequestration capacity of forest ecosystems and studying the regional or global carbon cycle. In our previous research, we proposed the radiometric terrain correction (RTC) process for introducing normalized correction factors, which has strong effectiveness and robustness in terms of the backscattering coefficient of polarimetric synthetic aperture radar (PolSAR) data and the monadic model. However, the impact of RTC on the correctness of feature extraction and the performance of regression models requires further exploration in the retrieval of forest AGB based on a machine learning multiple regression model. In this study, based on PolSAR data provided by ALOS-2, 117 feature variables were accurately extracted using the RTC process, and then Boruta and recursive feature elimination with cross-validation (RFECV) algorithms were used to perform multi-step feature selection. Finally, 10 machine learning regression models and the Optuna algorithm were used to evaluate the effectiveness and robustness of RTC in improving the quality of the PolSAR feature set and the performance of the regression models. The results revealed that, compared with the situation without RTC treatment, RTC can effectively and robustly improve the accuracy of PolSAR features (the Pearson correlation R between the PolSAR features and measured forest AGB increased by 0.26 on average) and the performance of regression models (the coefficient of determination R2 increased by 0.14 on average, and the rRMSE decreased by 4.20% on average), but there is a certain degree of overcorrection in the RTC process. In addition, in situations where the data exhibit linear relationships, linear models remain a powerful and practical choice due to their efficient and stable characteristics. For example, the optimal regression model in this study is the Bayesian Ridge linear regression model (R2 = 0.82, rRMSE = 18.06%). Full article
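The abstract chains Boruta and RFECV feature selection with ten regression models, of which Bayesian Ridge performed best. The snippet below sketches only the RFECV plus Bayesian Ridge part on stand-in features; the Boruta pre-screening would need a separate package (e.g., BorutaPy) and is omitted, and the simulated plot data are placeholders for the 117 PolSAR features.

```python
# Sketch of RFECV feature selection wrapped around a Bayesian Ridge regressor,
# standing in for the second selection step and final model reported in the abstract.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n_plots, n_features = 120, 30
X = rng.normal(size=(n_plots, n_features))                       # stand-in PolSAR features
agb = 80 + 25 * X[:, 0] - 15 * X[:, 3] + 10 * X[:, 7] + rng.normal(0, 8, n_plots)

selector = RFECV(estimator=BayesianRidge(), step=1,
                 cv=KFold(n_splits=5, shuffle=True, random_state=0),
                 scoring="r2", min_features_to_select=3)
selector.fit(X, agb)
print("optimal number of features:", selector.n_features_)
print("indices of features kept  :", np.flatnonzero(selector.support_))
```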
(This article belongs to the Special Issue SAR for Forest Mapping III)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Overview of study sites: (<b>a</b>) the location of Saihanba Forest Farm in relation to the provinces and counties in China; (<b>b</b>) the spatial location of ALOS-2 data relative to Weichang County; (<b>c</b>) the Pauli RGB image (R: |HH-VV|, G: |HV|, B: |HH + VV|) based on PolSAR data and the location of the measured samples; the basemap is the optical image of Tianditu.</p>
Full article ">Figure 2
<p>A flowchart of the proposed forest AGB mapping scheme.</p>
Full article ">Figure 3
<p>Absolute value of Pearson correlation coefficient (R) between forest AGB and the PolSAR features based on the data (25 July 2020) with radiometric terrain correction (RTC, olive) and non-RTC data (NRT, red). Sorted based on R_RTC (i.e., absolute value of R value between forest AGB and SAR features extracted based on RTC data). (<b>a</b>) The first set of the extracted original PolSAR features; (<b>b</b>) the second set of the extracted original PolSAR features (39 in total); (<b>c</b>) derived features based on PolSAR original features (39 in total).</p>
Full article ">Figure 4
<p>Taking PolSAR data from 25 July 2020 as an example, we created scatter density plots between the decibel values of the three components (Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl)) of the Freeman three-decomposition in different topographic correction stages (non-radiometric terrain correction (NRTC), polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC)) and the local incidence angle <span class="html-italic">θ<sub>loc</sub></span>. (<b>a</b>) NRTC_Vol; (<b>b</b>) POAC_Vol; (<b>c</b>) ESAC_Vol; (<b>d</b>) AVEC_Vol; (<b>e</b>) NRTC_Odd; (<b>f</b>) POAC_Odd; (<b>g</b>) ESAC_Odd; (<b>h</b>) AVEC_Odd; (<b>i</b>) NRTC_Dbl; (<b>j</b>) POAC_Dbl; (<b>k</b>) ESAC_Dbl; (<b>l</b>) AVEC_Dbl.</p>
Full article ">Figure 5
<p>Taking PolSAR data from 25 July 2020 as an example, we created a scatter density plot for each component of Freeman three-decomposition (FRE3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC was completed) with respect to non-RTC (NRTC). The three components of FRE3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Full article ">Figure 5 Cont.
<p>Taking PolSAR data from 25 July 2020 as an example, we created a scatter density plot for each component of Freeman three-decomposition (FRE3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC was completed) with respect to non-RTC (NRTC). The three components of FRE3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Full article ">Figure 6
<p>Analysis of the effectiveness of RTC and the optimal regression model of this study, taking the SAR data from 25 July 2020 as an example. (<b>a</b>) The training results of the NRTC and RTC data, where the black dots are the results of the corresponding single training; (<b>b</b>) scatter plot of the measured forest AGB and the AGB predicted by the optimal regression model (BysRidge); (<b>c</b>) spatial distribution map of forest AGB in the study area based on optimal model prediction.</p>
Full article ">Figure A1
<p>The scatter density plot of each component of Yamaguchi three-component (YAM3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC) with respect to non-RTC (NRTC). The three components of YAM3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Full article ">Figure A1 Cont.
<p>The scatter density plot of each component of Yamaguchi three-component (YAM3) at different radiometric terrain correction (RTC) stages (Y-axis) relative to the previous stage (X-axis), and in AVEC stages (that is, after all processing of the RTC) with respect to non-RTC (NRTC). The three components of YAM3 are the Volume scattering component (Vol), Surface scattering component (Odd), and Double-bounce scattering component (Dbl). The three stages of RTC are polarization orientation angle correction (POAC), effective scattering area correction (ESAC), and angular variation effect correction (AVEC). The red line is a 1:1 line. (<b>a</b>) NRTC vs. POAC of Vol; (<b>b</b>) POAC vs. ESAC of Vol; (<b>c</b>) ESAC vs. AVEC of Vol; (<b>d</b>) NRTC vs. AVEC of Vol; (<b>e</b>) NRTC vs. POAC of Odd; (<b>f</b>) POAC vs. ESAC of Odd; (<b>g</b>) ESAC vs. AVEC of Odd; (<b>h</b>) NRTC vs. AVEC of Odd; (<b>i</b>) NRTC vs. POAC of Dbl; (<b>j</b>) POAC vs. ESAC of Dbl; (<b>k</b>) ESAC vs. AVEC of Dbl; (<b>l</b>) NRTC vs. AVEC of Dbl.</p>
Full article ">Figure A2
<p>The result of feature selection: (<b>a</b>) the 32 features selected in preliminary feature selection (Boruta algorithm) based on radiometric terrain correction (RTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>b</b>) the 21 features selected in preliminary feature selection (Boruta algorithm) based on non-RTC (NRTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>c</b>) the number of features selected in the second step feature selection (RFECV algorithm) based on RTC and NRTC data; (<b>d</b>) the features selected in different multivariate linear models and the variance inflation factor (VIF) value corresponding to each feature; (<b>e</b>) the features selected in different non-parametric models.</p>
Full article ">Figure A2 Cont.
<p>The result of feature selection: (<b>a</b>) the 32 features selected in preliminary feature selection (Boruta algorithm) based on radiometric terrain correction (RTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>b</b>) the 21 features selected in preliminary feature selection (Boruta algorithm) based on non-RTC (NRTC) data, including the importance score given by the RF of the selected features, and absolute values of Pearson correlation coefficients (R) between the selected features and measured forest AGB; (<b>c</b>) the number of features selected in the second step feature selection (RFECV algorithm) based on RTC and NRTC data; (<b>d</b>) the features selected in different multivariate linear models and the variance inflation factor (VIF) value corresponding to each feature; (<b>e</b>) the features selected in different non-parametric models.</p>
Full article ">Figure A3
<p>Scatter plot of measured forest AGB and predicted forest AGB. The prediction model is an optimal regression model based on the PolSAR data processed by radiometric terrain correction (RTC) from 25 July 2020. (<b>a</b>) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 11 July 2020. (<b>b</b>) The independent variable of the prediction model was derived from the PolSAR data (after RTC processing) from 8 August 2020.</p>
Full article ">
18 pages, 12154 KiB  
Article
Blind Edge-Retention Indicator for Assessing the Quality of Filtered (Pol)SAR Images Based on a Ratio Gradient Operator and Confidence Interval Estimation
by Xiaoshuang Ma, Le Li and Gang Wang
Remote Sens. 2024, 16(11), 1992; https://doi.org/10.3390/rs16111992 - 31 May 2024
Viewed by 449
Abstract
Speckle reduction is a key preprocessing approach for the applications of Synthetic Aperture Radar (SAR) data. For many interpretation tasks, high-quality SAR images with a rich texture and structure information are useful. Therefore, a satisfactory SAR image filter should retain this information well [...] Read more.
Speckle reduction is a key preprocessing approach for the applications of Synthetic Aperture Radar (SAR) data. For many interpretation tasks, high-quality SAR images with a rich texture and structure information are useful. Therefore, a satisfactory SAR image filter should retain this information well after processing. Some quantitative assessment indicators have been presented to evaluate the edge-preservation capability of single-polarization SAR filters, among which the non-clean-reference-based (i.e., blind) ones are attractive. However, most of these indicators are derived based only on the basic fact that the speckle is a kind of multiplicative noise, and they do not take into account the detailed statistical distribution traits of SAR data, making the assessment not robust enough. Moreover, to our knowledge, there are no specific blind assessment indicators for fully Polarimetric SAR (PolSAR) filters up to now. In this paper, a blind assessment indicator based on an SAR Ratio Gradient Operator (RGO) and Confidence Interval Estimation (CIE) is proposed. The RGO is employed to quantify the edge gradient between two neighboring image patches in both the speckled and filtered data. A decision is then made as to whether the ratio gradient value in the filtered image is close to that in the unobserved clean image by considering the statistical traits of speckle and a CIE method. The proposed indicator is also extended to assess the PolSAR filters by transforming the polarimetric scattering matrix into a scalar which follows a Gamma distribution. Experiments on the simulated SAR dataset and three real-world SAR images acquired by ALOS-PALSAR, AirSAR, and TerraSAR-X validate the robustness and reliability of the proposed indicator. Full article
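The abstract builds a blind indicator from a ratio gradient between neighboring patches and a confidence interval derived from speckle statistics. The sketch below shows one plausible construction for L-look intensity data, where the ratio of two homogeneous patch means follows a scaled F distribution; the patch size, number of looks, confidence level, and the F-based interval are assumptions and may differ from the paper's exact CIE formulation.

```python
# Hedged sketch: for L-look Gamma-distributed intensity, the ratio of two independent
# patch means r_obs satisfies r_obs / r_true ~ F(2NL, 2NL), which yields a confidence
# interval for the clean ratio; a filtered edge is judged preserved if its ratio
# gradient falls inside that interval. All settings below are illustrative.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(5)
L, N = 4, 64                       # number of looks, pixels per patch (8 x 8)
mu_a, mu_b = 1.0, 2.5              # clean intensities on the two sides of an edge

patch_a = rng.gamma(shape=L, scale=mu_a / L, size=N)     # speckled patch samples
patch_b = rng.gamma(shape=L, scale=mu_b / L, size=N)
r_obs = patch_a.mean() / patch_b.mean()                  # speckled ratio gradient

alpha = 0.05
lo = r_obs / f.ppf(1 - alpha / 2, 2 * N * L, 2 * N * L)  # interval for the clean ratio
hi = r_obs / f.ppf(alpha / 2, 2 * N * L, 2 * N * L)

r_filtered = 1.0 / 2.4             # ratio measured on a hypothetical filtered image
print(f"observed ratio {r_obs:.3f}, interval for clean ratio [{lo:.3f}, {hi:.3f}]")
print("filtered edge judged preserved:", lo <= r_filtered <= hi)
```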
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
Show Figures

Figure 1

Figure 1
<p>The unbounded PDF curves of the ratio gradient in the cases of <span class="html-italic">r</span><sub>0</sub> = 0.5 and <span class="html-italic">r</span><sub>0</sub> = 2.</p>
Full article ">Figure 2
<p>Basic idea of the proposed indicator. Red squares denote image patches and each arrow with a certain color denotes the gradient between two neighboring patches along a certain direction (for simplicity, only the gradients along two directions are shown). The gradient values in the filtered images should be close to that in the clean image if the filter retains edges well.</p>
Full article ">Figure 3
<p>Diagram of the confidence interval estimation approach.</p>
Full article ">Figure 4
<p>Single-look simulated SAR images. (<b>a</b>) The building image. (<b>b</b>) The homogeneous image.</p>
Full article ">Figure 5
<p>Real SAR images. (<b>a</b>) The ALOS-PALSAR image. (<b>b</b>) The TerraSAR-X image. (<b>c</b>) The AirSAR image.</p>
Full article ">Figure 6
<p>Filtering experiment on the single-look simulated SAR image. (<b>a</b>) The speckled building image. (<b>b</b>) The refined Lee filtered image. (<b>c</b>) The PPB filtered image. (<b>d</b>) The SAR-BM3D filtered image.</p>
Full article ">Figure 7
<p>Signal intensity of the clean reference (black line) and the filtered image (red line) along a line. (<b>a</b>) The refined Lee filter. (<b>b</b>) The PPB filter. (<b>c</b>) The SAR-BM3D filter.</p>
Full article ">Figure 8
<p>Filtering experiment on the ALOS-PALSAR image. (<b>a</b>) The original image. (<b>b</b>) The refined Lee filtered image. (<b>c</b>) The SAR-BM3D filtered image. (<b>d</b>–<b>f</b>) The images filtered by the PPB filter with IT varied from 2 to 4. (<b>g</b>–<b>k</b>) The ratio images of (<b>b</b>–<b>f</b>), respectively.</p>
Full article ">Figure 9
<p>Filtering experiment on the TerraSAR-X image. (<b>a</b>) The original image. (<b>b</b>,<b>c</b>) The filtered image and ratio image of the refined Lee filter, respectively. (<b>d</b>,<b>e</b>) The PPB filtered image with BS = 5 × 5 and 7 × 7, respectively. (<b>f</b>,<b>g</b>) The SAR-BM3D filtered image with BS = 5 × 5 and 7 × 7, respectively. (<b>h</b>–<b>k</b>) The ratio images of (<b>d</b>–<b>g</b>), respectively.</p>
Full article ">Figure 10
<p>Filtering experiment on the AirSAR image. (<b>a</b>) The original image. (<b>b</b>–<b>d</b>) The filtered image by nonlocal means, PNGF and NLTV, respectively.</p>
Full article ">
19 pages, 18815 KiB  
Article
Research on Input Schemes for Polarimetric SAR Classification Using Deep Learning
by Shuaiying Zhang, Lizhen Cui, Yue Zhang, Tian Xia, Zhen Dong and Wentao An
Remote Sens. 2024, 16(11), 1826; https://doi.org/10.3390/rs16111826 - 21 May 2024
Cited by 1 | Viewed by 687
Abstract
This study employs the reflection symmetry decomposition (RSD) method to extract polarization scattering features from ground object images, aiming to determine the optimal data input scheme for deep learning networks in polarimetric synthetic aperture radar classification. Eight distinct polarizing feature combinations were designed, [...] Read more.
This study employs the reflection symmetry decomposition (RSD) method to extract polarization scattering features from ground object images, aiming to determine the optimal data input scheme for deep learning networks in polarimetric synthetic aperture radar classification. Eight distinct polarizing feature combinations were designed, and the classification accuracy of various approaches was evaluated using the classic convolutional neural networks (CNNs) AlexNet and VGG16. The findings reveal that the commonly employed six-parameter input scheme, favored by many researchers, lacks the comprehensive utilization of polarization information and warrants attention. Intriguingly, leveraging the complete nine-parameter input scheme based on the polarization coherence matrix results in improved classification accuracy. Furthermore, the input scheme incorporating all 21 parameters from the RSD and polarization coherence matrix notably enhances overall accuracy and the Kappa coefficient compared to the other seven schemes. This comprehensive approach maximizes the utilization of polarization scattering information from ground objects, emerging as the most effective CNN input data scheme in this study. Additionally, the classification performance using the second and third component total power values (P2 and P3) from the RSD surpasses the approach utilizing surface scattering power value (PS) and secondary scattering power value (PD) from the same decomposition. Full article
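The abstract compares six-, nine-, and twenty-one-parameter CNN input schemes derived from the polarization coherence matrix and the RSD. The sketch below assembles six- and nine-parameter per-pixel vectors from a toy coherency matrix under one common convention; the exact layouts and the additional RSD-derived parameters in the study may differ, and the 21-parameter scheme would concatenate the RSD outputs with the nine coherency terms.

```python
# Sketch of per-pixel input vectors built from a 3x3 Hermitian coherency matrix T.
# The "six-parameter" and "nine-parameter" layouts follow one common convention and
# are assumptions, not necessarily the study's exact schemes.
import numpy as np

def six_param(T):
    # diagonal powers plus off-diagonal magnitudes
    return np.array([T[0, 0].real, T[1, 1].real, T[2, 2].real,
                     abs(T[0, 1]), abs(T[0, 2]), abs(T[1, 2])])

def nine_param(T):
    # full information of the Hermitian matrix: 3 real diagonals plus
    # real and imaginary parts of the 3 upper off-diagonal terms
    return np.array([T[0, 0].real, T[1, 1].real, T[2, 2].real,
                     T[0, 1].real, T[0, 1].imag,
                     T[0, 2].real, T[0, 2].imag,
                     T[1, 2].real, T[1, 2].imag])

rng = np.random.default_rng(6)
k = rng.normal(size=3) + 1j * rng.normal(size=3)   # toy Pauli scattering vector
T = np.outer(k, k.conj())                          # rank-1 Hermitian coherency matrix
print("6-parameter input :", np.round(six_param(T), 3))
print("9-parameter input :", np.round(nine_param(T), 3))
```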
Show Figures

Figure 1

Figure 1
<p>Classification of eight polarimetric data input schemes.</p>
Full article ">Figure 2
<p>Research area and ground truth map.</p>
Full article ">Figure 3
<p>Distribution of training, validation, and testing samples. (<b>a</b>) Image from 14 September 2021; (<b>b</b>) Image from 14 September 2021; (<b>c</b>) Image from 13 October 2021; (<b>d</b>) Image from 12 October 2017.</p>
Full article ">Figure 4
<p>Classification results of eight research schemes on AlexNet.</p>
Full article ">Figure 5
<p>Classification results of eight polarized data input schemes.</p>
Full article ">Figure 6
<p>Trend chart of overall classification accuracy and average accuracy.</p>
Full article ">