Intelligent Remote Sensing: AI-Powered Techniques for Enhanced Data Analysis and Interpretation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 August 2025 | Viewed by 4099

Special Issue Editors


Prof. Dr. Hai Wang
Guest Editor
School of Aerospace Science and Technology, Xidian University, Xi’an 710126, China
Interests: hyperspectral imagery; remote sensing image processing; artificial intelligence; hyperspectral anomaly detection; object detection; radar signal processing

Dr. Nianyin Zeng
Guest Editor
Department of Instrumental and Electrical Engineering, Xiamen University, Xiamen 361102, China
Interests: intelligent data analysis; machine learning and computer vision; computational intelligence; artificial intelligence in biomedicine and industry

Special Issue Information

Dear Colleagues,

Intelligent Remote Sensing harnesses the power of Artificial Intelligence (AI) to revolutionize data analysis and interpretation in remote sensing. As remote sensing data pour in from satellites and drones, AI techniques offer unprecedented capabilities for processing, classifying, and extracting insights from this vast trove of information. By automating complex tasks and revealing patterns invisible to the naked eye, this research area is crucial for advancing precision agriculture, environmental monitoring, disaster response, and urban planning, among other applications.

This Special Issue is highly relevant to the scope of remote sensing, as it addresses a key trend in the field: the integration of AI with remote sensing technologies. By highlighting the potential and applications of AI-driven techniques, it contributes to advancing the state of the art in remote sensing research and practice, fostering interdisciplinary collaboration and innovation.

Articles may include, but are not limited to, the following topics:

  • General remote sensing image processing, including object detection, classification, segmentation, anomaly detection, change detection, denoising, and fusion
  • Real-world applications using remote sensing data such as optical images, SAR images, multispectral/hyperspectral images, and multi-source data
  • Methodology: deep learning models, traditional models, interpretable models, etc.

Prof. Dr. Hai Wang
Dr. Nianyin Zeng
Dr. Shou Feng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image processing
  • remote sensing semantic analysis
  • remote sensing applications
  • big data analysis
  • data fusion
  • artificial intelligence
  • machine learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

27 pages, 3505 KiB  
Article
DeepDR: A Two-Level Deep Defect Recognition Framework for Meteorological Satellite Images
by Xiangang Zhao, Xiangyu Chang, Cunqun Fan, Manyun Lin, Lan Wei and Yunming Ye
Remote Sens. 2025, 17(4), 585; https://doi.org/10.3390/rs17040585 - 8 Feb 2025
Abstract
Raw meteorological satellite images often suffer from defects such as noise points and lines due to atmospheric interference and instrument errors. Current solutions typically rely on manual visual inspection to identify these defects. However, manual inspection is labor-intensive, lacks uniform standards, and is prone to both false positives and missed detections. To address these challenges, we propose DeepDR, a two-level deep defect recognition framework for meteorological satellite images. DeepDR consists of two modules: a transformer-based noise image classification module for the first level and a noise region segmentation module based on a pseudo-label training strategy for the second level. This framework enables the automatic identification of defective cloud images and the detection of noise points and lines, thereby significantly improving the accuracy of defect recognition. To evaluate the effectiveness of DeepDR, we have collected and released two satellite cloud image datasets from the FengYun-1 satellite, which include noise points and lines. Subsequently, we conducted comprehensive experiments to demonstrate the superior performance of our approach in addressing the satellite cloud image defect recognition problem.
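To make the two-level design concrete, the sketch below wires a first-level classifier to a second-level segmenter so that only images flagged as defective are segmented. The module names, the sigmoid outputs, and the 0.5 threshold are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of a two-level defect recognition flow, assuming
# hypothetical classifier/segmenter modules; not the authors' code.
import torch
import torch.nn as nn

class TwoLevelDefectRecognizer(nn.Module):
    def __init__(self, classifier: nn.Module, segmenter: nn.Module,
                 threshold: float = 0.5):
        super().__init__()
        self.classifier = classifier  # level 1: noise image classifier (logits)
        self.segmenter = segmenter    # level 2: noise region segmenter (logits)
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, images: torch.Tensor):
        # Level 1: probability that each image contains noise points/lines.
        p_defect = torch.sigmoid(self.classifier(images)).squeeze(-1)
        flagged = p_defect > self.threshold
        # Level 2: segment noise regions only for images flagged as defective.
        masks = torch.zeros(images.shape[0], 1, images.shape[-2],
                            images.shape[-1], device=images.device)
        if flagged.any():
            masks[flagged] = torch.sigmoid(self.segmenter(images[flagged]))
        return p_defect, masks
```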
Figures:
  • Figure 1: Examples of satellite cloud images with noise points (a,b) and noise lines (c,d); noise points are marked with red boxes.
  • Figure 2: Illustration of the proposed DeepDR framework.
  • Figure 3: Illustration of the Transformer-based noise image classifier.
  • Figure 4: Illustration of the proposed pseudo-label-based noise region segmentation.
  • Figure 5: Example images from the noise point dataset.
  • Figure 6: Example images from the noise line dataset.
  • Figure 7: Selected results from the noise point experiment: (a,b) satellite images containing noise; (c,d) normal satellite images.
  • Figure 8: Selected results from the noise line experiment: (a,b) satellite images containing noise lines; (c,d) normal satellite images.
  • Figures 9–11: Precision, recall, and F1 score of all methods on normal and noise point images.
  • Figures 12–14: Precision, recall, and F1 score of all methods on normal and noise line images.
  • Figure 15: Segmentation results for satellite images containing noise points (marked with red boxes).
  • Figure 16: Segmentation results for satellite images containing noise lines.
25 pages, 10792 KiB  
Article
Multiscale Spatial–Spectral Dense Residual Attention Fusion Network for Spectral Reconstruction from Multispectral Images
by Moqi Liu, Wenjuan Zhang and Haizhu Pan
Remote Sens. 2025, 17(3), 456; https://doi.org/10.3390/rs17030456 - 29 Jan 2025
Abstract
Spectral reconstruction (SR) from multispectral images (MSIs) is a crucial task in remote sensing image processing, aiming to enhance the spectral resolution of MSIs to produce hyperspectral images (HSIs). However, most existing deep learning-based SR methods primarily focus on deeper network architectures, often overlooking the importance of extracting multiscale spatial and spectral features in the MSIs. To bridge this gap, this paper proposes a multiscale spatial–spectral dense residual attention fusion network (MS2Net) for SR. Specifically, considering the multiscale nature of the land-cover types in the MSIs, a three-dimensional multiscale hierarchical residual module is designed and embedded in the head of the proposed MS2Net to extract spatial and spectral multiscale features. Subsequently, we employ a two-pathway architecture to extract deep spatial and spectral features. Both pathways are constructed with a single-shot dense residual module for efficient feature learning and a residual composite soft attention module to enhance salient spatial and spectral features. Finally, the spatial and spectral features extracted from the different pathways are integrated using an adaptive weighted feature fusion module to reconstruct HSIs. Extensive experiments on both simulated and real-world datasets demonstrate that the proposed MS2Net achieves superior performance compared to state-of-the-art SR methods. Moreover, classification experiments on the reconstructed HSIs show that the proposed MS2Net-reconstructed HSIs achieve classification accuracy that is comparable to that of real HSIs.
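The adaptive weighted feature fusion step can be sketched as a small gating module that learns per-channel weights for the spatial and spectral pathways. The pooling-based gate below is a minimal illustration of the idea under assumed shapes, not the paper's exact design.

```python
# A minimal sketch of adaptive weighted fusion of two feature pathways;
# layer choices and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class AdaptiveWeightedFusion(nn.Module):
    """Fuse spatial and spectral feature maps with learned channel weights."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-channel weight for each pathway from pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_spatial: torch.Tensor, f_spectral: torch.Tensor):
        w = self.gate(torch.cat([f_spatial, f_spectral], dim=1))
        w_spa, w_spe = torch.chunk(w, 2, dim=1)
        return w_spa * f_spatial + w_spe * f_spectral

# Usage: the fused features would then feed a reconstruction head that maps
# them to the target number of hyperspectral bands.
fusion = AdaptiveWeightedFusion(channels=64)
fused = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```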
Figures:
  • Figure 1: Multiscale spatial and spectral features captured by ZY-1 02D over Jiaxing, China [42]: spatial differences among urban (small-scale complex structure), forest (medium-scale canopy coverage), and ocean (large-scale homogeneous region) areas, and their distinct spectral signatures across multiple bands.
  • Figure 2: Overall flowchart of the proposed MS2Net ("©" denotes concatenation, "⊗" matrix multiplication, and "⊕" element-wise addition).
  • Figure 3: Illustration of the 3-D MHR module.
  • Figure 4: Illustration of the RCSA module.
  • Figures 5–8: Selected-band hyperspectral reconstruction error maps for the IP, PU, CQ, and JX datasets.
  • Figures 9–12: Spectral response curves for three selected sample points in the IP, PU, CQ, and JX datasets.
  • Figure 13: Convergence of the proposed MS2Net on the four datasets.
  • Figures 14–15: Full-pixel classification maps of reconstructed HSIs for different methods on the IP and PU datasets.
20 pages, 4706 KiB  
Article
Band Selection Algorithm Based on Multi-Feature and Affinity Propagation Clustering
by Junbin Zhuang, Wenying Chen, Xunan Huang and Yunyi Yan
Remote Sens. 2025, 17(2), 193; https://doi.org/10.3390/rs17020193 - 8 Jan 2025
Viewed by 393
Abstract
Hyperspectral images are high-dimensional data containing rich spatial, spectral, and radiometric information, widely used in geological mapping, urban remote sensing, and other fields. However, due to the characteristics of hyperspectral remote sensing images—such as high redundancy, strong correlation, and large data volumes—the classification and recognition of these images present significant challenges. In this paper, we propose a band selection method (GE-AP) based on multi-feature extraction and the Affinity Propagation (AP) clustering algorithm for dimensionality reduction of hyperspectral images, aiming to improve classification accuracy and processing efficiency. In this method, texture features of the band images are extracted using the Gray-Level Co-occurrence Matrix (GLCM), and the Euclidean distance between bands is calculated. A similarity matrix is then constructed by integrating multi-feature information. The AP algorithm clusters the bands of the hyperspectral images to achieve effective band dimensionality reduction. Through simulation and comparison experiments evaluating the overall classification accuracy (OA) and Kappa coefficient, it was found that the GE-AP method achieves the highest OA and Kappa coefficient compared to three other methods, with maximum increases of 8.89% and 13.18%, respectively. This verifies that the proposed method outperforms traditional single-information methods in handling spatial and spectral redundancy between bands, demonstrating good adaptability and stability.
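The pipeline can be sketched with standard libraries: GLCM texture features and inter-band Euclidean distances are combined into a similarity matrix, which is clustered with scikit-learn's Affinity Propagation. The equal-weight combination (alpha) and the normalizations below are illustrative assumptions rather than the paper's exact formulation.

```python
# A minimal sketch of GLCM + Euclidean-distance band selection via
# Affinity Propagation; the weighting scheme is an assumption.
import numpy as np
from scipy.spatial.distance import cdist
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import AffinityPropagation

def glcm_features(band: np.ndarray) -> np.ndarray:
    """Contrast/homogeneity/energy/correlation of one band (8-bit rescaled)."""
    img = ((band - band.min()) / (np.ptp(band) + 1e-12) * 255).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

def select_bands(cube: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """cube: (H, W, B) hyperspectral image; returns indices of exemplar bands."""
    n_bands = cube.shape[-1]
    tex = np.stack([glcm_features(cube[..., b]) for b in range(n_bands)])
    flat = cube.reshape(-1, n_bands).T            # (B, H*W) spectra per band
    d_spec = cdist(flat, flat)                    # inter-band Euclidean distance
    d_tex = cdist(tex, tex)                       # texture-feature distance
    # Similarity = negative weighted distance (AP expects similarities).
    sim = -(alpha * d_spec / (d_spec.max() + 1e-12)
            + (1 - alpha) * d_tex / (d_tex.max() + 1e-12))
    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
    return ap.cluster_centers_indices_            # exemplar bands = selection
```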
Figures:
  • Figure 1: Transmission of the two kinds of messages in the AP algorithm.
  • Figure 2: Band selection flowchart based on multi-feature extraction and the Affinity Propagation clustering algorithm.
  • Figure 3: Ground-object distribution map.
  • Figure 4: First three principal-component images of the Pavia University hyperspectral image.
  • Figure 5: Hyperspectral image component lithotripsy.
  • Figure 6: First three principal-component images of the Pavia Center hyperspectral image.
  • Figure 7: Band-index line plots of the Pavia University and Pavia Center hyperspectral images based on the ABS method.
  • Figure 8: Grayscale map of the correlation coefficients.
  • Figure 9: Correlation curves between nearest-neighbor bands of the hyperspectral images.
  • Figure 10: Line chart of band coefficients in each subspace of the hyperspectral image.
  • Figure 11: Representative band image of Pavia University.
  • Figure 12: Representative band image of Pavia Center.
27 pages, 7948 KiB  
Article
SSUM: Spatial–Spectral Unified Mamba for Hyperspectral Image Classification
by Song Lu, Min Zhang, Yu Huo, Chenhao Wang, Jingwen Wang and Chenyu Gao
Remote Sens. 2024, 16(24), 4653; https://doi.org/10.3390/rs16244653 - 12 Dec 2024
Viewed by 694
Abstract
How to effectively extract spectral and spatial information and apply it to hyperspectral image classification (HSIC) has been a hot research topic. In recent years, the transformer-based HSIC models have attracted much interest due to their advantages in long-distance modeling of spatial and spectral features in hyperspectral images (HSIs). However, the transformer-based method suffers from high computational complexity, especially in HSIC tasks that require processing large amounts of data. In addition, the spatial variability inherent in HSIs limits the performance improvement of HSIC. To handle these challenges, a novel Spectral–Spatial Unified Mamba (SSUM) model is proposed, which introduces the State Space Model (SSM) into HSIC tasks to reduce computational complexity and improve model performance. The SSUM model is composed of two branches, i.e., the Spectral Mamba branch and the Spatial Mamba branch, designed to extract the features of HSIs from both spectral and spatial perspectives. Specifically, in the Spectral Mamba branch, a nearest-neighbor spectrum fusion (NSF) strategy is proposed to alleviate the interference caused by the spatial variability (i.e., the same object having different spectra). In addition, a novel sub-spectrum scanning (SS) mechanism is proposed, which scans along the sub-spectrum dimension to enhance the model’s perception of subtle spectral details. In the Spatial Mamba branch, a Spatial Mamba (SM) module is designed by combining a 2D Selective Scan Module (SS2D) and Spatial Attention (SA) into a unified network to sufficiently extract the spatial features of HSIs. Finally, the classification results are derived by uniting the output features of the Spectral Mamba and Spatial Mamba branches, thus improving the comprehensive performance of HSIC. The ablation studies verify the effectiveness of the proposed NSF, SS, and SM. Comparison experiments on four public HSI datasets show the superiority of the proposed SSUM.
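The nearest-neighbor spectrum fusion (NSF) idea can be illustrated as averaging each pixel's spectrum with the most similar spectra in its spatial neighborhood, suppressing same-object/different-spectrum variability before the spectral branch. The window size, the number of neighbors k, and the plain averaging below are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch of nearest-neighbor spectrum fusion (NSF); window size,
# k, and mean fusion are illustrative assumptions.
import numpy as np

def nsf(cube: np.ndarray, window: int = 5, k: int = 4) -> np.ndarray:
    """cube: (H, W, B). Returns a cube of fused spectra with the same shape."""
    h, w, n_bands = cube.shape
    r = window // 2
    out = np.empty_like(cube)
    for i in range(h):
        for j in range(w):
            # All spectra in the local spatial window (clipped at borders).
            patch = cube[max(0, i - r):i + r + 1,
                         max(0, j - r):j + r + 1].reshape(-1, n_bands)
            center = cube[i, j]
            # Keep the k spectra closest to the center pixel (plus itself).
            dist = np.linalg.norm(patch - center, axis=1)
            nearest = patch[np.argsort(dist)[:k + 1]]
            out[i, j] = nearest.mean(axis=0)
    return out
```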
Figures:
  • Graphical abstract.
  • Figure 1: (a) Overall architecture of the proposed SSUM; (b) Spatial Mamba; (c) nearest-neighbor spectrum fusion (NSF) strategy; (d) sub-spectrum scanning (SS) mechanism; (e) 2D Selective Scan Module (SS2D).
  • Figure 2: Improved Spatial Attention mechanism applied in this method.
  • Figures 3–6: False-color maps and ground truth of the Indian Pines, Pavia University, Salinas Valley, and WHU-Hi-LongKou datasets.
  • Figures 7–10: Classification maps produced by KNN, RF, 1DCNN, 2DCNN, HybridSN, IRTS-3DCNN, CasRNN, ViT, SpectralFormer, GraphGST, SS-Mamba, and SSUM on the Indian Pines, Pavia University, Salinas Valley, and WHU-Hi-LongKou datasets, alongside the false-color maps and ground truth.
  • Figure 11: Impact of (a) neighborhood size, (b) patch size, (c) sub-spectrum length, and (d) number of bands after PCA on the proposed SSUM.
  • Figure 12: Classification-map detail analysis: Salinas Valley and LongKou classified by GraphGST versus SSUM.
20 pages, 6585 KiB  
Article
Remote Sensing Image Denoising Based on Feature Interaction Complementary Learning
by Shaobo Zhao, Youqiang Dong, Xi Cheng, Yu Huo, Min Zhang and Hai Wang
Remote Sens. 2024, 16(20), 3820; https://doi.org/10.3390/rs16203820 - 14 Oct 2024
Viewed by 1082
Abstract
Optical remote sensing images are of considerable significance in a plethora of applications, including feature recognition and scene semantic segmentation. However, the quality of remote sensing images is compromised by the influence of various types of noise, which has a detrimental impact on their practical applications in the aforementioned fields. Furthermore, the intricate texture characteristics inherent to remote sensing images present a significant hurdle in the removal of noise and the restoration of image texture details. In order to address these challenges, we propose a feature interaction complementary learning (FICL) strategy for remote sensing image denoising. In practical terms, the network is comprised of four main components: noise predictor (NP), reconstructed image predictor (RIP), feature interaction module (FIM), and fusion module. The combination of these modules serves to not only complete the fusion of the prediction results of NP and RIP, but also to achieve a deep coupling of the characteristics of the two predictors. Consequently, the advantages of noise prediction and reconstructed image prediction can be combined, thereby enhancing the denoising capability of the model. Furthermore, comprehensive experimentation on both synthetic Gaussian noise datasets and real-world denoising datasets has demonstrated that FICL has achieved favorable outcomes, emphasizing the efficacy and robustness of the proposed framework.
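The complementary pairing of the two predictors can be sketched as follows: the noise predictor yields a residual-style estimate (input minus predicted noise), the reconstructed image predictor yields a direct estimate, and a fusion step merges the two. The 1×1-convolution fusion and the module interfaces below are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of fusing a noise predictor (NP) with a reconstructed
# image predictor (RIP); module internals are illustrative assumptions.
import torch
import torch.nn as nn

class FusionDenoiser(nn.Module):
    def __init__(self, np_net: nn.Module, rip_net: nn.Module, channels: int = 3):
        super().__init__()
        self.np_net = np_net     # predicts the noise map
        self.rip_net = rip_net   # predicts the clean image directly
        # Merge the two estimates with a learned 1x1 convolution.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        denoised_np = noisy - self.np_net(noisy)   # residual-style estimate
        denoised_rip = self.rip_net(noisy)         # direct estimate
        return self.fuse(torch.cat([denoised_np, denoised_rip], dim=1))
```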
Figures:
  • Graphical abstract.
  • Figure 1: Structure diagram of the feature interaction complementary learning (FICL) remote sensing image denoising strategy.
  • Figure 2: (a) Res2Net module and (b) MSResNet module.
  • Figure 3: RIP structure diagram based on multi-scale ResNet (MSResNet); MSResUnet adopts a Unet structure improved with CBAM and MSResNet. k, n, and s denote the kernel size, channel number, and stride of each layer, respectively.
  • Figure 4: Structure diagram of the Unet-based noise predictor (NP); k, n, and s denote the kernel size, channel number, and stride of each convolution and deconvolution layer, respectively.
  • Figure 5: Structure diagram of the FIM.
  • Figure 6: Structure diagram of the fusion module.
  • Figures 7–8: Visual comparison of denoising performance on the NWPU-RESISC45 dataset at noise levels σ = 15 and σ = 50 for BM3D, DnCNN, DUMNP, CBDNet, DGCL, and FICL; red boxes mark the magnified areas.
  • Figures 9–10: Visual comparison of denoising performance on the UCMerced_LandUse dataset at noise levels σ = 15 and σ = 50 for the same methods.
  • Figures 11–12: Denoising results of the compared methods on two examples from the PolyU dataset.
  • Figures 13–14: Denoising results of the compared methods on two examples from the SIDD dataset.