Single-Image Super Resolution of Remote Sensing Images with Real-World Degradation Modeling
<p>Framework of real-world degradation modeling. Remote sensing images in the original dataset are not paired. First, the blur kernels and noise patches are collected from the original dataset, forming <math display="inline"><semantics> <mi mathvariant="script">K</mi> </semantics></math> and <math display="inline"><semantics> <mi mathvariant="script">N</mi> </semantics></math>. Then, the paired dataset is generated using <math display="inline"><semantics> <mi mathvariant="script">K</mi> </semantics></math>, <math display="inline"><semantics> <mi mathvariant="script">N</mi> </semantics></math> and real RSIs. At last, a novel network is trained to estimate SR results from LR inputs.</p> "> Figure 2
<p>Blur kernel estimation with KernelGAN. <span class="html-italic"><b>D</b></span> tries to differentiate between real patches and those generated by <span class="html-italic"><b>G</b></span> (fake). <span class="html-italic"><b>G</b></span> learns to downsample the image while fooling <span class="html-italic"><b>D</b></span>.</p>
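Because KernelGAN's generator <b>G</b> contains only linear convolutions (no activations), the whole stack collapses into a single explicit blur kernel, which is what gets stored in <math display="inline"><semantics> <mi mathvariant="script">K</mi> </semantics></math>. A minimal NumPy sketch of that collapse, with illustrative filter values rather than learned ones:

```python
import numpy as np

def conv2d_full(a, b):
    """Full 2-D linear convolution, pure NumPy (np.convolve is 1-D only)."""
    ah, aw = a.shape
    bh, bw = b.shape
    out = np.zeros((ah + bh - 1, aw + bw - 1))
    for i in range(bh):
        for j in range(bw):
            # scatter-add a copy of `a` shifted by (i, j), weighted by b[i, j]
            out[i:i + ah, j:j + aw] += b[i, j] * a
    return out

# The generator's layers are linear, so their composition is itself a single
# convolution: convolving the per-layer filters together yields the kernel.
filters = [
    np.array([[0.25, 0.5, 0.25]]).T @ np.array([[0.25, 0.5, 0.25]]),  # separable tent
    np.ones((3, 3)) / 9.0,                                            # box filter
]
k = filters[0]
for f in filters[1:]:
    k = conv2d_full(k, f)
k /= k.sum()  # blur kernels are normalized to sum to 1
print(k.shape)  # (5, 5)
```

Convolution is associative, so filtering an image with `k` is equivalent to passing it through the layer stack; KernelGAN additionally constrains the extracted kernel (sum-to-one, centered mass) during training.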
<p>Noise patch extraction from real RSIs.</p>
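A common way to harvest such noise patches (used, e.g., in the real-world SR pipeline of Ji et al. [22]) is to scan the source image with a sliding window and keep only low-variance patches, where image content is negligible and the residual is mostly sensor noise. A hedged NumPy sketch; the window size and variance threshold here are illustrative, not the paper's values:

```python
import numpy as np

def collect_noise_patches(img, patch=16, stride=16, max_var=0.002):
    """Collect candidate noise patches from smooth regions of a real RSI.

    Patches whose variance falls below `max_var` are assumed to contain
    mostly sensor noise; the mean is subtracted so only the zero-mean
    noise component is kept for later injection.
    """
    patches = []
    h, w = img.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = img[y:y + patch, x:x + patch]
            if p.var() < max_var:
                patches.append(p - p.mean())
    return patches

# Toy image: a smooth noisy half (accepted) next to a textured half (rejected).
rng = np.random.default_rng(1)
flat = 0.5 + 0.01 * rng.standard_normal((64, 64))
textured = rng.random((64, 64))
img = np.concatenate([flat, textured], axis=1)
noise_set = collect_noise_patches(img)
print(len(noise_set))  # 16 patches, all from the smooth half
```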
<p>Network architectures of the residual balanced attention network (RBAN) generator and the modified UNet discriminator. The basic unit of RBAN is the residual balanced attention group (RBAG), which is mainly composed of a group of residual blocks (RB). The balanced attention module (BAM) is an essential component in RBAN and RBAG to improve performance. The modified UNet model is employed as a pixel-wise discriminator for more realistic reconstructions. It consists of multiple encoders and decoders to extract and reconstruct features at multiple scales.</p>
<p>Visual comparison and assessment metrics of the proposed model and 7 other methods on 2 test images from the AID dataset, “airport_15” and “commercial_67”, at a ×4 scale factor. The performances are compared under 3 settings: ① trained and tested on the bicubic AID dataset, ② trained on the bicubic AID dataset and tested on the realistic AID dataset, and ③ trained and tested on the realistic AID dataset. Since both the bicubic and realistic AID datasets are synthesized, all referenced and non-referenced metrics are provided.</p>
<p>Visual comparison and assessment metrics of the proposed model and the other eight methods on three test images from the UCMERCED dataset, “beach30”, “harbor69”, and “storagetanks87”, at ×4 scale. The performances are compared under 2 settings: ① trained on the bicubic AID dataset and tested on the realistic UCMERCED dataset; ② trained on the realistic AID dataset and tested on the realistic UCMERCED dataset. As the images from the UCMERCED dataset were used directly for testing without downsampling, only non-referenced metrics are provided.</p>
<p>Visual comparison and assessment metrics of the proposed model and the other eight methods on three test images from the RSIs-CB256 dataset, “city_building(57)”, “dam(83)”, and “storage_room(715)”, at ×4 scale. The performances are compared under 2 settings: ① trained on the bicubic AID dataset and tested on the realistic RSIs-CB256 dataset; ② trained on the realistic AID dataset and tested on the realistic RSIs-CB256 dataset. As the images from the RSIs-CB256 dataset were used directly for testing without downsampling, only non-referenced metrics are provided.</p>
<p>Visual comparison and assessment metrics of the ablation study models on three test images from the AID dataset, “center_233”, “parking_226”, and “school_33”, at ×4 scale.</p>
Abstract
1. Introduction
2. Related Work
2.1. CNN-Based SISR Methods
2.2. SISR of RSIs
3. Methodology
3.1. Realistic Degradation for SISR of RSIs
Algorithm 1 Realistic data pair generation
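The algorithm body is not reproduced here, but its flow follows the framework in Figure 1: draw a kernel from <math display="inline"><semantics> <mi mathvariant="script">K</mi> </semantics></math> and a noise patch from <math display="inline"><semantics> <mi mathvariant="script">N</mi> </semantics></math>, blur the HR image, downsample directly, and inject the noise. A minimal pure-NumPy sketch; the function names and box kernel are illustrative stand-ins, not the paper's exact implementation:

```python
import numpy as np

def filter2d(img, k):
    """'Same'-size 2-D filtering with reflect padding, pure NumPy."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="reflect")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def degrade(hr, kernel, noise_patch, scale=4):
    """LR = (hr blurred with `kernel`), subsampled by `scale`, plus real noise."""
    lr = filter2d(hr, kernel)[::scale, ::scale]       # blur, then direct downsampling
    reps = (-(-lr.shape[0] // noise_patch.shape[0]),  # ceil division
            -(-lr.shape[1] // noise_patch.shape[1]))
    noise = np.tile(noise_patch, reps)[:lr.shape[0], :lr.shape[1]]
    return np.clip(lr + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
hr = rng.random((64, 64))                 # stand-in HR image in [0, 1]
kernel = np.ones((5, 5)) / 25.0           # stand-in for a KernelGAN estimate from K
noise = 0.01 * rng.standard_normal((16, 16))  # stand-in patch from N
lr = degrade(hr, kernel, noise, scale=4)
print(lr.shape)  # (16, 16)
```

Repeating this over the whole HR set, with kernels and patches sampled at random, yields the paired (LR, HR) training data.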
3.2. Estimation of Blur Kernel and Noise Patches
3.3. Residual Balanced Attention Network (RBAN)
3.4. Loss Function
4. Experiments and Analysis
4.1. Experimental Settings
4.2. Experiments on Referenced Evaluation
4.3. Experiments on Non-Referenced Evaluation
5. Discussion
5.1. Impact of Blur Kernel Estimation
5.2. Impact of Noise Patch Estimation
5.3. Impact of BAM
5.4. Impact of Discriminator
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Rau, J.Y.; Jhan, J.P.; Hsu, Y.C. Analysis of oblique aerial images for land cover and point cloud classification in an urban environment. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1304–1319. [Google Scholar] [CrossRef]
- Lang, J.; Lyu, S.; Li, Z.; Ma, Y.; Su, D. An investigation of ice surface albedo and its influence on the high-altitude lakes of the Tibetan Plateau. Remote Sens. 2018, 10, 218. [Google Scholar] [CrossRef]
- Voigt, S.; Kemper, T.; Riedlinger, T.; Kiefl, R.; Scholte, K.; Mehl, H. Satellite image analysis for disaster and crisis-management support. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1520–1528. [Google Scholar] [CrossRef]
- Ghaffarian, S.; Kerle, N.; Filatova, T. Remote sensing-based proxies for urban disaster risk management and resilience: A review. Remote Sens. 2018, 10, 1760. [Google Scholar] [CrossRef] [Green Version]
- Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2015, 54, 1349–1362. [Google Scholar] [CrossRef] [Green Version]
- Roy, P.S.; Roy, A.; Joshi, P.K.; Kale, M.P.; Srivastava, V.K.; Srivastava, S.K.; Dwevidi, R.S.; Joshi, C.; Behera, M.D.; Meiyappan, P.; et al. Development of decadal (1985–1995–2005) land use and land cover database for India. Remote Sens. 2015, 7, 2401–2430. [Google Scholar] [CrossRef] [Green Version]
- Wang, Z.; Chen, J.; Hoi, S.C. Deep Learning for Image Super-resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3365–3387. [Google Scholar] [CrossRef] [Green Version]
- Wei, P.; Lu, H.; Timofte, R.; Lin, L.; Zuo, W.; Pan, Z.; Li, B.; Xi, T.; Fan, Y.; Zhang, G.; et al. AIM 2020 challenge on real image super-resolution: Methods and results. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 392–422. [Google Scholar]
- Bhat, G.; Danelljan, M.; Timofte, R. NTIRE 2021 challenge on burst super-resolution: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 613–626. [Google Scholar]
- Yue, L.; Shen, H.; Li, J.; Yuan, Q.; Zhang, H.; Zhang, L. Image super-resolution: The techniques, applications, and future. Signal Process. 2016, 128, 389–408. [Google Scholar] [CrossRef]
- Chen, H.; He, X.; Qing, L.; Wu, Y.; Ren, C.; Sheriff, R.E.; Zhu, C. Real-world single image super-resolution: A brief review. Inf. Fusion 2022, 79, 124–145. [Google Scholar] [CrossRef]
- McKinley, S.; Levine, M. Cubic spline interpolation. Coll. Redwoods 1998, 45, 1049–1060. [Google Scholar]
- Lu, T.; Wang, J.; Zhang, Y.; Wang, Z.; Jiang, J. Satellite image super-resolution via multi-scale residual deep neural network. Remote Sens. 2019, 11, 1588. [Google Scholar] [CrossRef] [Green Version]
- Xiong, Y.; Guo, S.; Chen, J.; Deng, X.; Sun, L.; Zheng, X.; Xu, W. Improved SRGAN for remote sensing image super-resolution across locations and sensors. Remote Sens. 2020, 12, 1263. [Google Scholar] [CrossRef] [Green Version]
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef] [Green Version]
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
- Li, H.; Dou, X.; Tao, C.; Wu, Z.; Chen, J.; Peng, J.; Deng, M.; Zhao, L. RSI-CB: A large-scale remote sensing image classification benchmark using crowdsourced data. Sensors 2020, 20, 1594. [Google Scholar] [CrossRef] [Green Version]
- Zhang, S.; Yuan, Q.; Yuan, Q.; Li, J.; Sun, J.; Zhang, X. Scene-adaptive remote sensing image super-resolution using a multiscale attention network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4764–4779. [Google Scholar] [CrossRef]
- Zhang, X.; Li, Z.; Zhang, T.; Liu, F.; Tang, X.; Chen, P.; Jiao, L. Remote Sensing Image Super-Resolution via Dual-Resolution Network Based on Connected Attention Mechanism. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5611013. [Google Scholar] [CrossRef]
- Zhang, D.; Shao, J.; Li, X.; Shen, H.T. Remote Sensing Image Super-Resolution via Mixed High-Order Attention Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5183–5196. [Google Scholar] [CrossRef]
- Bell-Kligler, S.; Shocher, A.; Irani, M. Blind super-resolution kernel estimation using an internal-GAN. Adv. Neural Inf. Process. Syst. 2019, 32, 1–10. [Google Scholar]
- Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 1914–1923. [Google Scholar]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1833–1844. [Google Scholar]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
- Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse-representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; Springer: Berlin/Heidelberg, Germany; pp. 711–730. [Google Scholar]
- Timofte, R.; De Smet, V.; Van Gool, L. Anchored neighborhood regression for fast example-based super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1920–1927. [Google Scholar]
- Caballero, J.; Ledig, C.; Aitken, A.; Acosta, A.; Totz, J.; Wang, Z.; Shi, W. Real-time video super-resolution with spatio-temporal networks and motion compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4778–4787. [Google Scholar]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; Volume 60, pp. 1646–1654. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 105–114. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1132–1140. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Anwar, S.; Barnes, N. Densely residual laplacian super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1192–1204. [Google Scholar] [CrossRef]
- Zhang, X.; Chen, Q.; Ng, R.; Koltun, V. Zoom to learn, learn to zoom. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3762–3770. [Google Scholar]
- Yuan, Y.; Liu, S.; Zhang, J.; Zhang, Y.; Dong, C.; Lin, L. Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 701–710. [Google Scholar]
- Nguyen, N.; Milanfar, P. A wavelet-based interpolation-restoration method for superresolution (wavelet superresolution). Circuits Syst. Signal Process. 2000, 19, 321–338. [Google Scholar] [CrossRef]
- Li, F.; Jia, X.; Fraser, D.; Lambert, A. Super resolution for remote sensing images based on a universal hidden Markov tree model. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1270–1278. [Google Scholar]
- Pan, Z.; Yu, J.; Huang, H.; Hu, S.; Zhang, A.; Ma, H.; Sun, W. Super-resolution based on compressive sensing and structural self-similarity for remote sensing images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4864–4876. [Google Scholar] [CrossRef]
- Lei, S.; Shi, Z.; Zou, Z. Super-resolution for remote sensing images via local–global combined network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1243–1247. [Google Scholar] [CrossRef]
- Jiang, K.; Wang, Z.; Yi, P.; Jiang, J.; Xiao, J.; Yao, Y. Deep distillation recursive network for remote sensing imagery super-resolution. Remote Sens. 2018, 10, 1700. [Google Scholar] [CrossRef] [Green Version]
- Ma, W.; Pan, Z.; Guo, J.; Lei, B. Achieving super-resolution remote sensing images via the wavelet transform combined with the recursive res-net. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3512–3527. [Google Scholar] [CrossRef]
- Guo, M.; Zhang, Z.; Liu, H.; Huang, Y. NDSRGAN: A Novel Dense Generative Adversarial Network for Real Aerial Imagery Super-Resolution Reconstruction. Remote Sens. 2022, 14, 1574. [Google Scholar] [CrossRef]
- Michaeli, T.; Irani, M. Nonparametric blind super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 945–952. [Google Scholar]
- Wang, F.; Hu, H.; Shen, C. BAM: A Balanced Attention Mechanism for Single Image Super Resolution. arXiv 2021, arXiv:2104.07566. [Google Scholar]
- Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2366–2369. [Google Scholar]
- Chang, H.W.; Zhang, Q.W.; Wu, Q.G.; Gan, Y. Perceptual image quality assessment by independent feature detector. Neurocomputing 2015, 151, 1142–1152. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 586–595. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, Q.; Lin, M.; Yang, G.; He, C. No-reference color image quality assessment: From entropy to perceptual quality. EURASIP J. Image Video Process. 2019, 2019, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep back-projection networks for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1664–1673. [Google Scholar]
| Train | Test | Methods | PSNR↑ | SSIM↑ | IFS↑ | LPIPS↓ | NIQE↓ | ENIQA↓ |
|---|---|---|---|---|---|---|---|---|
| Bicubic AID | Bicubic AID | Bicubic | 25.73 | 0.7267 | 0.8416 | 0.5532 | 6.402 | 0.4534 |
| | | SRCNN [24] | 26.54 | 0.7670 | 0.8629 | 0.4309 | 7.412 | 0.3288 |
| | | VDSR [28] | 27.00 | 0.7861 | 0.8736 | 0.3805 | 7.140 | 0.2815 |
| | | DDBPN [51] | 27.10 | 0.7906 | 0.8759 | 0.3741 | 6.854 | 0.2836 |
| | | EDSR [30] | 27.24 | 0.7960 | 0.8794 | 0.3688 | 6.739 | 0.2862 |
| | | SRGAN [29] | 26.10 | 0.7527 | 0.8605 | 0.3273 | 6.307 | 0.1720 |
| | | DRLN [32] | 27.43 | 0.8034 | 0.8822 | 0.3499 | 6.706 | 0.2702 |
| | | RBAN-UNet | 27.25 | 0.7944 | 0.8752 | 0.2710 | 5.318 | 0.2082 |
| Bicubic AID | Realistic AID | Bicubic | 23.65 | 0.6332 | 0.7417 | 0.7207 | 7.112 | 0.5602 |
| | | SRCNN [24] | 23.93 | 0.6483 | 0.7556 | 0.6478 | 7.663 | 0.5053 |
| | | VDSR [28] | 23.90 | 0.6479 | 0.7563 | 0.6493 | 7.425 | 0.5077 |
| | | DDBPN [51] | 23.91 | 0.6484 | 0.7570 | 0.6486 | 7.207 | 0.5143 |
| | | EDSR [30] | 23.91 | 0.6485 | 0.7576 | 0.6600 | 7.360 | 0.5217 |
| | | SRGAN [29] | 23.64 | 0.6338 | 0.7576 | 0.6270 | 5.615 | 0.4188 |
| | | DRLN [32] | 23.92 | 0.6488 | 0.7584 | 0.6672 | 7.265 | 0.5274 |
| | | RBAN-UNet | 23.88 | 0.6468 | 0.7570 | 0.6552 | 6.683 | 0.5349 |
| Realistic AID | Realistic AID | Bicubic | 23.65 | 0.6332 | 0.7417 | 0.7207 | 7.112 | 0.5602 |
| | | SRCNN [24] | 24.81 | 0.6926 | 0.7803 | 0.5607 | 7.398 | 0.4013 |
| | | VDSR [28] | 25.14 | 0.7109 | 0.7936 | 0.4921 | 8.266 | 0.3299 |
| | | DDBPN [51] | 25.29 | 0.7180 | 0.8009 | 0.4827 | 7.915 | 0.3337 |
| | | EDSR [30] | 25.45 | 0.7250 | 0.8079 | 0.4750 | 7.404 | 0.3462 |
| | | SRGAN [29] | 24.14 | 0.6649 | 0.7877 | 0.3749 | 4.547 | 0.1647 |
| | | DRLN [32] | 24.83 | 0.7038 | 0.7840 | 0.5523 | 7.497 | 0.3814 |
| | | RBAN-UNet | 25.68 | 0.7336 | 0.8160 | 0.3548 | 5.736 | 0.2462 |
| Train | Methods | NIQE↓ | ENIQA↓ |
|---|---|---|---|
| Bicubic AID | Bicubic | 6.362 | 0.5368 |
| | SRCNN [24] | 7.431 | 0.4336 |
| | VDSR [28] | 7.337 | 0.4073 |
| | DDBPN [51] | 7.265 | 0.4064 |
| | EDSR [30] | 6.848 | 0.4209 |
| | SRGAN [29] | 5.719 | 0.2827 |
| | DRLN [32] | 6.400 | 0.4219 |
| | RBAN-UNet | 5.237 | 0.3827 |
| Realistic AID | Bicubic | 6.362 | 0.5368 |
| | SRCNN [24] | 7.940 | 0.3907 |
| | VDSR [28] | 6.295 | 0.3791 |
| | DDBPN [51] | 6.085 | 0.3646 |
| | EDSR [30] | 5.961 | 0.3909 |
| | SRGAN [29] | 4.329 | 0.1516 |
| | DRLN [32] | 7.033 | 0.4091 |
| | RBAN-UNet | 4.709 | 0.3169 |
| Train | Methods | NIQE↓ | ENIQA↓ |
|---|---|---|---|
| Bicubic AID | Bicubic | 6.896 | 0.5670 |
| | SRCNN [24] | 6.424 | 0.4941 |
| | VDSR [28] | 6.169 | 0.4876 |
| | DDBPN [51] | 6.091 | 0.4908 |
| | EDSR [30] | 6.159 | 0.5011 |
| | SRGAN [29] | 5.453 | 0.4019 |
| | DRLN [32] | 6.133 | 0.5003 |
| | RBAN-UNet | 5.308 | 0.5393 |
| Realistic AID | Bicubic | 6.896 | 0.5670 |
| | SRCNN [24] | 6.780 | 0.4452 |
| | VDSR [28] | 6.194 | 0.4462 |
| | DDBPN [51] | 6.178 | 0.4475 |
| | EDSR [30] | 6.426 | 0.4847 |
| | SRGAN [29] | 3.979 | 0.2143 |
| | DRLN [32] | 6.123 | 0.4917 |
| | RBAN-UNet | 4.953 | 0.4369 |
| Models | PSNR↑ | SSIM↑ | IFS↑ | LPIPS↓ | NIQE↓ | ENIQA↓ |
|---|---|---|---|---|---|---|
| Bicubic | 23.652 | 0.6332 | 0.7417 | 0.7207 | 7.112 | 0.5602 |
| RBAN-UNet (w/o degradation) | 23.885 | 0.6468 | 0.7570 | 0.6552 | 6.683 | 0.5349 |
| RBAN-UNet (w/o blur) | 23.722 | 0.6411 | 0.7528 | 0.6316 | 6.649 | 0.5335 |
| RBAN-UNet (w/o noise) | 24.980 | 0.7053 | 0.7876 | 0.4403 | 5.239 | 0.2739 |
| RBAN-UNet (w/o BAM) | 25.610 | 0.7310 | 0.8137 | 0.3602 | 5.745 | 0.2487 |
| RBAN-VGG | 25.029 | 0.7186 | 0.8000 | 0.2722 | 4.877 | 0.1454 |
| RBAN-UNet | 25.676 | 0.7336 | 0.8160 | 0.3548 | 5.736 | 0.2462 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, J.; Xu, T.; Li, J.; Jiang, S.; Zhang, Y. Single-Image Super Resolution of Remote Sensing Images with Real-World Degradation Modeling. Remote Sens. 2022, 14, 2895. https://doi.org/10.3390/rs14122895