A Multi-Stage Progressive Pansharpening Network Based on Detail Injection with Redundancy Reduction
Figure 1. The architecture of the proposed method.
Figure 2. The architecture of the CFEM.
Figure 3. (a) The architecture of the overall SDEM. (b) The architecture of spatial attention.
Figure 4. The architecture of the overall DRRM.
Figure 5. The architecture of the SRRM.
Figure 6. The architecture of the CRRM.
Figure 7. The fusion results of various methods on the QB simulation dataset.
Figure 8. The absolute error maps between the fusion results of all methods and the reference image on the QB simulation dataset.
Figure 9. The fusion results of various methods on the GaoFen1 simulation dataset.
Figure 10. The absolute error maps between the fusion results of all methods and the reference image on the GaoFen1 simulation dataset.
Figure 11. The fusion results of various methods on the WV2 simulation dataset.
Figure 12. The absolute error maps between the fusion results of all methods and the reference image on the WV2 simulation dataset.
Figure 13. The fusion results of various methods on the QB real dataset.
Figure 14. The fusion results of various methods on the GaoFen1 real dataset.
Figure 15. The fusion results of various methods on the WV2 real dataset.
Figure 16. The fusion results after the removal of certain modules. (a) LRMS image, (b) PAN image, (c) reference image, (d) w/o DRRM, (e) w/o CFEM, (f) w/o SDEM, (g) w/o (SDEM + DRRM), (h) w/o (CFEM + DRRM), (i) w/o (CFEM + SDEM), (j) none_modules, and (k) ours.
Figure 17. Residual plots comparing the outcomes of the experiments with different module omissions against the reference image. (a) Reference image, (b) w/o DRRM, (c) w/o CFEM, (d) w/o SDEM, (e) w/o (SDEM + DRRM), (f) w/o (CFEM + DRRM), (g) w/o (CFEM + SDEM), (h) none_modules, and (i) ours.
Figure 18. The output results of the various network structures along with their residual plots relative to the reference image. (a) Reference image, (b) concatenation, (c) direct injection, (d) one-stage, (e) two-stage, and (f) ours.
Abstract
1. Introduction
- Neglect of scale variability: Some methods, exemplified by [27] and [31], assume that changes in the sizes of the MS and PAN images do not affect the fusion outcome. This assumption ignores the variable information content that MS and PAN images carry at different scales, and such networks often rely on simplistic single-stage fusion, which can miss details critical to high-quality fusion.
- Undifferentiated feature handling: Techniques such as [30] feed stacked MS and PAN feature maps into the feature extraction network as a uniform input, or use identical subnetworks for MS and PAN feature extraction (e.g., [29]). This ignores the inherent differences between spectral and spatial features, reducing model interpretability and underusing the distinct characteristics each feature type offers.
- Neglect of redundancy: Most existing methods (e.g., [32]) fuse features after feature extraction, a step that inevitably generates redundant information. Because these methods make no attempt to eliminate this redundancy, the quality of the fused results suffers.
- A multi-stage progressive pansharpening framework is introduced to address the spectral and spatial dimensions across varying resolutions, incrementally refining the fused image's spectral and spatial attributes while keeping the pansharpening process stable (a schematic sketch follows this list).
- A spatial detail extraction module (SDEM) and a channel feature extraction module (CFEM) are developed to learn and distinguish the characteristics of spatial and spectral information, respectively. A detail injection technique is employed alongside them to enhance the model's interpretability.
- The designed DRRM reduces redundancy in both the spatial and channel dimensions of the feature maps after detail injection, fostering interaction among the different information streams. This not only enriches the spectral and spatial representations in the fusion results but also directly addresses the issue of feature redundancy.
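To make the data flow concrete, here is a minimal PyTorch-style sketch of the three-stage progressive pipeline with residual detail injection. The CFEM, SDEM, and DRRM bodies are placeholder stand-ins (their actual designs are given in Section 3 and Figures 2-5), and the per-stage upsampling schedule is an illustrative assumption, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Stage(nn.Module):
    """One progressive stage: spectral features from MS (CFEM stand-in),
    spatial details from PAN (SDEM stand-in), detail injection, then
    redundancy reduction (DRRM stand-in)."""

    def __init__(self, ms_bands: int, feats: int = 32, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Placeholder stand-ins for the paper's CFEM / SDEM / DRRM.
        self.cfem = nn.Sequential(nn.Conv2d(ms_bands, feats, 3, padding=1), nn.ReLU())
        self.sdem = nn.Sequential(nn.Conv2d(1, feats, 3, padding=1), nn.ReLU())
        self.drrm = nn.Sequential(nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU())
        self.to_image = nn.Conv2d(feats, ms_bands, 3, padding=1)

    def forward(self, ms, pan):
        # Upsample MS to this stage's working resolution.
        ms_up = F.interpolate(ms, scale_factor=self.scale, mode="bicubic",
                              align_corners=False)
        # Bring PAN to the same resolution so features align spatially.
        pan_s = F.interpolate(pan, size=ms_up.shape[-2:], mode="bicubic",
                              align_corners=False)
        spectral = self.cfem(ms_up)            # channel/spectral features
        details = self.sdem(pan_s)             # spatial detail features
        fused = self.drrm(spectral + details)  # injection + redundancy reduction
        return ms_up + self.to_image(fused)    # residual detail injection


class ProgressivePansharpener(nn.Module):
    """Three stages; the (2, 2, 1) scale schedule reaching the usual 4x
    MS-to-PAN resolution ratio is an assumption for illustration."""

    def __init__(self, ms_bands: int = 4, scales=(2, 2, 1)):
        super().__init__()
        self.stages = nn.ModuleList(Stage(ms_bands, scale=s) for s in scales)

    def forward(self, lrms, pan):
        out = lrms
        for stage in self.stages:
            out = stage(out, pan)
        return out


if __name__ == "__main__":
    lrms = torch.randn(1, 4, 64, 64)    # low-resolution MS, 4 bands
    pan = torch.randn(1, 1, 256, 256)   # PAN at 4x the MS resolution
    out = ProgressivePansharpener()(lrms, pan)
    print(out.shape)                    # torch.Size([1, 4, 256, 256])
```

The residual formulation `ms_up + details` mirrors the detail injection idea: each stage learns only the high-frequency component missing from the upsampled MS image, which keeps the spectral content of the MS input largely intact.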
2. Related Works
2.1. Detail Extraction and Injection
2.2. Multi-Stage Progressive Pansharpening
2.3. Attention Mechanism
3. Proposed Method
3.1. Overview of the Overall Framework
3.2. Channel Feature Extraction Module
3.3. Spatial Detail Extraction Module
3.4. Dual Redundancy Reduction Module
3.5. Design of the Loss Function
4. Experimental Results
4.1. Datasets
4.2. Evaluation Indicators and Comparison Methods
4.3. Experimental Details
4.4. Experimental Results on Simulated Datasets
4.5. Experimental Results on Real Datasets
4.6. Analysis of Ablation Experiment Results
4.7. Analysis of Experimental Results on Network Structure
4.8. Parameter and Running Time Analyses
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Doyog, N.D.; Lin, C.; Lee, Y.J.; Lumbres, R.I.C.; Daipan, B.P.O.; Bayer, D.C.; Parian, C.P. Diagnosing pristine pine forest development through pansharpened-surface-reflectance Landsat image derived aboveground biomass productivity. For. Ecol. Manag. 2021, 487, 119011.
2. Chang, N.; Bai, K.; Imen, S.; Chen, C.; Gao, W. Multisensor Satellite Image Fusion and Networking for All-Weather Environmental Monitoring. IEEE Syst. J. 2018, 12, 1341–1357.
3. Shi, W.; Meng, Q.; Zhang, L.; Zhao, M.; Su, C.; Jancsó, T. DSANet: A deep supervision-based simple attention network for efficient semantic segmentation in remote sensing imagery. Remote Sens. 2022, 14, 5399.
4. Dai, H.; Yang, Y.; Huang, S.; Wan, W.; Lu, H.; Wang, X. Pansharpening Based on Fuzzy Logic and Edge Activity. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5.
5. Su, Z.; Yang, Y.; Huang, S.; Wan, W.; Sun, J.; Tu, W.; Chen, C. STCP: Synergistic Transformer and Convolutional Neural Network for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15.
6. Xie, B.; Zhang, H.K.; Huang, B. Revealing Implicit Assumptions of the Component Substitution Pansharpening Methods. Remote Sens. 2017, 9, 443.
7. Yang, Y.; Wan, W.; Huang, S.; Lin, P.; Que, Y. A Novel Pan-Sharpening Framework Based on Matting Model and Multiscale Transform. Remote Sens. 2017, 9, 391.
8. Vivone, G.; Dalla Mura, M.; Garzelli, A.; Restaino, R.; Scarpa, G.; Ulfarsson, M.O.; Alparone, L.; Chanussot, J. A New Benchmark Based on Recent Advances in Multispectral Pansharpening: Revisiting Pansharpening With Classical and Emerging Pansharpening Methods. IEEE Geosci. Remote Sens. Mag. 2021, 9, 53–81.
9. Tu, T.M.; Huang, P.; Hung, C.L.; Chang, C.P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312.
10. Choi, J.; Yu, K.; Kim, Y. A New Adaptive Component-Substitution-Based Satellite Image Fusion by Using Partial Replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309.
11. Handayani, G.D. Pansharpening Citra Landsat-8 Metode Brovey Modif Pada Software Er Mapper; Universitas Gadjah Mada: Yogyakarta, Indonesia, 2014.
12. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
13. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored Multiscale Fusion of High-resolution MS and Pan Imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
14. Khan, M.M.; Chanussot, J.; Condat, L.; Montanvert, A. Indusion: Fusion of Multispectral and Panchromatic Images Using the Induction Scaling Technique. IEEE Geosci. Remote Sens. Lett. 2008, 5, 98–102.
15. Xu, Q.; Li, Y.; Nie, J.; Liu, Q.; Guo, M. UPanGAN: Unsupervised pansharpening based on the spectral and spatial loss constrained Generative Adversarial Network. Inf. Fusion 2023, 91, 31–46.
16. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A Variational Model for P+XS Image Fusion. Int. J. Comput. Vis. 2006, 69, 43–58.
17. Chen, C.; Li, Y.; Liu, W.; Huang, J. Image Fusion with Local Spectral Consistency and Dynamic Gradient Sparsity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2760–2765.
18. Yin, H. PAN-Guided Cross-Resolution Projection for Local Adaptive Sparse Representation-Based Pansharpening. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4938–4950.
19. Li, L.; Ma, H.; Jia, Z. Change Detection from SAR Images Based on Convolutional Neural Networks Guided by Saliency Enhancement. Remote Sens. 2021, 13, 3697.
20. Xie, Y.; Zhan, N.; Zhu, J.; Xu, B.; Chen, H.; Mao, W.; Luo, X.; Hu, Y. Landslide extraction from aerial imagery considering context association characteristics. Int. J. Appl. Earth Obs. Geoinf. 2024, 131, 103950–103962.
21. Zhu, J.; Zhang, J.; Chen, H.; Xie, Y.; Gu, H.; Lian, H. A cross-view intelligent person search method based on multi-feature constraints. Int. J. Digit. Earth 2024, 17, 2346259.
22. Chen, H.; Feng, D.; Cao, S.; Xu, W.; Xie, Y.; Zhu, J.; Zhang, H. Slice-to-slice context transfer and uncertain region calibration network for shadow detection in remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2023, 203, 166–182.
23. Xu, W.; Feng, Z.; Wan, Q.; Xie, Y.; Feng, D.; Zhu, J.; Liu, Y. Building Height Extraction From High-Resolution Single-View Remote Sensing Images Using Shadow and Side Information. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 6514–6528.
24. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 295–307.
25. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by Convolutional Neural Networks. Remote Sens. 2016, 8, 594.
26. Wei, Y.; Yuan, Q.; Meng, X.; Shen, H.; Zhang, L.; Ng, M. Multi-scale-and-depth convolutional neural network for remote sensed imagery pan-sharpening. In Proceedings of the International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017; pp. 3413–3416.
27. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the Accuracy of Multispectral Image Pansharpening by Learning a Deep Residual Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799.
28. He, L.; Rao, Y.; Li, J.; Chanussot, J.; Plaza, A.; Zhu, J.; Li, B. Pansharpening via Detail Injection Based Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1188–1204.
29. Jin, Z.; Zhuo, Y.; Zhang, T.; Jin, X.; Jing, S.; Deng, L. Remote Sensing Pansharpening by Full-Depth Feature Fusion. Remote Sens. 2022, 14, 466.
30. Zhang, T.; Jin, Z.; Jiang, T.; Vivone, G.; Deng, L. LAGConv: Local-Context Adaptive Convolution Kernels with Global Harmonic Bias for Pansharpening. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI Press: Washington, DC, USA, 2022; pp. 1113–1121.
31. Wang, J.; Deng, L.; Zhao, C.; Wu, X.; Chen, H.; Vivone, G. Cascadic Multireceptive Learning for Multispectral Pansharpening. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
32. Wang, Y.; Shao, Z.; Lu, T.; Wang, J.; Cheng, G.; Zuo, X.; Dang, C. Remote Sensing Pan-Sharpening via Cross-Spectral-Spatial Fusion Network. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5.
33. Chen, J.; Kao, S.; He, H.; Zhuo, W.; Wen, S.; Lee, C.; Chan, S.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–23 June 2023; pp. 12021–12031.
34. Woo, S.; Park, J.; Lee, J.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11211, pp. 3–19.
35. Deng, L.; Vivone, G.; Jin, C.; Chanussot, J. Detail Injection-Based Deep Convolutional Neural Networks for Pansharpening. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6995–7010.
36. Liu, Q.; Meng, X.; Li, X.; Shao, F. Detail Injection-Based Spatio-Temporal Fusion for Remote Sensing Images With Land Cover Changes. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14.
37. Lai, W.; Huang, J.; Ahuja, N.; Yang, M. Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2599–2613.
38. Zhang, Y.; Liu, C.; Sun, M.; Ou, Y. Pan-Sharpening Using an Efficient Bidirectional Pyramid Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5549–5563.
39. Cai, J.; Huang, B. Super-Resolution-Guided Progressive Pansharpening Based on a Deep Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5206–5220.
40. Li, H.; Nie, R.; Cao, J.; Jin, B.; Han, Y. MPEFNet: Multilevel Progressive Enhancement Fusion Network for Pansharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 9573–9583.
41. Wang, J.; Shao, Z.; Huang, X.; Lu, T.; Zhang, R. A Dual-Path Fusion Network for Pan-Sharpening. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
42. Zhang, H.; Wang, H.; Tian, X.; Ma, J. P2Sharpen: A progressive pansharpening network with deep spectral transformation. Inf. Fusion 2023, 91, 103–122.
43. Diao, W.; Zhang, F.; Wang, H.; Sun, J.; Zhang, K. Pansharpening via Triplet Attention Network With Information Interaction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3576–3588.
44. Lei, D.; Huang, J.; Zhang, L.; Li, W. MHANet: A Multiscale Hierarchical Pansharpening Method With Adaptive Optimization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
45. Yang, Y.; Li, M.; Huang, S.; Lu, H.; Tu, W.; Wan, W. Multi-scale Spatial-Spectral Attention Guided Fusion Network for Pansharpening. In Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, Ottawa, ON, Canada, 29 October–3 November 2023; El-Saddik, A., Mei, T., Cucchiara, R., Bertini, M., Vallejo, D.P.T., Atrey, P.K., Hossain, M.S., Eds.; ACM: New York, NY, USA, 2023; pp. 3346–3354.
46. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
47. Cao, Y.; Xu, J.; Lin, S.; Wei, F.; Hu, H. GCNet: Non-Local Networks Meet Squeeze-Excitation Networks and Beyond. In Proceedings of the International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019; pp. 1971–1980.
48. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11531–11539.
49. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023.
50. Xiong, Z.; Liu, N.; Wang, N.; Sun, Z.; Li, W. Unsupervised Pansharpening Method Using Residual Network With Spatial Texture Attention. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12.
51. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More Features From Cheap Operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 1580–1589.
52. Zhang, Q.; Jiang, Z.; Lu, Q.; Han, J.; Zeng, Z.; Gao, S.; Men, A. Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution. In Proceedings of the International Joint Conference on Artificial Intelligence, Yokohama, Japan, 7–15 January 2020; Bessiere, C., Ed.; ACM: New York, NY, USA, 2020; pp. 3195–3201.
53. Chen, J.; He, T.; Zhuo, W.; Ma, L.; Ha, S.; Chan, S.G. TVConv: Efficient Translation Variant Convolution for Layout-aware Visual Processing. In Proceedings of the Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12538–12548.
54. Chen, Y.; Dai, X.; Chen, D.; Liu, M.; Dong, X.; Yuan, L.; Liu, Z. Mobile-Former: Bridging MobileNet and Transformer. In Proceedings of the Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5260–5269.
55. Li, J.; Wen, Y.; He, L. SCConv: Spatial and Channel Reconstruction Convolution for Feature Redundancy. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 6153–6162.
56. Wu, Y.; He, K. Group Normalization. In Proceedings of the European Conference on Computer Vision; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 11217, pp. 3–19.
57. Park, H.; Lim, H.; Jang, D. DEDU: Dual-Enhancing Dense-UNet for Lowlight Image Enhancement and Denoise. IEEE Access 2024, 12, 24071–24078.
58. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss Functions for Image Restoration With Neural Networks. IEEE Trans. Comput. Imaging 2017, 3, 47–57.
59. Rang, X.; Dong, Z.; Han, J.; Ma, C.; Zhao, G.; Zhang, W. A Generative Adversarial Network AMS-CycleGAN for Multi-Style Image Transformation. IEEE Access 2024, 12, 65141–65153.
60. Meng, X.; Xiong, Y.; Shao, F.; Shen, H.; Sun, W.; Yang, G.; Yuan, Q.; Fu, R.; Zhang, H. A Large-Scale Benchmark Data Set for Evaluating Pansharpening Performance: Overview and Implementation. IEEE Geosci. Remote Sens. Mag. 2021, 9, 18–52.
61. Du, Q.; Younan, N.H.; King, R.L.; Shah, V.P. On the Performance Evaluation of Pan-Sharpening Techniques. IEEE Geosci. Remote Sens. Lett. 2007, 4, 518–522.
62. Zhou, J.T.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757.
63. Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
Quantitative evaluation on the QB simulation dataset.

Method | ERGAS | RMSE | RASE | UIQI | SAM | SCC | Q4
---|---|---|---|---|---|---|---
Brovey | 1.6405 | 21.2350 | 6.3655 | 0.9639 | 1.9289 | 0.8976 | 0.5926 |
MTF_GLP | 1.3920 | 17.3036 | 5.2954 | 0.9766 | 1.5839 | 0.9205 | 0.7221 |
Indusion | 1.8479 | 23.3028 | 7.0379 | 0.9582 | 1.8896 | 0.8899 | 0.6299 |
DRPNN | 1.1240 | 14.3054 | 4.3314 | 0.9844 | 1.4479 | 0.9413 | 0.7428 |
FusionNet | 1.3804 | 17.3577 | 5.2372 | 0.9795 | 1.5534 | 0.9335 | 0.6510 |
FDFNet | 1.0824 | 13.7381 | 4.1718 | 0.9858 | 1.3659 | 0.9449 | 0.7669 |
LAGConv | 0.9585 | 12.0931 | 3.7216 | 0.98881 | 1.0614 | 0.9501 | 0.8065 |
CML | 1.0689 | 10.8217 | 3.7056 | 0.9830 | 1.1712 | 0.9438 | 0.7918 |
CSSFN | 0.8336 | 9.9339 | 3.1542 | 0.9888 | 1.1956 | 0.9546 | 0.8078 |
Ours | 0.6749 | 8.5607 | 2.5655 | 0.9947 | 0.8955 | 0.9801 | 0.8401 |
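For reference, the reduced-resolution metrics reported in these tables follow their standard definitions [46,61]; the forms below are the textbook ones, not a paper-specific variant. ERGAS aggregates the per-band relative RMSE, and SAM measures the per-pixel spectral angle between the fused and reference images (lower is better for both):

```latex
\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\frac{\mathrm{RMSE}_k}{\mu_k}\right)^{2}},
\qquad
\mathrm{SAM}(\mathbf{x},\mathbf{y}) = \arccos\frac{\langle \mathbf{x},\mathbf{y}\rangle}{\lVert\mathbf{x}\rVert_2\,\lVert\mathbf{y}\rVert_2}
```

where h/l is the ratio of PAN to MS pixel sizes, N is the number of bands, μ_k is the mean of reference band k, and x, y are the spectral vectors at one pixel (SAM is averaged over all pixels and usually reported in degrees).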
Quantitative evaluation on the GaoFen1 simulation dataset.

Method | ERGAS | RMSE | RASE | UIQI | SAM | SCC | Q4
---|---|---|---|---|---|---|---
Brovey | 2.2433 | 21.9277 | 8.7148 | 0.9439 | 1.7472 | 0.8259 | 0.4862 |
MTF_GLP | 6.1497 | 55.0688 | 23.0274 | 0.7342 | 7.8978 | 0.7393 | 0.2654 |
Indusion | 2.3021 | 23.1673 | 9.0305 | 0.9421 | 1.9179 | 0.7842 | 0.5476 |
DRPNN | 1.6585 | 15.7243 | 6.2990 | 0.9739 | 2.3238 | 0.8954 | 0.7316 |
FusionNet | 1.8798 | 17.5757 | 7.0501 | 0.9669 | 1.6281 | 0.8915 | 0.6622 |
FDFNet | 1.6651 | 16.3323 | 6.6479 | 0.9733 | 1.7424 | 0.8827 | 0.6628 |
LAGConv | 1.1883 | 11.7103 | 4.6062 | 0.9845 | 1.3106 | 0.9169 | 0.8281 |
CML | 1.4290 | 14.2461 | 5.6767 | 0.9769 | 1.6616 | 0.8887 | 0.8286 |
CSSFN | 1.1909 | 10.6756 | 4.1047 | 0.9842 | 1.2907 | 0.9121 | 0.7768 |
Ours | 0.9810 | 9.4526 | 3.7265 | 0.9900 | 1.1430 | 0.9428 | 0.8644 |
Quantitative evaluation on the WV2 simulation dataset.

Method | ERGAS | RMSE | RASE | UIQI | SAM | SCC | Q8
---|---|---|---|---|---|---|---
Brovey | 5.6394 | 60.3091 | 19.1166 | 0.9197 | 5.1762 | 0.8881 | 0.5549 |
MTF_GLP | 4.3400 | 47.1259 | 15.0190 | 0.9473 | 4.5941 | 0.9065 | 0.6437 |
Indusion | 5.0325 | 51.9257 | 16.3999 | 0.9479 | 4.9926 | 0.8851 | 0.6102 |
DRPNN | 3.6689 | 37.4278 | 12.2089 | 0.9612 | 4.0867 | 0.9473 | 0.6463 |
FusionNet | 3.5226 | 35.4701 | 11.4256 | 0.9787 | 3.7909 | 0.9430 | 0.7746 |
FDFNet | 4.0114 | 36.2872 | 11.7189 | 0.9777 | 3.9823 | 0.9444 | 0.7703 |
LAGConv | 3.2990 | 32.6300 | 10.5739 | 0.9818 | 3.4230 | 0.9598 | 0.7654 |
CML | 3.1901 | 31.8553 | 10.3129 | 0.9769 | 3.3881 | 0.9562 | 0.7831 |
CSSFN | 3.2421 | 32.5700 | 10.5223 | 0.9832 | 3.4000 | 0.9546 | 0.7992 |
Ours | 2.9368 | 29.5469 | 9.5934 | 0.9851 | 3.1462 | 0.9636 | 0.8145 |
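The real-data tables that follow use the no-reference QNR protocol [63], since no ground-truth HRMS image exists at full resolution: D_λ measures spectral distortion, D_s measures spatial distortion, and QNR combines them (higher QNR and lower D_λ, D_s are better). With the usual exponents α = β = 1:

```latex
\mathrm{QNR} = (1 - D_{\lambda})^{\alpha}\,(1 - D_{s})^{\beta}
```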
No-reference evaluation on the QB real dataset.

Method | QNR | D_λ | D_s
---|---|---|---
Brovey | 0.7656 | 0.1030 | 0.1514 |
MTF_GLP | 0.8075 | 0.0910 | 0.1168 |
Indusion | 0.8705 | 0.0726 | 0.0641 |
DRPNN | 0.9206 | 0.0325 | 0.0496 |
FusionNet | 0.8853 | 0.0643 | 0.0534 |
FDFNet | 0.8845 | 0.0615 | 0.0610 |
LAGConv | 0.9083 | 0.0380 | 0.0590 |
CML | 0.9095 | 0.0453 | 0.0534 |
CSSFN | 0.9192 | 0.0315 | 0.0474 |
Ours | 0.9249 | 0.0320 | 0.0449 |
No-reference evaluation on the GaoFen1 real dataset.

Method | QNR | D_λ | D_s
---|---|---|---
Brovey | 0.6560 | 0.0922 | 0.2795 |
MTF_GLP | 0.4245 | 0.2631 | 0.4328 |
Indusion | 0.8900 | 0.0691 | 0.0446 |
DRPNN | 0.8735 | 0.0387 | 0.0914 |
FusionNet | 0.8367 | 0.0531 | 0.1174 |
FDFNet | 0.8711 | 0.0348 | 0.0979 |
LAGConv | 0.8954 | 0.0178 | 0.0441 |
CML | 0.9173 | 0.0107 | 0.0494 |
CSSFN | 0.8846 | 0.0792 | 0.0393 |
Ours | 0.9499 | 0.0204 | 0.0376 |
No-reference evaluation on the WV2 real dataset.

Method | QNR | D_λ | D_s
---|---|---|---
Brovey | 0.7332 | 0.1016 | 0.1950 |
MTF_GLP | 0.6876 | 0.1583 | 0.2007 |
Indusion | 0.7898 | 0.0747 | 0.1572 |
DRPNN | 0.7974 | 0.0903 | 0.1281 |
FusionNet | 0.8367 | 0.0671 | 0.0974 |
FDFNet | 0.8708 | 0.0499 | 0.0702 |
LAGConv | 0.8697 | 0.0366 | 0.0689 |
CML | 0.8735 | 0.0457 | 0.0604 |
CSSFN | 0.8929 | 0.0412 | 0.0620 |
Ours | 0.9011 | 0.0329 | 0.0593 |
Ablation results on the QB simulation dataset.

Configuration | ERGAS | RMSE | RASE | UIQI | SAM | SCC | Q4
---|---|---|---|---|---|---|---
w/o DRRM | 1.4245 | 17.6026 | 5.2470 | 0.9822 | 1.9683 | 0.9014 | 0.7021 |
w/o CFEM | 1.2024 | 15.4954 | 4.6859 | 0.9846 | 1.4687 | 0.9445 | 0.7426 |
w/o SDEM | 1.2479 | 15.5463 | 4.6979 | 0.9829 | 1.6958 | 0.9418 | 0.7524 |
w/o (SDEM + DRRM) | 1.4619 | 19.2397 | 5.7021 | 0.9755 | 2.0137 | 0.9267 | 0.6804 |
w/o (CFEM + DRRM) | 1.6632 | 19.3664 | 5.7645 | 0.9731 | 1.8660 | 0.9343 | 0.5555 |
w/o (CFEM + SDEM) | 1.3832 | 16.6032 | 5.0925 | 0.9828 | 1.7248 | 0.9381 | 0.7128 |
none_modules | 2.2212 | 25.1752 | 7.4091 | 0.9625 | 3.1126 | 0.9069 | 0.5378 |
Ours | 0.6749 | 8.5607 | 2.5655 | 0.9947 | 0.8955 | 0.9801 | 0.8401 |
Comparison of network structures on the QB simulation dataset.

Structure | ERGAS | RMSE | RASE | UIQI | SAM | SCC | Q4
---|---|---|---|---|---|---|---
Concatenation | 0.9880 | 12.5272 | 3.8471 | 0.9875 | 1.1217 | 0.9484 | 0.7996 |
Direct injection | 0.8652 | 10.9196 | 3.3140 | 0.9912 | 1.0993 | 0.9673 | 0.8019 |
one_stage | 4.9978 | 79.6737 | 22.52999 | 0.8151 | 1.7053 | 0.9258 | 0.6611 |
two_stage | 1.5543 | 21.3100 | 6.2745 | 0.9698 | 0.9370 | 0.9776 | 0.8279 |
three_stage (Ours) | 0.6749 | 8.5607 | 2.5655 | 0.9947 | 0.8955 | 0.9801 | 0.8401
Parameter counts and running times of the deep learning-based methods.

 | DRPNN | FusionNet | FDFNet | LAGConv | CML | CSSFN | Ours
---|---|---|---|---|---|---|---
Param (M) | 1.64 | 0.15 | 0.09 | 0.03 | 0.16 | 0.35 | 0.07 |
Time (s) | 6.21 | 5.34 | 3.81 | 9.23 | 5.67 | 6.03 | 5.46 |
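As a hedged sketch (assuming PyTorch), parameter counts and per-image running times such as those above are commonly gathered as follows; the paper's exact timing protocol (hardware, batch size, warm-up, number of repetitions) is not reproduced here:

```python
import time
import torch


def param_count_m(model: torch.nn.Module) -> float:
    """Trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


@torch.no_grad()
def avg_inference_time_s(model, *inputs, repeats: int = 20) -> float:
    """Average wall-clock seconds per forward pass."""
    model.eval()
    model(*inputs)  # warm-up pass, excluded from timing
    t0 = time.perf_counter()
    for _ in range(repeats):
        model(*inputs)
    return (time.perf_counter() - t0) / repeats
```

Applied to the illustrative sketch shown after the contributions list, `param_count_m(ProgressivePansharpener())` would report a figure in millions, directly comparable to the Param (M) row.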