DOI: 10.1145/3652583.3657606 · ICMR Conference Proceedings · Short paper

Knowledge Distillation for Single Image Super-Resolution via Contrastive Learning

Published: 07 June 2024

Abstract

In recent years, thanks to the rapid development of deep learning, single image super-resolution (SR) has advanced greatly. Most SR methods build their networks from convolution layers and achieve superior results over traditional methods based on handcrafted features. However, many CNN-based methods simply stack ever more layers, which inflates the parameter count, incurs heavy computation and memory costs, and severely limits deployment on resource-constrained devices. To alleviate this problem, a knowledge distillation framework based on contrastive learning is proposed to compress and accelerate SR models with enormous parameters. The student network is constructed directly by reducing the number of layers of the teacher network. In particular, the proposed method distills the statistical information of the teacher's intermediate feature maps to train the lightweight student network. In addition, through explicit knowledge transfer, a novel contrastive loss is introduced to improve the student's reconstruction performance. Experiments show that the proposed contrastive distillation framework can effectively compress the model with an acceptable loss of performance.
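The abstract names two ingredients: distilling the statistics of intermediate feature maps, and a contrastive loss that pulls the student toward the teacher. The paper's exact formulation is not given here, so the following is only a rough illustrative sketch under stated assumptions: features are flattened to plain Python lists, L1 distance is used, and degraded outputs serve as contrastive negatives. All function names (`feat_stats`, `stats_distill_loss`, `contrastive_distill_loss`) are hypothetical, not the authors' code.

```python
def l1_dist(a, b):
    """Mean absolute difference between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def feat_stats(feat):
    """First- and second-order statistics (mean, variance) of a feature map,
    here flattened to a list. This is one plausible reading of 'statistical
    information of the intermediate feature maps'."""
    mean = sum(feat) / len(feat)
    var = sum((x - mean) ** 2 for x in feat) / len(feat)
    return mean, var

def stats_distill_loss(student_feat, teacher_feat):
    """Match the student's feature statistics to the teacher's."""
    s_mean, s_var = feat_stats(student_feat)
    t_mean, t_var = feat_stats(teacher_feat)
    return abs(s_mean - t_mean) + abs(s_var - t_var)

def contrastive_distill_loss(student, teacher, negatives, eps=1e-8):
    """Contrastive distillation as a ratio: pull the student toward the
    teacher (positive, numerator) while pushing it away from negative
    samples (denominator)."""
    pos = l1_dist(student, teacher)
    neg = sum(l1_dist(student, n) for n in negatives) / len(negatives)
    return pos / (neg + eps)
```

In a real training loop the negatives would typically come from a weaker reconstruction (e.g. bicubic upsampling), so minimizing the ratio drives the student both toward the teacher's output and away from the low-quality baseline.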




    Published In

    ICMR '24: Proceedings of the 2024 International Conference on Multimedia Retrieval
    May 2024
    1379 pages
    ISBN:9798400706196
    DOI:10.1145/3652583

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. contrastive learning
    2. knowledge distillation
    3. single image super-resolution

    Qualifiers

    • Short-paper

Conference

ICMR '24
    Acceptance Rates

    Overall Acceptance Rate 254 of 830 submissions, 31%
