
(SARN) Spatial-wise attention residual network for image super-resolution

Original article, published in The Visual Computer

Abstract

Recent research suggests that attention mechanisms can improve the performance of deep-learning-based single image super-resolution (SISR) methods. In this work, we propose a deep spatial-wise attention residual network (SARN) for SISR. Specifically, we propose a novel spatial attention block (SAB) that rescales pixel-wise features by explicitly modeling the interdependencies between pixels on each feature map, encoding where (i.e., at which spatial positions in a feature map) the visual attention is located. A modified patch-based non-local block can be inserted into the SAB to capture long-distance spatial contextual information and relax the local neighborhood constraint. Furthermore, we design a bottleneck spatial attention module to widen the network so that more information can pass through. Meanwhile, we adopt local and global residual connections so that the network focuses on learning valuable high-frequency information. Extensive experiments show the superiority of the proposed SARN over state-of-the-art methods on benchmark datasets in both accuracy and visual quality.
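The core idea of spatial attention with a local residual connection can be sketched as a toy NumPy example. This is not the paper's implementation: the learned 2-D convolution that maps pooled descriptors to an attention map is replaced here by fixed scalar weights `w`, and the function names (`spatial_attention`, `residual_spatial_block`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, w=(0.5, 0.5)):
    """Toy spatial attention over a (C, H, W) feature map.

    Pool across channels to get per-pixel descriptors, map them to
    one attention value per spatial position, then rescale features.
    """
    avg = feat.mean(axis=0)                    # (H, W) channel-average descriptor
    mx = feat.max(axis=0)                      # (H, W) channel-max descriptor
    attn = sigmoid(w[0] * avg + w[1] * mx)     # (H, W) map, values in (0, 1)
    return feat * attn[None, :, :]             # broadcast the map over channels

def residual_spatial_block(feat):
    """Local residual connection: output = input + attended features."""
    return feat + spatial_attention(feat)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))             # C=8 channels, 4x4 spatial grid
y = residual_spatial_block(x)
print(y.shape)  # (8, 4, 4)
```

Because the attention map lies in (0, 1), each pixel's features are only attenuated, never amplified, before being added back to the identity branch; the residual path is what lets the block concentrate on high-frequency detail.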



Author information

Corresponding author

Correspondence to Huiqian Du.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Shi, W., Du, H., Mei, W. et al.: (SARN) Spatial-wise attention residual network for image super-resolution. Vis Comput 37, 1569–1580 (2021). https://doi.org/10.1007/s00371-020-01903-8
