

CASR: a context-aware residual network for single-image super-resolution

  • Special issue: Deep Learning Approaches for Real-Time Image Super-Resolution (DLRSR)
  • Journal: Neural Computing and Applications


Abstract

Driven by the representational power of deep learning architectures, super-resolution research has made considerable progress in recent years. However, because feature maps extracted from natural scene images have limited representational ability, directly applying deep architectures to super-resolution can produce poor visual results. Distinctive characteristics such as low-frequency information should be emphasized for better shape reconstruction, rather than being treated equally across patches and channels. To address this problem, we propose a lightweight context-aware deep residual network, named CASR, which encodes channel and spatial attention information to construct a context-aware feature map for single-image super-resolution. We first design a task-specific inception block with a novel arrangement of atrous (dilated) filters and carefully chosen kernel sizes to extract multi-level information from low-resolution images. A Dual-Attention ResNet module then captures context information by dually connecting spatial and channel attention schemes. With the high representational ability of the resulting context-aware feature map, CASR generates high-resolution images accurately and efficiently. Experiments on several popular datasets show that the proposed method achieves better visual quality and higher efficiency than most existing approaches.
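The abstract names two building blocks: an inception-style extractor built from atrous (dilated) convolutions, and a Dual-Attention ResNet module that couples channel and spatial attention. The PyTorch sketch below only illustrates how such components are commonly composed; it is not the authors' implementation, and all channel counts, dilation rates, kernel sizes, and the reduction ratio are illustrative assumptions.

```python
# Minimal sketch of an atrous inception block and a dual-attention residual
# block, loosely following the abstract's description. Hyperparameters are
# assumptions for illustration, not the paper's published configuration.
import torch
import torch.nn as nn


class AtrousInceptionBlock(nn.Module):
    """Multi-branch block mixing plain and atrous (dilated) convolutions
    to gather multi-level context from low-resolution features."""

    def __init__(self, in_ch: int = 64, out_ch: int = 64):
        super().__init__()
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 1),                         # point-wise
            nn.Conv2d(in_ch, branch_ch, 3, padding=1),              # local detail
            nn.Conv2d(in_ch, branch_ch, 3, padding=2, dilation=2),  # wider context
            nn.Conv2d(in_ch, branch_ch, 3, padding=4, dilation=4),  # widest context
        ])
        self.fuse = nn.Conv2d(branch_ch * 4, out_ch, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)


class DualAttentionResBlock(nn.Module):
    """Residual block that re-weights features with channel attention
    followed by spatial attention before adding the skip connection."""

    def __init__(self, ch: int = 64, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Channel attention: squeeze spatial dims, produce per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        # Spatial attention: a single-channel mask over spatial positions.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        f = f * self.channel_att(f)   # re-weight informative channels
        f = f * self.spatial_att(f)   # re-weight informative regions
        return x + f                  # residual connection


if __name__ == "__main__":
    lr_feat = torch.randn(1, 64, 48, 48)  # toy low-resolution feature map
    net = nn.Sequential(AtrousInceptionBlock(), DualAttentionResBlock())
    print(net(lr_feat).shape)             # torch.Size([1, 64, 48, 48])
```

Chaining channel attention before spatial attention is one common ordering in attention-based super-resolution networks; the paper's exact connection scheme and upsampling stage should be taken from the full text.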



Acknowledgements

This work was supported by the National Key R&D Program of China under Grant 2018YFC0407901, the Natural Science Foundation of China under Grants 61702160 and 61602407, the Natural Science Foundation of Jiangsu Province under Grant BK20170892, the Natural Science Foundation of Zhejiang Province under Grants LY19F030005 and LY18F020008, and the Open Project of the National Key Lab for Novel Software Technology in NJU under Grant K-FKT2017B05.

Author information


Corresponding author

Correspondence to Wanting Ji.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wu, Y., Ji, X., Ji, W. et al. CASR: a context-aware residual network for single-image super-resolution. Neural Comput & Applic 32, 14533–14548 (2020). https://doi.org/10.1007/s00521-019-04609-8

