Hypernetwork Functional Image Representation

Abstract

Motivated by the human way of memorizing images, we introduce a functional representation of images, in which an image is represented by a neural network. For this purpose, we construct a hypernetwork that takes an image and returns the weights of a target network, which maps a point on the plane (representing a pixel position) to the corresponding color in the image. Since the obtained representation is continuous, one can easily inspect the image at various resolutions and perform arbitrary continuous operations on it. Moreover, by inspecting interpolations we show that such a representation has some properties characteristic of generative models. To evaluate the proposed mechanism experimentally, we apply it to the image super-resolution problem. Despite using a single model for various scaling factors, we obtain results comparable to existing super-resolution methods.
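To make the mechanism concrete, below is a minimal PyTorch sketch of the pipeline described in the abstract: a hypernetwork maps an image to a flat weight vector, those weights parameterize a small target MLP from pixel coordinates to RGB values, and querying the MLP on a denser coordinate grid produces an upscaled image. The encoder, the target-network layer sizes, the coordinate normalization, and the names HyperNet/TargetNet are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of the functional image representation from the abstract.
# Layer sizes, encoder and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetNet:
    """Small MLP mapping a pixel position (x, y) in [0, 1]^2 to an RGB colour.

    Its weights are not trained directly: they are produced per image by the
    hypernetwork below, so the layers are applied functionally.
    """

    # assumed (in_features, out_features) of each layer
    layer_sizes = [(2, 64), (64, 64), (64, 3)]

    @staticmethod
    def num_params():
        return sum(o * i + o for i, o in TargetNet.layer_sizes)

    @staticmethod
    def forward(coords, flat_weights):
        # Slice the flat weight vector into per-layer matrices and biases.
        h, offset = coords, 0
        for idx, (i, o) in enumerate(TargetNet.layer_sizes):
            w = flat_weights[offset:offset + o * i].view(o, i)
            offset += o * i
            b = flat_weights[offset:offset + o]
            offset += o
            h = F.linear(h, w, b)
            if idx < len(TargetNet.layer_sizes) - 1:
                h = torch.relu(h)   # a cosine activation also works (see Note 2)
        return torch.sigmoid(h)     # colours in [0, 1]


class HyperNet(nn.Module):
    """Takes an image and returns the flat weight vector of the target network."""

    def __init__(self, image_size=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * (image_size // 4) ** 2, TargetNet.num_params())

    def forward(self, image):        # image: (1, 3, H, W)
        return self.head(self.encoder(image)).squeeze(0)


def pixel_grid(height, width):
    """All (x, y) coordinates of a height x width raster, normalised to [0, 1]."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, height), torch.linspace(0, 1, width), indexing="ij"
    )
    return torch.stack([xs, ys], dim=-1).view(-1, 2)


if __name__ == "__main__":
    image = torch.rand(1, 3, 32, 32)                        # stand-in for a training image
    hyper = HyperNet(image_size=32)

    weights = hyper(image)                                   # one weight vector per image
    recon = TargetNet.forward(pixel_grid(32, 32), weights)   # decode at native resolution
    upscaled = TargetNet.forward(pixel_grid(128, 128), weights)  # 4x upscaling, same model
    print(recon.shape, upscaled.shape)   # torch.Size([1024, 3]) torch.Size([16384, 3])
```

Training (not shown) would minimise a reconstruction loss, e.g. the MSE between the decoded colours and the original pixels, over a dataset of images; only the hypernetwork's parameters are optimised, the target network's weights are regenerated for every image, and the same trained model can be queried at any scaling factor.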

Notes

  1. We can reasonably hypothesize that the human representation of an image in memory is given by some neural network.

  2. Other experimental studies report that there is little difference between using the cosine and ReLU activation functions [14].

References

  1. Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: dataset and study. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 126–135 (2017). https://doi.org/10.1109/CVPRW.2017.150

  2. Baldi, P.: Autoencoders, unsupervised learning and deep architectures. In: Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning Workshop, UTLW 2011, vol. 27, pp. 37–50. JMLR.org (2011). http://dl.acm.org/citation.cfm?id=3045796.3045801

  3. Banfield, J.D., Raftery, A.E.: Model-based Gaussian and non-Gaussian clustering. Biometrics 49(3), 803–821 (1993). https://doi.org/10.2307/2532201. http://www.jstor.org/stable/2532201

  4. Bevilacqua, M., Roumy, A., Guillemot, C., Alberi-Morel, M.L.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding (2012). https://doi.org/10.5244/C.26.135

  5. Brock, A., Lim, T., Ritchie, J.M., Weston, N.: SMASH: one-shot model architecture search through hypernetworks. CoRR abs/1708.05344 (2017). arXiv:1708.05344

  6. Christopoulos, C., Skodras, A., Ebrahimi, T.: The JPEG2000 still image coding system: an overview. IEEE Trans. Consum. Electron. 46(4), 1103–1127 (2000). https://doi.org/10.1109/30.920468

  7. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016). https://doi.org/10.1109/CVPR.2016.308

  8. Czarnecki, W.M., Osindero, S., Jaderberg, M., Swirszcz, G., Pascanu, R.: Sobolev training for neural networks. In: Advances in Neural Information Processing Systems, pp. 4278–4287 (2017)

  9. Danelljan, M., Robinson, A., Shahbaz Khan, F., Felsberg, M.: Beyond correlation filters: learning continuous convolution operators for visual tracking. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9909, pp. 472–488. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46454-1_29

  10. Do, M.N., Vetterli, M.: The finite ridgelet transform for image representation. IEEE Trans. Image Process. 12(1), 16–28 (2003). https://doi.org/10.1109/TIP.2002.806252

  11. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). https://doi.org/10.1109/TPAMI.2015.2439281

  12. Gao, S., Gruev, V.: Bilinear and bicubic interpolation methods for division of focal plane polarimeters. Opt. Express 19(27), 26161–26173 (2011). https://doi.org/10.1364/OE.19.026161

  13. Geladi, P., Kowalski, B.R.: Partial least-squares regression: a tutorial. Analytica Chimica Acta 185, 1–17 (1986). https://doi.org/10.1016/0003-2670(86)80028-9

  14. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. The MIT Press, Cambridge (2016)

  15. Ha, D., Dai, A., Le, Q.V.: Hypernetworks. arXiv preprint arXiv:1609.09106 (2016)

  16. Huang, J.B., Singh, A., Ahuja, N.: Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5197–5206 (2015). https://doi.org/10.1109/CVPR.2015.7299156

  17. Hwang, J.W., Lee, H.S.: Adaptive image interpolation based on local gradient features. IEEE Signal Process. Lett. 11(3), 359–362 (2004). https://doi.org/10.1109/LSP.2003.821718

  18. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)

  19. Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1646–1654 (2016). https://doi.org/10.1109/CVPR.2016.182

  20. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013)

  21. Krueger, D., Huang, C.W., Islam, R., Turner, R., Lacoste, A., Courville, A.: Bayesian hypernetworks. arXiv preprint arXiv:1710.04759 (2017)

  22. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017). https://doi.org/10.1109/CVPR.2017.19

  23. Lee, T.S.: Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Anal. Mach. Intell. 18(10), 959–971 (1996). https://doi.org/10.1109/34.541406

  24. Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 136–144 (2017). https://doi.org/10.1109/CVPRW.2017.151

  25. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125 (2017). https://doi.org/10.1109/CVPR.2017.106

  26. Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. In: Advances in Neural Information Processing Systems, pp. 700–708 (2017)

  27. Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV) (2015). https://doi.org/10.1109/ICCV.2015.425

  28. Lorraine, J., Duvenaud, D.: Stochastic hyperparameter optimization through hypernetworks. CoRR abs/1802.09419 (2018). arXiv:1802.09419

  29. Louizos, C., Welling, M.: Multiplicative normalizing flows for variational Bayesian neural networks. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 2218–2227. JMLR.org (2017)

  30. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2, pp. 416–423. IEEE (2001). https://doi.org/10.1109/ICCV.2001.937655

  31. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2001)

  32. Sheikh, A.S., Rasul, K., Merentitis, A., Bergmann, U.: Stochastic maximum likelihood optimization via hypernetworks. arXiv preprint arXiv:1712.01141 (2017)

  33. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015). https://doi.org/10.1109/CVPR.2015.7298594

  34. Takeda, H., Farsiu, S., Milanfar, P.: Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 16(2), 349–366 (2007). https://doi.org/10.1109/TIP.2006.888330

  35. Tolstikhin, I., Bousquet, O., Gelly, S., Schoelkopf, B.: Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558 (2017)

  36. Unser, M., Aldroubi, A., Eden, M.: Fast B-spline transforms for continuous image representation and interpolation. IEEE Trans. Pattern Anal. Mach. Intell. 13(3), 277–285 (1991). https://doi.org/10.1109/34.75515

  37. Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A.: Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM (2008). https://doi.org/10.1145/1390156.1390294

  38. Wang, N., Yeung, D.Y.: Learning a deep compact image representation for visual tracking. In: Advances in Neural Information Processing Systems, pp. 809–817 (2013)

  39. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P., et al.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861

  40. Yeh, R.A., Chen, C., Yian Lim, T., Schwing, A.G., Hasegawa-Johnson, M., Do, M.N.: Semantic image inpainting with deep generative models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485–5493 (2017). https://doi.org/10.1109/CVPR.2017.728

  41. Zeyde, R., Elad, M., Protter, M.: On single image scale-up using sparse-representations. In: Boissonnat, J.-D., et al. (eds.) Curves and Surfaces 2010. LNCS, vol. 6920, pp. 711–730. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-27413-8_47

  42. Zhang, C., Ren, M., Urtasun, R.: Graph hypernetworks for neural architecture search. CoRR abs/1810.05749 (2018). arXiv:1810.05749

  43. Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472–2481 (2018)

Acknowledgements

This work was partially supported by the National Science Centre (Poland) grant no. 2018/31/B/ST6/00993 and by the Foundation for Polish Science grant no. POIR.04.04.00-00-14DE/18-00.

Author information

Corresponding author

Correspondence to Marek Śmieja.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Klocek, S., Maziarka, Ł., Wołczyk, M., Tabor, J., Nowak, J., Śmieja, M. (2019). Hypernetwork Functional Image Representation. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_48

  • DOI: https://doi.org/10.1007/978-3-030-30493-5_48

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30492-8

  • Online ISBN: 978-3-030-30493-5

  • eBook Packages: Computer Science, Computer Science (R0)
