
Estimate of the Neural Network Dimension Using Algebraic Topology and Lie Theory

  • Conference paper
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

In this paper we present an approach to determine the smallest possible number of neurons in a layer of a neural network such that the topology of the input space can be learned sufficiently well. We introduce a general procedure based on persistent homology to investigate topological invariants of the manifold on which we suspect the data set to lie. We specify the required dimensions precisely, assuming that there is a smooth manifold on or near which the data are located. Furthermore, we require that this space is connected and has a commutative group structure in the mathematical sense. These assumptions allow us to derive a decomposition of the underlying space whose topology is well known. We use the representatives of the k-dimensional homology groups from the persistence landscape to determine an integer dimension for this decomposition. This number is the dimension of the embedding that is capable of capturing the topology of the data manifold. We derive the theory and validate it experimentally on toy data sets.
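The procedure can be tried on a toy data set with GUDHI [24], the library referenced below. The following Python fragment is a minimal sketch, not the authors' implementation: it estimates Betti numbers by counting long-lived persistence intervals directly, whereas the paper selects representatives via the persistence landscape [4], and the sample size, noise level, and persistence threshold are ad-hoc choices made here for illustration. Since a connected compact abelian Lie group is topologically a torus T^n, and b_1(T^n) = n, the estimate of b_1 yields the integer dimension of the decomposition.

```python
# Minimal sketch (not the paper's code): estimate Betti numbers of a toy
# data set with a Vietoris-Rips filtration, using GUDHI [24].
import numpy as np
import gudhi

# Toy data: noisy samples from the circle S^1 = T^1, so we expect b_1 = 1.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=400)
points = np.column_stack((np.cos(theta), np.sin(theta)))
points += rng.normal(scale=0.05, size=points.shape)

# Vietoris-Rips filtration; max_dimension bounds the homology degree + 1.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
simplex_tree.persistence()  # compute the persistence diagram

# Count long-lived intervals per degree as Betti-number estimates; the
# persistence threshold 0.5 is an arbitrary choice for this example.
for k in (0, 1):
    bars = simplex_tree.persistence_intervals_in_dimension(k)
    long_lived = [bar for bar in bars if bar[1] - bar[0] > 0.5]
    print(f"estimated b_{k}: {len(long_lived)}")
```

For the circle one expects the script to report b_0 = 1 and b_1 = 1, i.e. a one-dimensional toroidal factor.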


Notes

  1. Invertible architectures guarantee the same differentiable structure during learning. Due to the construction of trivially invertible neural networks, the embedding dimension is doubled, see [8] and the sketch below.
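The doubling mentioned in the note can be made concrete with a self-contained sketch in the spirit of the additive coupling construction of [8]. This is an illustration, not code from either paper; the names f and coupling and the fixed toy weights are chosen here. An arbitrary sub-network f on R^d becomes part of an invertible map once the input is padded with d zeros and one coupling step is applied in R^{2d}.

```python
# Hedged illustration of why trivial invertibility doubles the dimension.
import numpy as np

def f(x):
    # Stand-in for an arbitrary, not necessarily invertible, sub-network
    # (hypothetical fixed weights, chosen only for this sketch).
    d = x.shape[-1]
    w = np.full((d, d), 0.5)
    return np.tanh(x @ w)

def coupling(x):
    # Embed x in R^{2d} as (x, 0) and apply one additive coupling step:
    # (x, 0) -> (x, f(x)). The first half passes through unchanged,
    # so the map is trivially invertible regardless of f.
    zeros = np.zeros_like(x)
    return np.concatenate([x, zeros + f(x)], axis=-1)

def coupling_inverse(y):
    # Split, check the second half equals f(first half), drop the padding.
    a, b = np.split(y, 2, axis=-1)
    assert np.allclose(b - f(a), 0.0)
    return a

x = np.random.default_rng(1).normal(size=(3, 4))  # data in R^4
y = coupling(x)                                   # image lives in R^8
assert np.allclose(coupling_inverse(y), x)
```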

References

  1. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). https://www.tensorflow.org/, software available from tensorflow.org

  2. Bartlett, P., Harvey, N., Liaw, C., Mehrabian, A.: Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. J. Mach. Learn. Res. 20, 1–17 (2019)

  3. Boissonnat, J.D., Chazal, F., Yvinec, M.: Geometric and Topological Inference. Cambridge University Press, Cambridge (2018)

  4. Bubenik, P.: Statistical topological data analysis using persistence landscapes. J. Mach. Learn. Res. 16, 77–102 (2015)

  5. Chollet, F., et al.: Keras (2015). https://keras.io

  6. Cohen, T.S., Geiger, M., Köhler, J., Welling, M.: Spherical CNNs. In: 6th International Conference on Learning Representations (2018)

  7. Cybenko, G.: Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 5(4), 455 (1992)

  8. Dinh, L., Sohl-Dickstein, J., Bengio, S.: Density estimation using Real NVP. In: 5th International Conference on Learning Representations (2017)

  9. Edelsbrunner, H., Harer, J.: Persistent homology - a survey. Contemp. Math. 453, 257–282 (2008)

  10. Futagami, R., Yamada, N., Shibuya, T.: Inferring underlying manifold of data by the use of persistent homology analysis. In: 7th International Workshop on Computational Topology in Image Context, pp. 40–53 (2019)

  11. Gruenberg, K.W.: The universal coefficient theorem in the cohomology of groups. J. London Math. Soc. 1(1), 239–241 (1968)

  12. Deo, S.: Algebraic Topology. TRM, vol. 27. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-8734-9

  13. Hauser, M., Gunn, S., Saab Jr., S., Ray, A.: State-space representations of deep neural networks. Neural Comput. 31(3), 538–554 (2019)

  14. Hauser, M., Ray, A.: Principles of Riemannian geometry in neural networks. Adv. Neural Inf. Process. Syst. 30, 2807–2816 (2017)

  15. Johnson, J.: Deep, skinny neural networks are not universal approximators. In: 7th International Conference on Learning Representations (2019)

  16. Kingma, D., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference on Learning Representations (2015)

  17. Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)

  18. Lee, J.M.: Introduction to Smooth Manifolds. Springer, New York (2013)

  19. Lin, H., Jegelka, S.: ResNet with one-neuron hidden layers is a universal approximator. Adv. Neural Inf. Process. Syst. 31, 6172–6181 (2018)

  20. Melodia, L., Lenz, R.: Persistent homology as stopping-criterion for voronoi interpolation. In: Lukić, T., Barneva, R.P., Brimkov, V.E., Čomić, L., Sladoje, N. (eds.) IWCIA 2020. LNCS, vol. 12148, pp. 29–44. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51002-2_3

  21. Onishchik, A.L., Vinberg, E.B., Minachin, V.: Lie Groups and Lie Algebras I. Springer (1993)

  22. Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., Sohl-Dickstein, J.: On the expressive power of deep neural networks. In: 34th International Conference on Machine Learning, pp. 2847–2854 (2017)

  23. Stone, M.H.: The generalized Weierstrass approximation theorem. Math. Mag. 21(5), 237–254 (1948)

  24. The GUDHI Project: GUDHI user and reference manual (2020). https://gudhi.inria.fr/doc/3.1.1/

  25. Zomorodian, A., Carlsson, G.: Computing persistent homology. Discrete Comput. Geom. 33(2), 249–274 (2005)

Acknowledgements

We thank Christian Holtzhausen, David Haller and Noah Becker for proofreading, and the anonymous reviewers for their constructive criticism and corrections. This work was partially supported by Siemens Energy AG.

Code and Data. The implementation, the data sets and experimental results can be found at: https://github.com/karhunenloeve/NTOPL.

Author information


Correspondence to Luciano Melodia.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Melodia, L., Lenz, R. (2021). Estimate of the Neural Network Dimension Using Algebraic Topology and Lie Theory. In: Del Bimbo, A., et al. (eds.) Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12665. Springer, Cham. https://doi.org/10.1007/978-3-030-68821-9_2

  • DOI: https://doi.org/10.1007/978-3-030-68821-9_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68820-2

  • Online ISBN: 978-3-030-68821-9

  • eBook Packages: Computer Science, Computer Science (R0)
