Abstract
In this work, the “effective dimension” of the hidden-layer output of a one-hidden-layer neural network with random inner weights is investigated. To this end, the sigmoidal activation function of each computational unit is approximated by a polynomial, whose degree is chosen on the basis of both a desired upper bound on the approximation error and an estimate of the range of the input to that unit. This range estimate is parameterized by the number of inputs to the network and by upper bounds on the size of the random inner weights and on the size of the network inputs. The results show that the Root Mean Square Error (RMSE) on the training set is influenced both by the effective dimension and by the quality of the features associated with the output of the hidden layer.
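The formal definitions of the quantities mentioned in the abstract appear in the body of the paper and are not reproduced here. The following Python sketch only illustrates the general idea under assumptions of ours: the activation is the logistic sigmoid, the polynomial approximant is obtained by a plain least-squares fit (rather than the weighted approximation used in the paper, cf. Note 2 below), the range of each unit's input is bounded crudely by L = d·w_max·x_max + b_max, and the “effective dimension” is replaced by the numerical rank of the hidden-layer output matrix. None of these choices is claimed to match the authors' exact procedure.

```python
import numpy as np

# Minimal sketch (not the paper's exact procedure): approximate the logistic
# sigmoid by a least-squares polynomial on an estimated input range [-L, L],
# increasing the degree until a desired error bound is met, then inspect the
# numerical rank ("effective dimension") of the hidden-layer output matrix.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_polynomial(L, tol, max_degree=30, n_grid=2001):
    """Smallest-degree least-squares polynomial fit of the sigmoid on [-L, L]
    whose maximum error on a fine grid is below tol (degree capped)."""
    t = np.linspace(-L, L, n_grid)
    y = sigmoid(t)
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(t, y, degree)
        if np.max(np.abs(np.polyval(coeffs, t) - y)) <= tol:
            return degree, coeffs
    return max_degree, coeffs

# Hypothetical setup: d inputs in [-x_max, x_max], random inner weights in
# [-w_max, w_max] and biases in [-b_max, b_max]; a crude bound on the range of
# each unit's input is then L = d * w_max * x_max + b_max (assumption only).
rng = np.random.default_rng(0)
d, n_hidden, n_samples = 5, 50, 200
w_max, b_max, x_max = 1.0, 1.0, 1.0
L = d * w_max * x_max + b_max

degree, _ = fit_polynomial(L, tol=1e-2)
print("chosen polynomial degree:", degree)

# Hidden-layer output matrix for random inner weights and random inputs.
W = rng.uniform(-w_max, w_max, size=(n_hidden, d))
b = rng.uniform(-b_max, b_max, size=n_hidden)
X = rng.uniform(-x_max, x_max, size=(n_samples, d))
H = sigmoid(X @ W.T + b)

# "Effective dimension" proxy: number of singular values above a tolerance.
s = np.linalg.svd(H, compute_uv=False)
eff_dim = int(np.sum(s > 1e-6 * s[0]))
print("effective dimension (numerical rank) of H:", eff_dim)
```

Running the sketch prints the polynomial degree selected for the chosen error bound and a rank-based proxy of the effective dimension of the hidden-layer output.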
G. Gnecco and M. Sanguineti are members of INdAM. G. Gnecco and M. Li acknowledge financial support from the research program ICTP-INdAM Research in Pairs in Mathematics 2020, for the project “On the Expressive Power of Neural Nets with Random Weights”. The work of G. Gnecco was supported in part by the Italian Project ARTES 4.0 – Advanced Robotics and enabling digital TEchnology & Systems 4.0, funded by the Italian Ministry of Economic Development (MISE). The work of M. Li was supported in part by the National Natural Science Foundation of China under Grant 62172370.
Notes
1. This refers to the approximation of continuous functions on compact sets with arbitrary accuracy, using elements of the specific family of neural networks.
2. In our numerical implementation, the integral in the definition of the weighted \(\mathcal {L}_2([-L,L],m_u)\) norm is approximated by a finite summation on a uniform and fine grid (see the sketch after this list).
3.
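As a concrete, hypothetical illustration of the quadrature mentioned in Note 2, the sketch below approximates a weighted \(\mathcal {L}_2([-L,L],m_u)\) norm by a finite summation on a uniform grid. Since the measure \(m_u\) is not specified in this excerpt, a uniform density on \([-L,L]\) is used as a placeholder, and the function whose norm is computed is the error of a degree-5 polynomial fit of the sigmoid; both choices are assumptions for illustration only.

```python
import numpy as np

def weighted_l2_norm(f, weight, L, n_grid=10001):
    """Approximate the weighted L2 norm of f on [-L, L],
    ||f||^2 = integral of |f(t)|^2 * weight(t) dt,
    by a Riemann sum on a uniform, fine grid (as in Note 2)."""
    t = np.linspace(-L, L, n_grid)
    dt = t[1] - t[0]
    return np.sqrt(np.sum(np.abs(f(t)) ** 2 * weight(t)) * dt)

# Placeholder example: uniform density on [-L, L] (the measure m_u is not
# given in this excerpt) and the approximation error of a degree-5 polynomial.
L = 6.0
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
grid = np.linspace(-L, L, 2001)
coeffs = np.polyfit(grid, sigmoid(grid), 5)
error = lambda t: sigmoid(t) - np.polyval(coeffs, t)
uniform_density = lambda t: np.full_like(t, 1.0 / (2.0 * L))

print("approx. weighted L2 norm of the error:",
      weighted_l2_norm(error, uniform_density, L))
```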
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Li, M., Gnecco, G., Sanguineti, M. (2022). Deeper Insights into Neural Nets with Random Weights. In: Long, G., Yu, X., Wang, S. (eds.) AI 2021: Advances in Artificial Intelligence. AI 2022. Lecture Notes in Computer Science, vol. 13151. Springer, Cham. https://doi.org/10.1007/978-3-030-97546-3_11
DOI: https://doi.org/10.1007/978-3-030-97546-3_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-97545-6
Online ISBN: 978-3-030-97546-3