
The BP-λL1 algorithm: Non-chaotic and accelerated learning in a MLP network

  • Computational Models of Neurons and Neural Nets
  • Conference paper
From Natural to Artificial Neural Computation (IWANN 1995)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 930))


Abstract

Multilayer perceptrons are learning structures often used in the connectionist approach. The backpropagation algorithm, which enables learning, is a fixed-point search algorithm. As such, it can exhibit the various behaviours of chaotic dynamics, and the tools of chaos theory can be applied to it. Among the usable tools is a measure of behaviour, or more precisely of stability, namely the Lyapunov numbers. Obtaining these numbers requires knowledge of the eigenvalues of the Jacobian matrix associated with the weight-update functions. We give a calculation method whose efficiency comes from exploiting the particular structure of backpropagation. Based on the computed Lyapunov numbers, the basic backpropagation algorithm is modified. We propose the first part of a new learning algorithm whose originality lies in a strategy of constraining the gradient step, imposed by the requirement of stable behaviour. Its value is related to obtaining rapid convergence.
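The connection between the gradient step and stability sketched in the abstract can be illustrated on a toy problem. The following is a minimal sketch, not the paper's BP-λL1 algorithm: it views gradient descent on a quadratic loss as a fixed-point iteration and computes the per-step Lyapunov numbers as the magnitudes of the eigenvalues of the update map's Jacobian. The matrix `H` and the helper names are illustrative assumptions.

```python
# Gradient descent viewed as a fixed-point iteration
#   w_{k+1} = F(w_k) = w_k - eta * grad L(w_k).
# For a quadratic loss L(w) = 0.5 * w^T H w, the Jacobian of F is I - eta*H,
# so the iteration is stable iff the spectral radius of that Jacobian is < 1,
# i.e. eta < 2 / lambda_max(H). The eigenvalue magnitudes of the Jacobian
# play the role of per-step Lyapunov numbers along the trajectory.
import numpy as np

H = np.array([[3.0, 1.0],
              [1.0, 2.0]])  # Hessian of a toy quadratic loss (illustrative)

def update_jacobian(eta):
    """Jacobian of the gradient-descent map F(w) = w - eta * H @ w."""
    return np.eye(2) - eta * H

def lyapunov_numbers(eta):
    """Magnitudes of the Jacobian eigenvalues (per-step Lyapunov numbers)."""
    return np.abs(np.linalg.eigvals(update_jacobian(eta)))

lam_max = np.linalg.eigvalsh(H).max()
eta_stable = 2.0 / lam_max  # stability bound on the gradient step

print(lyapunov_numbers(0.9 * eta_stable).max())  # < 1: stable regime
print(lyapunov_numbers(1.1 * eta_stable).max())  # > 1: unstable regime
```

Constraining the gradient step so that all Lyapunov numbers stay below one is the kind of stability obligation the abstract refers to; for a general MLP the Jacobian is trajectory-dependent rather than constant, which is why the paper's method exploits the structure of backpropagation to compute it efficiently.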




Editor information

José Mira, Francisco Sandoval


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Augereau, B., Simon, T., Bernard, J., Heit, B. (1995). The BP-λL1 algorithm: Non-chaotic and accelerated learning in a MLP network. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_180

  • DOI: https://doi.org/10.1007/3-540-59497-3_180

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
