Optimal identification using feed-forward neural networks

  • Neural Networks for Communications and Control
  • Conference paper
From Natural to Artificial Neural Computation (IWANN 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

In this work we present new approaches to the optimal identification of nonlinear systems. We optimize various parameters of feed-forward neural networks and of the backpropagation learning schedule using global search methods such as genetic algorithms and simulated annealing. This yields an overall improvement in learning capability, enlarging the generalization ability of the networks and reducing the learning time.

The result is a more reliable and robust model for nonlinear systems.
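
The paper itself gives no code; as a rough illustration of the idea only, the sketch below (a hypothetical NumPy example, not the authors' implementation) uses a minimal mutation-only genetic algorithm to search over two backpropagation parameters, the hidden-layer size and the learning rate, of a one-hidden-layer feed-forward network identifying a made-up nonlinear plant. Simulated annealing could be substituted for the genetic loop in the same way; all plant dynamics, parameter ranges, and GA settings here are assumptions for illustration.

    # Hypothetical sketch: GA over (hidden units, learning rate) for a small
    # feed-forward network trained with plain backpropagation.
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative nonlinear plant (assumption, not from the paper):
    # y[k+1] = 0.5*y[k] + u[k]**3
    u = rng.uniform(-1.0, 1.0, 200)
    y = np.zeros(201)
    for k in range(200):
        y[k + 1] = 0.5 * y[k] + u[k] ** 3
    X = np.column_stack([y[:-1], u])   # inputs: current output and current input
    T = y[1:].reshape(-1, 1)           # target: next output

    def train_and_score(n_hidden, lr, epochs=200):
        """Train a 2-n_hidden-1 network by backpropagation; return final MSE."""
        W1 = rng.normal(0.0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
        for _ in range(epochs):
            H = np.tanh(X @ W1 + b1)            # hidden layer
            E = (H @ W2 + b2) - T               # error of the linear output unit
            dH = (E @ W2.T) * (1.0 - H ** 2)    # backpropagate through tanh
            W2 -= lr * (H.T @ E) / len(X); b2 -= lr * E.mean(0)
            W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)
        return float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - T) ** 2))

    def random_individual():
        return (int(rng.integers(2, 16)), float(rng.uniform(0.01, 0.5)))

    def mutate(ind):
        n, lr = ind
        n = int(np.clip(n + rng.integers(-2, 3), 2, 15))
        lr = float(np.clip(lr * np.exp(rng.normal(0.0, 0.2)), 0.005, 0.5))
        return n, lr

    # Simple generational GA: keep the best half, refill with mutated copies.
    population = [random_individual() for _ in range(8)]
    for generation in range(10):
        population.sort(key=lambda ind: train_and_score(*ind))
        parents = population[:4]
        population = parents + [mutate(parents[int(rng.integers(4))]) for _ in range(4)]

    best = min(population, key=lambda ind: train_and_score(*ind))
    print("best (hidden units, learning rate):", best)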

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vergara, V., Sinne, S., Moraga, C. (1995). Optimal identification using feed-forward neural networks. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_284

  • DOI: https://doi.org/10.1007/3-540-59497-3_284

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7
