
Optimal Hidden Structure for Feedforward Neural Networks

  • Conference paper
Computational Intelligence (Fuzzy Days 1999)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1625)


Abstract

Selecting an adequate hidden structure for a feedforward neural network is an important issue in its design. When the hidden structure is too large and complex for the model being developed, the network tends to memorize the input and output sets rather than learn the relationships between them. In addition, training time increases significantly when the network is unnecessarily large. We propose two methods for optimizing the size of feedforward neural networks using orthogonal transformations. Both approaches avoid retraining the reduced-size network, a step required by conventional pruning techniques.
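The general idea behind orthogonal-transformation sizing can be illustrated with a minimal sketch. This is a generic QR-with-column-pivoting example, not the authors' exact procedure; the synthetic activation matrix, variable names, and tolerance are all illustrative assumptions. Columns of the hidden-layer activation matrix that are nearly linear combinations of other columns produce small diagonal entries in R, flagging redundant hidden units:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)

# Hypothetical trained network: activations H of 10 hidden units over
# 200 training samples, where only 4 units carry independent information
# (the other 6 are near-linear combinations of those 4).
basis = rng.standard_normal((200, 4))
mix = rng.standard_normal((4, 6))
H = np.hstack([basis, basis @ mix + 1e-6 * rng.standard_normal((200, 6))])

# QR with column pivoting orders columns (hidden units) by their
# contribution to the column space; a tiny |R[i, i]| relative to
# |R[0, 0]| marks a redundant unit.
Q, R, piv = qr(H, pivoting=True)
diag = np.abs(np.diag(R))
rank = int(np.sum(diag > 1e-3 * diag[0]))  # illustrative tolerance

print("effective hidden units:", rank)
print("units to keep:", sorted(piv[:rank]))
```

Here `rank` estimates how many hidden units actually contribute, and `piv[:rank]` identifies which ones to keep; because the kept units already span the discarded ones' activations, the reduced network can in principle be re-expressed without a full retraining pass.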





Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bachiller, P., Pérez, R.M., Martínez, P., Aguilar, P.L., Díaz, P. (1999). Optimal Hidden Structure for Feedforward Neural Networks. In: Reusch, B. (eds) Computational Intelligence. Fuzzy Days 1999. Lecture Notes in Computer Science, vol 1625. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48774-3_76


  • DOI: https://doi.org/10.1007/3-540-48774-3_76

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66050-7

  • Online ISBN: 978-3-540-48774-6

