Parallel evolutionary training algorithms for “hardware-friendly” neural networks


Abstract

In this paper, Parallel Evolutionary Algorithms for integer weight neural network training are presented. To this end, each processor is assigned a subpopulation of potential solutions. The subpopulations are independently evolved in parallel, and occasional migration is employed to allow cooperation between them. The proposed algorithms are applied to train neural networks using threshold activation functions and weight values confined to a narrow band of integers. We constrain the weights and biases to the range [−3, 3], so that they can be represented by just 3 bits. Such neural networks are better suited for hardware implementation than real-weight ones. These algorithms have been designed keeping in mind that the resulting integer weights require fewer bits to be stored and that the digital arithmetic operations between them are easier to implement in hardware. Another advantage of the proposed evolutionary strategies is that they are capable of continuing the training process “on-chip”, if needed. Our intention is to present results of parallel evolutionary algorithms on this difficult task. Based on the application of the proposed class of methods to classical neural network problems, our experience is that these methods are effective and reliable.
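The abstract describes an island-model scheme: each processor evolves its own subpopulation of integer weight vectors and periodically migrates good individuals to a neighbour. The Python sketch below illustrates that general idea only; it is not the authors' implementation. The tiny XOR network, the (mu+lambda)-style selection, the ±1 integer mutation, the ring migration topology, and all population/interval parameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal island-model sketch (illustrative assumptions throughout):
# each "island" stands in for a processor's subpopulation of integer
# weight vectors in [-3, 3]; islands evolve independently and
# occasionally migrate their best individual around a ring.
import random

LOW, HIGH = -3, 3          # weights/biases confined to [-3, 3]: 7 values, 3 bits

def threshold(x):
    """Hard-limiting (threshold) activation, suited to digital hardware."""
    return 1 if x >= 0 else 0

def forward(weights, inputs, n_hidden=2):
    """Tiny 2-input, n_hidden-hidden, 1-output net; 'weights' is a flat
    integer vector: per-hidden-unit (w1, w2, bias), then output weights + bias."""
    i, hidden = 0, []
    for _ in range(n_hidden):
        s = weights[i] * inputs[0] + weights[i + 1] * inputs[1] + weights[i + 2]
        hidden.append(threshold(s))
        i += 3
    s = sum(weights[i + j] * h for j, h in enumerate(hidden)) + weights[i + n_hidden]
    return threshold(s)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # toy training task

def error(weights):
    return sum((forward(weights, x) - t) ** 2 for x, t in XOR)

def mutate(weights, rate=0.2):
    """Integer mutation: nudge a gene by +/-1, clipped back into [-3, 3]."""
    return [min(HIGH, max(LOW, w + random.choice((-1, 1))))
            if random.random() < rate else w
            for w in weights]

def evolve(pop, generations):
    """(mu + lambda)-style step: mutate everyone, keep the best half."""
    for _ in range(generations):
        pop = sorted(pop + [mutate(p) for p in pop], key=error)[:len(pop)]
    return pop

n_genes, n_islands, pop_size = 3 * 2 + 3, 4, 10
islands = [[[random.randint(LOW, HIGH) for _ in range(n_genes)]
            for _ in range(pop_size)] for _ in range(n_islands)]

for epoch in range(20):                       # migration after each epoch
    islands = [evolve(pop, generations=10) for pop in islands]
    for k, pop in enumerate(islands):         # ring topology: best replaces
        best = min(pop, key=error)            # the neighbour's worst member
        islands[(k + 1) % n_islands][-1] = best[:]

best = min((min(p, key=error) for p in islands), key=error)
print("best error:", error(best), "weights:", best)
```

Note that the seven integers in [−3, 3] fit comfortably in three bits (which can encode eight values), so every weight evolved this way can be stored and manipulated in a 3-bit hardware register; this is the “hardware-friendly” property the title refers to.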





Cite this article

Plagianakos, V.P., Vrahatis, M.N. Parallel evolutionary training algorithms for “hardware-friendly” neural networks. Natural Computing 1, 307–322 (2002). https://doi.org/10.1023/A:1016545907026
