Abstract
This paper presents a new evolutionary artificial neural network (ANN) algorithm, IPSONet, based on an improved particle swarm optimization (PSO). The improved PSO employs a parameter automation strategy, velocity resetting, and crossover and mutation operators to significantly improve the original PSO's global search ability and fine-tuning of solutions. IPSONet applies the improved PSO to the design of feedforward ANNs. Unlike most previous studies, which used PSO only to evolve the weights of ANNs, this study emphasizes using the improved PSO to evolve the structure and weights of ANNs simultaneously, through a specific individual representation and evolutionary scheme. The performance of IPSONet has been evaluated on several benchmarks. The results demonstrate that IPSONet can produce compact ANNs with good generalization ability.
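The abstract names three ingredients of the improved PSO: a parameter automation strategy (time-varying inertia and acceleration coefficients), velocity resetting, and mutation. A minimal sketch of how such an update loop might look is given below; the function name, parameter defaults, reset threshold, and Gaussian-mutation form are illustrative assumptions rather than the paper's actual settings, and crossover between particles is omitted for brevity.

```python
import random

def improved_pso(fitness, dim, n_particles=20, iters=200,
                 w_max=0.9, w_min=0.4, c_init=2.5, c_final=0.5,
                 v_max=1.0, v_reset_eps=1e-4, mutate_prob=0.05):
    """Sketch of an improved PSO minimising `fitness` over R^dim."""
    # Initialise particles uniformly in [-1, 1]^dim with zero velocity.
    pos = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for t in range(iters):
        frac = t / iters
        # Parameter automation: inertia decreases over the run, the
        # cognitive coefficient shrinks while the social one grows.
        w = w_max - (w_max - w_min) * frac
        c1 = c_init - (c_init - c_final) * frac
        c2 = c_final + (c_init - c_final) * frac
        for i in range(n_particles):
            for d in range(dim):
                v = (w * vel[i][d]
                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                     + c2 * random.random() * (gbest[d] - pos[i][d]))
                # Velocity resetting: re-energise stagnant components.
                if abs(v) < v_reset_eps:
                    v = random.uniform(-v_max, v_max)
                vel[i][d] = max(-v_max, min(v_max, v))
                pos[i][d] += vel[i][d]
                # Gaussian mutation with small probability.
                if random.random() < mutate_prob:
                    pos[i][d] += random.gauss(0.0, 0.1)
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f


# Usage: minimise the sphere function in three dimensions.
random.seed(0)
best, best_f = improved_pso(lambda x: sum(v * v for v in x), dim=3)
print(best_f)
```

In IPSONet itself each particle would encode both connection weights and structural (connectivity) information rather than a plain real vector, but the schedule of coefficients and the reset/mutation hooks follow the same pattern as this sketch.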
Yu, J., Xi, L. & Wang, S. An Improved Particle Swarm Optimization for Evolving Feedforward Artificial Neural Networks. Neural Process Lett 26, 217–231 (2007). https://doi.org/10.1007/s11063-007-9053-x