Abstract
The development of an enhanced parallel algorithm for batch pattern training of a multilayer perceptron with the back-propagation training algorithm, and an investigation of its efficiency on general-purpose parallel computers, are presented in this paper. An algorithmic description of the parallel version of the batch pattern training method is given. Several technical solutions that improve the parallelization efficiency of the algorithm are discussed. The parallelization efficiency of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem on two general-purpose parallel computers. The experimental results show that (i) the enhanced version of the parallel algorithm is scalable and provides better parallelization efficiency than the previous implementation; (ii) the parallelization efficiency of the algorithm is high enough for efficient use of this algorithm on general-purpose parallel computers available within modern computational grids.
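In a batch pattern parallelization of this kind, the training patterns are typically divided among the processes, each of which accumulates partial gradient contributions that are summed once per epoch with a single collective operation before an identical weight update is applied everywhere. The following C/MPI sketch illustrates that structure under simplifying assumptions: a single linear output neuron in place of the paper's multilayer perceptron, synthetic data, and hypothetical constants (N_PATTERNS, N_INPUTS, LRATE); it is not the authors' implementation.

/* Minimal sketch of parallel batch pattern training (assumed structure,
 * not the paper's code): each rank trains on its own slice of the patterns
 * and one MPI_Allreduce per epoch combines the partial gradients. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N_PATTERNS 1024   /* hypothetical training-set size   */
#define N_INPUTS   8      /* hypothetical number of inputs    */
#define N_EPOCHS   100
#define LRATE      0.1    /* hypothetical learning rate       */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Synthetic patterns: every rank generates the same data set,
     * but only trains on its own contiguous slice of patterns. */
    static double x[N_PATTERNS][N_INPUTS], t[N_PATTERNS];
    srand(42);
    for (int p = 0; p < N_PATTERNS; ++p) {
        t[p] = 0.0;
        for (int i = 0; i < N_INPUTS; ++i) {
            x[p][i] = (double)rand() / RAND_MAX;
            t[p] += 0.5 * x[p][i];           /* target = known linear map */
        }
    }

    double w[N_INPUTS] = {0};                /* weights, replicated on all ranks */
    int chunk = N_PATTERNS / size;
    int first = rank * chunk;
    int last  = (rank == size - 1) ? N_PATTERNS : first + chunk;

    for (int e = 0; e < N_EPOCHS; ++e) {
        double grad_local[N_INPUTS] = {0}, grad_global[N_INPUTS];

        /* Each rank accumulates the gradient over its own patterns only. */
        for (int p = first; p < last; ++p) {
            double y = 0.0;
            for (int i = 0; i < N_INPUTS; ++i) y += w[i] * x[p][i];
            double err = y - t[p];
            for (int i = 0; i < N_INPUTS; ++i) grad_local[i] += err * x[p][i];
        }

        /* Single collective per epoch: sum the partial gradients on all ranks. */
        MPI_Allreduce(grad_local, grad_global, N_INPUTS, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        /* Identical batch update on every rank keeps the weights replicated. */
        for (int i = 0; i < N_INPUTS; ++i)
            w[i] -= LRATE * grad_global[i] / N_PATTERNS;
    }

    if (rank == 0)
        printf("rank 0: w[0] = %f after %d epochs\n", w[0], N_EPOCHS);

    MPI_Finalize();
    return 0;
}

Because the weights stay replicated and every rank applies the same update, the only communication per epoch is the single MPI_Allreduce, which is what makes the batch pattern split attractive for coarse-grained parallelization on general-purpose parallel computers.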
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Turchenko, V., Grandinetti, L. (2010). Scalability of Enhanced Parallel Batch Pattern BP Training Algorithm on General-Purpose Supercomputers. In: de Leon F. de Carvalho, A.P., Rodríguez-González, S., De Paz Santana, J.F., Rodríguez, J.M.C. (eds) Distributed Computing and Artificial Intelligence. Advances in Intelligent and Soft Computing, vol 79. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14883-5_67
DOI: https://doi.org/10.1007/978-3-642-14883-5_67
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-14882-8
Online ISBN: 978-3-642-14883-5
eBook Packages: Engineering (R0)