
Scalability of Enhanced Parallel Batch Pattern BP Training Algorithm on General-Purpose Supercomputers

  • Conference paper
Distributed Computing and Artificial Intelligence

Part of the book series: Advances in Intelligent and Soft Computing (AINSC, volume 79)

Abstract

This paper presents the development of an enhanced parallel algorithm for batch pattern training of a multilayer perceptron with the back-propagation training algorithm, together with an analysis of its efficiency on general-purpose parallel computers. An algorithmic description of the parallel version of the batch pattern training method is given. Several technical solutions that improve the parallelization efficiency of the algorithm are discussed. The parallelization efficiency of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem on two general-purpose parallel computers. The experimental results show that (i) the enhanced version of the parallel algorithm is scalable and provides better parallelization efficiency than the previous implementation, and (ii) its parallelization efficiency is high enough for this algorithm to be used effectively on the general-purpose parallel computers available within modern computational grids.
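
The batch pattern training scheme parallelizes naturally as data-parallel gradient accumulation: each process computes the weight-gradient contributions of its own subset of training patterns, the partial gradients are summed across processes with a collective reduction, and every process then applies the same batch weight update. The sketch below illustrates this idea for a one-hidden-layer perceptron using mpi4py; the network sizes, activations, learning rate, and data are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of data-parallel batch pattern BP training (illustrative only).
# Assumptions not taken from the paper: one hidden layer, sigmoid activation,
# mean-squared error, a fixed learning rate, and the mpi4py binding of MPI.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training set, generated identically on every process.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))          # patterns x inputs
T = rng.standard_normal((1000, 1))          # targets
n_in, n_hid, n_out = X.shape[1], 16, T.shape[1]

# Identical initial weights on every process (same seed).
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
W2 = rng.standard_normal((n_hid, n_out)) * 0.1

# Each process works on its own slice of the batch of patterns.
my_X = X[rank::size]
my_T = T[rank::size]

lr = 0.01
for epoch in range(100):
    # Forward pass on the local patterns.
    H = sigmoid(my_X @ W1)
    Y = H @ W2

    # Backward pass: local gradient contributions for the whole batch.
    dY = (Y - my_T) / X.shape[0]
    g_W2 = H.T @ dY
    dH = (dY @ W2.T) * H * (1.0 - H)
    g_W1 = my_X.T @ dH

    # Sum the partial gradients over all processes (the only communication
    # step per epoch), then apply the same batch update everywhere.
    G_W1 = np.empty_like(g_W1)
    G_W2 = np.empty_like(g_W2)
    comm.Allreduce(g_W1, G_W1, op=MPI.SUM)
    comm.Allreduce(g_W2, G_W2, op=MPI.SUM)
    W1 -= lr * G_W1
    W2 -= lr * G_W2

if rank == 0:
    print("final MSE on rank 0's local patterns:", float(np.mean((Y - my_T) ** 2)))
```

A script along these lines would be launched with, e.g., `mpirun -np 4 python batch_bp.py` (file name assumed). Because every process starts from the same weights and applies the same summed gradient, the weight vectors stay identical across processes and only one collective reduction per epoch is needed.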


Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Turchenko, V., Grandinetti, L. (2010). Scalability of Enhanced Parallel Batch Pattern BP Training Algorithm on General-Purpose Supercomputers. In: de Leon F. de Carvalho, A.P., Rodríguez-González, S., De Paz Santana, J.F., Rodríguez, J.M.C. (eds) Distributed Computing and Artificial Intelligence. Advances in Intelligent and Soft Computing, vol 79. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14883-5_67

  • DOI: https://doi.org/10.1007/978-3-642-14883-5_67

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-14882-8

  • Online ISBN: 978-3-642-14883-5

  • eBook Packages: Engineering, Engineering (R0)
