Lessons learned when comparing shared memory and message passing codes on three modern parallel architectures

  • 2. Computational Science
  • Conference paper
  • First Online:
High-Performance Computing and Networking (HPCN-Europe 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1401)

Abstract

A serial Fortran 77 micromagnetics code, which simulates the behaviour of thin-film media, was parallelised using both the shared memory and message passing paradigms, and run on an SGI Challenge, a Cray T3D and an SGI Origin 2000. We report the observed performance of the code, noting some important effects due to cache behaviour. We also demonstrate how certain commonly used presentation methods can disguise the true performance profile of a code.
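
To make the contrast between the two paradigms concrete, the sketch below shows the same toy reduction loop handled both by message passing (MPI, an explicit block decomposition plus a communication call) and by shared memory (a compiler directive that splits the loop across threads). This is purely illustrative and is not the authors' Fortran 77 micromagnetics code: the dot-product example, the array names a and b, the size N, the use of OpenMP rather than the vendor directives of the period, and the mpicc build line are all assumptions made for this sketch.

    /* Illustrative sketch only -- not the authors' micromagnetics code.
     * Contrasts message passing (MPI) with shared memory (an OpenMP
     * directive) on a toy dot product.
     * Assumed build: mpicc -fopenmp dot.c */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N];

    int main(int argc, char **argv)
    {
        double local = 0.0, global = 0.0;
        int rank, nprocs, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* Message passing: each process owns a contiguous block of the
         * index range and computes a partial sum on its own data. */
        int chunk = N / nprocs;
        int lo = rank * chunk;
        int hi = (rank == nprocs - 1) ? N : lo + chunk;

        /* Shared memory: within a process, a directive splits the loop
         * across threads; the runtime combines the thread-private
         * copies of 'local' with no explicit communication. */
        #pragma omp parallel for reduction(+:local)
        for (i = lo; i < hi; i++)
            local += a[i] * b[i];

        /* The per-process partial sums are combined by an explicit
         * communication call. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("dot product = %f\n", global);

        MPI_Finalize();
        return 0;
    }

The two mechanisms are combined in one hybrid program here only for compactness; the abstract describes the shared memory and message passing versions as separate parallelisations of the same serial code.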



Author information

J. M. MacLaren, J. M. Bull

Editor information

Peter Sloot, Marian Bubak, Bob Hertzberger


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

MacLaren, J.M., Bull, J.M. (1998). Lessons learned when comparing shared memory and message passing codes on three modern parallel architectures. In: Sloot, P., Bubak, M., Hertzberger, B. (eds) High-Performance Computing and Networking. HPCN-Europe 1998. Lecture Notes in Computer Science, vol 1401. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0037160

  • DOI: https://doi.org/10.1007/BFb0037160

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64443-9

  • Online ISBN: 978-3-540-69783-1

  • eBook Packages: Springer Book Archive
