Research article
DOI: 10.1145/2488551.2488602

Methodology for MPI applications autotuning

Published: 15 September 2013

Abstract

This paper proposes a methodology designed to tackle the most common performance problems of MPI parallel programs. By applying simple steps in a systematic way, the methodology is intended to provide the basis for a successful autotuning approach for MPI applications, based on measurements taken from their own execution. As part of the AutoTune project, this work is ultimately aimed at extending Periscope to apply automatic tuning to parallel applications and thus provide a straightforward way of tuning MPI parallel codes. Experimental tests demonstrate that this methodology could lead to significant performance improvements.
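The paper's methodology is only summarized here, so the sketch below is an illustration rather than the authors' code: a minimal C/MPI program, assuming a placeholder kernel do_work and a max/avg imbalance metric (both invented for this example), that shows the kind of measurement taken from an application's own execution on which such an autotuning approach can be based.

    /* Minimal sketch: measure per-rank compute time and derive a
       load-imbalance indicator. Illustrative only; do_work and the
       metric are assumptions, not taken from the paper. */
    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder kernel: deliberately uneven sleep times stand in
       for the application's real work and create an imbalance. */
    static void do_work(int rank)
    {
        usleep(100000 * (rank % 4 + 1));
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Time this rank's compute phase. */
        double t0 = MPI_Wtime();
        do_work(rank);
        double local = MPI_Wtime() - t0;

        /* Collect min/max/sum across ranks: the raw measurements an
           autotuner would inspect before redistributing work. */
        double tmin, tmax, tsum;
        MPI_Reduce(&local, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
        MPI_Reduce(&local, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        MPI_Reduce(&local, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            double avg = tsum / size;
            /* max/avg - 1: 0% means perfectly balanced ranks. */
            printf("compute time: min=%.3fs max=%.3fs avg=%.3fs imbalance=%.1f%%\n",
                   tmin, tmax, avg, 100.0 * (tmax / avg - 1.0));
        }

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, for example, mpirun -np 4, the root rank prints the spread of compute times; a large max/avg gap flags the load-balancing problem named in the author tags as a first candidate for tuning.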

References

[1]
S. Benedict, V. Petkov, and M. Gerndt. Periscope: An online-based distributed performance analysis tool. In M. S. Müller, M. M. Resch, A. Schulz, and W. E. Nagel, editors, Tools for High Performance Computing 2009, pages 1--16. Springer Berlin Heidelberg, 2010.
[2]
M. Chaarawi, J. Squyres, E. Gabriel, and S. Feki. A tool for optimizing runtime parameters of Open MPI. Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 210--217, 2008.
[3]
Leibniz-Rechenzentrum. SuperMUC Petascale System. http://www.lrz.de/services/compute/supermuc/ (Mar. 2013).
[4]
R. Miceli, G. Civario, A. Sikora, E. César, M. Gerndt, H. Haitof, C. Navarrete, S. Benkner, M. Sandrieser, L. Morin, and F. Bodin. Autotune: A plugin-driven approach to the automatic tuning of parallel applications. In P. Manninen and P. Öster, editors, Applied Parallel and Scientific Computing, volume 7782 of Lecture Notes in Computer Science, pages 328--342. Springer Berlin Heidelberg, 2013.
[5]
A. Morajko, O. Morajko, T. Margalef, and E. Luque. MATE: Dynamic performance tuning environment. In Euro-Par'04, pages 98--106, 2004.
[6]
MPI Forum. MPI: A Message-Passing Interface Standard. Version 2.2, September 4, 2009. Available at: http://www.mpi-forum.org (Mar. 2013).
[7]
W. E. Nagel, A. Arnold, M. Weber, H.-C. Hoppe, and K. Solchenbach. VAMPIR: Visualization and analysis of MPI resources. Citeseer, 1996.
[8]
S. Pellegrini, J. Wang, T. Fahringer, and H. Moritsch. Optimizing MPI runtime parameter settings by using machine learning. In Proceedings of the 16th European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 196--206, Berlin, Heidelberg, 2009. Springer-Verlag.
[9]
R. Solar, R. Suppi, and E. Luque. High performance distributed cluster-based individual-oriented fish school simulation. Procedia CS, 4:76--85, 2011.

Cited By

  • (2018) Cooperative rendezvous protocols for improved performance and overlap. Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, pages 1--13. 10.5555/3291656.3291694
  • (2018) Cooperative rendezvous protocols for improved performance and overlap. Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis, pages 1--13. 10.1109/SC.2018.00031
  • (2017) Designing dynamic and adaptive MPI point-to-point communication protocols for efficient overlap of computation and communication. High Performance Computing, pages 334--354. 10.1007/978-3-319-58667-0_18


Published In

EuroMPI '13: Proceedings of the 20th European MPI Users' Group Meeting
September 2013
289 pages
ISBN: 9781450319034
DOI: 10.1145/2488551
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Sponsors

  • ARCOS: Computer Architecture and Technology Area, Universidad Carlos III de Madrid

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. MPI
  2. autotuning
  3. load balancing
  4. performance analysis

Qualifiers

  • Research-article

Conference

EuroMPI '13
Sponsor:
  • ARCOS
EuroMPI '13: 20th European MPI Users' Group Meeting
September 15--18, 2013
Madrid, Spain

Acceptance Rates

EuroMPI '13 Paper Acceptance Rate: 22 of 47 submissions, 47%
Overall Acceptance Rate: 66 of 139 submissions, 47%
