DOI: 10.5555/1757044.1757046
Article

Scheduling dynamically spawned processes in MPI-2

Published: 26 June 2006

Abstract

The Message Passing Interface (MPI) is one of the best-known parallel programming libraries. While the MPI-1.2 standard only handles a fixed number of processes, determined at the start of the parallel execution, the more recently implemented MPI-2 standard provides primitives to spawn processes during the execution and to let them communicate with each other.
However, the MPI standard does not define any way to schedule these processes. This paper presents a scheduler module, implemented with MPI-2, that determines on-line (i.e., during the execution) on which processor a newly spawned process should run, and with which priority. The schedule is computed under the hypothesis that the MPI-2 program follows a Divide-and-Conquer model, for which well-known scheduling algorithms can be used. A detailed presentation of the scheduler's implementation is provided, together with an experimental validation. The experiments show a clear improvement in load balance.
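To make the mechanism concrete, the sketch below shows how an MPI-2 program can spawn a child of a divide-and-conquer computation on a processor chosen by a scheduler. It is only an illustration under stated assumptions, not the authors' code: choose_host() and the executable name "./task" are hypothetical stand-ins for the scheduler module and worker program described in the paper, and the "host" MPI_Info key is an implementation-dependent placement hint (honored, for example, by Open MPI).

```c
/* Minimal sketch (not the paper's implementation): spawn one child of a
 * divide-and-conquer computation on a host suggested by a scheduler. */
#include <mpi.h>

/* Hypothetical placeholder for the scheduler: a real on-line scheduler
 * would pick the least loaded processor; here a fixed node name is used. */
static const char *choose_host(void)
{
    return "node03";
}

int main(int argc, char *argv[])
{
    MPI_Comm parent, child;
    MPI_Info info;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Root of the divide-and-conquer tree: spawn one child task on
         * the host chosen by the scheduler, passed as a placement hint. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "host", choose_host());
        MPI_Comm_spawn("./task", MPI_ARGV_NULL, 1, info, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
        MPI_Info_free(&info);
        /* ... send the subproblem to the child over the `child`
         * intercommunicator and collect its result ... */
    } else {
        /* Spawned child: solve the subproblem received from the parent,
         * possibly spawning further children recursively, then report
         * the result back through `parent`. */
    }

    MPI_Finalize();
    return 0;
}
```

In the setting described by the abstract, the decision returned by such a scheduler would be taken on-line, based on the current load of the processors, and could also assign a priority to the new process.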



Information

Published In

JSSPP'06: Proceedings of the 12th international conference on Job scheduling strategies for parallel processing
June 2006
256 pages
ISBN:9783540710349
Editors: Eitan Frachtenberg, Uwe Schwiegelshohn

Publisher

Springer-Verlag

Berlin, Heidelberg


Qualifiers

  • Article
