DOI: 10.1145/1188455.1188545

A software based approach for providing network fault tolerance in clusters with uDAPL interface: MPI level design and performance evaluation

Published: 11 November 2006

Abstract

In the arena of cluster computing, MPI has emerged as the de facto standard for writing parallel applications. At the same time, the introduction of high-speed RDMA-enabled interconnects such as InfiniBand, Myrinet, Quadrics, and RDMA-enabled Ethernet has accelerated this trend. Network APIs like uDAPL (user Direct Access Provider Library) have been proposed to provide a network-independent interface to these different RDMA-enabled interconnects. Clusters combining several of these interconnects are being deployed both to leverage their unique features and to provide network failover in the wake of transmission errors. In this paper, we design a network-fault-tolerant MPI using the uDAPL interface, making the design portable across existing and upcoming interconnects. Our design provides failover to available paths, asynchronous recovery of previously failed paths, and recovery from network partitions without application restart. In addition, the design handles network heterogeneity, making it suitable for current state-of-the-art clusters. We implement our design and evaluate it with micro-benchmarks and applications. Our performance evaluation shows that the proposed design provides significant performance benefits on both homogeneous and heterogeneous clusters. Using a heterogeneous combination of IBA and Ammasso-GigE, we improve performance by 10-15% for different NAS Parallel Benchmarks on an 8x1 configuration. For simple micro-benchmarks on a homogeneous configuration, we achieve a 15-20% improvement in throughput. In addition, experiments with simple MPI micro-benchmarks and NAS applications reveal that the network fault tolerance modules incur negligible overhead and provide optimal performance in the presence of network partitions.
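To make the failover idea concrete, below is a minimal sketch (not the authors' implementation) of the behavior the abstract describes: a table of network paths per peer, sends attempted on the first live path, demotion of a path on a transmission error with asynchronous recovery scheduled for it, and a network-partition report only when every path to the peer is down. The `path_t` structure and the `path_send`/`schedule_recovery` helpers are hypothetical stand-ins for real uDAPL operations such as dat_ep_post_send and the connection re-establishment sequence.

```c
/*
 * Sketch of multi-path failover under the assumptions stated above.
 * path_send() and schedule_recovery() are hypothetical placeholders
 * for actual uDAPL calls; inject_fault is only a test hook.
 */
#include <stdio.h>
#include <stddef.h>

typedef enum { PATH_UP, PATH_FAILED, PATH_RECOVERING } path_state;

typedef struct {
    const char *name;          /* e.g. "IBA" or "Ammasso-GigE" */
    path_state  state;
    int         inject_fault;  /* test hook: fail the next send attempt */
} path_t;

/* Hypothetical transport send plus completion check:
 * returns 0 on success, -1 on a transmission error. */
static int path_send(path_t *p, const void *buf, size_t len)
{
    (void)buf; (void)len;
    if (p->inject_fault) { p->inject_fault = 0; return -1; }
    return 0;
}

/* Hypothetical recovery hook: in the scheme the abstract describes, a
 * failed path is reconnected asynchronously, so sends never block on it. */
static void schedule_recovery(path_t *p)
{
    p->state = PATH_RECOVERING;  /* a progress thread would retry the connect */
}

/* Send with failover: try each live path in turn, demoting paths that
 * fail; report a partition only when every path to the peer is down. */
static int send_with_failover(path_t paths[], int npaths,
                              const void *buf, size_t len)
{
    for (int i = 0; i < npaths; i++) {
        if (paths[i].state != PATH_UP)
            continue;
        if (path_send(&paths[i], buf, len) == 0)
            return i;                      /* delivered on path i */
        paths[i].state = PATH_FAILED;      /* transmission error: fail over */
        schedule_recovery(&paths[i]);
    }
    return -1;  /* network partition: no usable path remains */
}

int main(void)
{
    path_t paths[2] = {
        { "IBA",          PATH_UP, 1 },    /* inject a fault on the primary */
        { "Ammasso-GigE", PATH_UP, 0 },
    };
    const char msg[] = "hello";

    int used = send_with_failover(paths, 2, msg, sizeof msg);
    if (used >= 0)
        printf("delivered via %s\n", paths[used].name);
    else
        printf("all paths down (network partition)\n");
    return 0;
}
```

In this sketch the sender masks the fault entirely: the message is delivered over the secondary path while the primary is repaired in the background, which mirrors the paper's claim of failover and recovery without application restart.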




Published In

SC '06: Proceedings of the 2006 ACM/IEEE conference on Supercomputing
November 2006, 746 pages
ISBN: 0769527000
DOI: 10.1145/1188455


Publisher

Association for Computing Machinery, New York, NY, United States



Qualifiers

  • Article

Conference

SC '06

Acceptance Rates

SC '06 Paper Acceptance Rate: 54 of 239 submissions, 23%
Overall Acceptance Rate: 1,516 of 6,373 submissions, 24%



Article Metrics

  • Downloads (Last 12 months): 9
  • Downloads (Last 6 weeks): 1
Reflects downloads up to 27 Nov 2024


Cited By

  • (2018) rmalloc() and rpipe(). Proceedings of the 8th International Workshop on Runtime and Operating Systems for Supercomputers, pages 1-9. DOI: 10.1145/3217189.3217191. Online publication date: 12-Jun-2018.
  • (2018) Spark-uDAPL: Cost-Saving Big Data Analytics on Microsoft Azure Cloud with RDMA Networks. 2018 IEEE International Conference on Big Data (Big Data), pages 321-326. DOI: 10.1109/BigData.2018.8622615. Online publication date: Dec-2018.
  • (2007) Using CMT in SCTP-based MPI to exploit multiple interfaces in cluster nodes. Proceedings of the 14th European conference on Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 204-212. DOI: 10.5555/2396095.2396134. Online publication date: 30-Sep-2007.
  • (2007) Network fault tolerance in Open MPI. Proceedings of the 13th international Euro-Par conference on Parallel Processing, pages 868-878. DOI: 10.5555/2391541.2391647. Online publication date: 28-Aug-2007.
  • (2007) Virtual machine aware communication libraries for high performance computing. Proceedings of the 2007 ACM/IEEE conference on Supercomputing, pages 1-12. DOI: 10.1145/1362622.1362635. Online publication date: 16-Nov-2007.
  • (2007) Using CMT in SCTP-Based MPI to Exploit Multiple Interfaces in Cluster Nodes. Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 204-212. DOI: 10.1007/978-3-540-75416-9_31. Online publication date: 2007.
