DOI: 10.1109/IPDPS.2005.106
Article

Analysis of Design Considerations for Optimizing Multi-Channel MPI over InfiniBand

Published: 04 April 2005

Abstract

Modern MPI implementations provide several communication channels for optimizing performance. To obtain the best performance for the most demanding contemporary applications, it is critical to manage these communication channels efficiently. Designing the MPI layer requires considering several issues, including the overhead of message discovery and the thresholds for choosing among channels. Choosing these parameters is not trivial, since application characteristics and the demands placed on the MPI layer vary widely. In this paper we address these issues. We propose several schemes, such as static priority and dynamic priority, to implement channel polling efficiently. Our results indicate that we can reduce intra-node latency by up to 12% and message discovery time by up to 45%. Further, we explore several methodologies for choosing appropriate thresholds for the different channels.
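
To make the polling schemes concrete, the sketch below illustrates, in C, how a progress engine might discover incoming messages across several channels using either a fixed (static) priority order or a dynamic priority that promotes the most recently active channel, plus a simple size threshold for picking a send channel. The channel names, poll routines, and threshold value are illustrative assumptions for this sketch, not the implementation or the thresholds evaluated in the paper.

```c
/*
 * A minimal sketch (assumed names and structure, not the paper's actual code)
 * of the channel-polling schemes named in the abstract. An MPI progress
 * engine must discover messages on several channels -- e.g. shared memory for
 * intra-node traffic, an RDMA fast path, and InfiniBand send/recv -- and the
 * order in which the channels are polled determines message-discovery cost.
 */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define NUM_CHANNELS 3
enum channel_id { CH_SHMEM = 0, CH_RDMA = 1, CH_SENDRECV = 2 };

/* Stand-ins for the real per-channel poll routines: return 1 if a message is
 * waiting on that channel. They consult a toy "pending" table so the sketch
 * is self-contained. */
static int pending[NUM_CHANNELS];
static int poll_channel(int ch) { int hit = pending[ch]; pending[ch] = 0; return hit; }

/* Static priority: always poll the channels in one fixed, pre-chosen order. */
static int poll_static(const int order[NUM_CHANNELS])
{
    for (int i = 0; i < NUM_CHANNELS; i++)
        if (poll_channel(order[i]))
            return order[i];              /* message discovered on this channel */
    return -1;                            /* nothing arrived on any channel */
}

/* Dynamic priority: promote the channel that last delivered a message to the
 * front of the polling order, so a burst of traffic on one channel is
 * discovered without repeatedly polling the idle channels first. */
static int dyn_order[NUM_CHANNELS] = { CH_SHMEM, CH_RDMA, CH_SENDRECV };

static int poll_dynamic(void)
{
    for (int i = 0; i < NUM_CHANNELS; i++) {
        int ch = dyn_order[i];
        if (poll_channel(ch)) {
            memmove(&dyn_order[1], &dyn_order[0], i * sizeof dyn_order[0]);
            dyn_order[0] = ch;            /* ch becomes highest priority */
            return ch;
        }
    }
    return -1;
}

/* Threshold-based channel selection on the send side: intra-node messages use
 * shared memory; remote messages switch channels once they exceed a tunable
 * size threshold (the value below is a placeholder, not a threshold derived
 * in the paper). */
static int select_channel(size_t msg_size, int same_node)
{
    const size_t threshold = 8 * 1024;    /* assumed tunable */
    if (same_node)
        return CH_SHMEM;
    return (msg_size <= threshold) ? CH_RDMA : CH_SENDRECV;
}

int main(void)
{
    pending[CH_RDMA] = 1;                 /* simulate an arrival on the RDMA channel */
    printf("dynamic poll found channel %d\n", poll_dynamic());
    printf("next poll order starts with %d\n", dyn_order[0]);

    int order[NUM_CHANNELS] = { CH_SHMEM, CH_RDMA, CH_SENDRECV };
    pending[CH_SHMEM] = 1;
    printf("static poll found channel %d\n", poll_static(order));

    printf("8 KB remote message -> channel %d\n", (int)select_channel(8 * 1024, 0));
    return 0;
}
```

The dynamic scheme captures the intuition behind priority reordering: once a channel delivers a message, the next incoming message is likely to arrive on the same channel, so polling it first reduces discovery time for bursty traffic while still guaranteeing every channel is eventually polled.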

Published In

IPDPS '05: Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05) - Workshop 9 - Volume 10
April 2005
ISBN: 0769523129

Publisher

IEEE Computer Society, United States
