DOI: 10.5555/1413370.1413394
Research article

Early evaluation of IBM BlueGene/P

Published: 15 November 2008

Abstract

BlueGene/P (BG/P) is the second-generation BlueGene architecture from IBM, succeeding BlueGene/L (BG/L). BG/P is a system-on-a-chip (SoC) design that uses four PowerPC 450 cores operating at 850 MHz, with a double-precision, dual-pipe floating-point unit per core. These chips are connected by multiple interconnection networks, including a 3-D torus, a global collective network, and a global barrier network. The design is intended to provide a highly scalable, physically dense system with relatively low power requirements per flop. In this paper, we report on our examination of BG/P, presented in the context of a set of important scientific applications and compared to other major large-scale supercomputers in use today. Our investigation confirms that BG/P has good scalability, with an expected lower performance per processor when compared to the Cray XT4's Opteron. We also find that BG/P uses very low power per floating-point operation for certain kernels, yet it has less of a power advantage when considering science-driven metrics for mission applications.
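The abstract's headline figures can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only: the 4 flops/cycle/core figure (a dual-pipe double-precision FPU retiring two fused multiply-adds per cycle) and the per-node power draw are assumptions for illustration, not numbers taken from the paper.

```python
# Back-of-the-envelope peak-performance estimate for one BG/P compute
# node, using the core count and clock rate quoted in the abstract.

CORES_PER_NODE = 4       # PowerPC 450 cores per SoC (from the abstract)
CLOCK_HZ = 850e6         # 850 MHz (from the abstract)
FLOPS_PER_CYCLE = 4      # ASSUMED: dual-pipe FPU, two FMAs per cycle

peak_flops = CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"Peak per node: {peak_flops / 1e9:.1f} GFLOPS")

# HYPOTHETICAL per-node power draw (not from the paper), included only to
# illustrate the flops-per-watt metric the paper's power analysis uses.
assumed_watts_per_node = 31.0
print(f"Peak efficiency: {peak_flops / assumed_watts_per_node / 1e6:.0f} MFLOPS/W")
```

Under these assumptions a node peaks at 13.6 GFLOPS; the efficiency figure scales inversely with whatever power number is plugged in, which is why the paper distinguishes kernel-level flops-per-watt from science-driven metrics.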




Published In

SC '08: Proceedings of the 2008 ACM/IEEE conference on Supercomputing
November 2008
739 pages
ISBN:9781424428359


Publisher

IEEE Press


Qualifiers

  • Research-article


Conference

SC '08

Acceptance Rates

SC '08 Paper Acceptance Rate: 59 of 277 submissions (21%)
Overall Acceptance Rate: 1,516 of 6,373 submissions (24%)


Cited By

  • (2016) The mont-blanc prototype. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-12, 13 Nov 2016. DOI: 10.5555/3014904.3014955
  • (2016) In-memory Integration of Existing Software Components for Parallel Adaptive Unstructured Mesh Workflows. Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale, pp. 1-6, 17 Jul 2016. DOI: 10.1145/2949550.2949650
  • (2014) On generating multicast routes for SpiNNaker. Proceedings of the 11th ACM Conference on Computing Frontiers, pp. 1-10, 20 May 2014. DOI: 10.1145/2597917.2597938
  • (2014) Early experiences co-scheduling work and communication tasks for hybrid MPI+X applications. Proceedings of the 2014 Workshop on Exascale MPI, pp. 9-19, 16 Nov 2014. DOI: 10.1109/ExaMPI.2014.6
  • (2014) The Experience in Designing and Evaluating the High Performance Cluster Netuno. International Journal of Parallel Programming, 42(2):265-286, 1 Apr 2014. DOI: 10.1007/s10766-012-0224-7
  • (2013) Supercomputing with commodity CPUs. Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, pp. 1-12, 17 Nov 2013. DOI: 10.1145/2503210.2503281
  • (2012) 3-Dimensional root cause diagnosis via co-analysis. Proceedings of the 9th International Conference on Autonomic Computing, pp. 181-190, 18 Sep 2012. DOI: 10.1145/2371536.2371571
  • (2012) Collective algorithms for sub-communicators. Proceedings of the 26th ACM International Conference on Supercomputing, pp. 225-234, 25 Jun 2012. DOI: 10.1145/2304576.2304606
  • (2012) On Urgency of I/O Operations. Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), pp. 188-195, 13 May 2012. DOI: 10.1109/CCGrid.2012.40
  • (2010) Exploiting 162-Nanosecond End-to-End Communication Latency on Anton. Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-12, 13 Nov 2010. DOI: 10.1109/SC.2010.23
