- Book, May 2016
High Performance Computing in Science and Engineering, Garching/Munich 2009: Transactions of the Fourth Joint HLRB and KONWIHR Review and Results ... Centre, Garching/Munich, Germany
The Leibniz Supercomputing Centre (LRZ) and the Bavarian Competence Network for Technical and Scientific High Performance Computing (KONWIHR) present in this book the results of numerical simulations facilitated by the High Performance Computer System ...
- Research article, April 2015
Predicting energy consumption relevant indicators of strong scaling HPC applications for different compute resource configurations
Finding the best energy-performance tradeoffs for High Performance Computing (HPC) applications is a major challenge for many modern supercomputing centers. With the increased focus on data center energy efficiency and the emergence of possible data ...
- Research article, November 2014
Petascale high order dynamic rupture earthquake simulations on heterogeneous supercomputers
- Alexander Heinecke,
- Alexander Breuer,
- Sebastian Rettenberger,
- Michael Bader,
- Alice-Agnes Gabriel,
- Christian Pelties,
- Arndt Bode,
- William Barth,
- Xiang-Ke Liao,
- Karthikeyan Vaidyanathan,
- Mikhail Smelyanskiy,
- Pradeep Dubey
SC '14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Pages 3–14. https://doi.org/10.1109/SC.2014.6
We present an end-to-end optimization of the innovative Arbitrary high-order DERivative Discontinuous Galerkin (ADER-DG) software SeisSol targeting Intel® Xeon Phi™ coprocessor platforms, achieving unprecedented earthquake model complexity through ...
- Article, July 2014
Predicting the Energy and Power Consumption of Strong and Weak Scaling HPC Applications
Supercomputing Frontiers and Innovations: an International Journal (SCFI), Volume 1, Issue 2, Pages 20–41. https://doi.org/10.14529/jsfi140202
Keeping energy costs within budget and operating within the available capacities of power distribution and cooling systems is becoming an important requirement for High Performance Computing (HPC) data centers. It is even more important when considering the ...
- Article, June 2014
A Case Study of Energy Aware Scheduling on SuperMUC
- Axel Auweter,
- Arndt Bode,
- Matthias Brehm,
- Luigi Brochard,
- Nicolay Hammer,
- Herbert Huber,
- Raj Panda,
- Francois Thomas,
- Torsten Wilde
ISC 2014: Proceedings of the 29th International Conference on Supercomputing - Volume 8488, Pages 394–409. https://doi.org/10.1007/978-3-319-07518-1_25
In this paper, we analyze the functionalities for energy aware scheduling of the IBM LoadLeveler resource management system on SuperMUC, one of the world's fastest HPC systems. We explain how LoadLeveler predicts execution times and the average power ...
- Proceedings, June 2014
ICS '14: Proceedings of the 28th ACM international conference on Supercomputing
Welcome to the 28th ACM International Conference on Supercomputing (ICS), the oldest and longest-running conference on high-performance computing. ICS is a premier forum for researchers to present and discuss the latest results and perspectives on the state-...
- Book, March 2014
Parallel Computing: Accelerating Computational Science and Engineering - CSE
Parallel computing has been the enabling technology of high-end machines for many years. Now, it has finally become the ubiquitous key to the efficient use of any kind of multi-processor computer architecture, from smart phones, tablets, embedded ...
- Article, August 2013
Energy to solution: a new mission for parallel computing
Euro-Par'13: Proceedings of the 19th international conference on Parallel Processing, Pages 1–2. https://doi.org/10.1007/978-3-642-40047-6_1
For a long period in the development of computers and computing, efficient applications were characterized only by computational and memory complexity or, in more practical terms, by elapsed computing time and required main memory capacity. The history of ...
- Abstract, May 2012
DEEP: an exascale prototype architecture based on a flexible configuration
CF '12: Proceedings of the 9th conference on Computing Frontiers, Pages 305–306. https://doi.org/10.1145/2212908.2212960
DEEP is a multipartner international cooperation project supported by the EU FP7 that introduces a flexible global system architecture using general-purpose and manycore processor architectures (based on Intel MIC: the Many Integrated Core architecture). ...
- Article, August 2011
Principles of energy efficiency in high performance computing
High Performance Computing (HPC) is a key technology for modern researchers enabling scientific advances through simulation where experiments are either technically impossible or financially not feasible to conduct and theory is not applicable. However, ...
- Article, August 2011
Extending a highly parallel data mining algorithm to the Intel® Many Integrated Core architecture
Euro-Par'11: Proceedings of the 2011 international conference on Parallel Processing - Volume 2, Pages 375–384. https://doi.org/10.1007/978-3-642-29740-3_42
Extracting knowledge from vast datasets is a major challenge in data-driven applications, such as classification and regression, which are mostly compute bound. In this paper, we extend our SG++ algorithm to the Intel® Many Integrated Core Architecture (...
- Article, August 2011
Workload balancing on heterogeneous systems: a case study of sparse grid interpolation
Euro-Par'11: Proceedings of the 2011 international conference on Parallel Processing - Volume 2, Pages 345–354. https://doi.org/10.1007/978-3-642-29740-3_39
Multi-core parallelism and accelerators are becoming common features of today's computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load ...
- Proceedings, April 2010
DYADEM-FTS '10: Proceedings of the First Workshop on DYnamic Aspects in DEpendability Models for Fault-Tolerant Systems
It is our pleasure to welcome you to the first Workshop on Dynamic Aspects in Dependability Models for Fault-Tolerant Systems (DYADEM-FTS 2010), which is co-located with the European Dependable Computing Conference (EDCC 2010) taking place at the ...
- Proceedings, June 2009
HPDC '09: Proceedings of the 18th ACM international symposium on High performance distributed computing
It is our great pleasure to welcome you to the 18th ACM International Symposium on High Performance Distributed Computing. This year's symposium continues its tradition of being the premier forum for presentation of research results and experience ...
- Article, May 2009
Preface for the Joint Workshop on Tools for Program Development and Analysis in Computational Science and Software Engineering for Large-Scale Computing
- Andreas Knüpfer,
- Arndt Bode,
- Dieter Kranzlmüller,
- Daniel Rodrìguez,
- Roberto Ruiz,
- Jie Tao,
- Roland Wismüller,
- Jens Volkert
ICCS 2009: Proceedings of the 9th International Conference on Computational Science, Pages 655–656. https://doi.org/10.1007/978-3-642-01973-9_73
Today, computers and computational methods are increasingly important and powerful tools for science and engineering. Yet, using them effectively and efficiently requires both expert knowledge of the respective application domain and solid ...
- Book, November 2008
High Performance Computing in Science and Engineering, Garching/Munich 2007: Transactions of the Third Joint HLRB and KONWIHR Status and Result Workshop, ... Centre, Garching/Munich, Germany
The book reports on selected projects on the High Performance Computer in Bavaria (HLRB). The projects originate from the fields of fluid dynamics, astrophysics and cosmology, computational physics including high energy physics, computational chemistry ...
- Article, June 2008
Special Session: Tools for Program Development and Analysis in Computational Science
ICCS '08: Proceedings of the 8th international conference on Computational Science, Part III, Pages 201–202. https://doi.org/10.1007/978-3-540-69389-5_23
The use of supercomputing technology, parallel and distributed processing, and sophisticated algorithms is of major importance for computational scientists. Yet, the scientists' goals are to solve their challenging scientific problems, not the software ...
- Article, March 2023
Scalability for Petaflops systems
Abstract: Future very high-end systems, petaflops computers, will be megaprocessors or megacores with a million or more active processors. This can be derived both by extrapolation of the processor numbers of the leading systems in the TOP500 and by the ...