Abstract
As computer systems grow in size and complexity, tool support is needed to facilitate the efficient mapping of large-scale applications onto these systems. To help achieve this mapping, performance analysis tools must provide robust performance observation capabilities at all levels of the system, as well as map low-level behavior to high-level program constructs. Instrumentation and measurement strategies, developed over the last several years, must evolve together with performance analysis infrastructure to address the challenges of new scalable parallel systems.
Keywords
- Measurement Strategy
- Executable Code
- Performance Instrumentation
- Dynamic Instrumentation
- Hardware Counter
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Dongarra, J., Malony, A.D., Moore, S., Mucci, P., Shende, S. (2003). Performance Instrumentation and Measurement for Terascale Systems. In: Sloot, P.M.A., Abramson, D., Bogdanov, A.V., Gorbachev, Y.E., Dongarra, J.J., Zomaya, A.Y. (eds) Computational Science — ICCS 2003. ICCS 2003. Lecture Notes in Computer Science, vol 2660. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44864-0_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40197-1
Online ISBN: 978-3-540-44864-8
eBook Packages: Springer Book Archive