Abstract
A compact user-level summary, a use-tree, explains system variability across benchmarks. The tree provides a clear, static declaration of the relationships among the limited number of major system resources that determine most performance variance. The approach is especially effective with hardware instrumentation that decouples workload from observation (without such decoupling, a compact set of observables may be unavailable). A tree supports simple predictions and promotes more meaningful comparisons of workloads. Accounting for the sources of performance variation exposed in the tree can also inspire new methods of assessment; a "time dilation" technique illustrates this for loosely-coupled systems with local clocks.
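To make the idea concrete, here is a minimal sketch of a capacity-and-use tree in Python. The node structure, the sum/max combination rules, and the numbers are illustrative assumptions, not the paper's exact formulation: each leaf models a resource's demand ("use") served at a given capacity, leaf times combine up the tree, and swapping capacities predicts how run time varies from machine to machine.

```python
# Hypothetical sketch of a capacity-and-use tree ("use-tree").
# Node names, combination rules, and numbers are illustrative
# assumptions, not the authors' exact formulation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseNode:
    """A resource node: 'use' units of demand served at 'capacity' units/sec."""
    name: str
    use: float = 0.0          # demand placed on the resource (e.g., ops, bytes)
    capacity: float = 1.0     # service rate of the resource (units/sec)
    combine: str = "sum"      # "sum" = sequential children, "max" = overlapped
    children: List["UseNode"] = field(default_factory=list)

    def time(self) -> float:
        """Predicted time contribution of this subtree."""
        if not self.children:
            return self.use / self.capacity
        parts = [c.time() for c in self.children]
        return sum(parts) if self.combine == "sum" else max(parts)

# A workload's demands stay fixed; swapping capacities predicts how
# run time varies from machine to machine.
workload = UseNode("system", combine="sum", children=[
    UseNode("cpu", use=2.0e9, capacity=1.0e9),      # 2G ops at 1G ops/s
    UseNode("memory", use=8.0e8, capacity=4.0e8),   # 0.8G bytes at 0.4G B/s
    UseNode("network", use=1.0e7, capacity=1.0e6),  # 10M bytes at 1M B/s
])
print(f"predicted run time: {workload.time():.1f} s")
```

Because the demands stay fixed while the capacities change, the same tree can compare one workload across machines or two workloads on one machine.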
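The closing sentence can likewise be illustrated. The sketch below is a hypothetical construction, not the authors' implementation: it dilates compute phases by a factor k while leaving communication at native speed, so that on the logical clock (real time divided by k) the network appears k times faster, letting one physical machine emulate a family of machines with different compute/communication balances.

```python
# Hypothetical sketch of time dilation for a node with a local clock.
# The factor k, names, and phase structure are illustrative assumptions.
import time

class DilatedNode:
    """Stretch compute phases by 'k' (k >= 1) in real time and read the
    local clock divided by k: communication, left at native speed, then
    appears k times faster on the logical clock."""
    def __init__(self, k: float):
        self.k = k
        self._t0 = time.perf_counter()

    def compute_phase(self, work) -> None:
        start = time.perf_counter()
        work()                                   # the real computation
        elapsed = time.perf_counter() - start
        time.sleep((self.k - 1.0) * elapsed)     # pad phase to k * elapsed

    def logical_time(self) -> float:
        """Local clock of the emulated machine (real time / k)."""
        return (time.perf_counter() - self._t0) / self.k
```

Timings read from logical_time() on each node are then directly comparable across the emulated hardware balances.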
A contribution of the National Institute of Standards and Technology. Not subject to U.S. Copyright. No recommendation or endorsement, express or implied, is given by the National Institute of Standards and Technology or its sponsors for illustrative commercial products mentioned in the text. Partially sponsored by the Defense Advanced Research Projects Agency, ARPA Task No. 7066.
Copyright information
© 1990 Springer-Verlag Berlin Heidelberg
Cite this paper
Lyon, G., Snelick, R.D. (1990). Workloads, observables, benchmarks and instrumentation. In: Burkhart, H. (ed.) CONPAR 90 — VAPP IV. Lecture Notes in Computer Science, vol. 457. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-53065-7_90
DOI: https://doi.org/10.1007/3-540-53065-7_90
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-53065-7
Online ISBN: 978-3-540-46597-3