A new approach to I/O performance evaluation: self-scaling I/O benchmarks, predicted I/O performance

Peter M. Chen and David A. Patterson

Published: 01 November 1994

Abstract

Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete; they do not stress the I/O system; and they do not help much in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristics of the system being measured. By doing so, the benchmark automatically scales across current and future systems. The evaluation aids in understanding system performance by reporting how performance varies according to each of five workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to estimate quickly the performance for workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that this method gives a far more accurate comparative performance evaluation than traditional single-point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a DECstation 5000/200 running the Sprite LFS operating system with a three-disk disk array, a Convex C240 minisupercomputer with a four-disk disk array, and a Solbourne 5E/905 fileserver with a two-disk disk array.
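As a concrete illustration, the sketch below (Python, assuming a POSIX system; it is hypothetical, not the authors' benchmark) shows the kind of inner measurement step such a self-scaling benchmark could be built from: four of the five workload parameters are held at their current focal values while one is swept, and raw throughput against a scratch file is timed. The parameter names (uniqueBytes, sizeMean, readFrac, seqFrac, processNum) come from the paper; the function name and everything else here are illustrative.

```python
import os
import random
import time

# Hypothetical sketch of one measurement step in a self-scaling benchmark:
# time a stream of I/O requests against a scratch file for one setting of
# four of the paper's five workload parameters (uniqueBytes, sizeMean,
# readFrac, seqFrac); the fifth, processNum, would be exercised by running
# several of these workers concurrently.

def run_workload(path, unique_bytes, size_mean, read_frac, seq_frac,
                 n_requests=1000, seed=0):
    """Issue n_requests reads/writes against `path`; return MB/s."""
    rng = random.Random(seed)
    buf = os.urandom(size_mean)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.pwrite(fd, b"\0" * unique_bytes, 0)    # touch uniqueBytes of data
        span = max(1, unique_bytes - size_mean)   # highest legal offset
        offset = 0
        start = time.perf_counter()
        for _ in range(n_requests):
            if rng.random() >= seq_frac:          # non-sequential request:
                offset = rng.randrange(span)      # seek to a random offset
            if rng.random() < read_frac:
                os.pread(fd, size_mean, offset)
            else:
                os.pwrite(fd, buf, offset)
            offset = (offset + size_mean) % span  # default: next sequential
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return n_requests * size_mean / elapsed / 1e6
```

A self-scaling driver would sweep one parameter at a time through calls like `run_workload("/tmp/scratch", 64 << 20, 8192, 0.5, 0.5)`, report the resulting throughput curve, and move that parameter's focal value into the region where the I/O system, not the CPU or memory, limits performance.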



Reviews

Clement R. Attanasio

Input/output benchmarks should stress the I/O subsystem. Patterson and Chen claim that many existing I/O benchmarks do not. Therefore, they propose self-scaling, by which the benchmark observes its own performance and drives the load it is generating into the range that stresses the I/O capacity of the system, and not, for example, the CPU or memory. The five parameters of I/O workload they use are the number of unique data bytes read or written; the average size of a request; the fraction of reads to total number of I/O requests; the fraction of requests that follow the previous one in sequence; and the number of processes running the I/O benchmark. By varying parameters across this five-dimensional space and observing the shapes of the resulting curves, the authors develop a predictive methodology that projects performance for workloads that have not been measured.

This paper is worthwhile. Through examples, the authors illustrate the kind of information about system behavior that is usually obvious in retrospect, but not always beforehand. For example, when might a system perform better on writing than reading? The answer is: when it batches many small writes into a few large ones. Unsurprisingly, the authors' claims for I/O performance prediction are more arguable than the benefits of I/O benchmark self-scaling. Chen and Patterson observe that there are transition regions in the performance curves, generally when the amount of data touched in the benchmark increases past the size of the buffer cache. Although they allow themselves to predict performance separately in those two domains, it is not clear that they treat competing predictive techniques similarly in their comparisons.
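To make the prediction step the review describes concrete, here is a rough sketch (hypothetical; `predict` and the curve format are our own names, not the paper's API) of the separability assumption the technique rests on: throughput for an arbitrary workload is estimated as the focal-point throughput scaled by one measured ratio per workload parameter, using curves taken from the same side of the buffer-cache transition as the target workload.

```python
import numpy as np

def predict(focal_perf, focal, curves, workload):
    """Estimate throughput for `workload` (a dict mapping each workload
    parameter to a value) from single-parameter sweeps.

    curves[p] is a pair (xs, ys): throughput measured while sweeping
    parameter p with the other parameters fixed at their focal values.
    Assumes `workload` lies on the same side of the buffer-cache
    transition as the sweeps it is interpolated from."""
    est = focal_perf
    for p, value in workload.items():
        xs, ys = curves[p]
        # Scale by how parameter p's curve moves between its focal value
        # and the requested value; separability says these ratios multiply.
        est *= np.interp(value, xs, ys) / np.interp(focal[p], xs, ys)
    return est
```

Under this scheme, comparing two systems means comparing predicted performance over a whole region of workloads rather than two numbers at a single point, which is the sense in which the authors argue the method is fairer than traditional single-point benchmarks.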


Information & Contributors

Information

Published In

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 November 1994
Published in TOCS Volume 12, Issue 4


Article Metrics

  • Downloads (last 12 months): 61
  • Downloads (last 6 weeks): 7

Reflects downloads up to 24 Sep 2024

Cited By

  • Androtrace. Proceedings of the 3rd Workshop on Interactions of NVM/FLASH with Operating Systems and Workloads (Oct. 2015), 1-8. https://doi.org/10.1145/2819001.2819007
  • Towards realistic benchmarks for virtual infrastructure resource allocators. Proceedings of the Third ACM SIGOPS Asia-Pacific Conference on Systems (Jul. 2012), 5. https://doi.org/10.5555/2387841.2387846
  • Towards realistic benchmarks for virtual infrastructure resource allocators. Proceedings of the Asia-Pacific Workshop on Systems (Jul. 2012), 1-6. https://doi.org/10.1145/2349896.2349901
  • Uncovering performance differences among backbone ISPs with Netdiff. Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (Apr. 2008), 205-218. https://doi.org/10.5555/1387589.1387604
  • SQL Anywhere. Proceedings of the 2007 IEEE 23rd International Conference on Data Engineering Workshop (Apr. 2007), 414-423. https://doi.org/10.1109/ICDEW.2007.4401024
  • Bibliography. Physical Database Design (2007), 391-409. https://doi.org/10.1016/B978-012369389-1/50020-6
  • Physical Database Design (Mar. 2007).
  • Benchmarking and testing OSD for correctness and compliance. Proceedings of the First Haifa International Conference on Hardware and Software Verification and Testing (Nov. 2005), 158-176. https://doi.org/10.1007/11678779_12
  • lmbench: an extensible micro-benchmark suite. Software: Practice and Experience 35, 11 (2005), 1079-1105. https://doi.org/10.1002/spe.665
  • Using Information from Prior Runs to Improve Automated Tuning Systems. Proceedings of the 2004 ACM/IEEE Conference on Supercomputing (Nov. 2004). https://doi.org/10.1109/SC.2004.65
