Research article · DOI: 10.1145/3053600.3053638 · ICPE Conference Proceedings

Do We Teach Useful Statistics for Performance Evaluation?

Published: 18 April 2017

Abstract

Basic topics from probability and statistics -- such as probability distributions, parameter estimation, confidence intervals, and statistical hypothesis testing -- are often included in computing curricula and used as tools for experimental performance evaluation. Unfortunately, data collected through experiments may not meet the requirements of many statistical analysis methods, such as independent sampling or normal distribution. As a result, the analysis methods may be trickier to apply, and the analysis results trickier to interpret, than one might expect. Here, we look at some of these issues with methods and experiments that would be considered basic in performance evaluation education.
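The assumptions the abstract warns about can be made concrete with a small experiment. The sketch below is an illustration, not code from the paper: it simulates two sets of right-skewed benchmark timings (the data, seed, and parameters are all hypothetical, and NumPy and SciPy are assumed available) and contrasts Welch's t-test, which leans on normality of the sample means, with the distribution-free Mann-Whitney U test.

```python
# Illustration (not from the paper): benchmark timings are often
# right-skewed, so normality-based methods deserve a second look.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated timings in ms: log-normal, as latency data often is --
# clearly not normally distributed.
old = rng.lognormal(mean=3.0, sigma=0.4, size=100)  # "old" version
new = rng.lognormal(mean=2.9, sigma=0.4, size=100)  # "new" version

# Check the normality assumption before trusting a t-test.
print("Shapiro-Wilk p (old):", stats.shapiro(old).pvalue)

# Welch's t-test (unequal variances) vs. the nonparametric
# Mann-Whitney U test, which makes no distributional assumption.
t = stats.ttest_ind(old, new, equal_var=False)
u = stats.mannwhitneyu(old, new, alternative="two-sided")
print("Welch p:", t.pvalue)
print("Mann-Whitney p:", u.pvalue)
```

On skewed, latency-like samples such as these, the Shapiro-Wilk test typically rejects normality, which is exactly the kind of warning sign the paper argues students should be taught to check before applying textbook methods.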


Cited By

  • (2022) "A statistics-based performance testing methodology: a case study for the I/O bound tasks." 2022 IEEE 17th International Conference on Computer Sciences and Information Technologies (CSIT), pp. 486-489. DOI: 10.1109/CSIT56902.2022.10000626. Published: 10 Nov 2022.
  • (2021) "What's Wrong with My Benchmark Results? Studying Bad Practices in JMH Benchmarks." IEEE Transactions on Software Engineering, 47(7):1452-1467. DOI: 10.1109/TSE.2019.2925345. Published: 1 Jul 2021.
  • (2021) "Quantitative Server Sizing Model for Performance Satisfaction in Secure U2L Migration." IEEE Access, 9:142449-142460. DOI: 10.1109/ACCESS.2021.3119397. Published: 2021.
  • (2021) "Predicting unstable software benchmarks using static source code features." Empirical Software Engineering, 26(6). DOI: 10.1007/s10664-021-09996-y. Published: 18 Aug 2021.
  • (2019) "Software microbenchmarking in the cloud. How bad is it really?" Empirical Software Engineering, 24(4):2469-2508. DOI: 10.1007/s10664-019-09681-1. Published: 1 Aug 2019.


Published In

ICPE '17 Companion: Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering Companion
April 2017 · 248 pages
ISBN: 9781450348997
DOI: 10.1145/3053600

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. confidence interval
  2. performance evaluation
  3. performance evaluation education
  4. statistical testing


Acceptance Rates

ICPE '17 Companion paper acceptance rate: 24 of 65 submissions, 37%.
Overall acceptance rate: 252 of 851 submissions, 30%.

