Abstract
The paper deals with the problem of computer performance evaluation based on program benchmarks. We outline the drawbacks of existing techniques and propose a significant enhancement using available hardware performance counters. This new approach required the development of supplementary tools to control benchmarking processes and combine them with various measurements. In particular, this resulted in several extensions to the open-source Phoronix Test Suite. The usefulness of our approach has been verified in many experiments covering various benchmarks and system configurations. We present the measurement methodology and the interpretation of the obtained results, which confirmed their practical significance.
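The idea of coupling a benchmark run with hardware performance counters can be illustrated with a small sketch. This is not the authors' actual Phoronix Test Suite extension; it is a minimal, hypothetical example for a typical Linux setup, assuming the `perf stat` tool and its machine-readable CSV output (`-x,`). The chosen events and the derived IPC metric are illustrative assumptions.

```python
# Hypothetical sketch: wrapping a benchmark command with Linux `perf stat`
# and parsing its CSV output, to combine benchmarking with counter readings.
import shlex

# Illustrative event set; real choices depend on the CPU and kernel.
PERF_EVENTS = ["instructions", "cycles", "cache-misses"]

def perf_command(benchmark_cmd):
    """Build a `perf stat` command line wrapping the benchmark, with
    machine-readable CSV output (-x,) for later parsing."""
    return (["perf", "stat", "-x,", "-e", ",".join(PERF_EVENTS)]
            + shlex.split(benchmark_cmd))

def parse_perf_csv(text):
    """Parse `perf stat -x,` output into {event: count}; lines whose
    first field is not a plain integer (e.g. '<not supported>') are
    skipped."""
    counters = {}
    for line in text.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counters[fields[2].strip()] = int(fields[0])
    return counters

def ipc(counters):
    """Instructions per cycle, a simple derived metric."""
    return counters["instructions"] / counters["cycles"]
```

In practice one would run `perf_command("phoronix-test-suite batch-run <profile>")` via `subprocess.run` and feed the captured stderr (where `perf stat` writes its counters) to `parse_perf_csv`.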
© 2018 Springer International Publishing AG
Cite this paper
Maleszewski, J., Sosnowski, J. (2018). Managing and Enhancing Performance Benchmarks. In: Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., Kacprzyk, J. (eds) Advances in Dependability Engineering of Complex Systems. DepCoS-RELCOMEX 2017. Advances in Intelligent Systems and Computing, vol 582. Springer, Cham. https://doi.org/10.1007/978-3-319-59415-6_28
DOI: https://doi.org/10.1007/978-3-319-59415-6_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-59414-9
Online ISBN: 978-3-319-59415-6
eBook Packages: Engineering (R0)