DOI: 10.1145/3030207.3030216

Transferring Performance Prediction Models Across Different Hardware Platforms

Published: 17 April 2017

Abstract

Many software systems provide user-relevant configuration options, often called features. Features influence the functional properties of software systems as well as non-functional ones, such as performance and memory consumption. Researchers have successfully demonstrated the correlation between feature selection and performance. However, the generality of these performance models across different hardware platforms has not yet been evaluated.
We propose a technique for enhancing the generality of performance models across different hardware environments using linear transformation. Empirical studies on three real-world software systems show that our approach is computationally efficient and can achieve high accuracy (less than 10% mean relative error) when predicting system performance across 23 different hardware platforms. Moreover, we investigate why the approach works by comparing the performance distributions of the systems and the structure of the performance models across different platforms.
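The author tags below name regression trees and linear transformation as the core ingredients, so a minimal sketch of the transfer idea might look as follows: train a regression-tree performance model from configuration measurements on a source platform, measure only a small sample of configurations on the target platform, and fit a linear transformation that maps source-model predictions to target measurements. The configuration encoding, performance numbers, and scikit-learn estimators here are illustrative assumptions, not the paper's artifact.

    # Sketch of performance-model transfer via a learned linear transformation.
    # Assumes scikit-learn; all measurement values below are made up for illustration.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.linear_model import LinearRegression

    # Configurations encoded as binary feature-selection vectors (hypothetical).
    configs = np.array([
        [0, 0], [0, 1], [1, 0], [1, 1],
    ])

    # Performance measured on the source hardware for all sampled configurations.
    perf_source = np.array([10.0, 12.5, 14.0, 17.2])

    # 1. Train a regression-tree performance model on the source hardware.
    source_model = DecisionTreeRegressor(min_samples_leaf=1).fit(configs, perf_source)

    # 2. Measure only a small sample of configurations on the target hardware.
    sample_configs = configs[:2]
    perf_target_sample = np.array([21.3, 26.1])

    # 3. Fit a linear transformation from source-model predictions to target measurements.
    transfer = LinearRegression().fit(
        source_model.predict(sample_configs).reshape(-1, 1),
        perf_target_sample,
    )

    # 4. Predict target-hardware performance for all (unmeasured) configurations.
    predicted_target = transfer.predict(source_model.predict(configs).reshape(-1, 1))
    print(predicted_target)

Under this sketch, only the few target-platform measurements are new work; the (expensive) configuration sampling and model training on the source platform are reused as-is.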




    Published In

    ICPE '17: Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering
    April 2017
    450 pages
    ISBN:9781450344043
    DOI:10.1145/3030207

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. linear transformation
    2. model transfer
    3. performance modelling
    4. regression trees

    Qualifiers

    • Research-article

    Funding Sources

    • Natural Sciences and Engineering Research Council of Canada
    • Shanghai Municipal Natural Science Foundation
    • Pratt & Whitney Canada

    Conference

    ICPE '17

    Acceptance Rates

    ICPE '17 Paper Acceptance Rate: 27 of 83 submissions, 33%
    Overall Acceptance Rate: 252 of 851 submissions, 30%

    Article Metrics

    • Downloads (Last 12 months): 40
    • Downloads (Last 6 weeks): 5

    Reflects downloads up to 16 Nov 2024

    Cited By

    • (2024) CTuner: Automatic NoSQL Database Tuning with Causal Reinforcement Learning. Proceedings of the 15th Asia-Pacific Symposium on Internetware, 269-278. DOI: 10.1145/3671016.3674809
    • (2024) Adapting Multi-objectivized Software Configuration Tuning. Proceedings of the ACM on Software Engineering, 1(FSE), 539-561. DOI: 10.1145/3643751
    • (2024) Predicting Configuration Performance in Multiple Environments with Sequential Meta-Learning. Proceedings of the ACM on Software Engineering, 1(FSE), 359-382. DOI: 10.1145/3643743
    • (2024) Embracing Deep Variability For Reproducibility and Replicability. Proceedings of the 2nd ACM Conference on Reproducibility and Replicability, 30-35. DOI: 10.1145/3641525.3663621
    • (2024) MMO: Meta Multi-Objectivization for Software Configuration Tuning. IEEE Transactions on Software Engineering, 50(6), 1478-1504. DOI: 10.1109/TSE.2024.3388910
    • (2024) ChimeraTL: Transfer Learning in DBMS with Fewer Samples. 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 310-316. DOI: 10.1109/ICDEW61823.2024.00046
    • (2024) Relative Performance Prediction Using Few-Shot Learning. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 1764-1769. DOI: 10.1109/COMPSAC61105.2024.00278
    • (2024) Learning input-aware performance models of configurable systems. Journal of Systems and Software, 208(C). DOI: 10.1016/j.jss.2023.111883
    • (2023) Generative AI for Reengineering Variants into Software Product Lines. Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume B, 57-66. DOI: 10.1145/3579028.3609016
    • (2023) On Programming Variability with Large Language Model-based Assistant. Proceedings of the 27th ACM International Systems and Software Product Line Conference - Volume A, 8-14. DOI: 10.1145/3579027.3608972
