research-article
Public Access

Multifaceted test suite generation using primary and supporting fitness functions

Published: 28 May 2018 Publication History

Abstract

Dozens of criteria have been proposed to judge testing adequacy. Such criteria are important, as they guide automated generation efforts. Yet the way such criteria are currently used in automated generation contrasts with how they are used by humans. For a human, coverage is one part of a multifaceted combination of testing strategies. In automated generation, coverage is typically the goal, and a single fitness function is applied at a time. We propose that the key to improving the fault-detection efficacy of search-based test generation lies in a targeted, multifaceted approach: pairing primary fitness functions that effectively explore the structure of the class under test with lightweight supporting fitness functions that target particular scenarios likely to trigger an observable failure.
This report summarizes our findings to date, details the hypothesis of primary and supporting fitness functions, and identifies outstanding research challenges related to multifaceted test suite generation. We hope to inspire new advances in search-based test generation that could benefit our software-powered society.
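The pairing described above can be illustrated with a minimal sketch (not the authors' implementation; all names, the fitness definitions, and the weighting scheme are hypothetical assumptions). A primary, structure-exploring fitness such as branch coverage dominates the score for a candidate test suite, while a lightweight supporting fitness nudges the search toward failure-triggering scenarios:

```python
# Illustrative sketch only: one way to combine a primary fitness with a
# lightweight supporting fitness when scoring candidate test suites.
# The fitness functions, suite representation, and weight are hypothetical.

TOTAL_BRANCHES = 10  # assumed branch count of the class under test

def branch_coverage(suite):
    """Primary fitness: fraction of branches the suite covers (0..1)."""
    covered = set()
    for test in suite:
        covered.update(test.get("branches", ()))
    return len(covered) / TOTAL_BRANCHES

def exception_signal(suite):
    """Supporting fitness: fraction of tests that provoke an observable
    failure (here, simply whether the test raised an exception)."""
    raising = sum(1 for test in suite if test.get("raises", False))
    return raising / max(len(suite), 1)

def combined_fitness(suite, support_weight=0.2):
    """The primary fitness dominates; the supporting signal breaks ties
    in favor of suites likely to expose faults."""
    return branch_coverage(suite) + support_weight * exception_signal(suite)

# Two suites with identical coverage; only suite_b triggers an exception,
# so it scores higher under the combined fitness.
suite_a = [{"branches": {1, 2, 3}, "raises": False}]
suite_b = [{"branches": {1, 2, 3}, "raises": True}]
assert combined_fitness(suite_b) > combined_fitness(suite_a)
```

In a search loop, such a combined score would rank otherwise-equivalent candidates by their likelihood of surfacing an observable failure, rather than by coverage alone.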


Published In

SBST '18: Proceedings of the 11th International Workshop on Search-Based Software Testing
May 2018
84 pages
ISBN:9781450357418
DOI:10.1145/3194718

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adequacy criteria
  2. automated test generation
  3. search-based test generation


Conference

ICSE '18

