
DOI: 10.1109/ICSE-Companion.2019.00133

Mimicking user behavior to improve in-house test suites

Published: 25 May 2019

Abstract

Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited set of tests developed and run in-house is not representative of the rich variety of user executions in the field. In order to bridge this gap between in-house tests and field executions, we need a way to (1) identify the behaviors exercised in the field that were not exercised in-house and (2) generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the collected field data, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behaviors and have similar fault-detection capability. Our results also show that Replica can outperform a traditional input generation approach that does not use field-data guidance.
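
To make the monitoring half of this pipeline concrete, the sketch below shows one way invariant-triggered field-data collection could look. It is only illustrative and is not Replica's implementation: the invariant table (INHOUSE_INVARIANTS), the monitored function, and the log file name (untested_behaviors.jsonl) are all hypothetical, and the guided symbolic-execution step that turns the logged data into tests is omitted.

```python
import json
import time
from functools import wraps

# Hypothetical table of invariants over a function's keyword arguments,
# e.g. properties that held on every in-house test run. A field execution
# that falsifies one of them exercises a behavior the test suite never covered.
INHOUSE_INVARIANTS = {
    "apply_discount": lambda kwargs: 0 <= kwargs["rate"] <= 0.5,
}

FIELD_LOG = "untested_behaviors.jsonl"  # hypothetical collection sink


def monitor(func):
    """Wrap a deployed function so that violating an in-house invariant
    (i.e. an untested behavior) triggers lightweight field-data collection."""
    invariant = INHOUSE_INVARIANTS.get(func.__name__)

    @wraps(func)
    def wrapper(**kwargs):
        if invariant is not None and not invariant(kwargs):
            # Record just enough state to later guide test generation.
            with open(FIELD_LOG, "a") as log:
                log.write(json.dumps({
                    "function": func.__name__,
                    "args": kwargs,
                    "timestamp": time.time(),
                }) + "\n")
        return func(**kwargs)

    return wrapper


@monitor
def apply_discount(price, rate):
    return price * (1 - rate)


if __name__ == "__main__":
    apply_discount(price=100.0, rate=0.2)  # consistent with in-house runs
    apply_discount(price=100.0, rate=0.9)  # untested behavior -> logged
```

In the approach described in the abstract, a record like the one logged above would then be fed to guided symbolic execution, steering path exploration toward the violated invariant so that a new in-house test reproduces the field behavior.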





Information

Published In

ICSE '19: Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings
May 2019
369 pages


Publisher

IEEE Press


Author Tags

  1. field data
  2. software testing
  3. test generation

Qualifiers

  • Research-article

Conference

ICSE '19

Acceptance Rates

Overall Acceptance Rate 276 of 1,856 submissions, 15%



