DOI: 10.1007/11561163_1

A theory of predicate-complete test coverage and generation

Published: 02 November 2004

Abstract

Consider a program with m statements and n predicates, where the predicates are derived from the conditional statements and assertions in a program. An observable state is an evaluation of the n predicates under some state at a program statement. The goal of predicate-complete testing (PCT) is to evaluate all the predicates at every program state. That is, we wish to cover every reachable observable state (at most m × 2^n of them) in a program. PCT coverage subsumes many existing control-flow coverage criteria and is incomparable to path coverage. To support the generation of tests to achieve high PCT coverage, we show how to define an upper bound U and lower bound L to the (unknown) set of reachable observable states R. These bounds are constructed automatically using Boolean (predicate) abstraction over modal transition systems and can be used to guide test generation via symbolic execution. We define a static coverage metric as |L|/|U|, which measures the ability of the Boolean abstraction to achieve high PCT coverage.
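The definitions above can be illustrated with a small program. The following Python sketch (not from the paper, and purely illustrative) records the observable states, i.e. (statement, predicate-vector) pairs, that a toy program with m = 3 labelled statements and n = 2 predicates reaches under a few tests, and compares that count with the m × 2^n bound. The predicates, the `observe` helper, and the test inputs are all hypothetical; the paper's bounds L and U are computed statically via Boolean abstraction over modal transition systems, not by running tests.

```python
# Illustrative sketch only: enumerate observable states of a toy program.
# An observable state is a (statement label, predicate vector) pair, so a
# program with m statements and n predicates has at most m * 2**n of them.

PREDICATES = {                     # n = 2 predicates, mirroring the conditional below
    "p1": lambda x, y: x > 0,
    "p2": lambda x, y: x > y,
}
STATEMENTS = ["s1", "s2", "s3"]    # m = 3 labelled statements

reached = set()                    # observable states covered by the test suite

def observe(label, x, y):
    """Record the evaluation of all n predicates at statement `label`."""
    vector = tuple(p(x, y) for p in PREDICATES.values())
    reached.add((label, vector))

def toy(x, y):                     # hypothetical program under test
    observe("s1", x, y)
    if x > 0:
        observe("s2", x, y)
        y = x
    observe("s3", x, y)
    return y

# A small test suite; PCT asks for tests that drive the program into as many
# reachable observable states as possible.
for x, y in [(1, 0), (-1, 5), (3, 7)]:
    toy(x, y)

bound = len(STATEMENTS) * 2 ** len(PREDICATES)   # m * 2**n
print(f"covered {len(reached)} of at most {bound} observable states")
```

Running the three tests above covers 7 of the 12 possible observable states; some of the remaining states are unreachable (for example, s2 can only be reached with p1 true), which is why the paper brackets the unknown reachable set R with a computable lower bound L and upper bound U and reports |L|/|U| as a static measure of how much PCT coverage the abstraction can certify.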


Published In

FMCO'04: Proceedings of the Third International Conference on Formal Methods for Components and Objects
November 2004
324 pages
ISBN: 3540291318
Editors: Frank S. de Boer, Marcello M. Bonsangue, Susanne Graf, Willem-Paul de Roever

Sponsors

  • IPA: The Dutch Institute for Programming Research and Algorithmics
  • The Royal Netherlands Academy of Arts and Sciences
  • The European project IST-2001-33522 Omega
  • The Lorentz Center
  • The Dutch Organization for Scientific Research

Publisher

Springer-Verlag, Berlin, Heidelberg
