JCrasher: an automatic robustness tester for Java

Published: 01 September 2004

Abstract

JCrasher is an automatic robustness testing tool for Java code. JCrasher examines the type information of a set of Java classes and constructs code fragments that will create instances of different types to test the behavior of public methods under random data. JCrasher attempts to detect bugs by causing the program under test to 'crash', that is, to throw an undeclared runtime exception. Although in general the random testing approach has many limitations, it also has the advantage of being completely automatic: no supervision is required except for off-line inspection of the test cases that have caused a crash. Compared to other similar commercial and research tools, JCrasher offers several novelties: it transitively analyzes methods, determines the size of each tested method's parameter-space and selects parameter combinations and therefore test cases at random, taking into account the time allocated for testing; it defines heuristics for determining whether a Java exception should be considered as a program bug or whether the JCrasher supplied inputs have violated the code's preconditions; it includes support for efficiently undoing all the state changes introduced by previous tests; it produces test files for JUnit, a popular Java testing tool; and it can be integrated in the Eclipse IDE.
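The abstract's core idea can be sketched in a few lines of Java: invoke public methods via reflection with a range of inputs, and classify any escaping unchecked exception either as a violated precondition or as a potential robustness bug. This is a minimal illustration, not JCrasher's implementation: the `Parser` class is hypothetical, a fixed input pool stands in for JCrasher's random parameter selection, and the single `IllegalArgumentException` check is a simplified stand-in for the paper's exception-classification heuristics.

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Hypothetical class under test (not from the paper): parse() is not
// robust against null input.
class Parser {
    public int parse(String s) {
        return Integer.parseInt(s.trim());
    }
}

public class CrashSketch {
    // A small fixed pool stands in for JCrasher's random parameter selection.
    static Object[] candidates(Class<?> t) {
        if (t == String.class) return new Object[] { " 42 ", "", "abc", null };
        if (t == int.class) return new Object[] { 0, -1, Integer.MAX_VALUE };
        return new Object[] { null };
    }

    public static void main(String[] args) throws Exception {
        Parser target = new Parser();
        int crashes = 0;
        for (Method m : Parser.class.getDeclaredMethods()) {
            // Only public methods are tested; one-parameter methods assumed
            // for brevity.
            if (!Modifier.isPublic(m.getModifiers())) continue;
            for (Object arg : candidates(m.getParameterTypes()[0])) {
                try {
                    m.invoke(target, arg);
                } catch (InvocationTargetException e) {
                    Throwable cause = e.getCause();
                    // Simplified heuristic in the spirit of the paper: treat
                    // IllegalArgumentException (and subclasses such as
                    // NumberFormatException) as a violated precondition, and
                    // any other unchecked exception as a potential bug.
                    if (cause instanceof RuntimeException
                            && !(cause instanceof IllegalArgumentException)) {
                        crashes++;
                        System.out.println("Potential bug: " + m.getName()
                                + "(" + arg + ") threw "
                                + cause.getClass().getSimpleName());
                    }
                }
            }
        }
        System.out.println("crashes=" + crashes);
    }
}
```

Here `parse("")` and `parse("abc")` throw `NumberFormatException`, which the heuristic discards as a precondition violation, while `parse(null)` throws `NullPointerException` and is reported as a potential bug.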

References

[1] Beizer B. Black-Box Testing: Techniques for Functional Testing of Software and Systems. Wiley: New York, 1995.
[2] Claessen K, Hughes J. QuickCheck: A lightweight tool for random testing of Haskell programs. Proceedings of the 5th ACM SIGPLAN International Conference on Functional Programming (ICFP '00), Odersky M, Wadler P (eds.). ACM Press: New York, 2000; 268-279.
[3] Forrester JE, Miller BP. An empirical study of the robustness of Windows NT applications using random testing. Proceedings of the 4th USENIX Windows Systems Symposium. USENIX Association, 2000; 59-68.
[4] Howe AE, von Mayrhauser A, Mraz RT. Test case generation as an AI planning problem. Automated Software Engineering 1997; 4(1):77-106.
[5] Kropp NP, Koopman PJ, Siewiorek DP. Automated robustness testing of off-the-shelf software components. Digest of Papers: FTCS-28, The 28th Annual International Symposium on Fault-Tolerant Computing. IEEE Computer Society Press: Los Alamitos, CA, 1998; 230-239.
[6] Memon AM, Pollack ME, Soffa ML. Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering 2001; 27(2):144-155.
[7] Parasoft Jtest page. http://www.parasoft.com/jsp/products/home.jsp?product=Jtest {7 December 2003}.
[8] SilverMark Inc. Enhanced JUnit version 3.7 user reference. http://www.silvermark.com/Product/enhancedjunit/documentation/enhancedjunitmanual.pdf {7 December 2003}.
[9] Beck K, Gamma E. Test infected: Programmers love writing tests. Java Report 1998; 3(7):37-50.
[10] Chan P, Lee R, Kramer D. The Java Class Libraries (2nd edn), vol. 1. Addison-Wesley: Reading, MA, 1998.
[11] Cheon Y, Leavens G. A simple and practical approach to unit testing: The JML and JUnit way. Proceedings of ECOOP 2002--16th European Conference on Object-Oriented Programming, Magnusson B (ed.). Springer: Berlin, 2002; 231-255.
[12] Dillenberger DN, Bordawekar R, Clark CW, Durand D, Emmes D, Gohda O, Howard S, Oliver MF, Samuel F, St John RW. Building a Java virtual machine for server applications: The JVM on OS/390. IBM Systems Journal 2000; 39(1):194-210.
[13] Lindholm T, Yellin F. The Java™ Virtual Machine Specification (2nd edn). Addison-Wesley: Reading, MA, 1999.
[14] Bytecode Engineering Library (BCEL) page. http://jakarta.apache.org/bcel/ {7 December 2003}.
[15] Kozen D, Stillerman M. Eager class initialization for Java. Proceedings of the 7th International Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems (FTRTFT 2002), Damm W, Olderog ER (eds.). Springer: Berlin, 2002; 71-80.
[16] Stotts D, Lindsey M, Antley A. An informal formal method for systematic JUnit test case generation. XP/Agile Universe 2002, Second XP Universe and First Agile Universe Conference, Wells D, Williams LA (eds.). Springer: Berlin, 2002; 131-143.
[17] Xie T, Notkin D. Tool-assisted unit test selection based on operational violations. Proceedings of the 18th IEEE International Conference on Automated Software Engineering (ASE 2003). IEEE Computer Society Press: Los Alamitos, CA, 2003; 40-48.
[18] JUnit Test Generator page. http://sourceforge.net/projects/junittestmaker/ {7 December 2003}.
[19] Pan J, Koopman P, Siewiorek D, Huang Y, Gruber R, Jiang ML. Robustness testing and hardening of CORBA ORB implementations. The International Conference on Dependable Systems and Networks (DSN'01). IEEE Computer Society Press: Los Alamitos, CA, 2001; 141-150.
[20] Edwards SH. A framework for practical, automated black-box testing of component-based software. Software Testing, Verification and Reliability 2001; 11(2):97-111.
[21] Sankar S, Hayes R. ADL--an interface description language for specifying and testing software. ACM SIGPLAN Notices 1994; 29(8):13-21.
[22] Zweben SH, Heym WD, Kimmich J. Systematic testing of data abstractions based on software specifications. Software Testing, Verification and Reliability 1992; 1(4):39-55.
[23] Boyapati C, Khurshid S, Marinov D. Korat: Automated testing based on Java predicates. Proceedings of the International Symposium on Software Testing and Analysis (ISSTA 2002), Frankl PG (ed.). ACM Press: New York, 2002; 123-133.
[24] JCrasher page. http://www.cc.gatech.edu/~csallner/jcrasher/ {7 December 2003}.



Reviews

Marlin W Thomas

The original Java white paper cites robustness as a key design goal of the language [1]. Java attempts to make quality assurance easier by identifying programming errors at both compile time and run time, and by adopting a policy of type safety. The Java compilers address the conformity of a program to language rules, but what about user inputs to a program that has survived compilation? Robustness testers such as JCrasher test public methods against random data to determine whether they throw an undeclared runtime exception. This paper begins by briefly discussing testing protocols and by advocating for the random testing paradigm that JCrasher implements. It explains the inner workings of JCrasher and provides empirical assessments of its quality. One distinguishing feature of JCrasher is that it can determine whether an exception results from an error in the program or from user inputs that violate preconditions of the code. The paper explains how JCrasher generates test cases, and how it resets the Java Virtual Machine (JVM) so that a failure occurs in a state unaffected by prior failures. Although JCrasher can be used independently, it can be integrated with JUnit, a popular regression testing framework. A valuable adjunct to the paper is a Web site (http://www.cc.gatech.edu/~csallner/jcrasher/) that provides downloads of the software and other useful information that supplements the paper. The paper will be of interest to software engineers and researchers who develop complex Java applications that must run reliably under a wide range of inputs. Online Computing Reviews Service
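The reviewer's point about running each test in a state unaffected by prior tests can be illustrated with a small reflection sketch. The abstract says JCrasher efficiently undoes the state changes introduced by previous tests; the actual mechanism is not reproduced here. Instead, this hypothetical `Registry` class and the snapshot/restore helpers approximate the idea by saving and restoring static fields between tests (a simplification: it ignores final fields and deep object state).

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

// Hypothetical class under test with mutable static state.
class Registry {
    static int entries = 0;
    public static void register() { entries++; }
}

public class ResetSketch {
    // Snapshot the values of all static fields of a class.
    static Map<Field, Object> snapshot(Class<?> c) throws Exception {
        Map<Field, Object> saved = new HashMap<>();
        for (Field f : c.getDeclaredFields()) {
            if (!Modifier.isStatic(f.getModifiers())) continue;
            f.setAccessible(true);
            saved.put(f, f.get(null));
        }
        return saved;
    }

    // Write the saved values back, undoing changes made by a test.
    static void restore(Map<Field, Object> saved) throws Exception {
        for (Map.Entry<Field, Object> e : saved.entrySet())
            e.getKey().set(null, e.getValue());
    }

    public static void main(String[] args) throws Exception {
        Map<Field, Object> clean = snapshot(Registry.class);

        Registry.register();   // "test 1" mutates static state
        Registry.register();
        System.out.println("after test 1: entries=" + Registry.entries);

        restore(clean);        // undo before the next test runs
        System.out.println("after reset: entries=" + Registry.entries);
    }
}
```

A second test now observes `entries == 0`, as if the first test had never run, which is the property the paper's state re-initialization support provides.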



Published In

Software—Practice & Experience  Volume 34, Issue 11
September 2004
93 pages

Publisher

John Wiley & Sons, Inc.

United States


Author Tags

  1. java
  2. random testing
  3. software testing
  4. state re-initialization
  5. test case generation

Qualifiers

  • Article


Cited By

  • (2024) Unit Test Generation using Generative AI: A Comparative Performance Analysis of Autogeneration Tools. Proceedings of the 1st International Workshop on Large Language Models for Code, 54-61. DOI: 10.1145/3643795.3648396. Online publication date: 20-Apr-2024.
  • (2024) Shaken, Not Stirred: How Developers Like Their Amplified Tests. IEEE Transactions on Software Engineering 50(5), 1264-1280. DOI: 10.1109/TSE.2024.3381015. Online publication date: 22-Mar-2024.
  • (2024) An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation. IEEE Transactions on Software Engineering 50(1), 85-105. DOI: 10.1109/TSE.2023.3334955. Online publication date: 1-Jan-2024.
  • (2024) An Empirical Study on Automated Test Generation Tools for Java: Effectiveness and Challenges. Journal of Computer Science and Technology 39(3), 715-736. DOI: 10.1007/s11390-023-1935-5. Online publication date: 1-May-2024.
  • (2023) Co-dependence Aware Fuzzing for Dataflow-Based Big Data Analytics. Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1050-1061. DOI: 10.1145/3611643.3616298. Online publication date: 30-Nov-2023.
  • (2023) LExecutor: Learning-Guided Execution. Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 1522-1534. DOI: 10.1145/3611643.3616254. Online publication date: 30-Nov-2023.
  • (2023) An empirical study of automated unit test generation for Python. Empirical Software Engineering 28(2). DOI: 10.1007/s10664-022-10248-w. Online publication date: 31-Jan-2023.
  • (2022) Efficient Synthesis of Method Call Sequences for Test Generation and Bounded Verification. Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, 1-12. DOI: 10.1145/3551349.3556951. Online publication date: 10-Oct-2022.
  • (2022) Randoop-TSR: Random-based Test Generator with Test Suite Reduction. Proceedings of the 13th Asia-Pacific Symposium on Internetware, 221-230. DOI: 10.1145/3545258.3545280. Online publication date: 11-Jun-2022.
  • (2022) Finding bugs in Gremlin-based graph database systems via Randomized differential testing. Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 302-313. DOI: 10.1145/3533767.3534409. Online publication date: 18-Jul-2022.
