DOI: 10.1145/2994291.2994295

The complementary aspect of automatically and manually generated test case sets

Published: 18 November 2016

Abstract

Testing is a mandatory activity for software quality assurance. Knowledge about the software under test is necessary to generate high-quality test cases, but executing more than 80% of its source code is not an easy task and demands in-depth knowledge of the business rules it implements. In this article, we investigate the adequacy, effectiveness, and cost of manually generated test sets versus automatically generated test sets for Java programs. We observed that, in general, manual test sets achieve higher statement coverage and mutation score than automatically generated test sets. One interesting finding, however, is that the automatically generated test sets are complementary to the manual test sets. When we combined manual and automated test sets, the resulting test sets improved statement coverage and mutation score by more than 10%, on average, over the manual test sets alone, while keeping a reasonable cost. We therefore advocate concentrating manually generated test sets on testing the essential and critical parts of the software.
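To make the complementary effect concrete, the sketch below is a minimal, hypothetical illustration: the `Pricing` class and both tests are invented for this page, not drawn from the study's subject programs. Manually written tests tend to encode known business rules, while tests produced by generation tools such as Randoop or EvoSuite tend to probe boundary values; a mutant that turns `>` into `>=` can survive the manual test yet be killed by the generated one, which is how the union of the two sets can raise the mutation score.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical unit under test (illustrative only): orders strictly
// above 100.0 get a 10% discount.
class Pricing {
    static double finalPrice(double amount) {
        // A typical mutant replaces '>' with '>='; only a test that
        // calls finalPrice(100.0) can tell the mutant from the original.
        return amount > 100.0 ? amount * 0.9 : amount;
    }
}

public class PricingTest {

    // Manually written test: encodes the business rule the tester knows,
    // covering both the discounted and the non-discounted branch.
    @Test
    public void manualTestEncodesBusinessRule() {
        assertEquals(180.0, Pricing.finalPrice(200.0), 0.0001); // discount
        assertEquals(50.0, Pricing.finalPrice(50.0), 0.0001);   // no discount
    }

    // Generated-style test: probes the exact boundary value. It adds no
    // new statement coverage here, but it kills the '>=' mutant that the
    // manual test leaves alive: the complementary effect in miniature.
    @Test
    public void generatedTestProbesBoundary() {
        assertEquals(100.0, Pricing.finalPrice(100.0), 0.0001);
    }
}
```

Recalling that the mutation score is the fraction of non-equivalent mutants a test set kills, the arithmetic behind the combined gain is simple: if, hypothetically, a manual set kills 60 of 100 mutants and a generated set kills 15 mutants the manual set misses, their union scores 75%, the kind of on-average improvement the abstract reports.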




Published In

A-TEST 2016: Proceedings of the 7th International Workshop on Automating Test Case Design, Selection, and Evaluation
November 2016
77 pages
ISBN:9781450344012
DOI:10.1145/2994291

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Automated Test Data Generation
  2. Automated Testing
  3. Manual Testing
  4. Software Testing

Qualifiers

  • Research-article

Conference

FSE'16
Cited By
  • (2023) An initial investigation of ChatGPT unit test generation capability. Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing, 15–24. DOI: 10.1145/3624032.3624035. Online publication date: 25-Sep-2023
  • (2023) An Experimental Study Evaluating Cost, Adequacy, and Effectiveness of Pynguin's Test Sets. Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing, 5–14. DOI: 10.1145/3624032.3624034. Online publication date: 25-Sep-2023
  • (2020) A Large Scale Study On the Effectiveness of Manual and Automatic Unit Test Generation. Proceedings of the XXXIV Brazilian Symposium on Software Engineering, 253–262. DOI: 10.1145/3422392.3422407. Online publication date: 21-Oct-2020
  • (2019) Tool smiths in off-shored work. Proceedings of the Tenth International Conference on Information and Communication Technologies and Development, 1–10. DOI: 10.1145/3287098.3287112. Online publication date: 4-Jan-2019
  • (2017) An Automated Testing Model using Test Driven Development Approach. Oriental Journal of Computer Science and Technology, 10(2), 385–390. DOI: 10.13005/ojcst/10.02.18. Online publication date: 19-Apr-2017
