Abstract
Container classes such as lists, sets, or maps are elementary data structures common to many programming languages. Since they are part of standard libraries, they are important to test, which has led to research both on advanced testing techniques that target such containers and on using containers as subjects for comparing testing techniques. However, these advanced techniques have not been thoroughly compared with simpler techniques such as random testing. We present the results of a larger case study in which we compare random testing with shape abstraction, a systematic technique that showed the best results in a previous study. Our experiments show that random testing is about as effective as shape abstraction for testing these containers, which raises the question of whether containers are well suited as a benchmark for comparing advanced testing techniques.
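To make the random-testing side of the comparison concrete, the following is a minimal sketch, not the authors' tool or experimental setup, of how a container class such as java.util.TreeMap can be exercised with a random sequence of method calls checked against a simple size-bookkeeping oracle. The sequence length, seed, key range, and class name RandomContainerTest are illustrative assumptions.

import java.util.Random;
import java.util.TreeMap;

public class RandomContainerTest {
    public static void main(String[] args) {
        Random rnd = new Random(42);                 // fixed seed so the run is reproducible
        TreeMap<Integer, Integer> map = new TreeMap<>();
        int expectedSize = 0;                        // independent bookkeeping used as the oracle

        for (int i = 0; i < 1000; i++) {             // sequence length is an illustrative choice
            int key = rnd.nextInt(50);               // small key range to force re-insertions and deletions
            switch (rnd.nextInt(3)) {
                case 0:                              // insert: put returns null iff the key was absent
                    if (map.put(key, i) == null) expectedSize++;
                    break;
                case 1:                              // delete: remove returns null iff the key was absent
                    if (map.remove(key) != null) expectedSize--;
                    break;
                default:                             // lookup: must not change the state
                    map.containsKey(key);
            }
            if (map.size() != expectedSize) {        // oracle check after every operation
                throw new AssertionError("size mismatch at step " + i);
            }
        }
        System.out.println("1000 random operations passed");
    }
}

In practice the oracle would also check ordering and content invariants (for example, that iteration yields keys in sorted order), but the size check already illustrates the structure of a random test driver: a random operation generator plus an automated oracle.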
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Sharma, R., Gligoric, M., Arcuri, A., Fraser, G., Marinov, D. (2011). Testing Container Classes: Random or Systematic?. In: Giannakopoulou, D., Orejas, F. (eds) Fundamental Approaches to Software Engineering. FASE 2011. Lecture Notes in Computer Science, vol 6603. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-19811-3_19