Gap between academia and industry: a case of empirical evaluation of three software testing methods

  • Original Article
International Journal of System Assurance Engineering and Management

Abstract

Doing the right kind of testing has always been one of the most challenging and decisive tasks for industry. To choose the right software testing method(s), industry needs precise, objective knowledge of each method's effectiveness, efficiency, and applicability conditions. The most common way to obtain such knowledge is through empirical studies. Reliable and comprehensive evidence can be obtained by aggregating the results of different empirical studies (families of experiments), taking into account their findings and limitations. We conducted a study to investigate the current state of the empirical knowledge base for three testing methods. We found that although the empirical studies conducted so far to evaluate testing methods contain many important and interesting results, we still lack factual, generalizable knowledge about the performance and applicability conditions of testing methods, which makes the results difficult for industry to adopt. Moreover, we tried to identify the major factors that prevent academia from producing reliable results with industrial impact. We believe that, besides effective and long-term academia-industry collaboration, there is a need for more systematic, quantifiable, and comprehensive empirical studies (which provide scope for aggregation using rigorous techniques), mainly replications, so as to create an effective and applicable knowledge base about testing methods that can potentially fill the gap between academia and industry.
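The abstract's call for "aggregation using rigorous techniques" generally refers to meta-analysis of effect sizes across an experiment and its replications. The paper itself contains no code; the following is a minimal illustrative sketch, assuming hypothetical effect sizes and variances (every study value below is invented), of fixed-effect inverse-variance pooling:

```python
import math

def fixed_effect_pool(studies):
    """Pool standardized effect sizes with inverse-variance (fixed-effect) weights.

    studies: list of (effect_size, variance) pairs, one per experiment/replication.
    Returns (pooled_effect, standard_error).
    """
    weights = [1.0 / var for _, var in studies]      # precision = 1 / variance
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))               # SE of the pooled estimate
    return pooled, se

# Hypothetical effectiveness differences (e.g., functional vs. structural testing)
# from an original experiment and two replications -- numbers invented for illustration.
studies = [(0.42, 0.08), (0.15, 0.05), (0.31, 0.06)]
d, se = fixed_effect_pool(studies)
print(f"pooled effect = {d:.2f}, 95% CI = [{d - 1.96 * se:.2f}, {d + 1.96 * se:.2f}]")
```

When replications differ in subjects, programs, and training, which is precisely the heterogeneity this paper emphasizes, a random-effects model is usually the more defensible choice, but the weighting idea is the same.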


Notes

  1. In Table 1, CR stands for code reading, FT for functional testing, and ST for structural testing, while BT and EP stand for branch testing and equivalence partitioning, respectively. R1 and R2 should be read as replication 1 and replication 2, respectively. (A brief illustrative sketch of two of these techniques follows below.)
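For readers unfamiliar with the abbreviated techniques, here is a minimal sketch (not from the paper; the function under test is invented) contrasting equivalence partitioning, which derives one test case per class of inputs, with branch testing, which selects inputs so that every branch of the code executes:

```python
def categorize_age(age: int) -> str:
    """Invented function under test: maps an age to a category."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:            # branch 1
        return "minor"
    if age < 65:            # branch 2
        return "adult"
    return "senior"         # branch 3

# Equivalence partitioning (EP, black-box): one representative per input class.
ep_cases = {-1: ValueError, 5: "minor", 30: "adult", 70: "senior"}

# Branch testing (BT, white-box): choose inputs so every branch above executes.
# Here the same four inputs happen to cover all branches, including the guard.
for age, expected in ep_cases.items():
    if isinstance(expected, type) and issubclass(expected, Exception):
        try:
            categorize_age(age)
            raise AssertionError(f"expected {expected.__name__} for age={age}")
        except expected:
            pass
    else:
        assert categorize_age(age) == expected, f"age={age}"
print("all EP/BT test cases pass")
```

The practical difference is in what each technique requires: EP cases are designed from the specification alone, while BT cases require access to the source code and, usually, coverage measurement.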


Funding

University Grants Commission (UGC), BSR Start‐Up Grant. Grant Number: F.30‐114/2015 (BSR).

Author information

Corresponding author

Correspondence to Sheikh Umar Farooq.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Farooq, S. Gap between academia and industry: a case of empirical evaluation of three software testing methods. Int J Syst Assur Eng Manag 10, 1487–1504 (2019). https://doi.org/10.1007/s13198-019-00899-2
