DOI: 10.1145/3624032.3624036

An Approach to Regression Testing Selection based on Code Changes and Smells

Published: 17 October 2023

Abstract

Regression testing is a software maintenance activity that re-executes test cases on a modified system to check whether code changes have introduced new faults. However, it can be time-consuming and resource-intensive, especially for large systems. Regression test selection techniques address this issue by choosing a subset of test cases to run. The change-based technique selects test cases based on the modified software classes, reducing the test suite size; as a consequence, the selected suite covers fewer classes and is less effective at revealing design flaws. Code smells, in turn, are known indicators of poor design that threaten the quality of software systems. In this study, we propose an approach that combines code changes and code smells to select regression tests, and we present two new techniques: code smell based, and code change and smell. We also developed the Regression Testing Selection Tool (RTST) to automate the selection process. We empirically evaluated the approach on Defects4J projects, comparing the effectiveness of the new techniques against the change-based technique as a baseline. The results show that the change-based technique achieves the highest reduction in test suite size, but with lower class coverage. In contrast, test cases selected from the combination of code smells and changed classes can potentially find more bugs, and the code smell-based technique provides class coverage comparable to the code change and smell technique. Our findings highlight the benefits of incorporating code smells into regression test selection and suggest opportunities for improving the efficiency and effectiveness of regression testing.
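To make the selection strategies concrete, the sketch below illustrates the three techniques compared in the abstract, written in Java (the language of the Defects4J subjects). It assumes per-test coverage data is available as a map from test name to the set of classes that test exercises; the RegressionTestSelector class and its method names are illustrative assumptions, not the RTST tool's actual API.

```java
import java.util.*;

// Minimal sketch of the three selection strategies, assuming coverage
// data maps each test case to the production classes it exercises.
// All names here are illustrative, not the RTST tool's real API.
public class RegressionTestSelector {

    private final Map<String, Set<String>> coverage;

    public RegressionTestSelector(Map<String, Set<String>> coverage) {
        this.coverage = coverage;
    }

    // Change-based: select tests that cover at least one modified class.
    public Set<String> selectByChange(Set<String> changedClasses) {
        return selectCovering(changedClasses);
    }

    // Code smell based: select tests that cover at least one smelly class.
    public Set<String> selectBySmell(Set<String> smellyClasses) {
        return selectCovering(smellyClasses);
    }

    // Code change and smell: target the union of changed and smelly
    // classes, so the suite also exercises design-flawed code.
    public Set<String> selectByChangeAndSmell(Set<String> changedClasses,
                                              Set<String> smellyClasses) {
        Set<String> targets = new HashSet<>(changedClasses);
        targets.addAll(smellyClasses);
        return selectCovering(targets);
    }

    // Keep any test whose covered classes intersect the target set.
    private Set<String> selectCovering(Set<String> targetClasses) {
        Set<String> selected = new TreeSet<>();
        for (Map.Entry<String, Set<String>> entry : coverage.entrySet()) {
            if (!Collections.disjoint(entry.getValue(), targetClasses)) {
                selected.add(entry.getKey());
            }
        }
        return selected;
    }
}
```

Under this model, selectByChange yields the smallest suite, while selectByChangeAndSmell trades a larger suite for coverage of smelly classes, which matches the trade-off reported in the results.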




Information

Published In

SAST '23: Proceedings of the 8th Brazilian Symposium on Systematic and Automated Software Testing
September 2023
133 pages
ISBN: 9798400716294
DOI: 10.1145/3624032

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Change-based Technique
  2. Code Change and Smell Technique
  3. Code Smell
  4. Regression Testing

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Fundação Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
  • Conselho Nacional de Desenvolvimento Científico e Tecnológico

Conference

SAST 2023

Acceptance Rates

Overall Acceptance Rate 45 of 92 submissions, 49%
