DOI: https://doi.org/10.1145/3236024.3275424 (ESEC/FSE Conference Proceedings)
Short paper

Practices and tools for better software testing

Published: 26 October 2018

Abstract

Automated testing (hereafter referred to simply as 'testing') has become an essential process for improving the quality of software systems. Testing can help to point out defects and to ensure that production code is robust under many usage conditions. However, writing and maintaining high-quality test code is challenging and is frequently considered of secondary importance. Managers and developers alike do not treat test code as being as important as production code, and this behaviour can lead to poor test code quality and, eventually, to defect-prone production code. The goal of my research is to raise developers' awareness of the effects of poor testing and to help them write better test code. To this end, I am working from two different perspectives: (1) studying best practices in software testing and identifying problems and challenges of current approaches, and (2) building new tools that better support the writing of test code and that tackle the issues uncovered in those studies. Pre-print: https://doi.org/10.5281/zenodo.1411241
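
To make the notion of test code discussed above concrete, the following is a minimal sketch of an automated unit test in Java, assuming JUnit 5 and Mockito are available on the classpath. The PriceService and Cart types are hypothetical stand-ins for a production class and one of its dependencies; they are not taken from the paper and only illustrate the kind of test whose quality and maintenance this research studies.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical production types, included only so the example is self-contained.
interface PriceService {
    double priceOf(String item);
}

class Cart {
    private final PriceService prices;

    Cart(PriceService prices) {
        this.prices = prices;
    }

    double total(String... items) {
        double sum = 0.0;
        for (String item : items) {
            sum += prices.priceOf(item);
        }
        return sum;
    }
}

class CartTest {

    @Test
    void totalSumsThePricesOfAllItems() {
        // Mock the dependency so the test exercises only Cart's own logic.
        PriceService prices = mock(PriceService.class);
        when(prices.priceOf("book")).thenReturn(10.0);
        when(prices.priceOf("pen")).thenReturn(2.5);

        Cart cart = new Cart(prices);

        // A failing assertion here would point out a defect in Cart.total.
        assertEquals(12.5, cart.total("book", "pen"), 1e-9);

        // Verify the collaboration with the mocked dependency.
        verify(prices).priceOf("book");
        verify(prices).priceOf("pen");
    }
}

Keeping such tests small, isolated through mocking, and readable is exactly the kind of test-code quality concern the research agenda above addresses.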


Cited By

  • (2024) Improving Testing Behavior by Gamifying IntelliJ. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, 1-13. https://doi.org/10.1145/3597503.3623339. Online publication date: 20-May-2024.
  • (2020) An improved fuzzing approach based on adaptive random testing. 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), 103-108. https://doi.org/10.1109/ISSREW51248.2020.00045. Online publication date: Oct-2020.


Information

Published In

ESEC/FSE 2018: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
October 2018
987 pages
ISBN:9781450355735
DOI:10.1145/3236024


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. automated testing
  2. code review
  3. coupling
  4. mocking
  5. software testing

Qualifiers

  • Short-paper

Funding Sources

  • H2020 Marie Skłodowska-Curie Actions

Conference

ESEC/FSE '18

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%
