DOI: 10.1145/3387940.3391464
Open access

OffSide: Learning to Identify Mistakes in Boundary Conditions

Published: 25 September 2020

Abstract

Mistakes in boundary conditions are the cause of many bugs in software. These mistakes happen when, e.g., developers use '<' or '>' in cases where they should have used '<=' or '>='. Mistakes in boundary conditions are often hard to find, and manually detecting them can be very time-consuming for developers. While researchers have long been proposing techniques to cope with mistakes in boundaries, the automated detection of such bugs remains a challenge. We conjecture that, for a tool to precisely identify mistakes in boundary conditions, it should be able to capture the overall context of the source code under analysis. In this work, we propose a deep learning model that learns mistakes in boundary conditions and is later able to identify them in unseen code snippets. We train and test the model on over 1.5 million code snippets, with and without mistakes in different boundary conditions. Our model achieves an accuracy ranging from 55% to 87%. The model is also able to detect 24 out of 41 real-world bugs, albeit with a high false positive rate. Existing state-of-the-practice linter tools are not able to detect any of these bugs. We hope this paper paves the way towards deep learning models that support developers in detecting mistakes in boundary conditions.
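To make the failure mode concrete, here is a minimal illustrative snippet (our own, not drawn from the paper's dataset) in which a developer writes '<' in a case that requires '<=', so the upper boundary value is silently rejected:

```python
def valid_percentage_buggy(p):
    # Boundary-condition mistake: '<' excludes the legal value 100.
    return 0 <= p and p < 100

def valid_percentage_fixed(p):
    # Correct inclusive upper bound.
    return 0 <= p <= 100

# The two variants agree on every input except the boundary itself:
print(valid_percentage_buggy(100))  # False (bug)
print(valid_percentage_fixed(100))  # True
```

Because the two variants disagree only at the exact boundary value, such bugs easily survive test suites that never exercise that input, which is why detecting them automatically is hard.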





Published In

ICSEW'20: Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops
June 2020
831 pages
ISBN:9781450379632
DOI:10.1145/3387940
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. boundary testing
  2. deep learning for software testing
  3. machine learning for software engineering
  4. machine learning for software testing
  5. software engineering
  6. software testing

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICSE '20
Sponsor:
ICSE '20: 42nd International Conference on Software Engineering
June 27 - July 19, 2020
Seoul, Republic of Korea



Cited By

  • (2024) When debugging encounters artificial intelligence: state of the art and open challenges. Science China Information Sciences, 67(4). DOI: 10.1007/s11432-022-3803-9
  • (2024) Survey on identification and prediction of security threats using various deep learning models on software testing. Multimedia Tools and Applications, 83(27), 69863–69874. DOI: 10.1007/s11042-024-18323-8
  • (2023) Industrial applications of software defect prediction using machine learning. Information and Software Technology, 159. DOI: 10.1016/j.infsof.2023.107192
  • (2022) Learning Realistic Mutations: Bug Creation for Neural Bug Detectors. 2022 IEEE Conference on Software Testing, Verification and Validation (ICST), 162–173. DOI: 10.1109/ICST53961.2022.00027
  • (2022) A controlled experiment of different code representations for learning-based program repair. Empirical Software Engineering, 27(7). DOI: 10.1007/s10664-022-10223-5
  • (2021) Learning Off-By-One Mistakes: An Empirical Study. 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR), 58–67. DOI: 10.1109/MSR52588.2021.00019
