DOI: 10.1145/3550355.3552451

Automatic test amplification for executable models

Published: 24 October 2022

Abstract

Behavioral models are important assets that must be thoroughly verified early in the design process. This can be achieved with manually-written test cases that embed carefully hand-picked domain-specific input data. However, such test cases may not always reach the desired level of quality, such as high coverage or efficient fault localization. Test amplification is a promising emerging approach that improves a test suite by automatically generating new test cases out of existing manually-written ones. Yet, while ad-hoc test amplification solutions have been proposed for a few programming languages, no solution currently exists for amplifying the test cases of behavioral models.
In this paper, we fill this gap with an automated and generic approach. Given an executable DSL, a conforming behavioral model, and an existing test suite, our approach generates new regression test cases in three steps: (i) generating new test inputs by applying a set of generic modifiers on the existing test inputs; (ii) running the model under test with the new inputs and generating assertions from the execution traces; and (iii) selecting the new test cases that increase the mutation score. We provide tool support for the approach atop the Eclipse GEMOC Studio and show its applicability in an empirical study. In the experiment, we applied the approach to 71 test suites written for models conforming to two different DSLs; for 67 of the 71 cases, it successfully improved the mutation score by between 3.17% and 54.11%, depending on the initial setup.
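The three steps described above can be sketched as a generate-and-select loop. The code below is a hypothetical toy illustration only: `Test`, `run_model`, `mutation_score`, and the modifier functions are stand-ins invented for this sketch, not the paper's actual GEMOC-based implementation, and the "execution trace" is reduced to a single output value.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Test:
    inputs: int    # test input data
    expected: int  # assertion generated from the execution trace

def run_model(model: Callable[[int], int], x: int) -> int:
    # The real approach records a full execution trace; here the
    # "trace" is just the model's output for the given input.
    return model(x)

def kills(test: Test, mutant: Callable[[int], int]) -> bool:
    # A mutant is "killed" when the test's assertion fails on it.
    return mutant(test.inputs) != test.expected

def mutation_score(tests: List[Test], mutants) -> float:
    # Fraction of mutants killed by at least one test.
    if not mutants:
        return 1.0
    killed = sum(any(kills(t, m) for t in tests) for m in mutants)
    return killed / len(mutants)

def amplify(suite, model, modifiers, mutants):
    """Return the suite extended with new tests that raise the mutation score."""
    amplified = list(suite)
    baseline = mutation_score(amplified, mutants)
    for test in suite:
        for modify in modifiers:
            # (i) derive a new input with a generic modifier
            new_input = modify(test.inputs)
            # (ii) execute the model and turn the trace into an assertion
            candidate = Test(new_input, run_model(model, new_input))
            # (iii) keep only candidates that increase the mutation score
            score = mutation_score(amplified + [candidate], mutants)
            if score > baseline:
                amplified.append(candidate)
                baseline = score
    return amplified

if __name__ == "__main__":
    model = lambda x: 2 * x                       # toy model under test
    mutants = [lambda x: x + 2, lambda x: 3 * x]  # seeded faulty variants
    suite = [Test(inputs=2, expected=4)]          # one hand-written test
    improved = amplify(suite, model, [lambda x: x + 1, lambda x: 10 * x], mutants)
    print(len(improved), mutation_score(improved, mutants))  # prints "2 1.0"
```

In this toy run, the original test cannot distinguish the mutant `x + 2` from the model (both map 2 to 4), but the amplified test derived from the `x + 1` modifier kills it, raising the mutation score from 0.5 to 1.0.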


Cited By

  • (2023) Towards Automatic Generation of Amplified Regression Test Oracles. In 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 332-339. DOI: 10.1109/SEAA60479.2023.00058. Online publication date: 6 September 2023.



Published In

MODELS '22: Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems
October 2022
412 pages
ISBN:9781450394666
DOI:10.1145/3550355

In-Cooperation

  • University of Montreal
  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. executable DSL
  2. executable model
  3. regression testing
  4. test amplification

Qualifiers

  • Research-article

Funding Sources

  • R&D programme of Madrid
  • Spanish Ministry of Science
  • European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement

Conference

MODELS '22

Acceptance Rates

MODELS '22 paper acceptance rate: 35 of 125 submissions (28%)
Overall acceptance rate: 144 of 506 submissions (28%)


Article Metrics

  • Downloads (last 12 months): 33
  • Downloads (last 6 weeks): 1
Reflects downloads up to 20 November 2024

