DOI: 10.1145/3417990.3422009

Demonstration

Using Benji to systematically evaluate model comparison algorithms

Published: 26 October 2020

Abstract

Model comparison is a critical task in model-driven engineering. Its correctness enables effective management of model evolution and synchronisation, and supports other tasks such as model transformation testing. The literature is rich in comparison algorithms and approaches; however, the same cannot be said for their systematic evaluation. In this paper we present Benji, a tool for generating model comparison benchmarks. In particular, Benji provides domain-specific languages to design experiments in terms of input models and their possible manipulations, and generates the corresponding benchmark cases from these specifications. In this way, the experiment specification can be exploited as a systematic way to evaluate available comparison algorithms against the problem under study.
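The workflow the abstract describes — derive benchmark cases from an input model plus known manipulations, then score a comparison algorithm against the recorded ground truth — can be sketched as follows. This is an illustrative toy in plain Python under assumed names; it is not Benji's actual DSL or API, and the `naive_compare` algorithm stands in for whatever comparison tool is under evaluation.

```python
# Sketch of systematic benchmark generation for model comparison.
# All names here (Element, Model, apply_manipulations, ...) are
# illustrative assumptions, not part of Benji.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Element:
    id: str
    kind: str
    name: str


@dataclass
class Model:
    elements: dict = field(default_factory=dict)

    def add(self, e: Element) -> None:
        self.elements[e.id] = e


def apply_manipulations(model: Model, edits):
    """Apply rename/delete edits to a copy of the model, recording
    each edit as the ground-truth difference of the benchmark case."""
    mutated = Model(dict(model.elements))
    ground_truth = set()
    for op, eid, *args in edits:
        if op == "rename":
            old = mutated.elements[eid]
            mutated.elements[eid] = Element(eid, old.kind, args[0])
            ground_truth.add(("rename", eid))
        elif op == "delete":
            del mutated.elements[eid]
            ground_truth.add(("delete", eid))
    return mutated, ground_truth


def naive_compare(left: Model, right: Model):
    """A toy comparison algorithm: match elements by id and report
    renames and deletions. Stands in for the tool under evaluation."""
    diffs = set()
    for eid, e in left.elements.items():
        if eid not in right.elements:
            diffs.add(("delete", eid))
        elif right.elements[eid].name != e.name:
            diffs.add(("rename", eid))
    return diffs


def score(reported, ground_truth):
    """Precision/recall of the reported diff against the ground truth."""
    tp = len(reported & ground_truth)
    precision = tp / len(reported) if reported else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

Because the manipulations are applied by the generator itself, every benchmark case comes with an exact ground-truth diff, which is what makes the evaluation systematic rather than anecdotal. For example, renaming one class and deleting another yields a two-edit ground truth that any candidate algorithm can be scored against.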




Published In

MODELS '20: Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings
October 2020
713 pages
ISBN:9781450381352
DOI:10.1145/3417990

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. model comparison algorithms
  2. model comparison benchmarks
  3. model differencing
  4. model evolution
  5. systematic evaluation

Qualifiers

  • Demonstration

Funding Sources

  • Stiftelsen för Kunskaps- och Kompetensutveckling

Conference

MODELS '20

Acceptance Rates

Overall Acceptance Rate 118 of 382 submissions, 31%

