DOI: 10.1145/3524843.3528089

Merging smell detectors: evidence on the agreement of multiple tools

Published: 16 August 2022

Abstract

Technical Debt estimation relies heavily on static analysis tools that look for violations of pre-defined rules. Largely, Technical Debt principal is attributed to the presence of low-level code smells, unavoidably tying the effort for fixing the problems to mere coding inefficiencies. At the same time, despite their simple definitions, the detection of most code smells is non-trivial and subjective, rendering the assessment of Technical Debt principal dubious. To this end, we have revisited the literature on tool-supported code smell detection approaches and developed an Eclipse plugin that incorporates six of them. The combined application of multiple smell detectors can increase the certainty of identifying the code smells that actually matter to the development team. We also conduct a case study to investigate the agreement among the employed code smell detectors. To our surprise, the level of agreement is quite low even for relatively simple code smells, threatening the validity of existing TD analysis tools and calling for increased attention to the precise specification of code- and design-level issues.
Source code: https://github.com/apostolisich/SmellDetectorMerger
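The abstract describes merging the output of six detectors and measuring how often they agree. The plugin itself is a Java Eclipse plugin; as an illustrative sketch only (not the authors' implementation), pairwise agreement between detectors can be quantified by comparing the sets of code entities each tool flags, e.g. with the Jaccard index. Detector names and findings below are hypothetical.

```python
# Illustrative sketch (not the paper's Java plugin): quantify pairwise
# agreement between smell detectors by comparing the sets of findings
# each one reports. All detector names and findings are hypothetical.
from itertools import combinations

def jaccard(a, b):
    """Jaccard agreement between two sets of findings (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Each detector reports a set of (code entity, smell kind) pairs.
findings = {
    "DetectorA": {("Order", "God Class"), ("Invoice.total", "Long Method")},
    "DetectorB": {("Order", "God Class"), ("Cart", "God Class")},
    "DetectorC": {("Invoice.total", "Long Method")},
}

# Pairwise agreement table over all detector pairs.
for (n1, s1), (n2, s2) in combinations(sorted(findings.items()), 2):
    print(f"{n1} vs {n2}: {jaccard(s1, s2):.2f}")

def merged(findings, k=2):
    """Keep only smells that at least k detectors agree on (voting merge)."""
    votes = {}
    for flagged in findings.values():
        for item in flagged:
            votes[item] = votes.get(item, 0) + 1
    return {item for item, v in votes.items() if v >= k}

print(merged(findings))  # findings confirmed by at least 2 detectors
```

A voting threshold like `k=2` is one simple way to realize the paper's premise that combining detectors increases confidence in the reported smells; the paper's actual merging strategy may differ.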

Cited By

  • (2024) On the effectiveness of developer features in code smell prioritization. Journal of Systems and Software, 210:C. DOI: 10.1016/j.jss.2024.111968. Online publication date: 25-Jun-2024
  • (2024) Aligning XAI explanations with software developers' expectations. Expert Systems with Applications, 238:PA. DOI: 10.1016/j.eswa.2023.121640. Online publication date: 27-Feb-2024
  • (2024) An exploratory evaluation of code smell agglomerations. Software Quality Journal, 32:4, 1375-1412. DOI: 10.1007/s11219-024-09680-6. Online publication date: 11-Jul-2024
  • (2023) A metrics-based approach for selecting among various refactoring candidates. Empirical Software Engineering, 29:1. DOI: 10.1007/s10664-023-10412-w. Online publication date: 16-Dec-2023


Published In

TechDebt '22: Proceedings of the International Conference on Technical Debt
May 2022
89 pages
ISBN:9781450393041
DOI:10.1145/3524843
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. code smells
  2. principal
  3. refactoring
  4. technical debt

Qualifiers

  • Short-paper

Conference

TechDebt '22: International Conference on Technical Debt
May 16 - 18, 2022
Pittsburgh, Pennsylvania

Acceptance Rates

Overall Acceptance Rate 14 of 31 submissions, 45%
