DOI: 10.1145/3671016.3674820
Research article

ERD-CQC : Enhanced Rule and Dependency Code Quality Check for Java

Published: 24 July 2024

Abstract

In software development, code quality check tools have become a key factor in improving product quality and development efficiency. While many existing tools are effective at detecting common problems in code, they still have limitations. First, they rely on predefined rules that may not fully cover real-world coding challenges. Second, a lack of consideration of dependencies means they fail to report violations that span files or modules. Third, their metrics focus primarily on object-oriented programming, limiting their ability to assess software quality from the perspective of national standards. To address these issues, this work proposes a dependency-enhanced method, ERD-CQC, for code quality detection and measurement. ERD-CQC provides 88 detection rules and 45 metrics, supplementing checking rules in categories such as Circuit Breaking, Serializable, and Security. ERD-CQC constructs an infused graph by integrating abstract syntax trees (ASTs), entities, and dependencies for violation detection. Based on the detection results, ERD-CQC provides a code quality measurement system with 4 national-standard dimensions to measure code quality from multiple perspectives. To validate the effectiveness of ERD-CQC, we manually examined 647 compliant and 528 non-compliant code snippets; ERD-CQC achieves recall and F1 scores exceeding 98%. We also collected real-world open-source and closed-source projects containing a total of 4,319 non-compliant code snippets. On this real-world benchmark, the average F1 score of ERD-CQC is 11.44% higher than that of the advanced tool SonarQube. Finally, we visualized the quality measurement results based on the metrics and found that open-source and closed-source projects exhibit certain patterns in metric performance.
Our work will benefit developers in checking, evaluating, and monitoring their software quality comprehensively.
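To illustrate the kind of cross-file violation that rule-plus-dependency analysis can catch and a rule-only, single-file checker cannot, the sketch below models entities extracted from different files and a field-type dependency edge between them, then flags a Serializable-category violation that spans files. This is a hypothetical, simplified sketch for intuition only; the entity, dependency, and rule names here are illustrative and are not the actual ERD-CQC data structures or API.

```java
import java.util.*;

// Hypothetical sketch: flag a serializable class whose field type, defined
// in ANOTHER file, is itself not serializable. A single-file checker cannot
// see the target entity; a dependency graph across files can.
public class CrossFileCheckSketch {
    // A code entity (e.g., a class) with the file it was extracted from.
    record Entity(String name, String file, boolean serializable) {}

    // entities: name -> entity node; fieldTypeDeps: class -> field-type edges.
    static List<String> checkSerializableFields(Map<String, Entity> entities,
                                                Map<String, List<String>> fieldTypeDeps) {
        List<String> violations = new ArrayList<>();
        for (Entity e : entities.values()) {
            if (!e.serializable()) continue;          // rule applies to serializable classes
            for (String dep : fieldTypeDeps.getOrDefault(e.name(), List.of())) {
                Entity target = entities.get(dep);    // may live in a different file
                if (target != null && !target.serializable()) {
                    violations.add(e.name() + " (" + e.file() + ") has non-serializable field type "
                            + target.name() + " (" + target.file() + ")");
                }
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Map<String, Entity> entities = Map.of(
                "Order", new Entity("Order", "Order.java", true),
                "Customer", new Entity("Customer", "Customer.java", false));
        Map<String, List<String>> deps = Map.of("Order", List.of("Customer"));
        System.out.println(checkSerializableFields(entities, deps));
    }
}
```

The point of the sketch is the shape of the analysis, not the rule itself: once entities and dependencies are fused into one graph, a rule check becomes a graph traversal, so violations whose evidence is spread across files fall out naturally.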



Published In

Internetware '24: Proceedings of the 15th Asia-Pacific Symposium on Internetware
July 2024, 518 pages
ISBN: 9798400707056
DOI: 10.1145/3671016

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. Code quality check tools
2. Metrics
3. Scanning rules
4. Software quality

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

Conference

Internetware 2024

Acceptance Rates

Overall Acceptance Rate: 55 of 111 submissions, 50%

Article Metrics

• 0 Total Citations
• 59 Total Downloads
• Downloads (last 12 months): 59
• Downloads (last 6 weeks): 8

Reflects downloads up to 27 Nov 2024
