DOI: 10.1145/3650212.3680379
Research article

One-to-One or One-to-Many? Suggesting Extract Class Refactoring Opportunities with Intra-class Dependency Hypergraph Neural Network

Published: 11 September 2024

Abstract

Excessively large classes that encapsulate multiple responsibilities are challenging to comprehend and maintain. To address this issue, several Extract Class refactoring tools have been proposed, employing a two-phase process: identifying suitable fields or methods for extraction, and implementing the mechanics of refactoring. These tools traditionally generate an intra-class dependency graph to analyze the class structure and apply hard-coded rules over this graph to uncover refactoring opportunities. Yet the graph-based approach captures only direct, "one-to-one" relationships between pairs of entities. This perspective is restrictive because it overlooks the complex, "one-to-many" dependencies among multiple entities that are prevalent in real-world classes, and the resulting suggestions can diverge from developers' actual needs. To bridge this gap, this paper leverages the concept of an intra-class dependency hypergraph to model one-to-many dependency relationships and proposes HECS, a hypergraph learning-based approach for suggesting Extract Class refactoring opportunities. For each target class, HECS first constructs its intra-class dependency hypergraph and assigns attributes to nodes using a pre-trained code model. All the attributed hypergraphs are fed into an enhanced hypergraph neural network for training. Utilizing this trained network alongside a large language model (LLM), HECS forms a refactoring suggestion system. HECS was trained on a large-scale dataset and evaluated on two real-world datasets. The results show that HECS achieves an increase of 38.5% in precision, 9.7% in recall, and 44.4% in F1-measure over three state-of-the-art refactoring tools (JDeodorant, SSECS, and LLMRefactor), and its suggestions were judged more useful by 64% of participants. The results also yield practical suggestions and new insights that benefit existing extract-related refactoring techniques.
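The pipeline the abstract describes (build an intra-class dependency hypergraph, attribute its nodes, then aggregate over hyperedges) can be illustrated with a minimal sketch. This is not the paper's implementation: the class entities, the hyperedge grouping rule, and the toy 2-d feature vectors below are invented for illustration, whereas HECS derives node attributes from a pre-trained code model and trains an enhanced hypergraph neural network.

```python
# Illustrative sketch only: a hyperedge links one method to ALL entities it
# references at once ("one-to-many"), rather than pairwise edges.
hyperedges = {
    "load()":   ["load()", "path", "cache"],     # load() touches path, cache
    "save()":   ["save()", "path", "buffer"],
    "render()": ["render()", "buffer", "theme"],
}

# Toy 2-d node features (HECS obtains these from a pre-trained code model).
features = {
    "load()": [1.0, 0.0], "save()": [0.9, 0.1], "render()": [0.1, 0.9],
    "path": [0.8, 0.2], "cache": [0.7, 0.3],
    "buffer": [0.5, 0.5], "theme": [0.2, 0.8],
}

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def hypergraph_pass(features, hyperedges):
    """One node -> hyperedge -> node aggregation step."""
    # 1. Each hyperedge aggregates the features of all its member nodes.
    edge_feat = {e: mean([features[n] for n in nodes])
                 for e, nodes in hyperedges.items()}
    # 2. Each node aggregates the features of all hyperedges containing it.
    updated = {}
    for node in features:
        incident = [edge_feat[e] for e, ns in hyperedges.items() if node in ns]
        updated[node] = mean(incident) if incident else features[node]
    return updated

updated = hypergraph_pass(features, hyperedges)
```

Repeating this pass pulls entities that co-occur in hyperedges toward similar vectors; clustering the resulting embeddings is one plausible way a learned model could surface an extractable group (here, `render()`/`buffer`/`theme` versus `load()`/`save()`/`path`).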



Published In

ISSTA 2024: Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis
September 2024, 1928 pages
ISBN: 9798400706127
DOI: 10.1145/3650212

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. Extract Class Refactoring
      2. Hypergraph Neural Network


Conference

ISSTA '24

      Acceptance Rates

      Overall Acceptance Rate 58 of 213 submissions, 27%
