
DOI: 10.1109/ICSE-SEIP52600.2021.00010

Using machine intelligence to prioritise code review requests

Published: 17 December 2021

Abstract

Modern Code Review (MCR) is the process of reviewing new code changes that need to be merged with an existing codebase. A developer may receive many code review requests every day, which means the requests need to be prioritised. Manually prioritising review requests is a challenging and time-consuming process. To address this problem, we conducted an industrial case study at Ericsson aimed at developing a tool called Pineapple, which uses a Bayesian network to prioritise code review requests. To validate our approach and tool, we deployed Pineapple in a live software development project at Ericsson, wherein more than 150 developers develop a telecommunication product. We focused on evaluating the predictive performance, feasibility, and usefulness of our approach. The results indicate that Pineapple has competent predictive performance (RMSE = 0.21 and MAE = 0.15). Furthermore, around 82.6% of Pineapple's users believe the tool can support code review request prioritisation by providing reliable results, and around 56.5% of the users believe it helps reduce code review lead time. As future work, we plan to evaluate Pineapple's predictive performance, usefulness, and feasibility through a longitudinal investigation.
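The abstract names two technical ingredients: a probabilistic (Bayesian network) model that scores incoming review requests, and RMSE/MAE as the evaluation metrics. The paper's actual model is not reproduced in this abstract, so the following is only a minimal Python sketch of both ideas under stated assumptions: the node names (large_change, core_module), the probability table, and all example values are hypothetical and not taken from the paper.

```python
import math

# Hypothetical priority model. The abstract does not describe Pineapple's
# actual Bayesian network; the nodes and probabilities below are invented
# purely to illustrate scoring a review request with a probabilistic model.

# P(high priority | large_change, core_module), a hand-specified table.
CPT = {
    (True,  True):  0.90,
    (True,  False): 0.60,
    (False, True):  0.70,
    (False, False): 0.20,
}

def priority_score(large_change: bool, core_module: bool) -> float:
    """Return the modelled probability that a review request is high priority."""
    return CPT[(large_change, core_module)]

# Evaluation metrics reported in the paper: RMSE and MAE.
def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Made-up ground-truth priorities for four requests, and the model's scores.
requests = [(True, True), (True, False), (False, True), (False, False)]
actual = [1.0, 0.5, 0.8, 0.1]
predicted = [priority_score(*r) for r in requests]

print(f"RMSE = {rmse(actual, predicted):.2f}")  # 0.10 on these made-up values
print(f"MAE  = {mae(actual, predicted):.2f}")   # 0.10 on these made-up values
```

In the real tool, the network structure and probabilities would be learned or elicited from project data rather than hard-coded; the sketch only shows how a probabilistic priority score feeds into the two reported error metrics.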



Information

Published In

ICSE-SEIP '21: Proceedings of the 43rd International Conference on Software Engineering: Software Engineering in Practice
May 2021
405 pages
ISBN: 9780738146690


In-Cooperation

  • IEEE CS

Publisher

IEEE Press


Author Tags

  1. bayesian networks
  2. machine intelligence
  3. machine learning
  4. machine reasoning
  5. modern code review
  6. prioritisation

Qualifiers

  • Research-article

Conference

ICSE '21

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 13
  • Downloads (Last 6 weeks): 0
Reflects downloads up to 28 Sep 2024

Citations

Cited By
  • (2024) Mining Pull Requests to Detect Process Anomalies in Open Source Software Development. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pages 1-13. DOI: 10.1145/3597503.3639196. Online publication date: 20-May-2024.
  • (2023) On the Prediction of Software Merge Conflicts: A Systematic Review and Meta-analysis. Proceedings of the XIX Brazilian Symposium on Information Systems, pages 404-411. DOI: 10.1145/3592813.3592931. Online publication date: 29-May-2023.
  • (2023) Modern Code Reviews—Survey of Literature and Practice. ACM Transactions on Software Engineering and Methodology, 32(4):1-61. DOI: 10.1145/3585004. Online publication date: 26-May-2023.
  • (2022) Understanding automated code review process and developer experience in industry. Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 1398-1407. DOI: 10.1145/3540250.3558950. Online publication date: 7-Nov-2022.
