
An Empirical Investigation of Relevant Changes and Automation Needs in Modern Code Review

Published: 01 November 2020

Abstract

Recent research has shown that available tools for Modern Code Review (MCR) are still far from meeting the current expectations of developers. The objective of this paper is to investigate the approaches and tools that, from a developer's point of view, are still needed to facilitate MCR activities. To that end, we first empirically elicited a taxonomy of recurrent review change types that characterize MCR. The taxonomy was designed in three steps: (i) we generated an initial version of the taxonomy by qualitatively and quantitatively analyzing 211 review changes/commits and 648 review comments of ten open-source projects; (ii) we integrated into this initial taxonomy the topics and MCR change types of an existing taxonomy from the literature; and (iii) we surveyed 52 developers to integrate any change types missing from the taxonomy. The results of our study highlight that the availability of emerging development technologies (e.g., cloud-based technologies) and practices (e.g., continuous delivery) has pushed developers to perform additional activities during MCR, and that reviewers expect additional types of feedback. Our participants provided recommendations, specified techniques to employ, and highlighted the data to analyze for building recommender systems able to automate the code review activities composing our taxonomy. We surveyed 14 additional participants (12 developers and 2 researchers), not involved in the previous survey, to qualitatively assess the relevance and completeness of the identified MCR change types, as well as how critical some of the identified techniques to support MCR activities are and how feasible they are to implement. Finally, in a study involving 21 additional developers, we qualitatively assessed the feasibility and usefulness of leveraging natural language feedback (an automation considered critical and feasible to implement) to support developers during MCR activities.
In summary, this study sheds further light on the approaches and tools that are still needed to facilitate MCR activities, confirming the feasibility and usefulness of using summarization techniques during MCR. We believe that the results of our work represent an essential step toward meeting the expectations of developers and supporting the vision of full or partial automation in MCR.
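The abstract concludes that summarization of natural-language feedback is a feasible and useful automation for MCR. As a purely illustrative, hypothetical sketch (not the authors' implementation), the snippet below condenses a set of review comments by ranking each comment with a summed TF-IDF score and keeping the top-k, using only the Python standard library:

```python
import math
import re
from collections import Counter

def summarize_comments(comments, k=2):
    """Extractive summary of code review comments: rank each comment by the
    summed TF-IDF weight of its words and keep the top-k.
    Hypothetical sketch only, not the technique evaluated in the paper."""
    tokenized = [re.findall(r"[a-z]+", c.lower()) for c in comments]
    n = len(comments)
    # Document frequency: in how many comments does each word occur?
    df = Counter(w for toks in tokenized for w in set(toks))

    def score(toks):
        tf = Counter(toks)
        # Rare words (low df) get a higher idf weight via log(n / df).
        return sum(count * math.log(n / df[w]) for w, count in tf.items())

    ranked = sorted(zip(comments, tokenized), key=lambda p: score(p[1]),
                    reverse=True)
    return [c for c, _ in ranked[:k]]

comments = [
    "Please rename this variable for clarity.",
    "This loop can throw a null pointer exception when the list is empty.",
    "LGTM",
]
print(summarize_comments(comments, k=1))
```

Summing (rather than averaging) the weights favors longer, more substantive comments over terse acknowledgements such as "LGTM"; a real reviewer-support tool would of course need richer ranking signals or abstractive models.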





Published In

Empirical Software Engineering, Volume 25, Issue 6 (Nov 2020), 1119 pages

Publisher

Kluwer Academic Publishers

United States


Author Tags

  1. Code review process and practices
  2. Empirical study
  3. Automated software engineering

Qualifiers

  • Research-article


Cited By

  • (2025) A qualitative study on refactorings induced by code review. Empirical Software Engineering 30:1. 10.1007/s10664-024-10560-7. Online publication date: 1-Feb-2025
  • (2023) Summary of the 2nd Natural Language-based Software Engineering Workshop (NLBSE 2023). ACM SIGSOFT Software Engineering Notes 48:4 (60-63). 10.1145/3617946.3617957. Online publication date: 17-Oct-2023
  • (2023) Modern Code Reviews—Survey of Literature and Practice. ACM Transactions on Software Engineering and Methodology 32:4 (1-61). 10.1145/3585004. Online publication date: 26-May-2023
  • (2023) Summary of the 1st Natural Language-based Software Engineering Workshop (NLBSE 2022). ACM SIGSOFT Software Engineering Notes 48:1 (101-104). 10.1145/3573074.3573101. Online publication date: 17-Jan-2023
  • (2023) Complementing Secure Code Review with Automated Program Analysis. Proceedings of the 45th International Conference on Software Engineering: Companion Proceedings (189-191). 10.1109/ICSE-Companion58688.2023.00052. Online publication date: 14-May-2023
  • (2022) Opportunities and Challenges in Repeated Revisions to Pull-Requests: An Empirical Study. Proceedings of the ACM on Human-Computer Interaction 6:CSCW2 (1-35). 10.1145/3555208. Online publication date: 11-Nov-2022
  • (2022) Identifying Solidity Smart Contract API Documentation Errors. Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (1-13). 10.1145/3551349.3556963. Online publication date: 10-Oct-2022
  • (2022) Code smells detection via modern code review: a study of the OpenStack and Qt communities. Empirical Software Engineering 27:6. 10.1007/s10664-022-10178-7. Online publication date: 4-Jul-2022
  • (2022) Topic modeling and intuitionistic fuzzy set-based approach for efficient software bug triaging. Knowledge and Information Systems 64:11 (3081-3111). 10.1007/s10115-022-01735-z. Online publication date: 1-Nov-2022
  • (2021) Using code reviews to automatically configure static analysis tools. Empirical Software Engineering 27:1. 10.1007/s10664-021-10076-4. Online publication date: 11-Dec-2021
