

Do Words Have Power? Understanding and Fostering Civility in Code Review Discussion

Published: 12 July 2024

Abstract

Modern Code Review (MCR) is an integral part of the software development process where developers improve product quality through collaborative discussions. Unfortunately, these discussions can sometimes become heated due to inappropriate behaviors such as personal attacks, insults, disrespectful comments, and derogatory conduct, often referred to as incivility. While researchers have extensively explored such incivility in various public domains, our understanding of its causes, consequences, and courses of action remains limited within the professional context of software development, specifically within code review discussions. To bridge this gap, our study draws upon the experience of 171 professional software developers representing diverse development practices across different geographical regions. Our findings reveal that more than half of these developers (56.72%) have encountered instances of workplace incivility, and a substantial portion of that group (83.70%) reported experiencing such incidents at least once a month. We also identified various causes, positive and negative consequences, and potential courses of action for uncivil communication. Moreover, to address the negative aspects of incivility, we propose a model for promoting civility that detects uncivil comments during communication and provides alternative civil suggestions while preserving the original comments’ semantics, enabling developers to engage in respectful and constructive discussions. An in-depth analysis of 2K uncivil review comments using eight different evaluation metrics and a manual evaluation showed that our proposed approach generates significantly better civil alternatives than state-of-the-art politeness and detoxification models. Furthermore, a survey of 36 developers who used our civility model reported its effectiveness in enhancing online development interactions, fostering better relationships, increasing contributor involvement, and expediting development processes. Our research pioneers the generation of civil alternatives for uncivil discussions in software development, opening new avenues for research in collaboration and communication within the software engineering context.
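The abstract describes the civility pipeline only at a high level: flag an uncivil review comment, then rewrite it into a civil alternative that keeps the original meaning. The sketch below illustrates that two-stage idea with off-the-shelf Hugging Face components; it is not the paper's implementation, and the model names and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch (assumed components, not the paper's implementation):
# flag a potentially uncivil review comment with a toxicity classifier,
# then ask a detoxification sequence-to-sequence model to rewrite it
# while preserving the original meaning.
from transformers import pipeline

# Illustrative model choices; any toxicity classifier and any seq2seq
# rewriter fine-tuned for detoxification or politeness transfer could be used.
detector = pipeline("text-classification", model="unitary/toxic-bert")
rewriter = pipeline("text2text-generation", model="s-nlp/bart-base-detox")

def suggest_civil_alternative(comment: str, threshold: float = 0.5) -> str:
    """Return a civil rewrite of `comment` when it is flagged as uncivil."""
    pred = detector(comment)[0]  # e.g. {'label': 'toxic', 'score': 0.93}
    if pred["label"].lower() == "toxic" and pred["score"] >= threshold:
        return rewriter(comment, max_length=128)[0]["generated_text"]
    return comment  # already civil: keep the reviewer's wording

if __name__ == "__main__":
    print(suggest_civil_alternative("Who wrote this garbage? Rewrite it properly."))
```

In line with the abstract's framing of "alternative civil suggestions," such a rewrite would presumably be surfaced to the comment author as a suggestion before posting rather than replacing the comment automatically.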



Published In

Proceedings of the ACM on Software Engineering, Volume 1, Issue FSE
July 2024
2770 pages
EISSN: 2994-970X
DOI: 10.1145/3554322
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 July 2024
Published in PACMSE Volume 1, Issue FSE


Author Tags

  1. Civil and Uncivil Comments
  2. Code Review
  3. Pre-trained Translation Models

Qualifiers

  • Research-article

