Open access

Chillbot: Content Moderation in the Backchannel

Published: 08 November 2024

Abstract

Moderating online spaces effectively is not a matter of simply taking down content: moderators also provide private feedback and defuse situations before they cross the line into harm. However, moderators have little tool support for these activities, which often occur in the backchannel rather than in front of the entire community. In this paper, we introduce Chillbot, a moderation tool for Discord designed to facilitate backchanneling from moderators to users. With Chillbot, moderators gain the ability to send rapid anonymous feedback responses to situations where removal or formal punishment is too heavy-handed to be appropriate, helping educate users about how to improve their behavior while avoiding direct confrontations that can put moderators at risk. We evaluated Chillbot through a two-week field deployment on eleven Discord servers ranging in size from 25 to over 240,000 members. Moderators in these communities used Chillbot more than four hundred times during the study, and moderators from six of the eleven servers continued using the tool past the end of the formal study period. Based on this deployment, we describe implications for the design of a broader variety of means by which moderation tools can help shape communities' norms and behavior.
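The abstract does not describe Chillbot's implementation, but the core mechanism it names is a routine that delivers a moderator-selected feedback response to a user without exposing which moderator sent it. A minimal hypothetical sketch (all names, wording, and structure here are assumptions, not taken from the paper):

```python
from dataclasses import dataclass

# Hypothetical canned feedback responses; the real bot's categories and
# wording are not specified in the abstract.
CANNED_RESPONSES = {
    "tone": "Hey, a moderator noticed your recent message came across as "
            "hostile. Please keep things friendly.",
    "off_topic": "Hey, a moderator noticed your recent message was drifting "
                 "off topic for this channel.",
}

@dataclass
class FeedbackEvent:
    """One anonymous feedback action, as a moderator might trigger it."""
    moderator: str      # recorded for the mod team's own audit trail only
    target_user: str
    server_name: str
    response_key: str

def render_dm(event: FeedbackEvent) -> str:
    """Build the private message sent to the user. The moderator's
    identity is deliberately absent, keeping the feedback anonymous."""
    body = CANNED_RESPONSES[event.response_key]
    return f"[{event.server_name}] {body}"
```

In a real Discord bot this logic would be wired to a context-menu command on the offending message and delivered via a direct message to the user; those integration details are likewise assumptions about how such a tool could be built, not a description of Chillbot itself.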




Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW2 (CSCW)
November 2024, 5177 pages
EISSN: 2573-0142
DOI: 10.1145/3703902
Editor: Jeff Nichols
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. chatbot
  2. community interaction
  3. discord
  4. interaction design
  5. moderation

Qualifiers

  • Research-article

