- research-article, May 2024
Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations
CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Article No.: 191, Pages 1–9. https://doi.org/10.1145/3613904.3642204
Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations transcends those who are moderated to the ...
- panel, October 2023
Getting Data for CSCW Research
CSCW '23 Companion: Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing, Pages 415–416. https://doi.org/10.1145/3584931.3608440
This panel will bring together a group of scholars from diverse methodological backgrounds to discuss critical aspects of data collection for CSCW research. This discussion will consider the rapidly evolving ethical, practical, and data access ...
- research-article, October 2023
Personalizing Content Moderation on Social Media: User Perspectives on Moderation Choices, Interface Design, and Labor
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 7, Issue CSCW2, Article No.: 289, Pages 1–33. https://doi.org/10.1145/3610080
Social media platforms moderate content for each user by incorporating the outputs of both platform-wide content moderation systems and, in some cases, user-configured personal moderation preferences. However, it is unclear (1) how end users perceive the ...
- research-article, September 2023
Addressing Interpersonal Harm in Online Gaming Communities: The Opportunities and Challenges for a Restorative Justice Approach
ACM Transactions on Computer-Human Interaction (TOCHI), Volume 30, Issue 6, Article No.: 83, Pages 1–36. https://doi.org/10.1145/3603625
Most social media platforms implement content moderation to address interpersonal harms such as harassment. Content moderation relies on offender-centered, punitive approaches, e.g., bans and content removal. We consider an alternative justice framework, ...
- research-article, April 2022
Designing Word Filter Tools for Creator-led Comment Moderation
CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, Article No.: 205, Pages 1–21. https://doi.org/10.1145/3491102.3517505
Online social platforms centered around content creators often allow comments on content, where creators can then moderate the comments they receive. As creators can face overwhelming numbers of comments, with some of them harassing or hateful, ...
- research-article, March 2022
Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit
ACM Transactions on Computer-Human Interaction (TOCHI), Volume 29, Issue 4, Article No.: 29, Pages 1–26. https://doi.org/10.1145/3490499
Should social media platforms override a community’s self-policing when it repeatedly breaks rules? What actions can they consider? In light of this debate, platforms have begun experimenting with softer alternatives to outright bans. We examine one such ...
- research-article, October 2021, Honorable Mention
Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 5, Issue CSCW2, Article No.: 381, Pages 1–30. https://doi.org/10.1145/3479525
Deplatforming refers to the permanent ban of controversial public figures with large followings on social media sites. In recent years, platforms like Facebook, Twitter and YouTube have deplatformed many influencers to curb the spread of offensive ...
- research-article, October 2021, Honorable Mention
Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
- Manoel Horta Ribeiro,
- Shagun Jhaver,
- Savvas Zannettou,
- Jeremy Blackburn,
- Gianluca Stringhini,
- Emiliano De Cristofaro,
- Robert West
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 5, Issue CSCW2, Article No.: 316, Pages 1–24. https://doi.org/10.1145/3476057
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, ...
- abstract, November 2019
Volunteer Work: Mapping the Future of Moderation Research
- Charles Kiene,
- Kate Grandprey-Shores,
- Eshwar Chandrasekharan,
- Shagun Jhaver,
- Jialun "Aaron" Jiang,
- Brianna Dym,
- Joseph Seering,
- Sarah Gilbert,
- Kat Lo,
- Donghee Yvette Wohn,
- Bryan Dosono
CSCW '19 Companion: Companion Publication of the 2019 Conference on Computer Supported Cooperative Work and Social Computing, Pages 492–497. https://doi.org/10.1145/3311957.3359443
Research on the governance of online communities often requires exchanges and interactions between researchers and moderators. While a growing body of work has studied commercial content moderation in the context of platform governance and policy ...
- research-article, November 2019
Learning to Airbnb by Engaging in Online Communities of Practice
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 3, Issue CSCW, Article No.: 228, Pages 1–19. https://doi.org/10.1145/3359330
Technological advances, combined with sustained, minimalist consumerism, have raised the popularity of sharing economy platforms like Airbnb and Uber. These platforms are considered to have disrupted traditional industries and revolutionized how ...
- research-article, November 2019
"Did You Suspect the Post Would be Removed?": Understanding User Reactions to Content Removals on Reddit
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 3, Issue CSCW, Article No.: 192, Pages 1–33. https://doi.org/10.1145/3359294
Thousands of users post on Reddit every day, but a fifth of all posts are removed. How do users react to these removals? We conducted a survey of 907 Reddit users, asking them to reflect on their post removal a few hours after it happened. Examining the ...
- research-article, November 2019
Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 3, Issue CSCW, Article No.: 150, Pages 1–27. https://doi.org/10.1145/3359252
When posts are removed on a social media platform, users may or may not receive an explanation. What kinds of explanations are provided? Do those explanations matter? Using a sample of 32 million Reddit posts, we characterize the removal explanations ...
- research-article, July 2019
Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator
ACM Transactions on Computer-Human Interaction (TOCHI), Volume 26, Issue 5, Article No.: 31, Pages 1–35. https://doi.org/10.1145/3338243
What one may say on the internet is increasingly controlled by a mix of automated programs and decisions made by paid and volunteer human moderators. On the popular social media site Reddit, moderators heavily rely on a configurable, automated program ...
- research-article, November 2018
The Internet's Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales
- Eshwar Chandrasekharan,
- Mattia Samory,
- Shagun Jhaver,
- Hunter Charvat,
- Amy Bruckman,
- Cliff Lampe,
- Jacob Eisenstein,
- Eric Gilbert
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), Volume 2, Issue CSCW, Article No.: 32, Pages 1–25. https://doi.org/10.1145/3274301
Norms are central to how online communities are governed. Yet, norms are also emergent, arise from interaction, and can vary significantly between communities, making them challenging to study at scale. In this paper, we study community norms on Reddit ...
- research-article, April 2018
Algorithmic Anxiety and Coping Strategies of Airbnb Hosts
CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No.: 421, Pages 1–12. https://doi.org/10.1145/3173574.3173995
Algorithms increasingly mediate how work is evaluated in a wide variety of work settings. Drawing on our interviews with 15 Airbnb hosts, we explore the impact of algorithmic evaluation on users and their work practices in the context of Airbnb. Our ...
- research-article, March 2018
Online Harassment and Content Moderation: The Case of Blocklists
ACM Transactions on Computer-Human Interaction (TOCHI), Volume 25, Issue 2, Article No.: 12, Pages 1–33. https://doi.org/10.1145/3185593
Online harassment is a complex and growing problem. On Twitter, one mechanism people use to avoid harassment is the blocklist, a list of accounts that are preemptively blocked from interacting with a subscriber. In this article, we present a rich ...
- abstract, February 2016
PostScholar: Surfacing Social Signals in Google Scholar Search
CSCW '16 Companion: Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, Pages 17–20. https://doi.org/10.1145/2818052.2874314
PostScholar is a service that augments the results returned by Google Scholar, a search engine for academic citations. PostScholar detects the social media activity related to an article and displays that information on the search results page returned ...