Toxicity in the Decentralized Web and the Potential for Model Sharing

Published: 06 June 2022

Abstract

The "Decentralised Web" (DW) is an evolving concept, which encompasses technologies aimed at providing greater transparency and openness on the web. The DW relies on independent servers (aka instances) that mesh together in a peer-to-peer fashion to deliver a range of services (e.g. micro-blogs, image sharing, video streaming). However, toxic content moderation in this decentralised context is challenging. This is because there is no central entity that can define toxicity, nor a large central pool of data that can be used to build universal classifiers. It is therefore unsurprising that there have been several high-profile cases of the DW being misused to coordinate and disseminate harmful material. Using a dataset of 9.9M posts from 117K users on Pleroma (a popular DW microblogging service), we quantify the presence of toxic content. We find that toxic content is prevalent and spreads rapidly between instances. We show that automating per-instance content moderation is challenging due to the lack of sufficient training data available and the effort required in labelling. We therefore propose and evaluate ModPair, a model sharing system that effectively detects toxic content, gaining an average per-instance macro-F1 score 0.89.


Information

Published In

Proceedings of the ACM on Measurement and Analysis of Computing Systems (POMACS), Volume 6, Issue 2
June 2022, 499 pages
EISSN: 2476-1249
DOI: 10.1145/3543145
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 06 June 2022
Published in POMACS Volume 6, Issue 2

Permissions

Request permissions for this article.

Author Tags

  1. content moderation
  2. decentralised web
  3. pleroma
  4. toxicity analysis

Qualifiers

  • Research-article

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 106
  • Downloads (last 6 weeks): 18
Reflects downloads up to 23 Nov 2024

Cited By

  • (2024) Pleroma. In Sistematična analiza decentraliziranih družbenih medijev [Systematic Analysis of Decentralised Social Media], pp. 263-284. DOI: 10.18690/um.feri.3.2024.16. Online publication date: 13-May-2024
  • (2024) ReSPect: Enabling Active and Scalable Responses to Networked Online Harassment. Proceedings of the ACM on Human-Computer Interaction 8:CSCW1, pp. 1-30. DOI: 10.1145/3637394. Online publication date: 26-Apr-2024
  • (2024) An Exploration of Decentralized Moderation on Mastodon. Proceedings of the 16th ACM Web Science Conference, pp. 53-58. DOI: 10.1145/3614419.3644016. Online publication date: 21-May-2024
  • (2024) Decentralized social networks and the future of free speech online. Computer Law & Security Review 55, Article 106059. DOI: 10.1016/j.clsr.2024.106059. Online publication date: Nov-2024
  • (2023) Flocking to Mastodon: Tracking the Great Twitter Migration. Proceedings of the 2023 ACM on Internet Measurement Conference, pp. 111-123. DOI: 10.1145/3618257.3624819. Online publication date: 24-Oct-2023
  • (2023) A First Look at User-Controlled Moderation on Web3 Social Media: The Case of Memo.cash. Proceedings of the 3rd International Workshop on Open Challenges in Online Social Networks, pp. 29-37. DOI: 10.1145/3599696.3612901. Online publication date: 4-Sep-2023
  • (2023) Set in Stone: Analysis of an Immutable Web3 Social Media Platform. Proceedings of the ACM Web Conference 2023, pp. 1865-1874. DOI: 10.1145/3543507.3583510. Online publication date: 30-Apr-2023
  • (2023) Biases and Ethical Considerations for Machine Learning Pipelines in the Computational Social Sciences. In Ethics in Artificial Intelligence: Bias, Fairness and Beyond, pp. 99-113. DOI: 10.1007/978-981-99-7184-8_6. Online publication date: 30-Dec-2023
