DOI: 10.1145/3450614.3463353 (UMAP conference proceedings, short paper)

Towards Continuous Automatic Audits of Social Media Adaptive Behavior and its Role in Misinformation Spreading

Published: 22 June 2021

Abstract

In this paper, we argue for continuous and automatic auditing of social media adaptive behavior and outline its key characteristics and challenges. We are motivated by the spread of online misinformation, which has recently been fueled by opaque recommendations on social media platforms. Although many platforms have declared that they are taking steps against the spread of misinformation, the effectiveness of such measures must be assessed independently. To this end, independent organizations and researchers carry out audits to quantitatively assess platform recommendation behavior and its effects (e.g., filter bubble creation tendencies). The audits are typically based on agents that simulate user behavior and collect platform reactions (e.g., recommended items). The downside of such auditing is the cost of interpreting the collected data (here, some auditors are advancing automatic annotation). Furthermore, social media platforms are dynamic and ever-changing: algorithms change, concepts drift, new content appears. Therefore, audits need to be performed continuously, which further increases the need for automated data annotation. For the data annotation, we argue for the application of weak supervision, semi-supervised learning, and human-in-the-loop techniques.
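The audit-and-annotate loop described in the abstract can be sketched as follows. Everything here is an illustrative assumption rather than the authors' implementation: `ToyPlatform` stands in for a real recommender API, and the labeling functions are toy heuristics of the kind a weak-supervision system (e.g., Snorkel-style) would combine; items on which every heuristic abstains fall back to a human annotator, i.e., the human-in-the-loop step.

```python
import random
from collections import Counter

class ToyPlatform:
    """Hypothetical stand-in for a real platform's recommender."""
    CATALOG = [
        "flu vaccines explained by a virologist",
        "miracle cure they don't want you to know",
        "5g towers cause illness, says anonymous source",
        "peer-reviewed study on mask effectiveness",
    ]

    def recommend(self, history, k=2, seed=0):
        # Deterministic toy "recommendations" that depend on the watch history.
        rng = random.Random(seed + len(history))
        return rng.sample(self.CATALOG, k)

# Weak labeling functions: cheap heuristics that may abstain (return None).
def lf_clickbait(title):
    return "misinfo" if "miracle" in title or "they don't want" in title else None

def lf_conspiracy(title):
    return "misinfo" if "anonymous source" in title or "5g" in title else None

def lf_credible_source(title):
    return "ok" if "peer-reviewed" in title or "virologist" in title else None

def weak_label(title, lfs):
    """Majority vote over non-abstaining heuristics; defer to a human otherwise."""
    votes = Counter(v for lf in lfs if (v := lf(title)) is not None)
    return votes.most_common(1)[0][0] if votes else "needs-human"

def run_audit(platform, watch_history, steps=3):
    """Agent simulates a user session and weakly annotates each recommendation."""
    lfs = [lf_clickbait, lf_conspiracy, lf_credible_source]
    annotated, history = [], list(watch_history)
    for step in range(steps):
        for title in platform.recommend(history, seed=step):
            annotated.append((title, weak_label(title, lfs)))
            history.append(title)  # the agent "watches" what was recommended
    return annotated

results = run_audit(ToyPlatform(), ["flu vaccines explained by a virologist"])
misinfo_share = sum(lbl == "misinfo" for _, lbl in results) / len(results)
```

Running the audit continuously would mean repeating `run_audit` over time and re-validating the labeling functions as concepts drift, which is where the semi-supervised and human-in-the-loop components argued for above come in.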

Supplementary Material

MP4 File (UMAP-ADJ21-FairU03s.mp4)
Presentation video - short version


Cited By

  • (2024) YouTube and Conspiracy Theories: A Longitudinal Audit of Information Panels. Proceedings of the 35th ACM Conference on Hypertext and Social Media, 273–284. https://doi.org/10.1145/3648188.3675128 (10 Sep 2024)
  • (2024) Beyond Phase-in: Assessing Impacts on Disinformation of the EU Digital Services Act. AI and Ethics. https://doi.org/10.1007/s43681-024-00467-w (11 Apr 2024)
  • (2023) Understanding the Contribution of Recommendation Algorithms on Misinformation Recommendation and Misinformation Dissemination on Social Networks. ACM Transactions on the Web 17, 4, 1–26. https://doi.org/10.1145/3616088 (10 Oct 2023)
  • (2023) Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles. ACM Transactions on Recommender Systems 1, 1, 1–33. https://doi.org/10.1145/3568392 (27 Jan 2023)




Published In

UMAP '21: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
June 2021
431 pages
ISBN:9781450383677
DOI:10.1145/3450614
Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. audits
  2. filter bubbles
  3. misinformation
  4. personalization
  5. recommendations
  6. social media

Qualifiers

  • Short-paper
  • Research
  • Refereed limited


Conference

UMAP '21

Acceptance Rates

Overall Acceptance Rate 162 of 633 submissions, 26%



Article Metrics

  • Downloads (last 12 months): 54
  • Downloads (last 6 weeks): 6

Reflects downloads up to 28 Sep 2024.

