DOI: 10.1145/3613905.3650801
Work in Progress

Trust and Transparency: An Exploratory Study on Emerging Adults' Interpretations of Credibility Indicators on Social Media Platforms

Published: 11 May 2024

Abstract

The misinformation crisis across social media has disrupted critical access to information in health, politics, and public safety. Content labels have become a feature that social media platforms use to signal the credibility of posts. Young adults receive a large share of their news through social media platforms, yet prior work has shown that credibility indicators are not effective signals for young audiences. This late-breaking work presents initial findings from an exploratory study of how emerging adults (ages 18-25) assess different credibility indicators currently used on social media platforms. Our findings indicate that participants interpret the purpose and source of content labels in widely varying ways, support automated approaches to content labeling, and trust social media platforms to oversee the application of content labels. This paper contributes these findings to the growing scholarship on content labeling and discusses their implications for designers and policymakers.
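The abstract touches on three moving parts that recur throughout the content-labeling literature: the label a reader sees, the source that applied it (platform, fact-checker, or automated system), and the post it is attached to. As a purely illustrative aid, the minimal Python sketch below shows one way an automated labeling step might attach such a label to a post; every name, field, and threshold in it is hypothetical and not drawn from this paper.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: names, fields, and the threshold below are hypothetical
# and are not taken from the paper or any specific platform.

@dataclass
class CredibilityLabel:
    text: str          # message shown to the reader, e.g. "Missing context"
    source: str        # who applied it: "platform", "fact-checker", or "automated"
    confidence: float  # score from an assumed upstream classifier, 0.0-1.0

@dataclass
class Post:
    author: str
    body: str
    label: Optional[CredibilityLabel] = None

def auto_label(post: Post, misinformation_score: float) -> Post:
    """Attach a label when an assumed classifier's score crosses a threshold."""
    if misinformation_score >= 0.8:  # hypothetical cutoff
        post.label = CredibilityLabel(
            text="Fact-checkers have flagged similar claims",
            source="automated",
            confidence=misinformation_score,
        )
    return post

# Example: a high-scoring post receives an automated label.
post = auto_label(Post("user123", "Miracle cure revealed!"), misinformation_score=0.92)
print(post.label.text if post.label else "no label")
```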

Supplemental Material

MP4 File - Video Preview (transcript available)
PDF File - Interview Guide



    Published In

    CHI EA '24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems
    May 2024
    4761 pages
    ISBN:9798400703317
    DOI:10.1145/3613905
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. credibility assessment
    2. credibility indicators
    3. emerging adults
    4. misinformation
    5. young people

    Qualifiers

    • Work in progress
    • Research
    • Refereed limited

    Conference

    CHI '24

    Acceptance Rates

    Overall Acceptance Rate 6,164 of 23,696 submissions, 26%


