DOI: 10.1145/3638380.3638388
OzCHI Conference Proceedings · Research Article
XAI in Automated Fact-Checking? The Benefits Are Modest and There's No One-Explanation-Fits-All

Published: 10 May 2024

Abstract

The massive volume of online information, along with the problem of misinformation, has spurred active research into the automation of fact-checking. As with fact-checking by human experts, it is not enough for an automated fact-checker to be accurate; it must also inform and convince the user of the validity of its predictions. This becomes viable with explainable artificial intelligence (XAI). In this work, we conduct a study of XAI fact-checkers involving 180 participants to determine how users’ actions towards news, and their attitudes towards explanations, are affected by XAI. Our results suggest that XAI has limited effects on users’ agreement with the veracity predictions of the automated fact-checker and on their intent to share news. However, XAI nudges users towards forming uniform judgments of news veracity, signaling their reliance on the explanations. We also found polarizing preferences towards XAI and raise several design considerations for it.


Cited By

  • (2024) To Share or Not to Share: Randomized Controlled Study of Misinformation Warning Labels on Social Media. In Disinformation in Open Online Media, 46–69. https://doi.org/10.1007/978-3-031-71210-4_4. Online publication date: 31 Aug 2024.
  • (2024) Recent Developments on Accountability and Explainability for Complex Reasoning Tasks. In Accountable and Explainable Methods for Complex Reasoning over Text, 191–199. https://doi.org/10.1007/978-3-031-51518-7_9. Online publication date: 6 Apr 2024.


Published In

OzCHI '23: Proceedings of the 35th Australian Computer-Human Interaction Conference
December 2023
733 pages

Publisher

Association for Computing Machinery
New York, NY, United States

    Author Tags

    1. automated fact-checking
    2. explainable artificial intelligence
    3. human-AI interaction
    4. human-centered design
    5. interpretable machine learning
    6. misinformation

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    OzCHI 2023
    December 2 - 6, 2023
    Wellington, New Zealand

    Acceptance Rates

    Overall Acceptance Rate 362 of 729 submissions, 50%
