Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review

  • Conference paper
  • In: Disinformation in Open Online Media (MISDOOM 2024)

Abstract

The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Caught between excitement over AI systems that no longer require programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness and quality of the data used to train Large Language Models (LLMs), as well as the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. Given the ethical challenges LLMs pose, how can professional fact-checking mitigate these risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three complementary mitigation strategies related to education, ethics and professional practice.

Acknowledgments

This research was funded by EU CEF Grant No. 101158604.

Author information

Corresponding author

Correspondence to Laurence Dierickx.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dierickx, L., van Dalen, A., Opdahl, A.L., Lindén, C.G. (2024). Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review. In: Preuss, M., Leszkiewicz, A., Boucher, J.C., Fridman, O., Stampe, L. (eds) Disinformation in Open Online Media. MISDOOM 2024. Lecture Notes in Computer Science, vol 15175. Springer, Cham. https://doi.org/10.1007/978-3-031-71210-4_1

  • DOI: https://doi.org/10.1007/978-3-031-71210-4_1

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-71209-8

  • Online ISBN: 978-3-031-71210-4

  • eBook Packages: Computer Science, Computer Science (R0)
