On the Role of Large Language Models in Crowdsourcing Misinformation Assessment
DOI: https://doi.org/10.1609/icwsm.v18i1.31417

Abstract
The proliferation of online misinformation significantly undermines the credibility of web content. Recently, crowd workers have been successfully employed to assess misinformation, addressing the limited scalability of professional fact-checkers. An alternative to crowdsourcing is the use of large language models (LLMs); these models, however, are also not perfect. In this paper, we investigate the scenario of crowd workers collaborating with LLMs to assess misinformation. We conduct a study in which we ask crowd workers to judge the truthfulness of statements under different conditions: with and without LLM labels and explanations. Our results show that crowd workers tend to overestimate truthfulness when exposed to LLM-generated information. Crowd workers are misled by wrong LLM labels, but, on the other hand, their self-reported confidence is lower when they make mistakes by relying on the LLM. We also observe diverse behaviors among crowd workers when LLM output is presented, indicating that leveraging LLMs can be considered a distinct working strategy.
Published: 2024-05-28
How to Cite
Xu, J., Han, L., Sadiq, S., & Demartini, G. (2024). On the Role of Large Language Models in Crowdsourcing Misinformation Assessment. Proceedings of the International AAAI Conference on Web and Social Media, 18(1), 1674-1686. https://doi.org/10.1609/icwsm.v18i1.31417
Section: Full Papers