ArxEval: Evaluating Retrieval and Generation in Language Models for Scientific Literature

A Sinha, V Virk, D Chakraborty, PS Sreeja - arXiv preprint arXiv:2501.10483, 2025 - arxiv.org
Language Models (LMs) now play an increasingly large role in information generation and synthesis, so the representation of scientific knowledge in these systems needs to be highly accurate. A prime challenge is hallucination: generating apparently plausible but factually false information, including invented citations and nonexistent research papers. This kind of inaccuracy is dangerous in domains that require high levels of factual correctness, such as academia and education. This work presents a pipeline for evaluating how frequently language models hallucinate when generating responses about the scientific literature. We propose ArxEval, an evaluation pipeline with two tasks that use arXiv as a repository: Jumbled Titles and Mixed Titles. Our evaluation covers fifteen widely used language models and provides comparative insights into their reliability in handling scientific literature.
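To make the probe style concrete, here is a minimal sketch of what a "Jumbled Titles"-type evaluation could look like. The helper names (`jumble_title`, `hallucination_rate`) and the scoring scheme are illustrative assumptions, not the paper's actual implementation: a real paper title is word-shuffled into a nonexistent one, the model is asked whether that title refers to a real paper, and the hallucination rate is the fraction of jumbled probes the model wrongly affirms.

```python
import random


def jumble_title(title: str, seed: int = 0) -> str:
    """Shuffle the words of a real paper title to produce a probe title.

    Hypothetical helper: a reliable model should recognize that the
    shuffled title does not name a real paper, while a hallucinating
    model may affirm it and invent supporting details.
    """
    words = title.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)


def hallucination_rate(affirmed: list[bool]) -> float:
    """Fraction of jumbled-title probes the model wrongly claimed were real."""
    if not affirmed:
        return 0.0
    return sum(affirmed) / len(affirmed)
```

In practice each jumbled title would be sent to the model under evaluation and its yes/no judgment (plus any fabricated citation details) recorded; the aggregate rate then supports the kind of cross-model comparison the abstract describes.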