Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution

E McMilin - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Modern language modeling tasks are often underspecified: for a given token prediction, many words may satisfy the user's intent of producing natural language at inference time; however, only one word will minimize the task's loss function at training time. We introduce a simple causal mechanism to describe the role underspecification plays in the generation of spurious correlations. Despite its simplicity, our causal model directly informs the development of two lightweight black-box evaluation methods that we apply to gendered pronoun resolution tasks on a wide range of LLMs to 1) aid in the detection of inference-time task underspecification by exploiting 2) previously unreported gender vs. time and gender vs. location spurious correlations on LLMs with a range of A) sizes: from BERT-base to GPT-3.5, B) pre-training objectives: from masked & autoregressive language modeling to a mixture of these objectives, and C) training stages: from pre-training only to reinforcement learning from human feedback (RLHF). Code and open-source demos are available at https://github.com/2dot71mily/uspec.
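The causal mechanism the abstract describes can be illustrated with a toy simulation (a sketch under assumed numbers, not the paper's actual method or data): the pronoun is underspecified by the text, but if the training corpus's realized pronoun co-varies with an irrelevant context feature such as a year, a learner minimizing training loss will absorb the spurious gender-vs-time correlation.

```python
import random
from collections import Counter

random.seed(0)

# Toy corpus: each example pairs an irrelevant context feature (a year in
# the prompt) with the pronoun the corpus happened to realize. The skew
# probabilities below are hypothetical, chosen only for illustration.
def make_corpus(n=1000):
    corpus = []
    for _ in range(n):
        year = random.choice([1920, 2020])
        # Assumed skew: older texts realize the pronoun as "he" more often.
        p_he = 0.8 if year == 1920 else 0.4
        pronoun = "he" if random.random() < p_he else "she"
        corpus.append((year, pronoun))
    return corpus

# A black-box probe in this toy setting: hold the task fixed, vary only the
# year, and compare the conditional rate of each pronoun. Any systematic
# shift signals a spurious correlation, since gender is underspecified.
def conditional_rates(corpus):
    counts = {1920: Counter(), 2020: Counter()}
    for year, pronoun in corpus:
        counts[year][pronoun] += 1
    return {y: c["he"] / sum(c.values()) for y, c in counts.items()}

rates = conditional_rates(make_corpus())
print(rates)  # P("he" | year) is noticeably higher for 1920 than 2020
```

A model trained to minimize loss on such a corpus would reproduce this year-conditioned pronoun skew at inference time, which is the kind of gender vs. time correlation the paper's evaluation methods are designed to surface in real LLMs.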