What Language Reveals about Perception: Distilling Psychophysical Knowledge from Large Language Models

License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Abstract

Understanding the extent to which the perceptual world can be recovered from language is a fundamental problem in cognitive science. We reformulate this problem as that of distilling psychophysical information from text and show how this can be done by combining large language models (LLMs) with a classic psychophysical method based on similarity judgments. Specifically, we use the prompt completion functionality of GPT-3, a state-of-the-art LLM, to elicit similarity scores between stimuli and then apply multidimensional scaling to uncover their underlying psychological space. We test our approach on six perceptual domains and show that the elicited judgments strongly correlate with human data and successfully recover well-known psychophysical structures such as the color wheel and pitch spiral. We also explore meaningful divergences between LLM and human representations. Our work showcases how combining state-of-the-art machine models with well-known cognitive paradigms can shed new light on fundamental questions in perception and language research.
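To make the pipeline concrete, below is a minimal sketch of the two-step procedure the abstract describes: elicit pairwise similarity ratings from an LLM, then embed the stimuli with multidimensional scaling. This is not the authors' code; the prompt wording, the 0–10 rating scale, the color stimulus set, and the model name (a current chat model standing in for the GPT-3 completion endpoint the paper used) are all illustrative assumptions, and scikit-learn's metric MDS stands in for whatever scaling variant the authors applied.

```python
"""Sketch: distill pairwise similarity judgments from an LLM, then
recover a 2D psychological space with multidimensional scaling (MDS)."""
import itertools

import numpy as np
from openai import OpenAI
from sklearn.manifold import MDS

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stimulus set for the color domain.
STIMULI = ["red", "orange", "yellow", "green", "blue", "purple"]


def elicit_similarity(a: str, b: str) -> float:
    """Ask the model to rate the similarity of two stimuli on a 0-10 scale."""
    prompt = (
        f"On a scale from 0 (completely dissimilar) to 10 (identical), "
        f"how similar is the color {a} to the color {b}? "
        f"Answer with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the paper used GPT-3's completion API
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(response.choices[0].message.content.strip())


n = len(STIMULI)
sim = np.full((n, n), 10.0)  # each stimulus is maximally similar to itself
for i, j in itertools.combinations(range(n), 2):
    sim[i, j] = sim[j, i] = elicit_similarity(STIMULI[i], STIMULI[j])

# MDS expects dissimilarities, so invert the similarity scale.
dissim = sim.max() - sim
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

for stimulus, (x, y) in zip(STIMULI, coords):
    print(f"{stimulus:>8s}: ({x:+.2f}, {y:+.2f})")
```

If the elicited judgments mirror human similarity structure, the recovered 2D coordinates for a color stimulus set like this one should trace out an approximation of the color wheel, which is the kind of recovery result the paper reports.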
