Can language models handle recursively nested grammatical structures? A case study on comparing models and humans

Authors

Abstract

How should we compare the capabilities of language models (LMs) and humans? In this paper, I draw inspiration from comparative psychology to highlight challenges in these comparisons. I focus on a case study: processing of recursively nested grammatical structures. Prior work suggests that LMs cannot process these structures as reliably as humans can. However, the humans were provided with instructions and training, while the LMs were evaluated zero-shot. I therefore evaluate the LMs under conditions that more closely match the human evaluation. Providing large LMs with a simple prompt (substantially less content than the human training) allows the LMs to consistently outperform the human results, and even to extrapolate to more deeply nested conditions than were tested with humans. Furthermore, the effects of prompting are robust to the particular structures and vocabulary used in the prompt. Finally, reanalyzing the existing human data suggests that the humans may not initially perform above chance on the difficult structures. Thus, large LMs may indeed process recursively nested grammatical structures as reliably as humans. This case study highlights how discrepancies in evaluation methods can confound comparisons of language models and humans. I conclude by reflecting on the broader challenge of comparing human and model capabilities, and by highlighting an important difference between evaluating cognitive models and foundation models.

Author Biography

  • Andrew Kyle Lampinen, Google DeepMind
    I am a Senior Research Scientist at Google DeepMind.

Published

2024-12-23

Section

Special Issue on Language Learning, Representation, and Processing in Humans and Machines