Convergent Representations of Linguistic Constructions in Human and Artificial Neural Systems
EEG data from 10 people shows that neural patterns for sentence structures mirror those in language models like GPT.
A research team led by Patrick Krauss at the University of Erlangen-Nuremberg has published a groundbreaking study demonstrating a direct parallel between how human brains and artificial neural networks process language. The study recorded EEG data from 10 native English speakers as they listened to 200 sentences across four distinct grammatical constructions (transitive, ditransitive, caused-motion, and resultative). Analysis revealed that the brain generates construction-specific neural signatures, most reliably at the end of a sentence and predominantly in the alpha frequency band (8-12 Hz). For example, the neural patterns for ditransitive constructions (e.g., "She gave him the book") were most distinct from those for resultative constructions (e.g., "She painted the fence green").
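The paper's exact analysis pipeline is not reproduced here, but the idea of "construction-specific neural signatures" can be made concrete: if the signatures exist, a classifier should tell the four constructions apart from alpha-band features at the sentence-final word better than chance. The sketch below is a minimal illustration of that logic, not the authors' code; the feature array is a random placeholder, and the 64-channel layout and choice of logistic regression are assumptions.

```python
# Hypothetical sketch: decoding construction identity from alpha-band EEG
# features at the sentence-final word. Array shapes, channel count, and the
# classifier are illustrative assumptions, not the study's published method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed layout: 200 sentences x 64 channels of alpha-band (8-12 Hz) power
# around the sentence-final word; 4 construction labels, 50 sentences each.
X = rng.normal(size=(200, 64))   # placeholder for real alpha-band power
y = np.repeat(np.arange(4), 50)  # transitive, ditransitive, caused-motion, resultative

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)  # chance level is 0.25 for 4 classes
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On real recordings, cross-validated accuracy reliably above the 25% chance level would be the kind of evidence behind the construction-specific signatures the study reports.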
Crucially, both the timing and the similarity structure of these neural patterns mirrored those of the internal representations that spontaneously develop in artificial language models, including both recurrent neural networks (RNNs) and modern transformer architectures. This convergence suggests that biological and artificial learning systems navigate a similar "Platonic representational space" to discover stable, efficient abstractions for grammar. The findings provide strong empirical support for Construction Grammar theories, which posit that linguistic constructions are stored as integrated form-meaning pairs. This work bridges cognitive neuroscience and AI, offering a new framework for interpreting what large language models like GPT-4 or Llama 3 are actually learning about the structure of human language.
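The brain-to-model comparison described above is, in spirit, a representational similarity analysis (RSA): build one dissimilarity matrix over the four constructions from EEG patterns, build another from a language model's hidden states, and correlate the two structures. The sketch below illustrates only that logic; the placeholder vectors, the 768-dimensional hidden size, and the correlation-distance metric are assumptions, not details from the study.

```python
# Hypothetical sketch of the representational-similarity comparison: one
# dissimilarity matrix per system over the 4 constructions, then a rank
# correlation between them. All vectors here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
labels = ["transitive", "ditransitive", "caused-motion", "resultative"]

# Placeholder mean pattern per construction for each system.
eeg_means = rng.normal(size=(4, 64))   # e.g. channel-wise alpha power, averaged
llm_means = rng.normal(size=(4, 768))  # e.g. final-token transformer states

# Representational dissimilarity matrices (condensed upper triangles).
eeg_rdm = pdist(eeg_means, metric="correlation")
llm_rdm = pdist(llm_means, metric="correlation")

# A high rank correlation would mean the two systems arrange the four
# constructions similarly in their respective representational spaces.
rho, p = spearmanr(eeg_rdm, llm_rdm)
print(f"RSA Spearman rho = {rho:.2f} (p = {p:.3f})")
```

With only four constructions each RDM has just six unique entries, so in practice one would typically run such a comparison over individual sentences or time points rather than four condition averages.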
- EEG data from 10 humans showed distinct neural signatures for 4 grammatical constructions, peaking at sentence-final words in the alpha band.
- The similarity structure of these brain patterns closely matched that of the internal representations learned by transformer and RNN language models.
- The results support Construction Grammar theory and suggest a shared "Platonic representational space" guides efficient language learning in both brains and AI.
Why It Matters
The study provides a neuroscientific foundation for understanding AI language models and validates them as tools for studying human cognition.