Research & Papers

Left-right asymmetry in predicting brain activity from LLMs' representations emerges with their formal linguistic competence

New research suggests that the internal representations of large language models are beginning to mirror human brain activity.

Deep Dive

A new study shows that as large language models (LLMs) like OLMo-2 7B acquire formal grammar, their internal activations become better predictors of human brain activity, specifically in the left hemisphere. This left-right asymmetry emerges alongside a model's ability to judge grammatical correctness, not alongside its reasoning or world knowledge. The finding, replicated with Pythia models and in French, suggests these models are developing a processing signature similar to the human brain's left-lateralized language network.
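The underlying method in studies like this is an encoding model: a regularized linear regression is fit from model activations to recorded brain responses, and held-out prediction accuracy is compared across regions. The sketch below is a minimal, self-contained illustration of that idea with synthetic data, not the paper's actual pipeline; the feature dimensions, voxel counts, and the simulated "left vs. right hemisphere" signal structure are all assumptions made for the demo.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def encoding_score(X_tr, Y_tr, X_te, Y_te, alpha=1.0):
    """Mean Pearson correlation between predicted and held-out responses."""
    W = ridge_fit(X_tr, Y_tr, alpha)
    pred = X_te @ W
    rs = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y_te.shape[1])]
    return float(np.mean(rs))

rng = np.random.default_rng(0)
n_tr, n_te, d, v = 200, 50, 32, 10

# Hypothetical stand-in features: rows = stimulus presentations,
# columns = LLM hidden-state dimensions (real studies extract these per layer).
X = rng.standard_normal((n_tr + n_te, d))

# Simulate "left hemisphere" voxels as linearly driven by the features,
# and "right hemisphere" voxels as unrelated noise -- mimicking the asymmetry.
W_true = rng.standard_normal((d, v))
Y_left = X @ W_true + 0.5 * rng.standard_normal((n_tr + n_te, v))
Y_right = rng.standard_normal((n_tr + n_te, v))

score_left = encoding_score(X[:n_tr], Y_left[:n_tr], X[n_tr:], Y_left[n_tr:])
score_right = encoding_score(X[:n_tr], Y_right[:n_tr], X[n_tr:], Y_right[n_tr:])
```

With this setup the left-hemisphere score is high and the right-hemisphere score hovers near zero; in the actual study, the analogous gap between hemispheres is what grows as the model's formal linguistic competence develops.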

Why It Matters

This provides a new, measurable link between AI architecture and human cognition, potentially guiding more brain-like AI development.