Linguistic properties and model scale in brain encoding: from small to compressed language models
A new study finds that you don't need a massive AI model to study how the brain processes language.
Research shows that a 3-billion-parameter language model predicts brain activity recorded while people listen to stories as well as models of 14 billion parameters or more. Surprisingly, compressing these models through quantization or pruning often preserves this brain alignment, even when it degrades performance on standard language benchmarks. This points to a scale threshold for brain-relevant representations and suggests compact models are sufficient for neuroscience applications, challenging the assumption that ever-larger models are needed.
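To make "predicts brain activity" concrete, the sketch below shows one common way such brain alignment is measured: fit a voxelwise encoding model (here, ridge regression) from a language model's hidden states to fMRI responses, then score held-out predictions with Pearson correlation. This is a minimal illustration, not the study's actual pipeline; the model name (gpt2 as a stand-in for a ~3B-parameter model), the mean-pooled hidden states, and the placeholder fMRI array are all assumptions.

```python
# Minimal voxelwise encoding sketch: language-model features -> fMRI responses.
# Assumptions: gpt2 stands in for a ~3B-parameter model; the fMRI data here is
# random placeholder data with shape (n_segments, n_voxels).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"  # stand-in; swap in the model under study

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(segments, layer=-1):
    """Mean-pool one layer's hidden states for each story segment."""
    feats = []
    for text in segments:
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = model(**inputs).hidden_states[layer]  # (1, seq_len, dim)
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

# Placeholder data: story segments and simulated voxel responses.
segments = ["Once upon a time ...", "The traveller reached the river ..."] * 50
bold = np.random.randn(len(segments), 1000)  # stand-in for real fMRI data

X = embed(segments)
X_train, X_test, y_train, y_test = train_test_split(
    X, bold, test_size=0.2, random_state=0
)

# Encoding model: ridge regression from hidden states to all voxels at once.
enc = Ridge(alpha=1.0).fit(X_train, y_train)
pred = enc.predict(X_test)

# Brain alignment score: mean Pearson correlation across voxels on held-out data.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(bold.shape[1])]
print(f"mean voxelwise correlation: {np.nanmean(r):.3f}")
```

Comparing this correlation score across models of different sizes, or before and after quantization or pruning, is how a study like this can ask whether compression changes brain alignment.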
Why It Matters
This could make brain-aligned AI research more accessible and efficient by reducing its computational cost.