Training-Driven Representational Geometry Modularization Predicts Brain Alignment in Language Models
Scientists discover a key link between how AI learns language and how the human brain processes it.
Researchers tracked how language models ranging from 70 million to 1 billion parameters develop during training. They found that the models' internal layers self-organize into distinct modules. The simpler, more stable module better predicts recorded brain activity in human language regions. This alignment emerges quickly in temporal brain areas but is delayed and more dynamic in frontal regions. The findings suggest that a process called 'representational smoothing' helps AI process language more like the brain does.
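The summary does not spell out the analysis pipeline, but studies of this kind typically rest on two standard ingredients: a representational-similarity measure (such as linear CKA) to group a model's layers into modules by the geometry of their activations, and a cross-validated ridge "encoding model" that scores brain alignment by how well a layer's activations linearly predict brain responses. The sketch below is a minimal illustration of those two ingredients, not the authors' published method; the array names, the greedy layer-merging rule, and the CKA threshold are all illustrative stand-ins.

```python
# A minimal sketch (not the authors' pipeline) of two standard analyses
# the summary alludes to: grouping layers into modules by representational
# similarity, and scoring brain alignment with a ridge encoding model.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two stimulus-by-feature
    activation matrices; a common measure of representational similarity."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    xy = np.linalg.norm(X.T @ Y, "fro") ** 2
    xx = np.linalg.norm(X.T @ X, "fro")
    yy = np.linalg.norm(Y.T @ Y, "fro")
    return xy / (xx * yy)

def layer_modules(layer_acts, threshold=0.8):
    """Greedily merge consecutive layers whose geometry stays similar
    (CKA above threshold) into modules; returns lists of layer indices.
    The merging rule and threshold are illustrative assumptions."""
    modules = [[0]]
    for i in range(1, len(layer_acts)):
        if linear_cka(layer_acts[modules[-1][-1]], layer_acts[i]) >= threshold:
            modules[-1].append(i)
        else:
            modules.append([i])
    return modules

def brain_alignment(acts, brain):
    """Encoding-model score: cross-validated ridge regression from layer
    activations (stimuli x units) to brain responses (stimuli x voxels),
    summarized as the mean Pearson r across voxels."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    preds = cross_val_predict(model, acts, brain, cv=5)
    rs = [np.corrcoef(preds[:, v], brain[:, v])[0, 1]
          for v in range(brain.shape[1])]
    return float(np.mean(rs))

# Toy demo with random stand-ins: 200 stimuli, 12 layers of 256 units,
# and 50 "voxels" of simulated brain responses.
rng = np.random.default_rng(0)
layer_acts = [rng.normal(size=(200, 256)) for _ in range(12)]
brain = rng.normal(size=(200, 50))
for module in layer_modules(layer_acts):
    pooled = np.mean([layer_acts[i] for i in module], axis=0)
    print(module, round(brain_alignment(pooled, brain), 3))
```

Run at checkpoints across training, an analysis like this would show modules forming over time and reveal which module's pooled activations best predict responses in a given brain region.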
Why It Matters
The work points to a shared organizing principle linking artificial and biological language processing, and could guide the development of more human-like AI.