Representational drift changes the encoding of fast- and slow-varying natural scene features differently
New research reveals that the brain's drifting neural code changes 5-6x faster for fast local motion than for background scenery.
A University of Chicago research team led by Siwei Wang has published new findings on how the brain's neural representations change over time, a phenomenon called 'representational drift.' Their study, published on arXiv, reveals that this drift affects different visual features at dramatically different rates. Using a contrastive learning approach on neural recordings from mice watching movies, the researchers developed a latent-space embedding that captures how multiple animals encode the same visual stimuli, separating shared stimulus features from individual neural variation.
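The paper's code is not reproduced here, but the cross-animal contrastive idea can be sketched in a few lines of PyTorch: per-animal input heads map each animal's neural responses into a shared latent space, and an InfoNCE-style loss treats two animals' responses to the same movie frame as a positive pair. All names (`SharedLatentEncoder`, `contrastive_loss`), layer sizes, and the specific loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedLatentEncoder(nn.Module):
    """Hypothetical encoder: maps each animal's population response into a
    shared latent space. Per-animal input heads absorb individual differences
    in neuron count and identity; the shared trunk captures stimulus-driven
    structure common across animals."""
    def __init__(self, neurons_per_animal, latent_dim=32):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(n, 64) for n in neurons_per_animal)
        self.trunk = nn.Sequential(nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, responses, animal_idx):
        return self.trunk(self.heads[animal_idx](responses))

def contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style objective: two animals' embeddings of the SAME movie
    frame are positives; embeddings of different frames are negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = (z_a @ z_b.T) / temperature       # pairwise cosine similarities
    targets = torch.arange(z_a.shape[0])       # matching frames on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for recorded population responses.
encoder = SharedLatentEncoder(neurons_per_animal=[120, 95])
resp_a = torch.randn(16, 120)   # animal 0, responses to 16 movie frames
resp_b = torch.randn(16, 95)    # animal 1, responses to the same 16 frames
loss = contrastive_loss(encoder(resp_a, 0), encoder(resp_b, 1))
loss.backward()
```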
The technical advance was applying this trained decoder across sessions separated by 90 minutes, using the drop in decoding performance to measure drift magnitude. The key finding: neural encoding of fast-varying local motion features (changing every 33 ms) drifts 5-6 times faster than encoding of slower-changing background scenery. This suggests the brain keeps representations of persistent environmental features more stable while allowing greater flexibility in how rapid changes are encoded.
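As a hedged illustration of that measurement logic (not the authors' pipeline), the sketch below fits a frame-identity decoder on one session's latent embeddings and reads drift as the accuracy it loses on a later session. The simulated data, the logistic-regression decoder, and the per-frame drift offsets are all assumptions made for the demo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated stand-in data: latent embeddings labeled by movie frame.
# Session 2 shows the same stimulus ~90 minutes later; drift is simulated
# as a random per-frame offset added to the frame centers.
n_frames, latent_dim, reps = 20, 32, 15
centers = rng.normal(size=(n_frames, latent_dim))
drift = 0.8 * rng.normal(size=(n_frames, latent_dim))

def make_session(frame_centers):
    X = np.repeat(frame_centers, reps, axis=0)
    X = X + 0.3 * rng.normal(size=X.shape)      # trial-to-trial noise
    y = np.repeat(np.arange(n_frames), reps)    # which frame was on screen
    return X, y

X1, y1 = make_session(centers)           # session 1
X2, y2 = make_session(centers + drift)   # session 2, drifted code

# Fit a frame-identity decoder on session 1, then test on both sessions;
# the accuracy lost across the 90-minute gap is the drift readout.
decoder = LogisticRegression(max_iter=1000).fit(X1, y1)
acc_within = decoder.score(X1, y1)
acc_across = decoder.score(X2, y2)
print(f"within-session accuracy: {acc_within:.2f}")
print(f"cross-session accuracy:  {acc_across:.2f}")
print(f"drift magnitude (drop):  {acc_within - acc_across:.2f}")
```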
- Neural encoding of fast motion features (changing every 33 ms) drifts 5-6x faster than encoding of slow background features (see the sketch after this list)
- Novel contrastive learning method separates shared stimulus encoding from individual neural variation across animals
- Comparing decoding across sessions 90 minutes apart quantifies the temporal stability of each feature's neural encoding
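To connect the drift measurement to the 5-6x figure, here is a minimal sketch of the rate comparison, assuming drift is measured separately with one decoder per feature class (fast local motion vs. slow background). The accuracy-drop values are placeholders chosen for illustration, not results from the paper.

```python
# Hypothetical per-feature comparison: measure the cross-session accuracy
# drop separately for a fast-motion decoder and a slow-background decoder,
# then compare the rates. Values below are placeholders, not reported data.
drop_fast = 0.42   # accuracy lost by the fast-motion decoder across sessions
drop_slow = 0.08   # accuracy lost by the slow-background decoder
print(f"fast-feature drift is {drop_fast / drop_slow:.1f}x the slow-feature drift")
```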
Why It Matters
This research offers crucial insights for developing biologically plausible AI vision systems that must maintain stable representations over time while processing dynamic visual inputs.