Contribution of task-irrelevant stimuli to drift of neural representations
Research shows task-irrelevant data causes long-term representational drift in both biological and artificial neural networks.
A new study accepted at NeurIPS 2025 provides a systematic analysis of representational drift, the phenomenon where neural network representations gradually change over time even while performance remains stable. Led by researcher Farhad Pashakhanloo, the work specifically investigates how 'task-irrelevant stimuli'—background data that an AI or biological agent learns to ignore in a given context—can create long-term drift in how task-relevant information is represented. Using both theoretical analysis and simulations, the paper demonstrates this effect across multiple learning architectures, including Hebbian-based rules like Oja's rule and Similarity Matching, as well as stochastic gradient descent applied to autoencoders and supervised two-layer networks.
The research establishes a clear quantitative relationship: the rate of representational drift increases with both the variance and the dimensionality of the data in the task-irrelevant subspace. This finding yields qualitatively different predictions about the geometry and dimension dependence of drift than models that attribute drift solely to random Gaussian synaptic noise. By linking the structure of the stimuli, the task definition, and the specific learning rule to the observed drift, the study provides a more unified framework for understanding lifelong learning in adaptive systems.
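The core effect can be illustrated with a minimal simulation, one of the learning rules the paper studies. The sketch below is not the authors' experiment; it is a hedged toy setup with illustrative parameters: Oja's rule is trained on data with one high-variance task-relevant direction plus a task-irrelevant subspace of dimension `d_irr`. The learned weight vector keeps encoding the relevant direction (performance stays stable), yet it fluctuates over time, and the fluctuation is larger when the irrelevant variance is higher.

```python
import numpy as np

def simulate_drift(var_irr, d_irr=20, steps=20000, eta=0.01, seed=0):
    """Toy illustration (not the paper's simulation): Oja's rule on data
    with 1 task-relevant direction and a d_irr-dim irrelevant subspace.
    Returns the mean squared change of the weight vector between
    periodic snapshots, a simple proxy for drift rate."""
    rng = np.random.default_rng(seed)
    d = 1 + d_irr
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    snapshots = []
    for t in range(steps):
        x = np.empty(d)
        x[0] = rng.normal(scale=2.0)  # task-relevant input (variance 4)
        x[1:] = rng.normal(scale=np.sqrt(var_irr), size=d_irr)  # irrelevant
        y = w @ x
        w += eta * y * (x - y * w)    # Oja's rule update
        if t % 1000 == 0:
            snapshots.append(w.copy())
    # Drop the early transient, then measure drift between snapshots.
    diffs = [np.sum((a - b) ** 2)
             for a, b in zip(snapshots[5:], snapshots[6:])]
    return float(np.mean(diffs))

low = simulate_drift(var_irr=0.1)
high = simulate_drift(var_irr=1.0)
# With these (assumed) settings, higher irrelevant variance yields
# measurably larger drift of w, even though w stays aligned with
# the task-relevant direction throughout.
```

Near convergence, the update acts like a noisy restoring force on the weight components in the irrelevant subspace, so their steady-state jitter scales with the irrelevant variance and accumulates across its dimensions, mirroring the paper's qualitative prediction.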
Ultimately, this work bridges neuroscience and machine learning, suggesting that representational drift is not merely noise but a structured signal influenced by an agent's entire learning environment. The findings could pave the way for using drift measurements as a diagnostic tool to reverse-engineer the computational principles at work in both biological brains and artificial neural networks, particularly in continual learning scenarios.
- Task-irrelevant background data causes systematic representational drift in neural networks, even when performance is stable.
- Drift rate increases with the variance and dimensionality of irrelevant data across Oja's rule, Similarity Matching, and SGD.
- Provides a unified framework linking data structure, task, and learning rules to drift in both biological and artificial systems.
Why It Matters
Provides a new lens to diagnose and understand continual learning in AI models and the brain, moving beyond viewing drift as mere noise.