Joint encoding of "what" and "when" predictions through error-modulated plasticity in biologically-plausible spiking networks
A biologically plausible spiking model learns to anticipate the identity, timing, and probability of future events without global error signals.
Neuroscience researchers Yohei Yamada and Zenas C. Chao have published a paper demonstrating how a single population of spiking neurons can learn to make complete predictions about future events. Their model, detailed in arXiv:2510.14382, addresses a key limitation of existing computational models: most predict either what will happen or when it will happen, but not both at once together with probability estimates. The researchers show that a heterogeneous Izhikevich spiking reservoir can acquire, and flexibly maintain, what they term a 'complete prediction object': a joint specification of the identity, timing, and likelihood of future events.
The technical innovation centers on an error-modulated, attention-gated three-factor Hebbian learning rule that operates locally, without requiring biologically implausible global error broadcasts. When tested on tasks that independently manipulate event identity, latency, and probability, the network developed time-locked anticipatory activity whose amplitude scaled with outcome probability. Crucially, the identity and timing components self-organized into near-orthogonal readout subspaces within the same neural population, demonstrating that multidimensional predictive structure can emerge without anatomical modularization. Compared with traditional least-squares readout training, this local gated plasticity supports stable recalibration under nonstationary conditions, suggesting that cortical mixed-selective populations with neuromodulator-gated plasticity may suffice for flexible predictive cognition.
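To make the learning rule concrete, here is a minimal sketch of a generic error-modulated, attention-gated three-factor Hebbian update. This is an illustration of the rule family, not the paper's exact equations: the trace time constant, learning rate, and array sizes are assumed values, and the attention gate is reduced to a single scalar.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 50, 10
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))  # readout weights (assumed shape)

dt = 0.01      # simulation step in seconds (assumed)
tau_e = 0.2    # eligibility-trace time constant in seconds (assumed)
eta = 0.05     # learning rate (assumed)
elig = np.zeros_like(w)

def step(pre_spikes, post_rates, error, attention):
    """One update of an error-modulated, attention-gated three-factor rule.

    pre_spikes : (n_pre,)  0/1 presynaptic spike vector (factor 1)
    post_rates : (n_post,) postsynaptic activity        (factor 2)
    error      : (n_post,) prediction-error signal      (factor 3, modulatory)
    attention  : scalar gate in [0, 1]; 0 switches plasticity off entirely
    """
    global elig
    # Hebbian pre/post coincidences accumulate into a decaying eligibility trace,
    # so the modulatory error can credit recently co-active synapses.
    elig += dt * (-elig / tau_e + np.outer(post_rates, pre_spikes))
    # The weight change is local: each synapse sees only its own trace,
    # its postsynaptic error component, and the global attention gate.
    return eta * attention * error[:, None] * elig

# one illustrative update
pre = (rng.random(n_pre) < 0.2).astype(float)
post = rng.random(n_post)
err = rng.normal(size=n_post)
dw = step(pre, post, err, attention=1.0)
w += dw
```

The key property this sketch preserves is locality: no global error vector is broadcast to every synapse; each weight update depends only on its own eligibility trace, the error at its postsynaptic unit, and a scalar gating signal.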
- Uses an Izhikevich spiking reservoir with three-factor Hebbian learning for biologically plausible prediction
- Simultaneously encodes event identity, timing, and probability in a single neural population
- Demonstrates stable recalibration under changing conditions without global error signals
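The Izhikevich neuron underlying the reservoir can be simulated in a few lines. The sketch below uses the standard regular-spiking parameters (a, b, c, d) and equations from Izhikevich's 2003 model; the input current, step size, and duration are illustrative choices, and the paper's reservoir would use a heterogeneous population of such units rather than a single neuron.

```python
import numpy as np

def simulate_izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25, T=200.0):
    """Euler-integrate one Izhikevich neuron driven by constant current I.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u)
    with reset v <- c, u <- u + d whenever v >= 30 mV.
    Returns the voltage trace (mV) and the list of spike times (ms).
    """
    n = int(T / dt)
    v, u = c, b * c
    spikes = []
    vs = np.empty(n)
    for t in range(n):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record peak, then reset
            vs[t] = 30.0
            v, u = c, u + d
            spikes.append(t * dt)
        else:
            vs[t] = v
    return vs, spikes

vs, spikes = simulate_izhikevich(I=10.0)
```

Heterogeneity in a reservoir comes from drawing (a, b, c, d) differently per neuron, which yields a mix of firing patterns (regular spiking, bursting, fast spiking) from the same two-variable dynamics.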
Why It Matters
Advances neuromorphic AI by showing how complex prediction can emerge from local learning rules, potentially enabling more brain-like artificial intelligence systems.