Multisensory learning recruits visual neurons into an olfactory memory engram
Visual mushroom body cells join olfactory engrams after multisensory training in Drosophila.
A new study from Scott Waddell's group at Oxford reveals how the brain binds sensory features during multisensory learning. In Drosophila, pairing visual (color) and olfactory (odor) cues during appetitive or aversive training significantly boosts memory performance compared with single-sense training, even when flies are later tested with only one of the two cues. Using temporal control of neuronal function and synapse-level connectomics, the team found that visually selective mushroom body Kenyon cells (KCs) are recruited into the olfactory memory engram after multisensory training, and that these KCs become required for the enhanced recall of both modalities.
The mechanism hinges on serotonergic DPM neurons, whose processes span the modality-selective KC streams. DPM transmission is specifically required during multisensory memory formation, not during single-sense learning. Downstream, dopamine signaling through the DopR1 receptor in APL neurons relieves GABAergic inhibition, allowing a ‘bridging microcircuit’ between the KC streams to operate. The result: the olfactory engram expands to include visual KCs, so a single sensory feature (e.g., an odor) can retrieve the full multimodal memory. The work, published in Nature (Okray et al., 2023), provides the first detailed neural circuit model of cross-modal memory binding, with direct implications for multimodal AI learning architectures.
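To make the gating logic concrete, here is a minimal rate-model sketch in Python. It is an illustration, not the paper's model: the population sizes, the Hebbian update rule, and the boolean gates standing in for DPM transmission and APL disinhibition are all simplifying assumptions of this write-up.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OLF = N_VIS = 100  # toy olfactory / visual Kenyon cell (KC) populations

odor_kcs = (rng.random(N_OLF) < 0.1).astype(float)   # sparse odor-evoked KC pattern
color_kcs = (rng.random(N_VIS) < 0.1).astype(float)  # sparse color-evoked KC pattern

# Cross-modal weights from olfactory onto visual KCs; silent before training.
W = np.zeros((N_VIS, N_OLF))

def train(W, odor, color, dpm_active, apl_disinhibited, lr=1.0):
    """Hebbian-style binding: visual KCs are wired into the olfactory engram
    only when the DPM bridge transmits AND APL inhibition is released
    (standing in for the DopR1-dependent step)."""
    if dpm_active and apl_disinhibited:
        W += lr * np.outer(color, odor)  # in-place update of the bridge weights

def recall_visual(W, odor, threshold=0.5):
    """Present the odor alone; return which visual KCs it now reactivates."""
    return (W @ odor) > threshold

# Multisensory training: both gates open, so binding occurs.
train(W, odor_kcs, color_kcs, dpm_active=True, apl_disinhibited=True)
print("visual KCs recruited into olfactory engram:", recall_visual(W, odor_kcs).sum())
```

In this toy model, single-sense training (either gate closed) leaves W at zero and an odor cue recruits no visual KCs; after multisensory training, the same odor cue reactivates the color-responsive cells, mirroring the engram expansion described above.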
- Fruit flies trained with paired color and odor cues show a twofold memory improvement over single-sense training.
- Visual Kenyon cells become essential for recall of both visual and olfactory memories after multisensory learning.
- Serotonergic DPM neurons and DopR1 signaling in APL neurons are required specifically during multisensory memory formation.
Why It Matters
This neural ‘engram expansion’ mechanism could inspire more robust multimodal AI systems that bind sensory inputs like vision and smell.
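As a loose analogy (an assumption of this summary, not a result from the paper), a classic Hopfield-style associative memory exhibits the same retrieval property: store one bound "visual + olfactory" pattern, then cue with the olfactory half alone and the full memory is reinstated.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 200                                  # units: first half "visual", second half "olfactory"
pattern = rng.choice([-1, 1], size=D)    # one bound multimodal memory

# Classic Hopfield outer-product storage, self-connections zeroed.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Cue with the olfactory half only; the visual half is unknown (zeros).
state = np.zeros(D)
state[D // 2:] = pattern[D // 2:]

for _ in range(5):                       # synchronous recall updates
    state = np.sign(W @ state)
    state[state == 0] = 1.0              # break ties deterministically

print(f"recovered from odor-only cue: {(state == pattern).mean():.0%}")
```

The point of the analogy: once the two modalities are bound in a single weight matrix, a partial, single-modality cue suffices to reinstate the full multimodal state, which is the behavioral signature the fly experiments report.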