[R] First open-source implementation of Hebbian fast-weight write-back for the BDH architecture
Open-source code enables AI models to rewrite their own weights during inference, achieving up to 99% accuracy on synthetic recall tasks.
An independent researcher has implemented and open-sourced a key component of the BDH (Dragon Hatchling) architecture that was missing from public releases: Hebbian synaptic plasticity, in which the model rewrites its own decoder weights during inference, using sparse activation codes as memory addresses. On synthetic n-back associative recall tasks, the system reached 99.0% accuracy on n2, 98.0% on n4, and 97.5% on n8, compared with roughly 1% for a baseline model without write-back capability.
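The post does not show the repository's code, but the mechanism it describes (a Hebbian outer-product write to fast weights, addressed by a sparse activation code) can be sketched in a few lines of PyTorch. The function names, signatures, and hyperparameters below are illustrative assumptions, not the released API:

```python
import torch

def hebbian_write(fast_W, x_sparse, y, lr=0.1, decay=0.99):
    """One Hebbian fast-weight update (illustrative sketch, not the repo's API).

    fast_W:   (d_out, d_in) fast episodic weights, updated during inference
    x_sparse: (d_in,) sparse activation code; its nonzero entries act as addresses
    y:        (d_out,) post-synaptic activity to associate with those addresses
    """
    fast_W.mul_(decay)  # slowly forget old episodic traces
    # Hebbian outer-product write: columns where x_sparse is zero are untouched,
    # so the sparse code behaves like a memory address
    fast_W.add_(lr * torch.outer(y, x_sparse))
    return fast_W

def decode(slow_W, fast_W, x_sparse):
    # the forward pass reads through the sum of slow and fast weights
    return (slow_W + fast_W) @ x_sparse
```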
The release also includes a selective consolidation mechanism that preserves learned information when transferring from fast episodic weights to slow permanent weights: dense write-back degraded accuracy to 75-89%, while selective write-back (targeting only the top 10% most active rows) maintained 96-97%. The researcher fixed five critical bugs to make the system work and has released the code under the Apache 2.0 license, though the 25M-parameter model has so far been tested only on synthetic tasks, not yet validated on natural-language data such as FineWeb-Edu.
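Again as a hedged sketch rather than the author's implementation: selective consolidation can be read as folding only the most active fast-weight rows into the slow weights and then clearing those rows, matching the reported "top 10% most active rows" criterion. The activity statistic, transfer rate, and names below are assumptions:

```python
import torch

def selective_consolidate(slow_W, fast_W, row_activity, frac=0.10, rate=0.5):
    """Fold only the most active fast-weight rows into the slow weights,
    then clear them (names and signature are illustrative assumptions).

    row_activity: (d_out,) running measure of each row's activation
    frac:         fraction of rows to consolidate (top 10% per the post)
    rate:         how much of the episodic trace to transfer
    """
    k = max(1, int(frac * slow_W.shape[0]))
    top = torch.topk(row_activity, k).indices  # indices of the most active rows
    slow_W[top] += rate * fast_W[top]          # consolidate episodic memory
    fast_W[top] = 0.0                          # free the fast-weight slots
    return slow_W, fast_W
```

In this sketch, a dense variant is the same call with frac=1.0, which, per the reported numbers, is the regime that degraded accuracy to 75-89%.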
- First open-source implementation of Hebbian fast-weight write-back for the BDH architecture, achieving up to 99% accuracy on n-back tasks
- Selective consolidation preserves 96-97% accuracy by writing back only the top 10% most active rows
- 25M parameter model demonstrates biologically-inspired learning where AI updates its own parameters during inference
Why It Matters
Enables AI systems that learn continuously during operation, moving toward more efficient, brain-like machine learning architectures.