[P] Reproducing Google’s Nested Learning / HOPE in PyTorch (mechanism-faithful implementation + reproducible tooling and library)
Community-built PyTorch implementation of Google's Nested Learning paper passes 600 GitHub stars and adds production-ready tooling.
The AI research community now has an open-source PyTorch reproduction of Google's Nested Learning/HOPE paper: a library called 'nested-learning.' Created by developer kmccleary3301, the project fills the gap left when Google published its continual-learning paper (arXiv:2512.24695) without accompanying code. The library has gained significant traction, passing 600 GitHub stars, and aims to be a mechanism-faithful reproduction of Google's HOPE architecture, which is designed to let models learn continuously without catastrophic forgetting. The implementation includes the core CMS (Continuum Memory System) and the self-modification pathways described in the original paper.
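To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the multi-timescale update idea behind a continuum-memory-style system: parameter groups are treated as nested levels that update at different frequencies, so slower levels consolidate while faster ones adapt. The module and names here (TwoLevelMemory, SLOW_PERIOD) are illustrative assumptions, not the nested-learning library's API.

```python
# Hypothetical sketch of multi-timescale ("nested") updates; NOT the
# nested-learning library's API. Fast parameters update every step,
# slow parameters consolidate at a lower frequency.
import torch
import torch.nn as nn

class TwoLevelMemory(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fast = nn.Linear(dim, dim)  # high-frequency level
        self.slow = nn.Linear(dim, dim)  # low-frequency level

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.slow(torch.relu(self.fast(x)))

model = TwoLevelMemory(dim=32)
opt_fast = torch.optim.SGD(model.fast.parameters(), lr=1e-2)
opt_slow = torch.optim.SGD(model.slow.parameters(), lr=1e-3)
SLOW_PERIOD = 8  # illustrative update frequency for the slow level

for step in range(32):
    x = torch.randn(4, 32)
    loss = (model(x) - x).pow(2).mean()  # toy reconstruction objective
    opt_fast.zero_grad()
    opt_slow.zero_grad()
    loss.backward()
    opt_fast.step()                # fast level: update on every step
    if step % SLOW_PERIOD == 0:
        opt_slow.step()            # slow level: update every SLOW_PERIOD steps
```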
The library now ships professional-grade tooling: a new CLI covering common workflows (nl doctor, nl smoke, nl audit, nl train), cleaner installation via PyPI (pip install nested-learning), and CI/CD automation. The developer notes that full paper-scale training results have not yet been replicated due to computational constraints, but the implementation gives researchers reproducible local workflows and mechanism-level faithfulness to Google's architecture. That enables broader experimentation with continual learning, which targets one of AI's persistent challenges: letting models learn new information without forgetting previous knowledge, moving beyond simple transformer tweaks toward more fundamental architectural solutions.
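For orientation, the install path and CLI commands named above can be exercised as follows; the per-command comments are my reading of the command names, not documented behavior:

```bash
pip install nested-learning   # install from PyPI

nl doctor   # presumably an environment/dependency check
nl smoke    # presumably a quick end-to-end smoke run
nl audit    # presumably a faithfulness audit against the paper
nl train    # launch a (local-scale) training run
```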
- Faithful PyTorch implementation of Google's Nested Learning/HOPE paper with 600+ GitHub stars
- Available on PyPI as 'nested-learning' with CLI tools for doctor, smoke, audit, and train workflows
- Implements HOPE's Continuum Memory System (CMS) and self-modification pathways for continual learning without catastrophic forgetting
Why It Matters
Democratizes access to Google's continual learning research, enabling faster experimentation with architectures that resist catastrophic forgetting.