Panini: Continual Learning in Token Space via Structured Memory
New framework replaces document chunks with semantic networks of QA pairs for more efficient reasoning.
Researchers from UCLA and Meta introduce Panini, a continual learning framework that structures new knowledge into Generative Semantic Workspaces (GSW): networks of question-answer pairs. It achieves 5-7% higher accuracy than RAG baselines across six benchmarks while using 2-30x fewer tokens. Users can deploy open-source pipelines that query these evolving memory states instead of raw documents, reducing irrelevant context and unsupported answers at inference time.
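The core idea, retrieving from a memory of question-answer pairs rather than raw document chunks, can be sketched minimally. This is an illustrative toy, not the actual Panini/GSW implementation: the `QAMemory` class, its token-overlap scoring, and all stored pairs are hypothetical, and a real system would use learned embeddings and structured links between pairs.

```python
from dataclasses import dataclass, field

@dataclass
class QAMemory:
    """Toy memory of (question, answer) pairs, queried instead of raw text."""
    pairs: list = field(default_factory=list)

    def add(self, question: str, answer: str) -> None:
        # Continual learning step: new knowledge lands as a QA pair,
        # extending the memory without any retraining.
        self.pairs.append((question, answer))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Rank stored pairs by naive token overlap with the query
        # (a stand-in for a real semantic similarity score).
        q_tokens = set(query.lower().split())

        def overlap(pair):
            return len(q_tokens & set(pair[0].lower().split()))

        return sorted(self.pairs, key=overlap, reverse=True)[:k]

memory = QAMemory()
memory.add("Who introduced Panini?", "Researchers from UCLA and Meta.")
memory.add("What does Panini store?", "Networks of question-answer pairs.")

# Only the best-matching QA pair is passed as context, instead of a
# full document chunk, which is where the token savings come from.
context = memory.retrieve("Who introduced Panini?", k=1)
```

The token efficiency claim follows from this shape: the context handed to the model is a handful of short QA pairs rather than multi-paragraph chunks.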
Why It Matters
Enables AI systems to learn continuously from new data without retraining, making enterprise knowledge bases more accurate and efficient.