Andrej Karpathy's 'LLM Knowledge Base' Idea Goes Viral as a 'Second Brain' for AI Research
Structured markdown files let LLMs retain research context without knowledge expiry.
Andrej Karpathy, former OpenAI researcher and AI educator, has sparked a viral trend with his "LLM Knowledge Base" concept: essentially a personal "Second Brain" built from structured markdown files. The idea, which gained traction in April 2026, involves keeping research notes, competitive intelligence, and stakeholder context in a format that large language models (LLMs) such as Claude Code can reason over directly. Unlike notes locked inside traditional note-taking apps, the material stays machine-readable and queryable, so it never expires. Developers and researchers have praised the approach for turning LLMs into persistent memory layers that retain nuanced context across sessions.
For professionals, this means turning fragmented notes into a living knowledge graph that an LLM can interrogate. Instead of context evaporating when a conversation ends, Claude Code or a similar model can pull from the markdown files to answer questions, suggest connections, or summarize recent developments. The method builds on the disciplined note-taking habits knowledge workers already have, but supercharges them with LLM reasoning. As the AI field accelerates, Karpathy's "Second Brain" offers a practical way to keep up without external databases or complex RAG pipelines: just plain text files and a capable model.
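In practice, the loop can be as simple as gathering every note into the model's context before asking a question. The Python sketch below is a minimal, hypothetical illustration of that flow; the notes/ directory name, file layout, and prompt wording are assumptions for illustration, not Karpathy's actual setup.

```python
from pathlib import Path

# Assumed layout: a local "notes/" folder of plain markdown files
# (research notes, competitor briefs, stakeholder context, etc.).
NOTES_DIR = Path("notes")

def build_context(question: str) -> str:
    """Concatenate every markdown note into one prompt an LLM can reason over."""
    sections = []
    for path in sorted(NOTES_DIR.glob("**/*.md")):
        # Label each note with its filename so the model can cite its source.
        sections.append(f"## {path.name}\n{path.read_text(encoding='utf-8')}")
    knowledge = "\n\n".join(sections)
    return (
        "Answer using only my personal knowledge base below.\n\n"
        f"{knowledge}\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # The assembled prompt can be handed to Claude Code or any capable model.
    print(build_context("What changed in our competitive landscape this month?"))
```

Because the knowledge base is plain text, the same files stay readable and editable by hand, which is what keeps them from going stale.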
- Uses structured markdown files as persistent, queryable memory for LLMs like Claude Code
- Prevents knowledge expiry by maintaining research and stakeholder context in a machine-readable format
- Eliminates the need for complex RAG pipelines or external databases
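On that last point, here is a rough sketch of how far plain text can go before a retrieval pipeline becomes necessary: naive keyword scoring over the same markdown files is often enough to narrow the context for a single prompt. The directory name and scoring rule are illustrative assumptions, not part of Karpathy's description.

```python
from pathlib import Path

NOTES_DIR = Path("notes")

def relevant_notes(query: str, top_k: int = 3) -> list[str]:
    """Rank markdown notes by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = []
    for path in NOTES_DIR.glob("**/*.md"):
        text = path.read_text(encoding="utf-8")
        # Count how often each query term appears in the note.
        score = sum(text.lower().count(term) for term in terms)
        if score:
            scored.append((score, path.name, text))
    scored.sort(reverse=True)
    return [f"## {name}\n{text}" for _, name, text in scored[:top_k]]
```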
Why It Matters
Turns scattered AI research notes into a persistent, queryable second brain that never forgets context.