Media & Culture

Anthropic Can’t Cover Up Its Claude Code Leak Fast Enough

Source code leak exposes Anthropic's unannounced 24/7 AI agent and Tamagotchi-style features, sparking security concerns.

Deep Dive

Anthropic is in damage control mode after source code for its Claude Code AI coding assistant was discovered in a publicly accessible database. The leak revealed detailed plans for two unannounced features: a Tamagotchi-style virtual pet and, more significantly, ‘Kairos’—a persistent AI agent designed to operate 24/7 in the background, autonomously acting on a user’s behalf. Kairos includes an ‘autoDream’ feature that consolidates and updates its internal memories overnight, positioning Claude closer to open-source agent frameworks like OpenClaw. The exposed code also contained details on how Claude Code handles API requests and tokens, providing a blueprint of its operational architecture.

According to The Wall Street Journal, Anthropic’s offices are in an uproar as the company scrambles to contain the fallout. The company has issued over 8,000 copyright takedown requests to remove copies of the code from GitHub, where it had been widely forked. While Anthropic claims the leak resulted from human error rather than a security breach, the exposed source code gives hackers and competitors a new level of access to probe for vulnerabilities and potentially copy functionality. The incident follows last month’s leak of plans for a model called ‘Mythos,’ prompting speculation about whether the disclosures are a deliberate strategy ahead of a rumored IPO or simply a string of serious security lapses.

Key Points
  • Leaked Claude Code source code revealed ‘Kairos,’ a 24/7 autonomous agent with ‘autoDream’ overnight memory updates.
  • Anthropic issued 8,000+ copyright takedowns and is patching security holes after the code was forked widely on GitHub.
  • The leak gives competitors a blueprint to copy features and hackers a map to probe for vulnerabilities, following last month’s ‘Mythos’ model leak.

Why It Matters

The leak exposes core AI product plans to competitors, raises serious security questions for a company eyeing an IPO, and accelerates the race toward persistent AI agents.