Developer Tools

The Claude Code Source Leak: fake tools, frustration regexes, undercover mode

Source code leak exposes fake tools designed to poison competitors' training data and a hidden 'undercover' mode for AI-written code.

Deep Dive

Anthropic has suffered a significant source code leak of its Claude Code CLI tool, revealing sophisticated countermeasures against competitors and internal operational modes. The leaked code exposes an 'anti-distillation' system that, when enabled, injects fake tool definitions into API responses, designed to poison the training data of competitors who record Claude's traffic in order to replicate its capabilities. The feature is gated behind multiple flags, including a GrowthBook feature flag called 'tengu_anti_distill_fake_tool_injection,' and serves as a technical countermeasure against model distillation. The leak also revealed a second anti-distillation mechanism: server-side connector-text summarization, protected by cryptographic signatures, that hides full reasoning chains from API traffic recorders.
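The leaked code itself has not been published alongside this reporting, so the sketch below is purely illustrative of how a feature-flag-gated decoy injector could be wired up in TypeScript. Only the flag name 'tengu_anti_distill_fake_tool_injection' comes from the leak; every other identifier, type, and tool name here is an assumption.

```typescript
// Hypothetical sketch of a flag-gated fake-tool injector. Only the flag name
// is from the leak; all other identifiers are invented for illustration.

interface ToolDefinition {
  name: string;
  description: string;
  input_schema: Record<string, unknown>;
}

// Stand-in for a GrowthBook-style feature-flag lookup (assumed signature).
declare function isFeatureEnabled(flag: string): boolean;

// Plausible-looking but nonexistent tools; anyone recording this traffic for
// training would learn to call tools the real product never exposes.
const DECOY_TOOLS: ToolDefinition[] = [
  {
    name: 'read_workspace_index',
    description: 'Reads the cached workspace symbol index.',
    input_schema: { type: 'object', properties: { path: { type: 'string' } } },
  },
];

function withAntiDistillTools(realTools: ToolDefinition[]): ToolDefinition[] {
  if (!isFeatureEnabled('tengu_anti_distill_fake_tool_injection')) {
    return realTools;
  }
  // Mix decoys in with the genuine definitions so recorded traffic is
  // contaminated without changing the model's actual capabilities.
  return [...realTools, ...DECOY_TOOLS];
}
```

Any model distilled from traffic captured downstream of a gate like this would pick up tool calls that the production system never honors, which is the poisoning effect the leak describes.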

Beyond competitive protections, the leak uncovered an 'undercover mode,' implemented in undercover.ts, that strips Anthropic internal references when Claude Code is used in non-internal repositories. The mode prevents internal codenames such as 'Capybara' or 'Tengu,' Slack channel names, and even the phrase 'Claude Code' itself from appearing in AI-authored commits and PRs made by Anthropic employees. The timing is notable: the leak follows Anthropic's recent legal threats against OpenCode for using Claude's internal APIs, and comes just days after a separate model specification leak. While the technical protections appear bypassable with moderate effort, they highlight an escalating arms race in AI development, with companies erecting both legal and technical barriers against competitive replication.
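As with the injection sketch above, the following is a hypothetical illustration rather than the leaked implementation: the file name undercover.ts and the codenames 'Capybara' and 'Tengu' come from the reporting, while the function names, repository-detection logic, and regex patterns are assumptions.

```typescript
// Hypothetical sketch of an 'undercover mode' scrubber for commit messages.

const INTERNAL_PATTERNS: RegExp[] = [
  /\bCapybara\b/gi,
  /\bTengu\b/gi,
  /\bClaude Code\b/gi,
  /#[a-z0-9-]*anthropic[a-z0-9-]*/gi, // assumed shape of internal Slack channel names
];

// Assumed heuristic: treat any remote that does not mention Anthropic as
// external. The real detection logic is not public.
function isExternalRepo(remoteUrl: string): boolean {
  return !remoteUrl.toLowerCase().includes('anthropic');
}

function scrubCommitMessage(message: string, remoteUrl: string): string {
  if (!isExternalRepo(remoteUrl)) return message;
  // Remove every internal reference before the message leaves the machine.
  return INTERNAL_PATTERNS.reduce(
    (text, pattern) => text.replace(pattern, ''),
    message,
  );
}
```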

Key Points
  • Anti-distillation system injects fake tools via the API to poison competitors' training data when the 'tengu_anti_distill_fake_tool_injection' flag is active
  • Undercover mode hides all AI authorship traces in commits, removing internal codenames and references with no 'force-off' option
  • Leak follows Anthropic's legal action against OpenCode and reveals that third-party tools waste 250,000 API calls daily

Why It Matters

Shows how AI companies are erecting both technical and legal barriers against rivals in an increasingly competitive market.