Claude Code source code leaked via a .map file published to the npm registry
The full source code for the AI coding assistant was exposed in a file published to the public npm registry.
Anthropic, the AI safety company behind the Claude models, suffered a significant source code leak on March 31, 2026. The full source code for its Claude Code CLI tool was exposed via a .map file inadvertently published to the public npm (Node Package Manager) registry. Source maps like these are typically shipped to aid debugging: they map minified JavaScript back to the original source, and they commonly embed that original source verbatim. In this case, the file contained the complete, unobfuscated TypeScript/JavaScript source of the command-line interface, a key product in Anthropic's suite of developer-focused AI assistants.
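To illustrate why a published .map file can amount to a full source leak, here is a minimal sketch of a version-3 source map. The map object below is a hypothetical example, not Anthropic's actual file: the `sourcesContent` field is where a bundler can embed the complete original source alongside the mapping data.

```javascript
// Hypothetical version-3 source map, as a bundler might emit next to
// a minified build (e.g. cli.min.js.map). All names here are invented.
const sourceMap = {
  version: 3,
  file: "cli.min.js",
  sources: ["src/cli.ts"],
  // sourcesContent carries the ORIGINAL, unminified source verbatim --
  // this is the kind of field that exposes a project's full source.
  sourcesContent: [
    "export function greet(name: string): string {\n" +
    "  return `Hello, ${name}`;\n" +
    "}\n",
  ],
  names: [],
  mappings: "AAAA",
};

// Anyone holding the .map file can dump the original files directly:
for (const [i, path] of sourceMap.sources.entries()) {
  console.log(`--- ${path} ---`);
  console.log(sourceMap.sourcesContent[i]);
}
```

No decompilation or reverse engineering is needed; the original source is simply read back out of the JSON.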
The leaked repository, quickly mirrored on GitHub, reveals the internal architecture of Claude Code, including how it interfaces with Anthropic's APIs, handles prompts, manages context windows, and implements specific coding workflows. Security researchers are analyzing the code for potential vulnerabilities that could be exploited against the live service. For competitors and developers, the leak provides an unprecedented look into the engineering practices and product design of a leading AI company, though it may also expose trade secrets and proprietary algorithms Anthropic intended to keep confidential.
This incident highlights the ongoing security challenges AI companies face as they rapidly deploy tools across multiple distribution channels like npm. It serves as a stark reminder of the risks associated with build artifacts and the importance of rigorous CI/CD pipeline security. The exposure could impact user trust and gives competitors insight into Anthropic's technical roadmap for AI-assisted software development.
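The build-artifact risk described above can be reduced with a simple pre-publish guard. The sketch below assumes the list of files to be published has already been gathered (for instance from npm's dry-run packing output); the file names are hypothetical examples, not Anthropic's actual artifacts.

```javascript
// Sketch of a pre-publish check: refuse to ship if any source maps
// appear in the set of files that would be published. In a real
// pipeline, publishList would come from inspecting npm's dry-run
// pack output; here it is a hypothetical hard-coded example.
function findSourceMaps(publishList) {
  return publishList.filter((file) => file.endsWith(".map"));
}

const publishList = ["dist/cli.min.js", "dist/cli.min.js.map", "package.json"];
const leaked = findSourceMaps(publishList);
console.log(
  leaked.length > 0
    ? `blocked: source maps present: ${leaked.join(", ")}`
    : "ok to publish"
);
```

Wiring a check like this into a prepublish step, alongside an explicit `files` allowlist in package.json, keeps debug artifacts out of published tarballs.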
- Full Claude Code CLI source leaked via npm .map file on March 31, 2026
- Exposes internal API integrations, prompt handling, and proprietary architecture
- Raises significant security and intellectual property concerns for Anthropic
Why It Matters
Exposes proprietary AI tech and security flaws, impacting commercial trust and giving competitors an edge.