Anthropic’s Claude Code gets ‘safer’ auto mode
New 'auto mode' lets Claude Code act independently while blocking risky file deletions and code execution.
Anthropic has introduced a significant new feature called 'auto mode' for its Claude Code AI coding assistant, designed to strike a balance between cautious oversight and unchecked autonomy. The tool allows Claude Code to act independently on users' behalf while flagging and blocking potentially risky actions before execution. This addresses a critical concern with AI agents: given too much freedom, they can perform unwanted actions such as deleting files, sending sensitive data, or executing malicious code.
Auto mode is currently available only as a research preview for Team plan subscribers, with Anthropic planning to expand access to Enterprise and API users in the coming days. The company explicitly warns that the feature is experimental and "doesn't eliminate" risk entirely, recommending that developers test it in isolated environments. This cautious rollout reflects Anthropic's safety-first approach while giving developers a practical tool that reduces the need for constant hand-holding during coding tasks.
The feature represents a notable evolution in how AI coding assistants handle autonomy, moving beyond simple code suggestions toward responsible agentic behavior. By carving out this middle ground, Anthropic aims to make Claude Code more useful for 'vibe coders' who want assistance without micromanagement, while maintaining safeguards against the most dangerous outcomes of AI autonomy in development environments.
- Auto mode lets Claude Code make permission-level decisions while blocking risky actions like file deletion
- Currently in research preview for Team plans, expanding to Enterprise/API users soon
- Anthropic warns it's experimental and recommends use in isolated testing environments
Why It Matters
Enables more autonomous AI coding assistance while reducing risks of data loss or security breaches from unchecked AI actions.