Enterprise & Industry

ChatGPT's new Lockdown Mode can stop prompt injection - here's how it works

New security feature restricts external data access to prevent hackers from stealing confidential information.

Deep Dive

OpenAI has launched Lockdown Mode for ChatGPT Enterprise, Edu, Healthcare, and Teachers plans. This optional security setting limits how ChatGPT interacts with external systems and data to prevent prompt injection attacks, where hackers insert malicious code. It disables risky features like live web browsing and adds Elevated Risk labels to warn users. Workspace admins can control which apps are restricted by the new mode.

Why It Matters

Provides critical security for professionals handling sensitive data, preventing AI from becoming a vector for corporate espionage.