Introducing Lockdown Mode and Elevated Risk labels in ChatGPT
OpenAI's new security features aim to stop hackers from hijacking your AI.
OpenAI has launched 'Lockdown Mode' and 'Elevated Risk' labels for ChatGPT Enterprise and Team users. The features are designed to defend organizations against sophisticated prompt injection attacks and AI-driven data exfiltration attempts. This marks a significant escalation in OpenAI's enterprise security posture, directly addressing one of the most critical vulnerabilities in deploying LLMs at scale. The move follows increasing reports of hackers using clever prompts to manipulate AI models and steal sensitive information.
Why It Matters
This is a major step for secure enterprise AI adoption, directly tackling the top security fear for businesses.