Media & Culture

OpenAI Rolls Out ‘Advanced’ Security Mode for At-Risk Accounts

For journalists and activists, OpenAI now offers an opt-in mode that swaps passwords for physical security keys to protect ChatGPT accounts.

Deep Dive

OpenAI is rolling out Advanced Account Security, a new optional protection tier for ChatGPT and Codex accounts aimed at journalists, elected officials, dissidents, researchers, and other security-conscious users. The feature enforces phishing-resistant authentication by requiring two physical security keys (such as YubiKeys) or passkeys, replacing passwords entirely. It also eliminates email- and SMS-based account recovery; users must instead rely on recovery keys, backup passkeys, or hardware keys. OpenAI has partnered with Yubico to offer discounted YubiKey bundles to ease adoption. The move mirrors Google's Advanced Protection Program, which has existed for nearly a decade, but OpenAI argues that AI accounts now hold deeply personal, high-stakes data, making such safeguards urgent.
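Why are security keys and passkeys "phishing-resistant" where passwords and SMS codes are not? In the FIDO2/WebAuthn protocols they implement, the authenticator's signature covers the origin the browser was actually talking to, so a credential registered for the real site cannot be replayed from a look-alike domain. A minimal Python sketch of that origin-binding idea (a simplification, not the real WebAuthn wire format: HMAC stands in for the authenticator's asymmetric signature, and all names are illustrative):

```python
import hashlib
import hmac

# Stand-in for the private key stored on the hardware security key.
DEVICE_SECRET = b"device-private-key"

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    """Authenticator signs the server's challenge *bound to the origin*
    the browser reports. Real authenticators use ES256 signatures;
    HMAC-SHA256 is a simplified stand-in here."""
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    """Server only accepts an assertion computed over its own origin."""
    expected = sign_assertion(challenge, expected_origin)
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-nonce"

# Legitimate sign-in: the browser reports the genuine origin.
ok = server_verify(challenge, "https://chatgpt.com",
                   sign_assertion(challenge, "https://chatgpt.com"))

# Phishing attempt: the fake site's origin is baked into the signature,
# so the assertion fails verification at the real server.
phished = server_verify(challenge, "https://chatgpt.com",
                        sign_assertion(challenge, "https://chatgpt-login.example"))

print(ok, phished)  # True False
```

Because the user never types a shared secret, there is nothing for a phishing page to capture; the stolen assertion is useless outside the origin it was minted for.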

Beyond authentication changes, Advanced Account Security introduces stricter session management, with shorter sign-in windows and automatic re-authentication prompts. Users receive an alert for every new sign-in and can review active ChatGPT and Codex sessions in a dashboard. Crucially, support teams lose the ability to perform account recovery, blocking social-engineering attacks on help desks. And while any user can opt out of having conversations used for training, Advanced Account Security enables that exclusion by default. Starting June 1, members of OpenAI's Trusted Access for Cyber program must enable the feature or show proof of phishing-resistant enterprise single sign-on, reinforcing that this is the baseline for high-risk use cases.

Key Points
  • Requires two physical security keys or passkeys; passwords are disabled entirely for the account.
  • Eliminates email/SMS recovery and prevents support-portal social engineering attacks.
  • Shorter session windows, alerts on every new sign-in, and conversation-training opt-out enabled by default.

Why It Matters

Brings enterprise-grade phishing resistance to AI accounts, critical for high-risk users like journalists and activists.