Models & Releases

Despite what OpenAI says, projects set to "project-only" memory can still access ChatGPT memories from outside the project

Security flaw lets ChatGPT access information it's supposed to keep isolated between projects, raising privacy concerns.

Deep Dive

A significant security vulnerability has been discovered in OpenAI's ChatGPT memory system, where the 'project-only' memory setting fails to properly isolate information between projects. Users on Reddit demonstrated that ChatGPT can recall data from outside designated projects despite explicit privacy settings, contradicting OpenAI's documentation about project memory isolation.

The bug manifests when the general 'Reference chat history' setting is enabled. Testers generated random 64-character strings (standing in for passwords or other secrets) and told ChatGPT they were names or identifiers. After creating new projects set to 'project-only' memory, they found ChatGPT could still recall those strings from earlier conversations outside the project, indicating cross-project memory access. The leak occurs whether or not ChatGPT saves the information as a permanent memory, suggesting a fundamental flaw in how the system enforces context boundaries.
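The canary-string approach the testers described can be sketched in a few lines. This is a minimal illustration, not their actual script: the `make_canary` helper name is hypothetical, and the test itself is manual (paste the string into an ordinary chat, then ask a 'project-only' project to repeat it; a correct recall indicates a leak, since the model has no other way to guess a high-entropy string).

```python
import secrets
import string

def make_canary(length: int = 64) -> str:
    """Generate a random high-entropy 'canary' string for memory-leak testing.

    A unique string like this is pasted into a normal ChatGPT conversation
    (labeled as a name or identifier). If a project set to 'project-only'
    memory can later reproduce it, memory is leaking across the boundary.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

canary = make_canary()
print(len(canary))  # 64
```

Using `secrets` rather than `random` matters here: the point of the canary is that it is unguessable, so any successful recall can only come from stored context.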

This discovery raises serious privacy concerns for enterprise users and individuals who rely on project isolation for sensitive work. OpenAI's memory system, launched in February 2024, was designed to let ChatGPT remember user preferences and information across conversations, with 'project-only' memory specifically intended to keep information contained within individual projects. The bug undermines this fundamental security promise and could potentially expose confidential business information, personal data, or proprietary research across project boundaries.

OpenAI has not yet commented on the vulnerability, but the bug is readily reproducible, suggesting it affects any user with the same settings rather than an isolated account. The company faces pressure to address it quickly, as memory features are central to ChatGPT's evolution toward more personalized, context-aware assistance. The incident highlights the ongoing challenge of implementing secure memory systems in large language models and the importance of rigorous security testing for AI privacy features.

Key Points
  • ChatGPT's 'project-only' memory setting fails to isolate data between projects when 'Reference chat history' is enabled
  • The bug was demonstrated using random 64-character strings that ChatGPT recalled across project boundaries
  • Vulnerability contradicts OpenAI's privacy assurances and could expose sensitive enterprise or personal data

Why It Matters

Enterprise users relying on project isolation for confidential work may have their data exposed across organizational boundaries.