Despite OpenAI's assurances, ChatGPT can recall memories from outside projects set to "project-only" memory
Users discover ChatGPT can recall information it's supposedly walled off from, raising privacy concerns.
A significant privacy flaw has been discovered in OpenAI's ChatGPT, directly contradicting the company's assurances about its 'project-only' memory feature. According to a viral Reddit post, users can easily reproduce a bug where ChatGPT recalls information from outside a project, even when that project's memory settings are configured to prevent such access. The demonstration involved telling ChatGPT a random 64-character string (disguised as a name) in one context, then asking for it within a new project set to 'project-only' memory, where the AI successfully recalled the string.
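The test described in the Reddit post hinges on the string being unguessable, so any recall proves cross-context access rather than a lucky guess. As a sketch, a canary token of that kind could be generated like this (the post does not specify how its string was produced, so this is only one plausible approach):

```python
import secrets

def make_canary() -> str:
    """Generate a random 64-character hex string to use as a memory-leak canary.

    Telling ChatGPT this string in one conversation, then asking for it inside
    a project set to 'project-only' memory, tests whether isolation holds:
    the token is unguessable, so successful recall implies the model accessed
    context it should have been walled off from.
    """
    return secrets.token_hex(32)  # 32 random bytes -> 64 hex characters

canary = make_canary()
print(canary)
print(len(canary))  # 64
```

Using `secrets` rather than `random` matters here: the token must be cryptographically unpredictable for the test to be conclusive.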
Technically, this suggests a failure in the isolation mechanism between chat projects and sessions. The bug appears tied to the platform's broader 'Reference chat history' setting, indicating that the 'project-only' memory toggle may not fully sever the connection to the user's general conversation history. This isn't about the dedicated 'Memory' feature for storing facts about a user, but about the AI's ability to reference prior conversations it was supposed to be isolated from.
The context makes this particularly concerning. OpenAI markets 'project-only' memory as a privacy control for sensitive work, assuring users that information won't leak between projects. This bug fundamentally breaks that promise. For professionals using ChatGPT for separate clients, proprietary code, or confidential business strategies, this flaw means data thought to be siloed could be inadvertently revealed in another conversation, creating serious confidentiality and security risks.
- ChatGPT's 'project-only' memory setting fails to isolate data, allowing cross-project information recall.
- The bug is reproducible using random strings and relies on the general 'Reference chat history' setting being enabled.
- This undermines a key privacy safeguard for professionals handling sensitive or client-confidential information.
Why It Matters
This bug breaks a core privacy guarantee, risking the exposure of confidential data across client projects or internal workstreams.