How to audit what ChatGPT knows about you - and reclaim your data privacy
900M weekly users risk exposing personal data to AI models
ZDNET's Erin Carson outlines five concrete steps ChatGPT's 900 million weekly users can take to reduce personal data leakage. First, opt out of model training by navigating to Settings > Data controls > Improve the model for everyone and toggling the setting off, or by using OpenAI's privacy portal. Second, delete old chats (individually or all at once); this removes them from history, though OpenAI may retain copies for up to 30 days to meet security or legal obligations. Third, use temporary chats, which don't appear in history and aren't used for training, though OpenAI may still hold a copy for 30 days. Fourth, manage saved memories to keep ChatGPT from retaining details such as pets or preferences. Privacy experts caution that personal data could be repurposed for mass surveillance or other unforeseen uses, making these settings critical for professionals concerned about long-term data exposure.
- Opt out of training data via Settings > Data controls or OpenAI's privacy portal
- Delete old chats (individually or all at once) but expect a retention period of up to 30 days
- Use temporary chats to avoid history and training data inclusion
- Manage saved memories so ChatGPT doesn't retain personal details like pets or preferences
Why It Matters
Professionals must proactively manage AI data exposure as models may retain personal details for unknown future uses.