Media & Culture

34.8% of employee AI inputs now contain sensitive data

Sensitive data in AI chats tripled in a year, with 83% of companies lacking technical controls to stop it.

Deep Dive

A new report from Elephas highlights a critical and growing security blind spot in corporate AI adoption. The analysis found that 34.8% of employee inputs into AI tools like ChatGPT now contain sensitive data, a more than threefold increase from 10.7% just last year. This surge in risky behavior is happening largely in the dark, as 83% of companies reportedly have zero technical controls in place to prevent the upload of confidential documents, intellectual property, or personal data.

This lack of governance creates a perfect storm for data breaches and compliance failures. The report notes that over 225,000 ChatGPT credentials have been found for sale on dark web markets, providing a direct pipeline for leaked corporate secrets. Major firms like Samsung, Apple, JPMorgan, and Goldman Sachs have already restricted or banned internal ChatGPT use in response. The risks are compounded by OpenAI's data policies: consumer plan conversations are used for model training by default, authorized reviewers can access chats, and deleted data persists on servers for 30 days.

For professionals in regulated industries, the implications are severe. Uploading client information, protected health information (PHI), or material non-public information (MNPI) into a public AI model can violate attorney-client privilege, HIPAA regulations, and NDAs, creating significant legal and financial liability. The report serves as an urgent call for organizations to implement clear AI usage policies, deploy data loss prevention (DLP) tools, and educate employees on the tangible risks of treating conversational AI as a confidential workspace.
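To make the DLP recommendation concrete, here is a minimal sketch of the kind of pre-submission check such a tool might run on an outbound prompt. The pattern names and regexes are illustrative assumptions, not drawn from the report; production DLP products use far richer detection (classifiers, document fingerprinting, context rules) than a handful of regular expressions.

```python
import re

# Hypothetical patterns a lightweight prompt filter might scan for before
# text leaves the corporate network. Real DLP tools go well beyond regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email address
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),  # key-like token
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data categories found in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def is_safe_to_send(text: str) -> bool:
    """Allow the prompt through only if no sensitive pattern matched."""
    return not scan_prompt(text)
```

A check like this could sit in a browser extension or network proxy, blocking or redacting flagged prompts before they reach a public AI service, which is one way to address the 83% of companies currently operating with no technical controls at all.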

Key Points
  • Sensitive data in AI inputs more than tripled to 34.8%, up from 10.7% in 2023.
  • 83% of companies lack technical controls to prevent confidential data uploads to tools like ChatGPT.
  • Over 225,000 ChatGPT credentials have been sold on dark web markets, exposing corporate data.

Why It Matters

Unchecked AI use creates massive compliance risks for legal, healthcare, and finance, potentially violating HIPAA and NDAs.