OpenAI uses internal version of ChatGPT to identify staffers who leak information: report
The company deploys its own AI to analyze employee data and hunt for press leaks.
Deep Dive
OpenAI has built a specialized, internal version of ChatGPT to monitor its workforce, according to a New York Post report. The tool analyzes employee communications and data to identify staff members who leak confidential information to the press, turning the company's flagship AI product into an instrument of internal security and corporate leak prevention.
Why It Matters
The move sets a precedent for AI-powered workplace surveillance and raises significant ethical questions for the tech industry about employee privacy and oversight.