Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope
Former employee testifies GPT-4 was deployed in India without safety review
Elon Musk's lawsuit seeking to dissolve OpenAI has put the company's safety practices under scrutiny. Testifying in federal court in Oakland, Rosie Campbell, a former member of OpenAI's AGI readiness team, said the company transformed from a research-driven organization into a product-focused one, undermining its original mission of safe AGI development. She pointed to a specific incident in which Microsoft deployed a GPT-4 model in India through Bing before OpenAI's Deployment Safety Board had evaluated it. While the model posed minimal risk, Campbell emphasized the need to set strong precedents as the technology advances. OpenAI's attorneys noted that Campbell considered OpenAI's safety approach superior to that of Musk's xAI.
Separate testimony from former board member Tasha McCauley pointed to deeper governance issues. McCauley described a pattern of CEO Sam Altman misleading the board, including lying about her intentions to remove another member and failing to disclose the decision to launch ChatGPT. These concerns, along with Altman's conflict-averse management style, led the non-profit board to briefly fire him in 2023; the board reversed course after staff sided with Altman and Microsoft intervened. Separately, OpenAI recently hired Dylan Scandinaro from Anthropic as its head of preparedness, with Altman saying the hire would help him "sleep better tonight."
- Former OpenAI employee Rosie Campbell testified the company shifted from a research focus to a product focus, compromising safety
- GPT-4 was deployed in India via Bing before OpenAI's Deployment Safety Board evaluated it
- Former board member Tasha McCauley said CEO Sam Altman misled the board, leading to his brief firing in 2023
Why It Matters
Musk's lawsuit exposes the tension between OpenAI's profit motives and its founding safety mission, and its outcome could set a precedent for AI governance.