Elon Musk Seemingly Admits xAI Has Used OpenAI’s Models to Train Its Own
During cross-examination, Musk says distillation is 'standard practice' for all AI companies.
Elon Musk, founder of xAI, took the witness stand Thursday in his ongoing legal battle against OpenAI and seemed to concede that his company has used OpenAI's models to train its own AI via distillation. During cross-examination, OpenAI attorney William Savitt asked Musk if he knew what distillation is—a technique where a smaller model is trained to mimic a larger, more capable one, making it cheaper and faster. When asked directly whether xAI has done this with OpenAI's models, Musk replied, 'Generally all the AI companies [do that].' Savitt pressed: 'So that's a yes.' Musk responded, 'Partly.' When asked if OpenAI technology had been used in any way to develop xAI, Musk said, 'It is standard practice to use other AIs to validate your AI.' Neither OpenAI nor xAI immediately commented.
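Distillation, as described above, trains a smaller "student" model to match a larger "teacher" model's output distribution rather than hard labels. A minimal sketch of the core loss, using toy logit vectors and illustrative names (nothing here reflects how any named company actually implements it):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between temperature-softened teacher and student
    # distributions: the standard training signal in distillation.
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)  # student's prediction
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy check: a student that matches the teacher exactly incurs zero
# loss; a mismatched student incurs a positive loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))       # ~0.0
print(distillation_loss(teacher, [0.0, 0.0, 0.0]) > 0)
```

Minimizing this loss over many prompts is what makes the student mimic the teacher, which is why API-level access to a capable model can be enough to approximate it, and why labs now restrict such access.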
The exchange highlights the delicate balance between legitimate model benchmarking and unauthorized copying. OpenAI has publicly taken steps to prevent competitors—especially Chinese labs like DeepSeek—from distilling its models. In a February 2026 memo to a House committee, OpenAI said it 'has taken steps to protect and harden our models against distillation,' framing the issue as national security. The Trump administration has also warned US companies about foreign distillation. Meanwhile, US labs have begun cutting each other off: in August 2025, Anthropic blocked OpenAI from accessing its Claude coding models, and later cut off xAI. Musk's admission could fuel further scrutiny of xAI's practices and deepen the rivalry between the two companies as they compete for AI dominance.
- Under oath, Musk seemed to concede that xAI has distilled OpenAI's models, saying 'generally all the AI companies' do so and answering 'Partly' when pressed for a yes.
- OpenAI has actively hardened its models against distillation, especially citing threats from Chinese AI labs like DeepSeek.
- Anthropic blocked both OpenAI and xAI from using its Claude models for coding, citing terms of service violations.
Why It Matters
The exchange exposes how thin the line is between accepted AI training techniques, like benchmarking and validation, and unauthorized copying of a rival's models.