Startups & Funding

Elon Musk’s only AI expert witness at the OpenAI trial fears an AGI arms race

Stuart Russell says pursuit of AGI clashes with safety, but judge limits testimony.

Deep Dive

In a pivotal moment of Elon Musk's lawsuit against OpenAI, Stuart Russell—a UC Berkeley computer science professor and longtime AI researcher—testified as the only expert witness directly addressing AI technology. Russell's role was to establish that AI poses genuine dangers, from cybersecurity threats to misalignment and the winner-take-all nature of artificial general intelligence (AGI). He emphasized a fundamental tension between pursuing AGI and ensuring safety. However, OpenAI's attorneys successfully objected to Russell discussing existential threats, limiting his testimony to narrower technical concerns.

Russell's testimony highlights the contradictions at the heart of the AI safety debate: He and Musk both signed the March 2023 open letter calling for a six-month pause in AI research, yet Musk simultaneously launched xAI, his own for-profit AI lab. Meanwhile, OpenAI's founders have similarly warned about AI risks while racing to build AGI and pursuing for-profit structures to fund massive compute needs. The trial now forces the court to weigh how seriously to take safety warnings when the same individuals benefit from the arms race they decry.

Key Points
  • Stuart Russell, UC Berkeley professor and AI expert, testified for Musk in the OpenAI trial, focusing on risks of an AGI arms race and misalignment.
  • The judge limited Russell's testimony on existential threats after OpenAI's objections, restricting it to technical alignment and cybersecurity risks.
  • Russell and Musk both signed the March 2023 pause letter, yet Musk launched the for-profit xAI, illustrating the contradiction between safety advocacy and competition.

Why It Matters

The case tests whether courts can enforce AI safety promises against corporate profit motives in a high-stakes AGI arms race.