AI Safety

The AI x-risk lawsuit waiting to happen

Florida AG investigates OpenAI after ChatGPT linked to fatal shooting...

Deep Dive

David Scott Krueger, a researcher and writer, explores the legal landscape for holding AI developers accountable if their systems cause catastrophic harm, such as an AI going rogue and killing people. He notes that while U.S. laws against reckless endangerment and public nuisance exist, the bar for prosecution is high because courts typically require evidence of repeated, similar dangerous behaviors that have already caused harm. However, Krueger points to a recent case in Florida, where the Attorney General launched a criminal investigation into OpenAI after ChatGPT was used to help plot a shooting at Florida State University that killed two people, as a potential legal precedent.

Krueger argues that AI development practices could be seen as 'doing a shoddy job on a safety-critical system,' akin to building a bridge with subpar materials. He suggests framing AI as an autonomous being not properly controlled, like a dangerous pet, to strengthen the case. While such a lawsuit would be highly unusual, Krueger believes it could be won if the facts about AI risks are widely acknowledged. He also mentions that other countries, like Canada, may be more receptive to public endangerment cases, offering alternative legal avenues for holding AI developers accountable.

Key Points
  • Florida AG launched a criminal investigation into OpenAI after ChatGPT was used in a fatal shooting at Florida State University.
  • Krueger compares AI development to 'shoddy work on a safety-critical system,' like building a bridge with subpar materials.
  • Existing U.S. laws against reckless endangerment and public nuisance may apply, but the legal bar is high because courts typically require a precedent of similar harms.

Why It Matters

This could set a legal precedent holding AI developers criminally liable for future catastrophic harms.