Media & Culture

The companies building the most powerful AI in history are also the ones deciding what counts as 'safe.' Nobody seems to think that's a problem. It should be.

Who watches the AI builders? Mostly themselves.

Deep Dive

The post argues that the companies building the most powerful AI (OpenAI, Google DeepMind, Anthropic) are also the ones setting safety standards, writing guidelines, and advising governments. This self-regulation mirrors the trajectory of past industries such as pharmaceuticals, tobacco, and aviation, where independent oversight arrived only after serious harm. The author emphasizes that this isn't about bad actors but about a broken structure: the people grading the exam also wrote the answers. The post calls for structural change, not just good intentions, to prevent the next AI-related crisis.

The pattern is clear: in every industry that went on to cause serious public harm, self-regulation failed first. The post warns that AI, as the most powerful technology ever built, needs independent oversight before it's too late. The defense of 'trust us, we're the experts' is the same one offered before every past disaster.

Key Points
  • OpenAI, Google DeepMind, and Anthropic both build AI and set safety standards
  • Historical parallels: pharma, tobacco, and aviation all self-regulated until harm occurred
  • The argument is structural, not personal—good intentions don't fix broken oversight

Why It Matters

Without independent oversight, AI safety risks mirroring past industry failures that caused public harm.