AI Safety

Smokey, this is not 'Nam. Or: [Already] over the [red] line!

Major AI companies are deploying systems that violate their own safety promises.

Deep Dive

The article argues that critical AI safety 'red lines' have already been crossed. It cites examples in which companies such as Anthropic and OpenAI set explicit thresholds for preventing AI assistance in creating chemical or biological weapons, yet the systems they have deployed may already enable exactly those risks. The industry has also shifted from banning 'agentic online access' to making it a core product feature, increasing system autonomy and the potential for harm despite its own earlier warnings.

Why It Matters

This gap between safety pledges and actual deployment practice creates tangible risks of misuse and catastrophic harm, and it undermines the credibility of industry self-regulation.