AI Safety

Have we already lost? Part 3: Reasons for Optimism

A 2026 analysis finds new reasons for optimism in the high-stakes AI safety race.

Deep Dive

In a 2026 analysis for the Inkhaven Residency, AI safety researcher LawrenceC directly confronts the community's growing pessimism, arguing that we have not passed a point of no return toward AI doom. The post identifies two 'silver linings' in current fears: rapid AI progress has made catastrophic risks concrete rather than abstract, exemplified by Anthropic withholding its powerful 'Mythos' model over security concerns. Furthermore, rising US-Europe geopolitical tensions mean European nations are now more likely to be 'live players' willing to take drastic action against unchecked US AI development, increasing the number of actors who could enforce safety measures.

LawrenceC then revisits reasons for optimism from 2024 that still hold, including the absence of any major actor deliberately seeking to create a misaligned superintelligence, and sustained public skepticism toward big tech in the US, which supports regulatory measures. New grounds for hope include Anthropic remaining competitively close to OpenAI in model capability, demonstrating that safety-conscious development can keep pace. The enduring and expanded presence of government AI safety institutes (like the UK's AISI) and safety teams at frontier labs suggests that institutional momentum for oversight persists despite industry turmoil.

Key Points
  • Anthropic is withholding its advanced 'Mythos' model from public deployment due to concrete risks of security exploitation, making the dangers less abstract.
  • Geopolitical shifts mean European governments are now more willing to act against unchecked US AI development, creating more 'live players' capable of enforcing safety measures.
  • Key safety institutions, such as the UK's AISI and safety teams at frontier labs, have persisted and expanded since 2024, maintaining a foundation for governance.

Why It Matters

The analysis suggests that concrete, demonstrated risks and new geopolitical dynamics may finally force the serious, coordinated policy action that abstract warnings could not.