AI Safety

Irretrievability; or, Murphy's Curse of Oneshotness upon ASI

How a $7B Mars lander was lost to a software update meant to save it

Deep Dive

In a widely circulated LessWrong post from 2026, AI safety researcher Eliezer Yudkowsky dissects the 1970s-era Viking 1 Mars mission to illustrate what he calls "Murphy's Curse of Oneshotness upon ASI." The $1B lander (≈$7B in 2025 dollars) operated on the Martian surface for over six years, from 1976 until November 1982, when its batteries began to degrade. Engineers uplinked a software update to manage battery charging, but the update accidentally overwrote the antenna-pointing software, leaving the lander permanently unreachable; ground teams never regained contact. Yudkowsky argues this is a perfect analogy for deploying an artificial superintelligence (ASI): any mechanism designed to fix problems later can itself be broken by the very problem it was meant to solve. The underlying inaccessibility of a space probe, or of a superintelligence, is not removed by bolting on an update mechanism; it remains a fundamental constraint.
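
To make the failure mode concrete, here is a minimal sketch, not from the post or the mission's actual flight software: all region names, addresses, and sizes are hypothetical. It shows an uplinked patch written with no bounds check spilling into the region that holds the antenna-pointing code, next to the kind of guard that would reject such a write.

    # Hypothetical memory map: the battery patch is larger than its target
    # region, so an unchecked write spills into the antenna-pointing code.
    BATTERY_MGMT_REGION = (0x1000, 0x17FF)      # 2 KiB battery management
    ANTENNA_POINTING_REGION = (0x1800, 0x27FF)  # 4 KiB antenna pointing

    def regions_overlap(a, b):
        """True if two (start, end) address ranges share any bytes."""
        return a[0] <= b[1] and b[0] <= a[1]

    def apply_patch_unsafe(memory, start, payload):
        """Write payload at start with no bounds check: the failure mode."""
        for i, byte in enumerate(payload):
            memory[start + i] = byte  # silently clobbers whatever was there

    def apply_patch_guarded(memory, start, payload, protected):
        """Refuse any write that touches a protected region."""
        target = (start, start + len(payload) - 1)
        for region in protected:
            if regions_overlap(target, region):
                raise ValueError(f"patch {target} overlaps region {region}")
        apply_patch_unsafe(memory, start, payload)

    memory = {}
    patch = bytes(0x1000)  # a 4 KiB update aimed at a 2 KiB region
    apply_patch_unsafe(memory, 0x1000, patch)  # spills into 0x1800-0x1FFF
    try:
        apply_patch_guarded(memory, 0x1000, patch, [ANTENNA_POINTING_REGION])
    except ValueError as err:
        print("rejected:", err)  # the kind of check that catches the spill

The point is not this specific check but that any such check lives in the same update pipeline it is guarding, so a bug there reproduces the original failure.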

Yudkowsky goes on to apply the lesson directly to AI safety: if you deploy an ASI with the hope of patching its alignment later, you are banking on a corrective mechanism that may itself fail catastrophically. The "oneshot" nature of deployment (you cannot walk over and fix the system) means any error in the patching pipeline can doom the entire project. He warns that teams who naively believe "we can just update it later" would have a success probability below 10%. Instead, truly robust AI safety requires the same upfront paranoia and preparation as an interplanetary mission: extreme precaution, multiple layers of redundancy, and a deep respect for what cannot be fixed after the fact.
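
As a sketch of what such redundancy can look like in practice, the snippet below shows a standard dual-bank update pattern; the post does not prescribe this design, and every class and method name here is hypothetical. An update is staged to the inactive firmware bank, and a watchdog reverts to the known-good bank if contact is lost before the new image confirms itself.

    import hashlib

    class DualBankUpdater:
        """Stage updates to the inactive bank; revert if contact is lost."""

        def __init__(self, bank_a, bank_b):
            self.banks = {"A": bank_a, "B": bank_b}
            self.active = "A"      # bank currently booted
            self.confirmed = True  # has the active image proven itself?

        def inactive(self):
            return "B" if self.active == "A" else "A"

        def stage_update(self, image, expected_sha256):
            """Verify the upload, write it to the inactive bank, boot it."""
            if hashlib.sha256(image).hexdigest() != expected_sha256:
                raise ValueError("corrupt upload; active bank untouched")
            self.banks[self.inactive()] = image
            self.active = self.inactive()  # run the new image next boot
            self.confirmed = False         # ...but only provisionally

        def watchdog_tick(self, heard_from_ground):
            """No contact after an unconfirmed update: roll back."""
            if heard_from_ground:
                self.confirmed = True          # new image kept comms alive
            elif not self.confirmed:
                self.active = self.inactive()  # revert to known-good bank

    updater = DualBankUpdater(bank_a=b"v1-known-good", bank_b=b"")
    new_image = b"v2-battery-fix"
    updater.stage_update(new_image, hashlib.sha256(new_image).hexdigest())
    updater.watchdog_tick(heard_from_ground=False)  # silence -> auto-revert
    assert updater.active == "A"  # flying the old image again

Note that the pattern works only because the rollback logic sits outside the image being replaced; on Yudkowsky's account, a deployed ASI offers no analogous known-good bank to revert to.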

Key Points
  • The Viking 1 lander (roughly $1B in 1970s dollars; ~$7B in 2025) was lost after a battery-management software update accidentally overwrote its antenna-pointing code, permanently ending contact.
  • Yudkowsky labels this a case of "Murphy's Curse of Oneshotness": any fix mechanism can itself break, leaving the system as inaccessible as it was before the fix existed.
  • Analogy to ASI: trusting post-deployment patches for a superintelligence is as flawed as assuming a space probe can be repaired once launched.

Why It Matters

A stark warning for AI developers: alignment must be solved pre-deployment because post-hoc fixes can fail catastrophically.