AI Safety

Bad Problems Don't Stop Being Bad Because Somebody's Wrong About Fault Analysis

A common cognitive mistake is to treat the reasons a problem occurred as excuses to ignore it.

Deep Dive

Linch’s post on LessWrong highlights a recurring logical fallacy in online discourse: treating a descriptive explanation of why a problem occurred as if it answered the normative question of whether the problem is acceptable. Three examples illustrate the pattern. First, a complaint about a misleading headline is deflected by noting that writers don't write their own headlines, which sidesteps the fact that readers were still misled. Second, a friend's concern about a prosaic safety problem at a major AI company going unfixed for months is met with a detailed account of organizational constraints, implying the problem was correctly prioritized away. Third, criticism of a cruise ship that let hantavirus carriers board public flights is countered by pointing out that the WHO lacks legal quarantine authority, yet the danger to public health remains either way.

The core argument is that explanation is not exoneration. Even if the party being blamed couldn't have acted differently, the underlying bad outcome (misleading readers, unaddressed AI risk, potential pandemic) still exists and still needs fixing. Linch notes that this fallacy is especially pernicious in AI safety, where discussions often get bogged down in ‘why the company’s incentives made it impossible’ rather than asking whether the safety measures themselves are adequate. The post cautions that ‘ought implies can’ is only a valid rebuttal when the problem is genuinely impossible to solve, not merely inconvenient. For tech professionals, this is a reminder to keep the focus on outcomes, not organizational excuses.

Key Points
  • The ‘explanation-as-exoneration’ fallacy treats reasons for a problem (e.g., resource constraints, organizational limits) as if they excuse the problem itself.
  • In AI safety, a prosaic bug left unfixed for months due to team priorities is still a risk—regardless of why it wasn't fixed.
  • During the hantavirus outbreak, WHO's lack of quarantine authority didn't make releasing infected passengers onto public flights safe; it just shifted blame without reducing danger.

Why It Matters

For AI safety and tech governance, this fallacy can obscure real risks behind organizational excuses—a dangerous distraction from fixing actual problems.