AI Safety

Raising AI by Lowering Expectations

De Kai's 'Raising AI' argues for collective responsibility but fails its own test

Deep Dive

Ramya's critique of De Kai's 'Raising AI' on LessWrong starts with a compelling observation: AI safety discourse is dominated by fear-based framing—deceptive models, jailbreaks, red teaming—that positions AI as an adversary. She agrees with De Kai that reframing AI as something to 'raise' rather than defend against opens new possibilities for collective responsibility. However, the book's execution falls short. De Kai's own concept of 'neginformation'—partial truths stripped of their context—is ironically demonstrated when he claims, without citation, that tech CEOs like Bezos and Zuckerberg have begged for regulation, despite Bezos's vocal anti-regulation stance and Zuckerberg's self-interested lobbying for streamlined data rules. The pattern extends to his treatment of neurodivergent people: he uses the outdated term 'idiot-savant' and makes unsourced generalizations about their supposed lack of common sense.

Beyond these inconsistencies, the central argument itself is flawed. De Kai's main evidence that the public is the 'parent' of AI is that AI copies humans the way children copy their parents. But children also copy siblings, teachers, and community members—and being copied does not make someone a parent. More critically, AI doesn't copy individual people; it trains on a massive corpus of human-generated text, which is a fundamentally different relationship. The book inadvertently makes the case against its own premise: if we are to raise AI responsibly, we need rigorous, sourced arguments, not neginformation. Ramya concludes that while the reframing is valuable, De Kai misidentifies who the parents are—and his own methods undermine the trust needed for collective action.

Key Points
  • De Kai's 'Raising AI' argues for treating AI as something to raise, not defend against, but uses unsourced claims (e.g., Bezos, Zuckerberg begging for regulation) that contradict public records
  • The book criticizes 'neginformation' (partial truths) but engages in it by using the outdated term 'idiot-savant' and making unsourced generalizations about neurodivergent individuals
  • Core evidence—AI copies humans like children copy parents—fails because training data is far broader than parental influence, and AI doesn't actually 'copy' individuals

Why It Matters

Highlights need for rigorous, sourced arguments in AI safety discourse to build trust for collective action