The gen AI Kool-Aid tastes like eugenics
Director Valerie Veatch's film traces modern AI bias back to Victorian-era race science and statistical modeling.
Director Valerie Veatch's documentary 'Ghost in the Machine' presents a provocative thesis: the racist and sexist outputs of generative AI models like OpenAI's Sora are not bugs, but features with a direct lineage to Victorian-era eugenics. The film argues that to understand why AI systems generate biased content, one must examine the historical foundations of the statistical tools that power them. It traces this lineage from Francis Galton, Charles Darwin's cousin and the originator of eugenics, through his protégé Karl Pearson. Galton's work in multidimensional modeling, which he used to racially categorize women and rate their attractiveness, informed Pearson's development of statistical concepts like logistic regression, a fundamental component of modern machine learning.
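For readers unfamiliar with the technique the film singles out: logistic regression passes a weighted sum of inputs through a sigmoid curve to produce a probability between 0 and 1. The sketch below is purely illustrative (the data and function names are invented for this example) and fits a one-variable model by gradient descent using only the Python standard library:

```python
import math

def sigmoid(z):
    """Squash a real number into the (0, 1) probability range."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit p(y=1|x) = sigmoid(w*x + b) by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # gradient of the log-loss with respect to w and b
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Toy, clearly separable data: negative inputs labeled 0, positive labeled 1.
xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)

print(sigmoid(w * 2.0 + b))   # probability near 1 for a clearly positive input
print(sigmoid(w * -2.0 + b))  # probability near 0 for a clearly negative input
```

The model itself is just arithmetic; the film's argument is that bias enters through what the inputs measure and how the labels were assigned, not through the formula.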
The documentary positions the term 'artificial intelligence' itself as a purposeful marketing obfuscation, coined in 1956 by John McCarthy to secure funding. Veatch contends that the core idea—that human intelligence can be measured and mechanized—stems from the same racist, pseudoscientific belief system that fueled eugenics. The film serves as a direct counter-narrative to the optimistic hype of AI accelerationists, focusing instead on the technology's embedded historical biases and the industry's refusal to adequately address them. It challenges viewers to see past the marketing and understand the material and ideological history shaping our current AI moment.
- Traces AI bias to Victorian eugenicist Francis Galton's racist 'multidimensional modeling' of human features.
- Argues statistical tools like logistic regression, developed by Galton's protégé Karl Pearson, are built on racist pseudoscience.
- Posits 'artificial intelligence' is a meaningless marketing term coined in 1956 to secure research funding.
Why It Matters
Forces a historical reckoning with the data and ideologies baked into foundational AI systems, challenging purely technical fixes for bias.