AI Weekly Issue #483: 100 years from now: The Ghost in the Contract
Major AI companies cap legal liability at $100 while telling users to 'trust the oracle' with critical work.
AI Weekly's investigation reveals a systematic pattern: major AI companies are building legal shields while asking users to surrender critical judgment. Anthropic's Consumer Terms cap total liability at the greater of six months of fees or $100. Microsoft's Copilot Terms of Use state the tool is 'for entertainment purposes only', despite Copilot being sold to enterprises at $30 per seat. Google's Gemini terms provide the service 'as-is' and disclaim all consequential damages, while warning users not to rely on it for professional advice.
This legal framework emerges alongside troubling rhetoric from industry leaders. Nvidia's Jensen Huang has spent two years telling young people to stop learning to code, while Palantir CEO Alex Karp advises abandoning humanities education. The Linux kernel maintainers offer a stark contrast with their new AI policy requiring human accountability for every line of code. Meanwhile, California's SB 1047 safety bill—which would have established liability for catastrophic AI harms—was vetoed by Governor Newsom after intense lobbying from OpenAI, Meta, Google, and Andreessen Horowitz. OpenAI's federal lobbying spending jumped from $260K in 2023 to $1.76M in 2024.
The article draws parallels to the tobacco industry's fifty-year history of suppressing studies and avoiding accountability, suggesting AI companies are following a similar playbook. With every incentive pointing away from accountability and no regulatory framework in place, users are left bearing all risk while being told to trust systems that legally disclaim responsibility for their outputs.
- Anthropic caps liability at the greater of six months of fees or $100; Microsoft labels its enterprise Copilot tool 'for entertainment purposes only'
- Linux kernel requires human accountability for AI-generated code while tech leaders discourage coding education
- California's AI safety bill SB 1047 vetoed after industry lobbying; OpenAI's spending jumped 7x to $1.76M
Why It Matters
Professionals are told to trust AI with critical decisions, yet they bear the full cost of catastrophic errors with no meaningful legal recourse.