Mass Surveillance with LLMs Is the Default Outcome. Contracts Won't Change That.
Contracts with OpenAI or Anthropic can't stop government from buying your private data for AI analysis.
In a viral LessWrong post, AI researcher Logan Riggs presents a stark warning: mass surveillance of Americans by government agencies using large language models (LLMs) is becoming the default outcome, and contractual safeguards from companies like OpenAI or Anthropic are insufficient to stop it. Riggs analyzes a hypothetical 'best-case' contract between OpenAI and the Department of War (DoW), noting that even an airtight agreement with enforcement teeth is only a temporary fix. The government can simply switch vendors to Google's Gemini or xAI's Grok, or, within a year, run capable open-source models like Llama 3 on its own servers, bypassing all commercial safeguards and safety guardrails. Existing precedent already points this way: the DoW has previously purchased commercial location data on Americans without a warrant.
The fundamental legal barrier is the Third Party Doctrine, established by the Supreme Court's 1979 ruling in Smith v. Maryland, which strips privacy protections from data voluntarily shared with third parties—a category that in 2026 covers nearly all digital activity. Riggs argues the current public focus on the Anthropic-DoW story creates a rare political window. The solution is not better contracts but new legislation: specifically, a narrow carve-out prohibiting AI analysis of data obtained without a warrant. He suggests targeting the soon-to-expire Defense Production Act (DPA) as a legislative vehicle, since the deadline for the National Defense Authorization Act (NDAA) has already passed. The post concludes that the contract fight merely buys a year, after which open-source AI models will eliminate the need for any vendor negotiation or oversight, making immediate legislative action critical.
- Under the 1979 Third Party Doctrine, US agencies can legally purchase vast troves of commercial data (location, messages, searches) without warrants and feed it into AI analysis.
- Even strong vendor contracts (e.g., OpenAI-DoW) are temporary; agencies can switch to Gemini, Grok, or self-hosted open-source models within roughly a year, voiding all contractual safeguards.
- The only durable fix is legislation: a narrow carve-out, such as an amendment to the Defense Production Act, banning AI-assisted analysis of warrantlessly obtained data.
Why It Matters
Without new laws, government mass surveillance powered by increasingly capable open-source LLMs could become permanent and unchallengeable.