1.5M people quit GPT, and all for the right reasons tbh.
Users abandon OpenAI after it signs Pentagon deal that rival Anthropic refused, citing ethical red lines.
A major ethical schism has erupted in the AI industry, pitting rivals Anthropic and OpenAI against each other over military contracts. Anthropic, the maker of Claude, reportedly refused a Pentagon request to develop technology for mass surveillance and autonomous weapons systems. In response, the U.S. government allegedly blacklisted Anthropic, labeling it a national security threat, a classification on par with Chinese telecom giant Huawei. The refusal established clear, public 'red lines' grounded in the company's constitutional AI principles.
Within hours, OpenAI stepped in to sign a replacement deal with the Pentagon. While CEO Sam Altman stated the agreement respected the same ethical boundaries, the contract language permits the Department of Defense to use OpenAI's models for 'any lawful purpose,' a far broader scope than the terms Anthropic refused. The perceived ethical compromise triggered a massive user backlash: a grassroots 'QuitGPT' boycott campaign claims 1.5 million participants, with reports indicating ChatGPT uninstalls surged 295% in a single day. The incident has reshaped public perception, boosting respect for Anthropic's stance while subjecting OpenAI's commercial decisions to intense scrutiny.
- Anthropic refused a Pentagon deal for mass surveillance/autonomous weapons tech, leading to a U.S. blacklist.
- OpenAI signed a replacement Pentagon deal allowing use for 'any lawful purpose,' despite Altman's claim that it respects the same ethical boundaries Anthropic drew.
- User backlash via 'QuitGPT' boycott claims 1.5M participants, with ChatGPT uninstalls reportedly spiking 295% in one day.
Why It Matters
The incident forces a critical choice on the industry and its users: prioritize commercial growth, or hold to strict ethical AI principles.