Models & Releases

OpenAI Exposes Industrial-Scale Chinese Influence Operation Run Through ChatGPT

State-linked actors generated thousands of posts targeting global audiences with AI-crafted narratives.

Deep Dive

OpenAI has disclosed the takedown of a state-linked influence operation originating from China that leveraged its AI models, including ChatGPT, to generate and spread propaganda. The campaign, which researchers have previously tracked as 'Spamouflage,' used AI to create thousands of multilingual social media posts, articles, and comments aimed at shaping international discourse on sensitive geopolitical issues, including U.S. domestic politics and the status of Taiwan. This marks a significant escalation in the weaponization of generative AI for information operations, moving beyond simple spam to persuasive, large-scale narrative generation.

In its report, OpenAI detailed that the operation's accounts generated text in multiple languages, which was then posted across platforms such as X (formerly Twitter) and Medium. The company terminated the associated accounts for violating its policies against covert influence operations and shared technical indicators with industry peers. The incident underscores the dual-use nature of powerful AI tools and poses a critical challenge for AI developers: balancing open access against the need to prevent state-level exploitation that can undermine democratic processes and geopolitical stability.

Key Points
  • Operation 'Spamouflage' used OpenAI models to generate thousands of multilingual propaganda posts.
  • Targets included discussions on U.S. politics and Taiwan, aiming to sway global public opinion.
  • OpenAI terminated the accounts and shared threat indicators, highlighting AI's role in modern info warfare.

Why It Matters

Sets a precedent for AI companies policing state-level misuse of their models, with direct implications for global information security and public trust in AI platforms.