Elon Musk’s X Finally Tries to Stop the Epidemic of AI-Generated War Footage
X will suspend creators from its revenue program for 90 days if they post undisclosed AI-generated conflict footage.
X has implemented a new policy aimed at curbing the rampant spread of AI-generated misinformation surrounding the ongoing U.S.-Iran conflict. Announced by Head of Product Nikita Bier, the rule suspends users who share AI-generated videos of armed conflicts without proper disclosure from the Creator Revenue Sharing program for 90 days, with permanent removal for subsequent violations. The platform plans to identify this content through Community Notes and metadata signals from generative AI tools, such as Google's SynthID watermarks. The move comes after the platform was flooded with viral fakes, including images of a downed U.S. pilot and videos of Tel Aviv under rocket fire, which were shared millions of times and even incorrectly verified as authentic by xAI's Grok chatbot.
The policy represents X's first major attempt to financially disincentivize the creation and sharing of hyper-realistic war propaganda, but significant loopholes and unanswered questions remain. It's unclear how large disclosures must be or if they can simply be in a post's text, and the policy currently does not address non-AI fakery like mislabeled real footage or video game clips presented as news. The effectiveness hinges on X's ability to reliably detect AI-generated content and enforce consistent penalties, a challenge given the sophistication of modern tools and the platform's historical struggles with moderation. The announcement signals a reactive step toward platform integrity during geopolitical crises, but its real-world impact on the misinformation ecosystem is yet to be seen.
- Creators posting undisclosed AI war footage face 90-day revenue suspension and permanent bans for repeat violations.
- Detection will rely on Community Notes and AI metadata (e.g., Google SynthID watermarks), but enforcement specifics are vague.
- The policy does not currently address non-AI fakery, such as mislabeled real videos or repurposed video game footage.
Why It Matters
Attempts to weaponize financial incentives against AI disinformation, setting a precedent for platform accountability during crises.