Startups & Funding

X says it will suspend creators from revenue-sharing program for unlabeled AI posts of ‘armed conflict’

X will boot creators from its revenue program for 90 days if they post undisclosed AI-generated war content.

Deep Dive

X (formerly Twitter) has announced a significant new enforcement policy targeting AI-generated misinformation about global conflicts. Head of Product Nikita Bier stated that creators who post AI-generated videos depicting armed conflict without clearly disclosing their synthetic nature will face a 90-day suspension from the platform's Creator Revenue Sharing Program. The policy responds to how easily modern generative AI can produce convincing but false depictions of wartime events, a problem Bier said must be addressed to preserve access to authentic information. Enforcement will rely on a combination of proprietary AI detection tools and Community Notes, the platform's crowdsourced fact-checking system.

This move highlights the growing challenge platforms face in monetizing user-generated content while curbing the spread of harmful synthetic media. The Creator Revenue Sharing Program, which lets popular creators earn a share of ad revenue, has been criticized for incentivizing sensationalist and misleading content. X's new rule is targeted but narrow: it applies only to undisclosed AI content about armed conflicts and is enforced through financial penalties rather than content removal. Critics note that the policy leaves the broader ecosystem of AI-generated political misinformation and commercial deception untouched, raising questions about the scalability and effectiveness of such platform-specific, content-type-limited interventions.

Key Points
  • Creators face a 90-day suspension from X's revenue program for posting undisclosed AI war videos.
  • Enforcement uses AI detection tools and Community Notes, with permanent bans for repeat offenders.
  • The policy is a limited fix: it does not cover AI-generated political misinformation or deceptive product ads.

Why It Matters

It sets a precedent for financially penalizing AI misinformation but exposes the limited scope of current platform enforcement.