Wikipedia bans AI‑generated text in articles, with two narrow exceptions
The Wikimedia Foundation's new policy prohibits AI content but allows limited use for translation and grammar.
The Wikimedia Foundation has enacted a formal policy prohibiting the use of AI-generated text in Wikipedia articles, a significant move to safeguard the encyclopedia's foundational commitment to verifiability and accuracy. The decision stems from the inherent unreliability of large language models (LLMs), which can produce convincing but factually incorrect "hallucinations." The blanket ban covers new article drafts, new sections, and substantial rewrites. The policy reinforces that Wikipedia's content must be based on reliable, published sources, a standard that AI-generated text cannot inherently meet.
However, the policy carves out two specific, narrow exceptions where AI tools are permitted. The first is translating existing, high-quality articles from one language version of Wikipedia to another. The second is using AI to correct spelling and grammar or to improve sentence clarity, without altering factual content. Crucially, in both cases, human editors must take full responsibility for the final output, explicitly disclose the use of AI, and ensure all information remains verifiable against cited sources. This structured approach aims to leverage AI's utility for mechanical tasks while keeping humans accountable for factual integrity.
- Wikipedia's operator, the Wikimedia Foundation, has instituted a formal ban on AI-generated article text to combat misinformation.
- The policy allows only two exceptions: using AI for translating existing articles and for correcting basic spelling/grammar.
- All AI-assisted edits require human editor disclosure and verification against reliable published sources to ensure accuracy.
Why It Matters
This sets a crucial precedent for content platforms, prioritizing human-verified accuracy over AI-scale automation for trusted information.