What Influences Readers' and Writers' Perceived Necessity of AI Disclosure?
727-person study reveals stark gap in AI disclosure expectations
A new study by researchers Jingchao Fang, Victoria Xiaohan Wen, and Mina Lee (accepted to FAccT 2026) investigates a pressing question in the age of generative AI: when should writers disclose their use of AI? In a vignette study with 727 participants, the team manipulated three dimensions: perspective (reader vs. writer), purpose (e.g., creative vs. professional writing), and procedural factors describing how the AI was used (replaceability, effortfulness, intentionality, and directness).
The results reveal clear asymmetries. Readers demand disclosure significantly more than writers, especially when the AI's contribution is irreplaceable (the writer could not have produced the text without AI) and when AI output is incorporated directly with little editing. Surprisingly, the effort the writer invested had no significant effect on perceived necessity of disclosure. The writer's intentionality (whether they deliberately steered the AI) produced opposite reactions: readers saw low intentionality as requiring disclosure, while writers saw high intentionality as requiring disclosure. These findings challenge one-size-fits-all disclosure regulations and suggest that effective AI transparency tools must account for the differing perspectives of readers and writers.
- Readers view AI disclosure as more necessary than writers, across all use cases (N=727 vignette study).
- Disclosure deemed most needed when AI contribution is irreplaceable and directly incorporated into the final text.
- Writer's intentionality has contrasting effects: readers want disclosure when steering is low; writers want it when steering is high.
Why It Matters
These bottom-up perceptions of readers and writers should inform AI disclosure policies and tool design, reducing friction between the two groups.