A Faceted Proposal for Transparent Attribution of AI-Assisted Text Production
A new paper proposes a model for tracking AI's role in writing, from intent to evaluation.
Researcher Geraldo Xexéo has released a 39-page paper proposing a faceted model for transparent attribution of AI-assisted text production, addressing the growing challenge of distinguishing human from machine authorship. The core model introduces three dimensions: Form (how the text is structured), Generation (how AI produced it), and Evaluation (how the output was reviewed). An extended model adds Intent (why the AI was used), Control (who directed the process), and Traceability (audit trails for each intervention). The framework operates at document, chapter, section, and paragraph levels, enabling granular disclosure of AI's role in each segment.
The paper positions this as a minimal operational baseline, designed to be extensible toward higher-fidelity representations. A worked example demonstrates the model's applicability by analyzing the production of the paper itself. This proposal directly challenges current practices where AI use is disclosed vaguely (e.g., 'AI-assisted') without specifying how, where, or to what extent AI intervened. By making attribution structured and auditable, the model could standardize how academic papers, reports, and other professional documents credit AI contributions, potentially influencing publishing norms and intellectual property discussions.
- Proposes a core model with Form, Generation, and Evaluation dimensions for AI attribution
- Extended model adds Intent, Control, and Traceability for higher fidelity
- Applies at document, chapter, section, and paragraph levels
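To make the idea of a structured, per-segment disclosure concrete, the facets above can be pictured as fields of a machine-readable record. The sketch below is a hypothetical illustration, not the paper's actual schema; the class name, field names, and example values are assumptions chosen to mirror the dimensions described above.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical sketch of a structured attribution record.
# Facet fields follow the core model (form, generation, evaluation)
# plus the extended model (intent, control, traceability); all
# concrete values below are illustrative assumptions.

@dataclass
class AttributionRecord:
    scope: str        # granularity: "document", "chapter", "section", or "paragraph"
    target: str       # identifier of the text segment being described
    form: str         # how the text is structured, e.g. "prose"
    generation: str   # how AI produced it, e.g. "llm-draft" or "human-written"
    evaluation: str   # how the output was reviewed, e.g. "human-edited"
    intent: str = ""  # extended model: why AI was used
    control: str = "" # extended model: who directed the process
    trace: list = field(default_factory=list)  # extended model: audit-trail entries

record = AttributionRecord(
    scope="paragraph",
    target="sec2-par1",
    form="prose",
    generation="llm-draft",
    evaluation="human-edited",
    intent="summarize sources",
    control="human-directed",
    trace=["prompt-2024-05-01T10:12", "edit-2024-05-01T10:40"],
)

# Serialize the record so it can accompany the document as auditable metadata.
print(json.dumps(asdict(record), indent=2))
```

A document-level disclosure would then be a list of such records, one per segment, which is what makes the attribution granular and auditable rather than a single vague "AI-assisted" label.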
Why It Matters
Could standardize AI attribution in publishing, replacing vague disclosures with structured, auditable records.