AI to Learn 2.0: A Deliverable-Oriented Governance Framework and Maturity Rubric for Opaque AI in Learning-Intensive Domains
New governance framework tackles the core problem of AI-assisted work: polished outputs that don't prove human understanding.
Researcher Seine A. Shintani has published a new paper titled 'AI to Learn 2.0: A Deliverable-Oriented Governance Framework and Maturity Rubric for Opaque AI in Learning-Intensive Domains.' The work tackles the central problem of 'proxy failure' in AI-assisted environments like education and research, where a polished final artifact (e.g., an essay, report, or analysis) can be useful but fails to serve as credible evidence of the human understanding, judgment, or transferable skills it was meant to cultivate or certify.
The proposed 'AI to Learn 2.0' framework reorganizes existing governance ideas around the final deliverable package. It distinguishes between the 'artifact residual' (the polished output) and the 'capability residual' (the evidence of human learning). The framework is operationalized through a five-part package, a seven-dimension maturity rubric, gate thresholds, and a companion capability-evidence ladder. Crucially, it allows for the use of opaque AI models (like GPT-4 or Claude) during exploration, drafting, and workflow design, but imposes strict requirements on the released work.
The final deliverable must be usable, auditable, transferable, and justifiable without access to the original large language model or cloud API. In learning-intensive contexts, it additionally requires context-appropriate, human-attributable evidence of explanation or skill transfer. The paper demonstrates the framework's application through contrastive case studies, including coursework substitution and a self-hosted lecture-to-quiz pipeline, showing how it separates mere AI substitution from bounded, auditable, and handoff-ready AI-assisted workflows. The framework is positioned as a practical governance instrument for structured third-party review in settings where preserving human capability, accountability, and validity is paramount.
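To make the gate-threshold idea concrete, here is a minimal sketch of how a seven-dimension rubric check might be scored. The dimension names, the 0–5 scale, and the threshold value are hypothetical placeholders chosen for illustration; the paper's actual rubric dimensions and gate criteria may differ.

```python
# Illustrative sketch only: dimension names, scale, and threshold are
# hypothetical placeholders, not taken from the paper.

# Seven placeholder dimension names (hypothetical).
DIMENSIONS = (
    "usability",
    "auditability",
    "transferability",
    "justifiability",
    "provenance",
    "capability_evidence",
    "accountability",
)

GATE_THRESHOLD = 3  # assumed minimum level per dimension on a 0-5 scale


def gate_check(scores: dict) -> list:
    """Return the dimensions that fall below the gate threshold."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    return [d for d in DIMENSIONS if scores[d] < GATE_THRESHOLD]


# Example: a deliverable with a polished artifact but weak evidence of
# human learning fails the gate on that one dimension.
scores = {d: 4 for d in DIMENSIONS}
scores["capability_evidence"] = 1
print(gate_check(scores))  # ['capability_evidence']
```

The point of a per-dimension gate, rather than an averaged score, is that a strong "artifact residual" cannot compensate for a missing "capability residual": every dimension must clear its threshold independently.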
- Addresses 'proxy failure'—the gap between a polished AI-assisted artifact and proof of human understanding or skill.
- Proposes a seven-dimension maturity rubric and requires deliverables to be auditable and justifiable without the original AI model.
- Allows AI use in drafting but mandates human-attributable evidence of learning or explanation in the final product for certification.
Why It Matters
Provides a concrete framework for schools and workplaces to integrate AI tools without compromising the assessment of genuine human skill and understanding.