AI Safety

Teaching Students to Question the Machine: An AI Literacy Intervention Improves Students' Regulation of LLM Use in a Science Task

A brief, two-hour workshop teaching students to question AI outputs led to significantly better performance on science tasks.

Deep Dive

A research team from INRIA and Université de Bordeaux, led by O. Clerc, published findings showing that a brief, scalable AI literacy intervention can significantly improve how middle school students use generative AI. In a controlled study with 116 students (ages 13-15), researchers tested a two-hour workshop that combined technical knowledge about how large language models (LLMs) work and fail with practical guidance on prompting strategies and response evaluation. The intervention group attended this workshop two days before completing six science investigation tasks using a generative AI system, while the control group received no training.

The results were striking: students who received the training showed substantially less uncritical reliance on AI outputs. They were more likely to reformulate queries, ask follow-up questions, and accurately judge response correctness. This translated to better performance on science tasks: trained students outperformed their untrained peers by approximately 40%. Notably, neither self-reported GenAI knowledge nor metacognitive scores predicted performance, suggesting that effective AI use depends more on explicit training in interaction regulation than on students' self-assessed abilities.

This study addresses a critical gap in AI education as generative tools become ubiquitous in classrooms. With classroom time and teacher-training resources constrained, the research demonstrates that even brief, focused interventions can meaningfully improve how students engage with AI systems. The findings have implications for curriculum development, suggesting that AI literacy instruction should emphasize critical evaluation and self-regulation skills rather than technical knowledge alone.

Key Points
  • 2-hour workshop improved student performance by ~40% on AI-assisted science tasks
  • 116 middle school students participated in controlled study with intervention vs. control groups
  • Trained students showed less uncritical reliance, better query reformulation, and more accurate AI response evaluation

Why It Matters

Provides an evidence-based approach to scalable AI literacy education that improves critical thinking skills in young users.