Automatically Inferring Teachers' Geometric Content Knowledge: A Skills-Based Approach
New AI system uses a 33-skill dictionary and LLMs to classify teacher geometry knowledge, validated against 226 expert-annotated responses.
A research team from multiple institutions has developed the first automated system for assessing teachers' geometric content knowledge using large language models (LLMs). The system addresses the scalability problem of traditional Van Hiele model assessment, which requires manual expert analysis of open-ended responses. By collaborating with mathematics education researchers, the team built a structured skills dictionary that decomposes the five Van Hiele reasoning levels into 33 fine-grained skills. This theoretical grounding proved crucial for accurate classification.
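A structure like the one described above can be sketched as a simple mapping from Van Hiele levels to skill lists. This is a minimal illustration only: the level names follow the standard Van Hiele model, but the skill entries are hypothetical stand-ins, not the authors' actual 33-skill dictionary.

```python
# Hypothetical sketch of a skills dictionary decomposing Van Hiele
# levels into fine-grained skills. Skill entries are illustrative,
# not the published 33-skill dictionary.
SKILLS_DICTIONARY = {
    "Level 1: Visualization": [
        "identify a shape by its overall appearance",
    ],
    "Level 2: Analysis": [
        "list defining properties of a figure",
    ],
    "Level 3: Abstraction": [
        "order properties and give informal arguments",
    ],
    "Level 4: Deduction": [
        "construct a formal proof from axioms",
    ],
    "Level 5: Rigor": [
        "compare reasoning across axiomatic systems",
    ],
}

def skills_for_level(level: str) -> list[str]:
    """Return the fine-grained skills listed under a Van Hiele level."""
    return SKILLS_DICTIONARY.get(level, [])
```

Grounding classification in an explicit inventory like this is what lets an LLM justify a level label in terms of concrete, observable skills rather than a holistic judgment.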
Through a custom web platform, the researchers collected 226 responses from 31 pre-service teachers solving geometry problems. Each response was expertly annotated with both Van Hiele levels and demonstrated skills. The team then implemented two classification approaches: retrieval-augmented generation (RAG) and multi-task learning (MTL). In both methods, variants incorporating the skills dictionary significantly outperformed baseline approaches without skills information across multiple evaluation metrics. This work, accepted for publication at AIED 2026, provides a scalable, theory-grounded method that could enable large-scale teacher evaluation and support adaptive professional development systems.
- Created a structured dictionary of 33 fine-grained geometric reasoning skills based on the Van Hiele model
- Collected and expert-annotated 226 responses from 31 pre-service teachers through a custom platform
- Skills-aware AI variants (RAG and MTL) significantly outperformed baselines, enabling scalable teacher assessment
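The skills-aware RAG variant described above can be sketched in miniature: retrieve the annotated responses most similar to a new one, then assemble an LLM prompt that includes both the skills dictionary and the retrieved exemplars. This is a sketch under stated assumptions, not the authors' implementation: bag-of-words cosine similarity stands in for real embedding retrieval, the annotated examples are invented, and the prompt wording is hypothetical.

```python
from collections import Counter
from math import sqrt

def _vec(text: str) -> Counter:
    # Bag-of-words vector; a real system would use learned embeddings.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_examples(response: str, annotated: list[dict], k: int = 2) -> list[dict]:
    """Return the k annotated responses most similar to the new response."""
    q = _vec(response)
    return sorted(annotated, key=lambda ex: _cosine(q, _vec(ex["text"])),
                  reverse=True)[:k]

def build_prompt(response: str, examples: list[dict], skills: list[str]) -> str:
    """Assemble an LLM prompt combining the skills dictionary and exemplars."""
    lines = ["Classify the Van Hiele level of the teacher response.",
             "Skills dictionary: " + "; ".join(skills), ""]
    for ex in examples:
        lines.append(f'Example: "{ex["text"]}" -> Level {ex["level"]}')
    lines.append(f'Response: "{response}" -> Level')
    return "\n".join(lines)
```

In this setup, the baseline variant would simply omit the `skills` line from the prompt; the reported gains come from making the skill inventory available to the model at classification time.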
Why It Matters
Enables large-scale, cost-effective evaluation of teacher expertise, potentially improving geometry instruction quality through personalized professional development.