AI Safety

Policy-Governed LLM Routing with Intent Matching for Instrument Laboratories

EduRouter routes 75% of queries to local models, cutting token costs by 66% while preserving learning.

Deep Dive

AI tutoring in engineering labs must balance assistance with preserving learning opportunities—a tension existing systems often fail to manage. Researchers Emmanuel A. Olowe and Danial Chitnis tackle this with a two-component system: Routiium, an OpenAI-compatible gateway that manages multiple LLM backends with configurable prompt modifications and usage logging, and EduRouter, a policy-aware routing service enforcing per-lab budgets, approval workflows, and embedding-based question matching. The system gives instructors granular control over when and how AI help appears, preventing premature hints that kill productive struggle.
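The paper does not publish EduRouter's internals, but the routing idea described above can be sketched as embedding similarity against a canonical question bank plus a per-lab budget gate. Everything below is illustrative: the toy 3-dimensional embeddings, the 0.85 threshold, and the intent/lab names are assumptions, not the authors' implementation (real systems would use an embedding model).

```python
import math

# Toy embeddings for two canonical lab intents (illustrative only; a real
# deployment would embed the curated question bank with an embedding model).
CANONICAL_INTENTS = {
    "led_iv_curve": [0.9, 0.1, 0.0],
    "rc_time_constant": [0.1, 0.9, 0.2],
}

MATCH_THRESHOLD = 0.85          # assumed similarity cutoff for a canonical hit
LAB_BUDGET_USD = {"led_lab": 1.00}  # hypothetical per-lab budget

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route(query_embedding, lab, spent_usd):
    """Return ('local', intent) on a canonical match; otherwise send the
    query to a premium model if the lab still has budget, else deny."""
    best_intent, best_score = None, 0.0
    for intent, emb in CANONICAL_INTENTS.items():
        score = cosine(query_embedding, emb)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= MATCH_THRESHOLD:
        return ("local", best_intent)       # canonical hit: cheap local model
    if spent_usd < LAB_BUDGET_USD.get(lab, 0.0):
        return ("premium", None)            # novel question within budget
    return ("deny", None)                   # budget exhausted: hold for approval
```

Under this sketch, the 75% local-routing rate in the live test corresponds to three out of four queries landing above the match threshold against the 89-intent bank.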

The system was evaluated via trace-driven simulation of two engineering labs (LED characterization, RC circuit analysis) plus a 100-query replay through live models, and the results are striking. Governed policies raised the challenge-alignment index from 0.90 to 0.98 and overlay-adherence from 0.69 to 0.87. The productive-struggle window, the number of turns before high-scaffold hints appear, jumped from 1.4 to 3.6. In the live test, EduRouter routed 75% of queries to a local model, cutting token costs by 66% ($0.087 vs. $0.26 for all-premium routing) while achieving a 100% canonical hit rate on a curated 89-intent question bank. The researchers have open-sourced Routiium, EduRouter, and the simulator configs to enable replication and further classroom studies, directly addressing the cost and control challenges of deploying LLMs in educational settings.
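As a quick sanity check, the reported replay totals reproduce the headline reduction figure:

```python
# Reported totals for the 100-query live replay.
all_premium = 0.26   # token cost with every query on the premium model
governed = 0.087     # token cost with 75% of queries routed locally

savings = all_premium - governed      # ≈ $0.173 across the replay
reduction = savings / all_premium     # ≈ 0.665, i.e. the ~66% reported
```

So the savings work out to roughly $0.17 over the whole 100-query test, about a sixth of a cent per query at this scale.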

Key Points
  • EduRouter increased the challenge-alignment index from 0.90 to 0.98 and overlay-adherence from 0.69 to 0.87 vs. ungoverned operation.
  • The productive-struggle window expanded from 1.4 to 3.6 simulated turns before high-scaffold hints appeared.
  • Live 100-query test: 75% routed to local model, cutting token costs 66% ($0.087 vs. $0.26) with 100% canonical hit rate.
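The struggle-window result in the second bullet can be caricatured as a turn-gated cap on scaffolding. The threshold and function below are hypothetical, chosen only to mirror the reported averages, not the authors' actual policy:

```python
MIN_STRUGGLE_TURNS = 3  # assumed gate; governed runs averaged 3.6 turns

def allowed_scaffold(turn):
    """Withhold high-scaffold hints until the learner has had
    MIN_STRUGGLE_TURNS turns of productive struggle."""
    return "high" if turn > MIN_STRUGGLE_TURNS else "low"
```

An ungoverned tutor behaves like `MIN_STRUGGLE_TURNS = 1`, surrendering the answer almost immediately; the governed policy holds the high-scaffold tier back for roughly two extra turns.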

Why It Matters

For EdTech and LLM deployers: a practical system to cut costs while enforcing pedagogical policies and preserving learning.