LaTA: A Drop-in, FERPA-Compliant Local-LLM Autograder for Upper-Division STEM Coursework
Run a FERPA-compliant AI grader on a single Mac Studio for 200 students
Researchers at Oregon State University introduced LaTA (LaTeX Teaching Assistant), a drop-in autograder for upper-division STEM courses that runs entirely on local hardware, avoiding the FERPA-compliance and data-exposure risks of third-party API-based AI grading. Designed for the LaTeX-native workflows common in engineering and physics, LaTA uses a four-stage pipeline (ingest, segment, grade, report) with an open-weight chain-of-thought LLM (gpt-oss:120b) to compare student work against instructor-authored reference solutions via YAML rubrics.
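To make the four stages concrete, here is a minimal runnable sketch of such a pipeline. This is an illustration under stated assumptions, not LaTA's actual code: the rubric schema, the `\section{...}` segmentation convention, and the `query_local_llm` stub (a stand-in for a call to the local model) are all hypothetical.

```python
import re

# Hypothetical YAML-style rubric, shown as the equivalent Python dict.
# The real schema used by LaTA may differ.
RUBRIC = {
    "problem_1": {
        "reference": r"\frac{dT}{dt} = -k(T - T_\infty)",
        "items": [
            {"id": "setup", "points": 2, "check": "states the governing ODE"},
            {"id": "solve", "points": 3, "check": "integrates to exponential decay"},
        ],
    },
}

def ingest(latex_source: str) -> str:
    """Stage 1: load the student's LaTeX source (here, passed as text)."""
    return latex_source

def segment(latex: str) -> dict:
    """Stage 2: split the submission into per-problem segments, assuming
    one \\section{...} header per problem (an invented convention)."""
    parts = re.split(r"\\section\{(.+?)\}", latex)
    # re.split with a capturing group yields [preamble, name1, body1, ...]
    names, bodies = parts[1::2], parts[2::2]
    return {n.lower().replace(" ", "_"): b.strip() for n, b in zip(names, bodies)}

def query_local_llm(prompt: str) -> bool:
    """Stage 3 helper: stand-in for a call to the local open-weight model
    (e.g. an HTTP request to a locally served gpt-oss:120b). A keyword
    check keeps the sketch runnable without a model."""
    return "dT" in prompt or "exponential" in prompt

def grade(segments: dict, rubric: dict) -> dict:
    """Stage 3: score each rubric line item against the matching segment."""
    results = {}
    for problem, spec in rubric.items():
        body = segments.get(problem, "")
        for item in spec["items"]:
            prompt = (f"Reference: {spec['reference']}\n"
                      f"Student: {body}\nCheck: {item['check']}")
            earned = item["points"] if query_local_llm(prompt) else 0
            results[f"{problem}/{item['id']}"] = (earned, item["points"])
    return results

def report(results: dict) -> str:
    """Stage 4: render a per-line-item score summary for the student."""
    lines = [f"{key}: {e}/{p}" for key, (e, p) in results.items()]
    total = sum(e for e, _ in results.values())
    possible = sum(p for _, p in results.values())
    return "\n".join(lines + [f"total: {total}/{possible}"])

submission = ingest(
    r"\section{Problem 1} The ODE dT/dt = -k(T - T_inf) gives exponential decay."
)
scores = grade(segment(submission), RUBRIC)
print(report(scores))
```

The per-line-item scoring mirrors how the reported 0.02-0.04% error rate is measured: each rubric line is an independent graded unit, so a report can show partial credit problem by problem.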
In a Winter 2026 deployment for ME 373 (Mechanical Engineering Methods) at Oregon State, LaTA graded weekly assignments for approximately 200 students using a single Mac Studio. The marginal cost per assignment was $0, with a wall-clock time of 1-3 minutes per submission. The instructor-confirmed grading error rate held at roughly 0.02-0.04% per rubric line item across the term. Compared to the previous traditionally graded cohort, the LaTA-graded cohort scored 11% higher on the midterm and 8% higher on the final exam, and surveyed students reported large confidence gains on all learning objectives (N=159, Δ≥+1.49 Likert points, p<10⁻²⁷). The code is released under the AGPLv3.
- Runs entirely on commodity hardware (e.g., a single Mac Studio), grading each submission in ~1-3 min for a 200-student course
- Grading error rate of 0.02-0.04% per rubric line, comparable to or better than human graders
- LaTA-graded cohort scored 11% higher on midterms and 8% higher on finals, with large confidence gains
Why It Matters
Local AI grading eliminates privacy risks and API costs while demonstrably improving STEM student outcomes.