AI Safety

Development and Validation of a Faculty Artificial Intelligence Literacy and Competency (FALCON-AI) Scale for Higher Education

New 23-item assessment tool refined through GPT-assisted expert review and validated with 269 faculty responses.

Deep Dive

A research team led by Yukyeong Song has published a new paper on arXiv detailing the development and validation of the Faculty Artificial Intelligence Literacy and Competency (FALCON-AI) Scale. This tool addresses a critical gap in higher education: while AI literacy assessments exist for students and K-12 teachers, there has been no validated instrument specifically designed for university faculty. The FALCON-AI Scale provides a standardized way to measure how well professors understand and can apply AI across their core responsibilities.

Grounded in the Critical Tech-resilient Literacies (CTRL) framework, the scale measures three key literacies—functional (using AI tools), evaluative (critically assessing AI outputs), and ethical (understanding AI's societal impact)—across four domains of faculty work: general, teaching, research, and service/administration. The development process was rigorous: an initial pool of 43 items was refined through structured interviews with subject-matter experts and a novel GPT-based reviewer that assessed each item's clarity and relevance. This hybrid human-AI review narrowed the pool to 39 items for pilot testing.
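The summary does not give the GPT reviewer's exact prompt or model, so the following is only a minimal sketch of how an LLM-assisted item review could be wired up in Python. The draft items, rubric wording, and model name are illustrative assumptions, not the authors' actual pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical draft items; the real FALCON-AI item pool is described in the paper.
draft_items = [
    "I can use generative AI tools to draft lecture materials.",
    "I can judge whether an AI-generated literature summary is accurate.",
]

# Assumed rubric mirroring the two criteria named in the summary: clarity and relevance.
RUBRIC = (
    "You review survey items for a faculty AI literacy scale. "
    "Rate the item's clarity and relevance to university faculty work, each 1-5, "
    'and reply with JSON like {"clarity": 4, "relevance": 5, "comment": "..."}.'
)

def review_item(item: str) -> str:
    """Ask the model to rate one draft item; returns its raw JSON reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper's model choice is not stated here
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Item: {item}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

for item in draft_items:
    print(item, "->", review_item(item))
```

In the study, such automated ratings complemented, rather than replaced, structured interviews with human subject-matter experts.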

The final 23-item scale was validated through a pilot study with 269 faculty responses, analyzed using confirmatory factor analysis (CFA). The result is a concise, psychometrically sound instrument with strong reliability and validity. By providing a practical, deployable tool, the FALCON-AI Scale lets universities move beyond anecdotal evidence and systematically assess their faculty's readiness to integrate AI into pedagogy, research, and institutional service, a readiness that is essential for preparing the next generation of professionals.
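For readers unfamiliar with CFA, the sketch below shows how a measurement model could be fit to pilot data in Python using the semopy library. The file name, column names, and the three-correlated-factors structure (an 8 + 8 + 7 split of the 23 items) are assumptions for illustration; the paper's actual factor structure and item assignments may differ.

```python
import pandas as pd
import semopy

# Hypothetical file of 269 Likert-type responses to the 23 retained items.
data = pd.read_csv("falcon_ai_pilot.csv")

# lavaan-style measurement model: three correlated literacy factors (illustrative split).
MODEL_DESC = """
functional =~ f1 + f2 + f3 + f4 + f5 + f6 + f7 + f8
evaluative =~ e1 + e2 + e3 + e4 + e5 + e6 + e7 + e8
ethical    =~ t1 + t2 + t3 + t4 + t5 + t6 + t7
"""

model = semopy.Model(MODEL_DESC)
model.fit(data)

# Parameter estimates (loadings) and fit indices (CFI, TLI, RMSEA, etc.) are what
# claims of psychometric soundness are typically judged against.
print(model.inspect())
print(semopy.calc_stats(model).T)
```

Good overall fit and uniformly strong loadings are the usual evidence that the hypothesized literacy structure holds in the sample.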

Key Points
  • First validated AI literacy scale designed specifically for university faculty; existing assessments target students and K-12 teachers.
  • Built on a 3x4 framework measuring functional, evaluative, and ethical literacy across teaching, research, service/administration, and general domains (see the sketch after this list).
  • Refined using a hybrid validation process with human experts and a GPT-based reviewer, then tested with 269 faculty pilot responses.
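As a simple illustration of the 3x4 blueprint, the snippet below enumerates the twelve literacy-by-domain cells across which the 23 items are distributed; how many items fall in each cell is detailed in the paper, not here.

```python
from itertools import product

# The two axes of the FALCON-AI blueprint, as named in the summary above.
LITERACIES = ("functional", "evaluative", "ethical")
DOMAINS = ("general", "teaching", "research", "service/administration")

# Twelve cells in total; each item targets one literacy within one domain.
for literacy, domain in product(LITERACIES, DOMAINS):
    print(f"{literacy} literacy / {domain} domain")
```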

Why It Matters

Provides universities with a data-driven tool to benchmark and improve faculty AI skills, directly impacting curriculum development and research quality.