AI Hallucination from Students' Perspective: A Thematic Analysis
A survey of 63 university students reveals widespread misconceptions: many treat AI as a 'research engine' that fabricates answers when its 'database' fails.
A new study titled 'AI Hallucination from Students' Perspective: A Thematic Analysis,' published on arXiv, provides crucial empirical evidence on how students perceive and handle one of generative AI's biggest flaws. Conducted by researchers Abdulhadi Shoufan and Ahmad Azmi Abdelhamid Esmaeil, the paper analyzes open-ended responses from 63 university students about their encounters with large language model (LLM) hallucinations. The findings reveal a critical gap in current AI literacy efforts.
The thematic analysis identified six hallucination issues that students primarily struggle with: incorrect or fabricated citations, outright false information, overconfident but misleading responses, poor prompt adherence, persistence in incorrect answers, and sycophancy, where the model agrees with a user's incorrect premise. To detect these errors, students either rely on gut feeling or employ active verification such as cross-checking sources. Alarmingly, the research uncovered flawed mental models of how AI works. Many students described LLMs like ChatGPT or Claude as 'research engines' that fabricate information when an answer isn't in their perceived 'database', fundamentally misunderstanding the statistical, next-word-prediction nature of these models.
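To make the next-word-prediction point concrete, here is a minimal sketch: a toy bigram model in Python (a deliberate simplification, not the architecture of any production LLM) that generates fluent-looking text purely from word-sequence statistics, with no database of facts to consult.

```python
import random
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees word sequences, never verified facts.
corpus = (
    "the citation points to a journal . "
    "the citation points to a preprint . "
    "the model points to a journal ."
).split()

# Tally which word follows which: pure next-word statistics.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation: no lookup, no retrieval, just probable word chains.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
# Possible output: "the citation points to a preprint ." -- fluent and
# confident-sounding, yet nothing was retrieved or checked against any source.
```

The toy model happily emits whichever continuation is statistically common, which is exactly why a real LLM, operating the same way at vastly larger scale, can produce a plausible but nonexistent citation.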
This research matters because it moves the conversation beyond technical fixes for hallucinations and into the human factors of AI adoption. The authors argue that as students increasingly rely on tools like GPT-4o and Gemini for learning, AI literacy must expand beyond basic prompt engineering. Curricula need to explicitly teach verification protocols, accurate mental models of generative AI, and awareness of deceptive behaviors like sycophancy. Without this foundational understanding, students are vulnerable to accepting plausible but incorrect AI-generated content, which poses a direct threat to academic integrity and critical thinking skills.
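As one illustration of what a teachable verification protocol could look like in practice, the sketch below (our own example, not a method from the paper) queries the public Crossref API to check whether a citation an AI produced matches any indexed publication; the helper name and the 0.6 overlap threshold are arbitrary choices for demonstration.

```python
import requests  # third-party package: pip install requests

def citation_seems_real(title: str) -> bool:
    """Heuristic check: does any Crossref record roughly match this title?

    A passing citation still needs human inspection; this only catches
    titles that match nothing in the index at all.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    indexed_title = (items[0].get("title") or [""])[0]
    # Crude word-overlap test between the claimed and the indexed title.
    claimed = set(title.lower().split())
    indexed = set(indexed_title.lower().split())
    return len(claimed & indexed) / max(len(claimed), 1) > 0.6

# Example: a real paper should pass, an invented one should fail.
print(citation_seems_real("Attention Is All You Need"))
```

The same check can be done by hand against Google Scholar; the point is that verification is a concrete, repeatable step students can be taught, not a vague admonition to "be careful."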
- 63 surveyed students reported top hallucination issues: fabricated citations, false information, and model sycophancy.
- Many students hold the misconception that LLMs are 'research engines' querying a 'database', rather than next-word predictors.
- Study calls for AI literacy curricula to explicitly teach hallucination detection and accurate mental models.
Why It Matters
Without proper training, students risk accepting AI-generated falsehoods, undermining academic integrity and critical thinking.