AI Safety

LLM Use, Cheating, and Academic Integrity in Software Engineering Education

New research finds AI-assisted cheating is common in coding courses, driven by unclear rules and time pressure.

Deep Dive

A new study titled "LLM Use, Cheating, and Academic Integrity in Software Engineering Education," authored by Ronnie de Souza Santos and five colleagues, investigates how students are using large language models (LLMs) like GPT-4 and Claude in ways they perceive as cheating. The research, based on a cross-sectional survey of 116 undergraduate software engineering students from multiple countries, combines quantitative data with qualitative insights to map the landscape of AI-assisted academic dishonesty.

The results show that reported LLM cheating practices are highly context-dependent. Students reported misusing AI primarily for programming assignments, routine coursework, and documentation tasks, often citing time pressure and unclear instructor guidance as key drivers. Notably, use during formal quizzes and exams was less frequent and more uniformly identified by students as a clear violation. While students reported awareness of potential academic and professional consequences, they perceived formal institutional sanctions for LLM cheating as currently limited.

The study concludes that LLM misuse is strongly associated with specific assessment and instructional conditions. The ambiguity long present in software engineering education—around collaboration, code reuse, and external help—has been amplified by generative AI. The authors argue this creates a pressing need for educators to proactively redesign assessments and set clear, consistent expectations for LLM use that support learning objectives rather than inadvertently encouraging circumvention.

Key Points
  • Survey of 116 software engineering students found LLM cheating is most common in programming assignments and documentation, driven by time pressure.
  • Students reported clearer rules against AI use in exams, but perceived a lack of formal sanctions for cheating in regular coursework.
  • The study concludes that ambiguous assessment design is a key factor, calling for updated academic integrity policies specific to generative AI tools.

Why It Matters

As AI becomes ubiquitous, universities must urgently redefine cheating and redesign assessments to preserve educational integrity in technical fields.