AI Safety

Plagiarism or Productivity? Students' Moral Disengagement and Behavioral Intentions to Use ChatGPT in Academic Writing

Research on 418 Filipino students reveals how moral disengagement drives AI tool adoption in academia.

Deep Dive

A team of researchers from multiple Philippine universities, including John Paul P. Miranda and seven co-authors, published a study analyzing how moral disengagement influences college students' intentions to use ChatGPT for academic writing. The research, involving 418 students with prior ChatGPT experience, tested a model based on the Theory of Planned Behavior. It specifically examined five psychological mechanisms of moral disengagement: moral justification, euphemistic labeling, displacement of responsibility, minimizing consequences, and attribution of blame. These mechanisms were analyzed as predictors of students' attitudes, subjective norms, and perceived behavioral control, which in turn predicted their behavioral intention to use the AI tool.

The findings revealed that several of these disengagement mechanisms significantly influenced students' attitudes and their sense of control over using ChatGPT. Notably, 'attribution of blame' (students justifying their actions by pointing to institutional gaps, unclear rules, or peer behavior) emerged as the strongest influencing factor. Overall, the model explained more than half of the variation in students' behavioral intentions. The study concludes that many students perceive using ChatGPT as acceptable for learning purposes, especially when academic guidelines are ambiguous. This points to a pressing need for educational institutions to develop explicit academic integrity policies, provide structured ethical guidance, and integrate classroom support to foster responsible AI use.

The authors also acknowledge limitations, noting that intention-based models may not fully capture the complexity of student behavior. Factors like emotional states, strong peer influence, and the sheer convenience of AI tools can also sway decisions beyond rational calculation. Published as a conference proceeding for the 2025 International Workshop on Artificial Intelligence and Education, this research provides crucial, data-driven insights for educators and administrators navigating the integration of generative AI like ChatGPT into higher education, emphasizing that policy clarity is key to managing its use.

Key Points
  • The study of 418 Filipino students found 'attribution of blame' (blaming institutional gaps, unclear rules, or peers) to be the strongest predictor of intentions to use ChatGPT in academic work.
  • The research model, analyzing five moral disengagement mechanisms, explained over 50% of the variation in students' behavioral intentions.
  • Findings indicate a critical need for clear academic integrity policies and ethical guidance from institutions to manage AI tool adoption responsibly.

Why It Matters

For educators and institutions, this research underscores that unclear policies directly enable questionable AI use, demanding proactive ethical frameworks.