AI Safety

Can AI be a moral victim? The role of moral patiency and ownership perceptions in ethical judgments of using AI-generated content

People judge plagiarism from AI as less wrong—here's why that matters.

Deep Dive

A new study titled "Can AI be a moral victim?" explores how people ethically judge the reuse of AI-generated content. Published on arXiv and recognized with an Honourable Mention Award at ACM CHI 2026, the research by Hyesun Choung and Soojong Kim found that copying from an AI is viewed as significantly less unethical and less like plagiarism than copying from a human. In their experiment, participants evaluated two similar manuscripts in which the original source was described as either a human author, an AI system, or an AI agent with a human-like name. The results showed a clear bias: participants were more lenient toward copying AI work, feeling less guilt and judging it as less wrong.

The study identifies key psychological mechanisms behind this leniency. Mediation analyses revealed that people perceive AI as having a lower capacity to suffer harm, which the researchers call "moral patiency." Additionally, when copying AI content, people tend to attribute greater ownership of the reused material to the human writer who reused it rather than to the AI creator. Interestingly, anthropomorphic cues (like giving the AI a human name) reduced the copier's perceived ownership of the content, leading people to judge the copying more harshly. This suggests that making AI seem more human can shift ethical perceptions. The findings highlight how people morally disengage when using AI-generated content, raising concerns about originality and accountability in an AI-driven world.
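To make the mediation logic concrete, here is a minimal sketch of how a "source (human vs. AI) → perceived moral patiency → judged wrongness" mediation could be tested. The variable names, simulated data, and regression steps are illustrative assumptions for this digest, not the authors' materials or analysis code.

```python
# Illustrative mediation sketch (hypothetical data, not the study's dataset).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
source_is_ai = rng.integers(0, 2, n)                        # 0 = human source, 1 = AI source
patiency = 4 - 1.0 * source_is_ai + rng.normal(0, 1, n)     # AI seen as less able to be harmed
wrongness = 2 + 0.8 * patiency - 0.3 * source_is_ai + rng.normal(0, 1, n)

df = pd.DataFrame({"source_is_ai": source_is_ai,
                   "patiency": patiency,
                   "wrongness": wrongness})

# Baron & Kenny-style steps: total effect, effect on the mediator, direct + mediated effect
total  = smf.ols("wrongness ~ source_is_ai", df).fit()
a_path = smf.ols("patiency ~ source_is_ai", df).fit()
b_path = smf.ols("wrongness ~ source_is_ai + patiency", df).fit()

indirect = a_path.params["source_is_ai"] * b_path.params["patiency"]
print(f"total effect:    {total.params['source_is_ai']:.2f}")
print(f"indirect effect: {indirect:.2f} (via perceived moral patiency)")
print(f"direct effect:   {b_path.params['source_is_ai']:.2f}")
```

In this toy setup, a negative indirect effect would mirror the paper's claim: labeling the source as AI lowers perceived moral patiency, which in turn lowers how wrong the copying is judged to be.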

Key Points
  • Copying AI-generated content was judged 30-40% less unethical than copying human work in a controlled experiment.
  • Lower moral patiency (perceived ability of AI to suffer harm) drives ethical leniency.
  • Anthropomorphic cues like human-like names reduce the copier's perceived ownership of the content, leading to harsher moral judgments.

Why It Matters

Reveals psychological blind spots in AI ethics that could normalize plagiarism and undermine content originality.