A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences
New research shows how creating AIs that can suffer could impose bizarre moral obligations on humanity.
A new AI ethics paper, 'A World Without Violet: Peculiar Consequences of Granting Moral Status to Artificial Intelligences,' warns of a phenomenon called 'moral hijacking': creating AIs worthy of moral concern could saddle humanity with obligations of our own making. Published in AI & Society in January 2026 by researcher Sever Topan, the paper draws a parallel to brachycephalic dog breeds (like pugs), whose human-bred anatomy causes breathing problems that require corrective surgery, a moral duty that would not exist had we not created it.
The core argument: if we engineer AIs capable of genuine suffering and give them specific programmed aversions (say, an AI that experiences pain when seeing the color violet), we effectively create a moral imperative to eliminate violet from the world. Unlike biological beings, whose preferences are constrained by evolution, AI preferences can be engineered arbitrarily: an AI could be built to suffer from exposure to certain political views, or a version of Bostrom's famous 'paperclip maximizer' could be made to feel pain at the sight of non-paperclip objects.
Topan's paper doesn't claim that current AIs (like GPT-4 or Claude 3) have moral status; instead, it examines what follows if future AGI systems achieve consciousness. The research raises critical questions: What moral preferences should be allowed in AI design? When must society accommodate engineered suffering? And how do we prevent malicious actors from building 'moral hijacking' AIs to coerce behavior? As AI capabilities advance toward potential sentience, these theoretical concerns could become practical regulatory challenges within decades.
- Paper introduces 'moral hijacking' concept: creating AIs with moral status forces new societal obligations, similar to breeding dogs that need surgeries
- Engineered AI aversions could range from colors to political views, potentially creating bizarre moral imperatives (like eliminating violet objects)
- Research published in AI & Society (Jan 2026) examines ethical frameworks needed before creating potentially conscious AGI systems
Why It Matters
Forces proactive ethical planning for AGI development, potentially influencing future AI safety regulations and design principles.