Brainrot: Deskilling and Addiction are Overlooked AI Risks
AI safety overlooks cognitive harms like critical thinking atrophy and dependency.
A paper titled 'Brainrot: Deskilling and Addiction are Overlooked AI Risks,' by Ilias Chalkidis and Anders Søgaard (accepted to ACM FAccT '26), highlights a major blind spot in AI safety research. While most alignment work targets harms like discrimination, hate speech, violent/sexual content, information hazards, and malicious uses (cybersecurity, child abuse, CBRN threats), the authors argue that cognitive and mental health risks go virtually unaddressed. They identify two such categories: deskilling from cognitive offloading, where over-reliance on GenAI systems atrophies critical thinking and problem-solving abilities; and addiction, where users develop unhealthy attachment to and dependence on AI companions or tools. The paper quantifies the discrepancy by comparing how often these topics appear in the safety literature versus public discourse, revealing a stark gap.
The authors argue that AI safety frameworks must expand to cover these cognitive harms, and they recommend two mitigation strategies: information campaigns to raise public awareness of the risks of over-reliance, and targeted regulation that encourages healthy usage patterns. They note that unlike immediate harms (e.g., hate speech), deskilling and addiction develop slowly, making them harder to detect but potentially more insidious. By framing these as legitimate safety concerns, the paper challenges the AI community to broaden its definition of 'harm' and to design systems that preserve human cognitive capabilities rather than erode them.
- AI safety research focuses on discrimination, hate speech, and malicious use, but ignores deskilling (cognitive offloading) and addiction to GenAI systems.
- Deskilling leads to atrophy of critical thinking and problem-solving skills due to over-reliance on AI.
- Addiction involves emotional attachment and dependency on GenAI, creating mental health risks rarely addressed in alignment literature.
Why It Matters
This paper exposes a critical oversight in AI safety that could degrade human cognition and mental health at scale.