AI-Induced Human Responsibility (AIHR) in AI-Human teams
New research finds people hold humans more responsible for AI-assisted errors, even when they could shift blame.
A new study titled 'AI-Induced Human Responsibility (AIHR) in AI-Human teams,' authored by Greg Nyilasy and colleagues from the University of Melbourne and published on arXiv, reveals a counterintuitive finding about accountability in hybrid workplaces. Across four controlled experiments involving 1,801 participants, researchers simulated an AI-assisted lending environment with scenarios like discriminatory loan rejections and filing errors. They discovered that when a mistake occurs in a human-AI team, people assign significantly more blame to the human teammate than they would if the human were paired with another person. The 'AIHR effect' averaged a 10-point increase on a 100-point responsibility scale.
The research indicates this heightened blame isn't driven by mind perception (attributing humanlike mental states to the AI) or by self-threat, but by the fact that AI is implicitly viewed as a constrained implementer, a tool following rules. This perception makes the human, seen as the autonomous agent with discretion, the default locus of responsibility. Crucially, the effect held even in 'self' conditions where participants could have shifted blame to the AI to protect their own image, suggesting a robust psychological bias. This finding flips the common concern of a 'responsibility gap' or diluted accountability when AI is involved, showing instead that human responsibility can be amplified.
The implications are significant for organizational design and legal frameworks. As companies deploy AI agents from OpenAI, Anthropic, and others as collaborative teammates, this research suggests humans in the loop may face disproportionate blame for systemic failures. Leaders must design workflows, training, and accountability systems that acknowledge this bias to prevent unfair scapegoating and ensure ethical AI integration. The study extends research on algorithm aversion and provides a crucial lens for the future of human-AI teaming in high-stakes fields like finance, healthcare, and law.
- In 4 experiments with 1,801 participants, humans were assigned about 10 points more responsibility (on a 100-point scale) for errors when teamed with AI vs. another human.
- The 'AIHR effect' persisted even when the human was the participant themselves, countering expectations of self-serving bias.
- The mechanism is 'inferences of agent autonomy': AI is seen as a constrained tool, making the human the default responsible agent.
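At its core, the headline AIHR effect is a difference in mean responsibility ratings between two conditions. A minimal sketch of that comparison, using entirely made-up ratings (chosen so the gap lands near the study's reported ~10-point average; none of these numbers come from the paper):

```python
# Illustrative sketch of the core AIHR measure: the difference in mean
# responsibility ratings (0-100 scale) assigned to the human teammate
# across the two team compositions. All ratings below are hypothetical
# placeholders, not data from the study.
from statistics import mean

# Hypothetical responsibility ratings assigned to the human teammate (0-100)
human_ai_team = [72, 68, 75, 80, 65, 70, 78, 74]     # human paired with AI
human_human_team = [62, 58, 66, 70, 55, 60, 68, 63]  # human paired with human

aihr_effect = mean(human_ai_team) - mean(human_human_team)
print(f"Mean (human-AI team):    {mean(human_ai_team):.1f}")
print(f"Mean (human-human team): {mean(human_human_team):.1f}")
print(f"AIHR effect: {aihr_effect:+.1f} points on the 100-point scale")
# → AIHR effect: +10.0 points on the 100-point scale
```

In the actual experiments this between-condition difference would be tested for statistical significance across the four studies; the sketch only shows the descriptive quantity the 10-point figure refers to.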
Why It Matters
As AI becomes a teammate, organizations must design accountability systems that address this bias to prevent unfair blame on human workers.