If you make a mistake while using an AI assistant, people will judge you more harshly than if a human coworker had helped you make the same mistake.
April 13, 2026
Original Paper
AI-Induced Human Responsibility (AIHR) in AI-Human teams
arXiv · 2604.08866
The Takeaway
We often worry that AI will create a "responsibility gap" in which no one is held accountable for errors. This work finds the opposite: because we view AI as a mindless tool rather than an autonomous teammate, we place the full weight of a failure squarely on the human's shoulders.
From the abstract
As organizations increasingly deploy AI as a teammate rather than a standalone tool, morally consequential mistakes often arise from joint human-AI workflows in which causality is ambiguous. We ask how people allocate responsibility in these hybrid-agent settings. Across four experiments (N = 1,801) in an AI-assisted lending context (e.g., discriminatory rejection, irresponsible lending, and low-harm filing errors), participants consistently attributed more responsibility to the human decision maker.