You're way more likely to trust a person who's wrong in the same way you are than someone who actually tells you the truth.
March 27, 2026
Original Paper
Learning to choose between advisors, algorithmic and human, over repeated interactions.
PsyArXiv · uqbce_v2
The Takeaway
In experiments where people chose between human and AI advisors, participants consistently preferred the advisor who recommended options that won frequently, even when those options yielded lower total rewards. This suggests we can be 'trained' to trust biased algorithms simply because they deliver the psychological satisfaction of being 'right' more often, regardless of actual earnings.
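The tension in that finding is purely arithmetic: an option can win on most individual trials and still pay less on average. Here is a minimal sketch with hypothetical payoffs (the numbers are illustrative, not from the paper): option A pays 1 for sure, while option B pays 5 with probability 0.3 and 0 otherwise, so A "wins" 70% of trials even though B has the higher expected value.

```python
# Hypothetical payoffs (illustrative only, not from the study):
# Option A pays 1 for sure; Option B pays 5 with probability 0.3, else 0.
p_b_high = 0.3
payoff_a = 1.0
payoff_b_high, payoff_b_low = 5.0, 0.0

# Expected values
ev_a = payoff_a
ev_b = p_b_high * payoff_b_high + (1 - p_b_high) * payoff_b_low

# A beats B on every trial where B pays 0
win_rate_a = 1 - p_b_high

print(f"EV(A) = {ev_a}, EV(B) = {ev_b}, A wins {win_rate_a:.0%} of trials")
# An advisor recommending A is "right" 70% of the time;
# an advisor recommending B earns 50% more per trial on average.
```

An advisor who always recommends A would feel trustworthy trial after trial, which is exactly the dynamic the Takeaway describes.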
From the abstract
People increasingly consult algorithmic aids repeatedly, yet most evidence on algorithm aversion/appreciation comes from one-shot decisions. Across five preregistered, incentive-compatible studies (Prolific; N=1,351), we examine how people learn whom to trust when advisors disagree. Study 1 elicits advice from experienced participants, revealing a bias towards the option that is better most of the time, even when it's worse in expectation. Studies 2–5 then pair this human advice with algorithms