Nature Is Weird

Computational authority makes human brains trust an AI's output even after they have been explicitly warned that the system can hallucinate.

April 25, 2026

Original Paper

AI-Authority-Bias: A Neurocognitive Explanation for Uncritical Human Deference to AI Systems

SSRN · 6363699

The Takeaway

AI-Authority-Bias bypasses our critical thinking by appealing to a deep-seated respect for computational structure. This neurocognitive tendency means that people often defer to a machine's judgment over their own eyes and logic. The common hope among experts was that teaching people about AI errors would make them more skeptical and careful. Instead, this research shows that the bias is so systematic that it remains active even after users have been warned. That makes AI a potent vehicle for misinformation: the human brain is naturally inclined to treat a digital calculation as objective truth.

From the abstract

Human users routinely display a disproportionate willingness to accept AI-generated outputs as accurate, reliable, and epistemically superior – often even after being explicitly warned that such systems may produce errors. This paper introduces AI-Authority-Bias as a specific form of epistemic over-deference: the systematic tendency to attribute unwarranted epistemic authority to AI systems based on their structural, computational, and social affordances rather than on the evidentiary quality of their outputs.