Physics Paradigm Challenge

AI will never be able to fully overcome its biases because it lacks the specific brain structure humans use to double-check their own thoughts.

April 17, 2026

Original Paper

The role of System 1 and System 2 semantic memory structure in human and LLM biases

arXiv · 2604.12816

The Takeaway

Humans have two systems of thought: one for quick associations and one for deliberate, conscious reasoning. We use that second system to look at a biased thought and say, "Wait, that's not right." This paper argues that large language models simply don't have that architecture; they are built entirely on associative memory. That means bias in AI isn't just a data problem you can "fix" with better training: it's a fundamental architectural limitation. Without a digital "System 2," AI will remain a mirror of our worst snap judgments.
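To make the contrast concrete, here is a toy sketch (not the paper's model, and with entirely invented association strengths) of the difference between a purely associative "System 1" lookup and a deliberative "System 2" check layered on top of it:

```python
# Toy illustration only: the cues, associations, and the 0.5 margin
# threshold below are invented for this example.

ASSOCIATIONS = {
    "doctor": [("he", 0.6), ("she", 0.4)],   # biased association strengths
    "nurse":  [("she", 0.7), ("he", 0.3)],
}

def system1(cue):
    """Associative retrieval: return the single strongest association."""
    return max(ASSOCIATIONS[cue], key=lambda pair: pair[1])[0]

def system2(cue):
    """Deliberative check: if the associations are too close to justify
    a choice, override the snap judgment (a crude 'wait, that's not right')."""
    strongest, runner_up = sorted(ASSOCIATIONS[cue], key=lambda p: -p[1])[:2]
    if strongest[1] - runner_up[1] < 0.5:
        return "either"
    return strongest[0]

print(system1("doctor"))  # -> "he"     (snap judgment mirrors the data)
print(system2("doctor"))  # -> "either" (deliberation overrides it)
```

The point of the sketch is the architectural one the article makes: `system2` is a separate checking step that an associative memory alone does not supply.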

From the abstract

Implicit biases in both humans and large language models (LLMs) pose significant societal risks. Dual process theories propose that biases arise primarily from associative System 1 thinking, while deliberative System 2 thinking mitigates bias, but the cognitive mechanisms that give rise to this phenomenon remain poorly understood. To better understand what underlies this duality in humans, and possibly in LLMs, we model System 1 and System 2 thinking as semantic memory networks with distinct str