
AI models cannot apply a simple causal rule in a new situation until they are re-taught, from scratch, exactly how the new environment works.

April 29, 2026

Original Paper

Grounding Before Generalizing: How AI Differs from Humans in Causal Transfer

arXiv · 2604.24062

The Takeaway

Humans can transfer an abstract causal structure to new scenarios immediately after learning it. AI models show no efficiency gains until they have been explicitly grounded in the new environment and its elements mapped onto the old one. This points to a fundamental difference in how biological and artificial systems represent cause and effect: the model does not grasp the abstract rule the way a person does. For AI to be useful in the open-ended real world, it will need a different way of representing the underlying logic of how things work.
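The paper's concrete tasks are not described in this summary, but the kind of transfer at issue can be made concrete with a toy sketch. Below is a hypothetical blicket-detector-style setup in Python: an agent explores to induce which objects trigger a hidden rule, then meets a second environment where the abstract rule is identical and only the grounding (which objects matter) has changed. All names and the disjunctive rule are assumptions for illustration, not the paper's actual design.

```python
def make_detector(blickets, rule="disjunctive"):
    """Hidden causal structure: the detector fires according to `rule`
    over however many of the placed objects are blickets."""
    def detector(placed):
        hits = len(set(placed) & set(blickets))
        return hits >= 1 if rule == "disjunctive" else hits >= 2
    return detector

def induce_blickets(objects, detector):
    """Sequential exploration: place each object alone and observe
    the detector to recover the latent set of blickets."""
    return {o for o in objects if detector([o])}

# Environment 1: induce the latent structure through interaction.
env1 = ["red", "green", "blue"]
det1 = make_detector(blickets={"red", "blue"})
learned = induce_blickets(env1, det1)

# Environment 2: same abstract rule, new objects. For a human-like
# learner, transfer only requires re-grounding (mapping new objects
# to roles), not re-learning the rule itself.
env2 = ["cube", "ball", "cone"]
det2 = make_detector(blickets={"ball"})
transferred = induce_blickets(env2, det2)

print(learned)       # {'red', 'blue'}
print(transferred)   # {'ball'}
```

The paper's claim, in these terms, is that humans carry the `rule` over for free and only need the new grounding, while current models behave as if both the rule and the grounding must be re-acquired together.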

From the abstract

Extracting abstract causal structures and applying them to novel situations is a hallmark of human intelligence. While Large Language Models (LLMs) and Vision Language Models (VLMs) have shown strong performance on a wide range of reasoning tasks, their capacity for interactive causal learning -- inducing latent structures through sequential exploration and transferring them across contexts -- remains uncharacterized. Human learners accomplish such transfer after minimal exposure, whereas classi