AI & ML: Breaking an Assumption

Provides causal evidence that reasoning models often commit to an action (such as a tool call) before they begin generating their chain-of-thought.

April 2, 2026

Original Paper

Therefore I am. I Think

Esakkivel Esakkiraja, Sai Rajeswar, Denis Akhiyarov, Rajagopal Venkatesaramani

arXiv · 2604.01202

The Takeaway

Strongly suggests that reasoning traces are often post-hoc rationalizations rather than the actual computation that drives a decision, fundamentally changing how researchers should interpret 'thinking' in models.

From the abstract

We consider the question: when a large language reasoning model makes a choice, did it think first and then decide, or decide first and then think? In this paper, we present evidence that detectable, early-encoded decisions shape chain-of-thought in reasoning models. Specifically, we show that a simple linear probe successfully decodes tool-calling decisions from pre-generation activations with very high confidence, and in some cases, even before a single reasoning token is produced. …
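The probing setup the abstract describes can be sketched in miniature. The code below is purely illustrative: it trains a logistic-regression probe on synthetic stand-ins for pre-generation hidden states, with an artificial "tool-call" direction baked into the data. The layer choice, dimensionality, and training details of the paper's actual probe are not reproduced here.

```python
# Hypothetical sketch of a linear probe on pre-generation activations.
# All data here is synthetic; in the paper, X would be hidden states
# captured before any reasoning token is generated, and y would record
# whether the model went on to emit a tool call.
import numpy as np

rng = np.random.default_rng(0)

def make_activations(n=400, d=64):
    """Synthetic activations with a linearly separable 'tool-call' direction."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true > 0).astype(float)  # 1 = model will call a tool
    return X, y

def train_linear_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe trained with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -30, 30)       # clip logits for stability
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
        grad = p - y                          # gradient of log-loss w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

X, y = make_activations()
w, b = train_linear_probe(X[:300], y[:300])
preds = ((X[300:] @ w + b) > 0).astype(float)
accuracy = (preds == y[300:]).mean()
print(f"probe accuracy on held-out activations: {accuracy:.2f}")
```

On data with a genuine linear "decision" direction, a probe like this reaches high held-out accuracy; the paper's claim is that real pre-generation activations behave the same way for tool-calling decisions.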