Investors can earn substantially higher returns by trading not on what an AI explicitly recommends, but on how confident the AI's internal probabilities look.
SSRN · March 18, 2026 · 6425017
The Takeaway
Large language models often exhibit 'decoding bias': they sound certain even when they are wrong. By reading the model's internal probability distribution (its 'inner confidence'), researchers built a stock portfolio that outperformed a portfolio based on the AI's own explicit recommendations by 20%.
From the abstract
Autoregressive LLMs generate text by sampling from estimated probability distributions over the next token, conditional on prior context. We use these probabilities to construct an entropy-based measure of prediction uncertainty, termed inner confidence. In news classification, LLM predictions with higher inner confidence are systematically more accurate. To evaluate the measure's economic relevance, we form long-short portfolios based on LLM predictions. The portfolio based on high-confidence …
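The abstract's core idea is that low entropy in the model's next-token distribution signals high confidence. A minimal sketch of such an entropy-based score is below; the function name, the input format (one probability distribution per generated token), and the negation convention are illustrative assumptions, not the paper's exact construction.

```python
import math

def inner_confidence(token_probs):
    """Entropy-based confidence score (hypothetical helper).

    token_probs: a list of probability distributions, one per
    generated token. Low average entropy -> the model concentrated
    its probability mass -> higher confidence.
    """
    total_entropy = 0.0
    for dist in token_probs:
        # Shannon entropy of this token's distribution (nats).
        total_entropy += -sum(p * math.log(p) for p in dist if p > 0)
    avg_entropy = total_entropy / len(token_probs)
    # Negate so that higher values mean more confident.
    return -avg_entropy

# A prediction that puts nearly all mass on one token scores
# higher than a diffuse one.
confident = inner_confidence([[0.97, 0.02, 0.01]])
uncertain = inner_confidence([[0.40, 0.35, 0.25]])
assert confident > uncertain
```

In practice the per-token distributions would come from the model's decoding step (e.g. the softmax over logits at each position); the paper then sorts the LLM's classifications by this score and forms long-short portfolios from the high-confidence predictions.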