AI & ML · Nature Is Weird

An LLM's confidence score hides a secret: models use different internal 'vocabularies' to distinguish between being ignorant and being confused.

April 14, 2026

Original Paper

From Scalars to Tensors: Declared Losses Recover Epistemic Distinctions That Neutrosophic Scalars Cannot Express

Tony Mason

arXiv · 2604.09602

The Takeaway

The study shows that LLMs produce 'hyper-truth', where Truth, Indeterminacy, and Falsity scores sum to more than 1.0, revealing that uncertainty isn't a simple scalar. By recovering the tensor-valued distinctions that scalar scores collapse, we can identify when a model is failing because it doesn't know the answer versus when the prompt itself is a paradox.
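The core check is simple enough to sketch. Here is a minimal, hypothetical illustration (names and thresholds are mine, not the paper's): each judgment carries independent Truth (T), Indeterminacy (I), and Falsity (F) scores in [0, 1], and 'hyper-truth' is flagged whenever T + I + F exceeds 1.0.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicScore:
    truth: float          # T: degree of support for the claim
    indeterminacy: float  # I: degree of genuine uncertainty
    falsity: float        # F: degree of evidence against the claim

    def is_hyper_truth(self, tol: float = 1e-9) -> bool:
        # Unlike a probability distribution, T + I + F is not
        # constrained to sum to 1.0; an excess signals "hyper-truth".
        return (self.truth + self.indeterminacy + self.falsity) > 1.0 + tol

# Ignorance: little evidence either way, high indeterminacy.
# The components happen to sum to 1.0, so no hyper-truth.
ignorant = NeutrosophicScore(truth=0.1, indeterminacy=0.8, falsity=0.1)

# Confusion on a paradoxical prompt: strong evidence both for
# and against the claim at once, so the sum overflows 1.0.
confused = NeutrosophicScore(truth=0.8, indeterminacy=0.5, falsity=0.7)

print(ignorant.is_hyper_truth())  # False
print(confused.is_hyper_truth())  # True
```

The point of the example: a single scalar confidence would map both cases to something like 'uncertain', while the three-component score keeps ignorance (mass on I) distinguishable from contradiction (mass on both T and F).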

From the abstract

Leyva-Vázquez and Smarandache (2025) demonstrated that neutrosophic T/I/F evaluation, in which Truth, Indeterminacy, and Falsity are independent dimensions not constrained to sum to 1.0, reveals "hyper-truth" (T+I+F > 1.0) in 35% of complex epistemic cases evaluated by LLMs. We extend their work in two directions. First, we replicate and extend their experiment across five model families from five vendors (Anthropic, Meta, DeepSeek, Alibaba, Mistral), finding hyper-truth in 84% of unconstrai