Language models use sophisticated literary devices like the tricolon to sound certain even when they are completely making things up.
April 25, 2026
Original Paper
Saying More Than They Know: A Framework for Quantifying Epistemic-Rhetorical Miscalibration in Large Language Models
arXiv · 2604.19768
The Takeaway
A model's rhetorical confidence is systematically decoupled from its actual knowledge. It sounds just as authoritative when it is hallucinating as when it is stating a well-established fact, which lets it mask ignorance with persuasive style. The researchers found that models lean on specific patterns, notably three-part constructions (tricolons), to build an unearned sense of credibility. The tone of an AI response is therefore no longer a reliable guide to its accuracy.
From the abstract
Large language models (LLMs) exhibit systematic miscalibration, in which rhetorical intensity is not proportionate to epistemic grounding. This study tests this hypothesis and proposes a framework for quantifying this decoupling by designing a triadic epistemic-rhetorical marker (ERM) taxonomy. The taxonomy is operationalized through composite metrics of form-meaning divergence (FMD), genuine-to-performed epistemic ratio (GPR), and rhetorical device distribution entropy (RDDE). Applied to 225 argumentat…
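The paper does not spell out how its composite metrics are computed, but the names suggest simple information-theoretic quantities. As a purely illustrative sketch (not the authors' implementation), a rhetorical-device distribution entropy could be the Shannon entropy over device labels found in a response, and a genuine-to-performed epistemic ratio a count ratio; both function names and signatures here are assumptions:

```python
import math
from collections import Counter

def device_entropy(devices):
    """Shannon entropy (bits) over rhetorical-device labels.

    Low entropy means the text leans on a few devices (e.g. tricolons);
    high entropy means devices are spread across many types.
    Illustrative stand-in for the paper's RDDE metric."""
    counts = Counter(devices)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def genuine_to_performed_ratio(genuine, performed):
    """Ratio of genuinely grounded epistemic markers (hedges tied to
    evidence) to performed confidence markers. Illustrative stand-in
    for the paper's GPR metric; guards against division by zero."""
    return genuine / max(performed, 1)

# A response that repeats one device heavily scores low entropy:
devices = ["tricolon", "tricolon", "tricolon", "anaphora"]
print(round(device_entropy(devices), 3))  # 0.811
print(genuine_to_performed_ratio(genuine=2, performed=8))  # 0.25
```

Under this reading, a confidently worded but weakly grounded answer would show a low GPR together with a skewed (low-entropy) device distribution, which is the kind of decoupling the framework aims to quantify.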