AI & ML Efficiency Breakthrough

Obtain epistemic and aleatoric uncertainty from a single forward-backward pass of an unmodified pretrained LLM.

April 1, 2026

Original Paper

An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms

Nils Grünefeld, Jes Frellsen, Christian Hardmeier

arXiv · 2603.29466

The Takeaway

The method bypasses computationally expensive MCMC or ensemble approaches by combining a first-order Taylor expansion with an isotropy assumption on the parameter covariance. This lets practitioners quantify confidence in LLM outputs in real time, without access to training data or multiple inference passes.
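The core idea can be sketched in a few lines. Under a first-order Taylor expansion of the prediction around the trained parameters, and an isotropic parameter covariance Σ = σ²I, the epistemic variance of a prediction f(x; θ) collapses to σ²‖∇θ f(x; θ)‖², so a single gradient computation suffices. The toy model, data, and σ² below are illustrative stand-ins, not the paper's setup:

```python
import numpy as np

# Hedged sketch: with a first-order Taylor expansion and an isotropic
# parameter covariance Sigma = sigma2 * I, the epistemic variance of a
# prediction f(x; theta) reduces to  sigma2 * ||grad_theta f(x; theta)||^2.
# A tiny logistic regression stands in for the LLM; all values are
# illustrative assumptions, not from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(theta, x):
    """Toy model f(x; theta) = sigmoid(theta . x)."""
    return sigmoid(theta @ x)

def grad_predict(theta, x):
    """Analytic gradient of f w.r.t. theta (the one 'backward pass')."""
    p = predict(theta, x)
    return p * (1.0 - p) * x

def epistemic_variance(theta, x, sigma2):
    """Isotropic-covariance approximation: sigma2 * ||grad f||^2."""
    g = grad_predict(theta, x)
    return sigma2 * float(g @ g)

theta = np.array([1.0, -2.0, 0.5])
x_confident = np.array([4.0, -4.0, 4.0])  # far from the decision boundary
x_uncertain = np.array([0.1, 0.1, 0.1])   # near the decision boundary

sigma2 = 0.01  # assumed isotropic variance scale
v_conf = epistemic_variance(theta, x_confident, sigma2)
v_unc = epistemic_variance(theta, x_uncertain, sigma2)
print(v_conf, v_unc)
```

Inputs near the decision boundary produce larger gradient norms, and hence larger epistemic variance, which matches the intuition that the model is least certain there.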

From the abstract

Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance. Together, these yield epistemic uncertainty