DeIllusionLLM introduces task-level autoregressive reasoning to prevent LLMs from hallucinating answers to ill-posed or faulty scientific questions.
March 25, 2026
Original Paper
Bridging the Know-Act Gap via Task-Level Autoregressive Reasoning
arXiv · 2603.22619
The Takeaway
The paper addresses the "know-act gap": models often possess the knowledge to identify an error in a prompt, yet generate a response anyway because token-level autoregression commits them to answering. By explicitly modeling the decision to validate versus answer at the task level, the framework makes models significantly more reliable in technical and scientific deployments.
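To make the idea concrete, here is a minimal sketch of a validate-then-answer gate. This is not the paper's implementation; the function names (`respond`, `validate`, `answer`) and the toy stand-ins for model calls are hypothetical, illustrating only the task-level control flow: validation is an explicit first decision, not something left to emerge token by token.

```python
def respond(question, validate, answer):
    """Task-level gate: decide whether to validate or answer first.

    `validate` and `answer` are stand-ins for model calls; in a real
    system both would be LLM invocations (hypothetical interface).
    """
    verdict = validate(question)
    if not verdict["well_posed"]:
        # Surface the identified flaw instead of hallucinating an answer.
        return f"Cannot answer: {verdict['issue']}"
    return answer(question)

# Toy stand-ins, for illustration only.
def toy_validate(q):
    if "negative radius" in q:
        return {"well_posed": False,
                "issue": "a circle cannot have a negative radius"}
    return {"well_posed": True, "issue": None}

def toy_answer(q):
    return "42"
```

The point of the gate is that refusal is a first-class output of the pipeline: the validation verdict is computed and checked before any answer generation begins.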
From the abstract
LLMs often generate seemingly valid answers to flawed or ill-posed inputs. This is not due to missing knowledge: under discriminative prompting, the same models can mostly identify such issues, yet fail to reflect this in standard generative responses. This reveals a fundamental know-act gap between discriminative recognition and generative behavior. Prior work largely characterizes this issue in narrow settings, such as math word problems or question answering, with limited focus on how to inte…