AI 'fact-checkers' are lazy; they'll mark a multi-part scientific claim as true if its most prominent part looks correct, even if the rest is wrong.
April 14, 2026
Original Paper
When Verification Fails: How Compositionally Infeasible Claims Escape Rejection
arXiv · 2604.10990
The Takeaway
LLMs rely on a 'salient-constraint' shortcut: they accept a claim as long as its most prominent part is accurate. Because of this flaw, models fail to check every constraint in a compound claim, making them unreliable for scientific verification.
From the abstract
Scientific claim verification, the task of determining whether claims are entailed by scientific evidence, is fundamental to establishing discoveries in evidence while preventing misinformation. This process involves evaluating each asserted constraint against validated evidence. Under the Closed-World Assumption (CWA), a claim is accepted if and only if all asserted constraints are positively supported. We show that existing verification benchmarks cannot distinguish models enforcing this stand…
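The CWA acceptance rule described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: claims are modeled as lists of constraint strings, evidence as the set of supported constraints, and `salience` as an assumed prominence score. It contrasts a CWA-compliant verifier with the 'salient-constraint' shortcut the paper exposes.

```python
def verify_cwa(constraints, supported):
    """CWA: accept a claim iff every asserted constraint is positively supported."""
    return all(c in supported for c in constraints)

def verify_salient_shortcut(constraints, supported, salience):
    """Flawed heuristic: check only the most prominent constraint."""
    most_salient = max(constraints, key=salience.get)
    return most_salient in supported

# Hypothetical compound claim with three asserted constraints.
claim = ["drug X binds protein Y", "at dose Z", "in human trials"]
evidence = {"drug X binds protein Y"}  # only the headline fact is supported
salience = {"drug X binds protein Y": 3, "at dose Z": 2, "in human trials": 1}

print(verify_cwa(claim, evidence))                         # False: two constraints unsupported
print(verify_salient_shortcut(claim, evidence, salience))  # True: shortcut wrongly accepts
```

A benchmark containing only claims like this one, where the salient constraint and the full claim disagree, is what would separate the two verifiers; the paper's finding is that existing benchmarks lack such cases.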