NANOZK enables verifiable LLM inference with 70x smaller proofs and 24ms verification time using a novel layerwise decomposition.
arXiv · March 20, 2026 · 2603.18046
The Takeaway
This solves the trust problem in proprietary LLM APIs by allowing users to cryptographically verify that a specific model was actually used to generate an output. Because the proof stays constant-size regardless of model width, zero-knowledge proofs become practically viable for frontier-scale models for the first time.
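To build intuition for why a layerwise decomposition can keep the verifier's artifact constant-size, here is a toy sketch — emphatically not the paper's actual zero-knowledge protocol — where each layer's intermediate result is folded into a single chained hash commitment. The `layers` list and transforms are made up for illustration; the point is only that the digest the verifier handles is a fixed 32 bytes no matter how many layers there are or how wide each one is.

```python
import hashlib

def commit(data: bytes, prev: bytes = b"") -> bytes:
    # Fold this layer's data onto the running commitment.
    return hashlib.sha256(prev + data).digest()

# Toy "model": each layer is just a simple transform on a running state.
# (Stand-ins for real layer computations; purely illustrative.)
layers = [lambda x: x * 2, lambda x: x + 3, lambda x: x * x]

state = 7
digest = b""
for i, layer in enumerate(layers):
    state = layer(state)
    # Commit to (layer index, intermediate result); the chained digest
    # stays 32 bytes regardless of layer count or layer width.
    digest = commit(f"{i}:{state}".encode(), digest)

print(len(digest))  # → 32
```

A real system replaces the hash chain with succinct proofs of each layer's computation, but the structural idea — one fixed-size artifact summarizing an arbitrarily deep and wide computation — is the same.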
From the abstract
When users query proprietary LLM APIs, they receive outputs with no cryptographic assurance that the claimed model was actually used. Service providers could substitute cheaper models, apply aggressive quantization, or return cached responses - all undetectable by users paying premium prices for frontier capabilities. We present METHOD, a zero-knowledge proof system that makes LLM inference verifiable: users can cryptographically confirm that outputs correspond to the computation of a specific m