AI & ML · Breaks Assumption

Proves an information-theoretic lower bound showing that embedding hidden payloads in LLM text must increase its Kolmogorov complexity.

March 24, 2026

Original Paper

Kolmogorov Complexity Bounds for LLM Steganography and a Perplexity-Based Detection Proxy

Andrii Shportko

arXiv · 2603.21567

The Takeaway

The paper establishes that perfect steganography is theoretically impossible, in the sense that embedding any non-trivial payload forces a measurable increase in the stegotext's complexity. It also proposes a practical perplexity-based proxy for detecting hidden AI-to-AI communication.
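To make the detection idea concrete, here is a minimal sketch of a perplexity-based proxy. The paper's proxy presumably scores text under an LLM; as a self-contained stand-in, this sketch uses a character-bigram model with add-one smoothing. The training corpus, function names, and threshold logic are all illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Character-bigram model: pair counts, context counts, vocabulary."""
    pairs = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])
    return pairs, contexts, set(corpus)

def perplexity(text, model):
    """Per-character perplexity of `text` under the model (add-one smoothing)."""
    pairs, contexts, vocab = model
    V = len(vocab) + 1  # reserve probability mass for unseen symbols
    bits = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs.get((a, b), 0) + 1) / (contexts.get(a, 0) + V)
        bits += -math.log2(p)
    return 2 ** (bits / max(len(text) - 1, 1))

# Illustrative baseline: a monitor would learn the normal perplexity band of
# benign model output and flag text that deviates sharply from it.
reference = "the quick brown fox jumps over the lazy dog " * 50
model = train_bigram(reference)

fluent = "the lazy dog jumps over the quick brown fox"
distorted = "xq zvj qkx lzq jvx woq zzx"
```

Under this toy model, the distorted string scores far higher perplexity than the fluent one; the detection hypothesis is that payload-bearing stegotext shifts in the same measurable way.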

From the abstract

Large language models can rewrite text to embed hidden payloads while preserving surface-level meaning, a capability that opens covert channels between cooperating AI systems and poses challenges for alignment monitoring. We study the information-theoretic cost of such embedding. Our main result is that any steganographic scheme that preserves the semantic load of a covertext~$M_1$ while encoding a payload~$P$ into a stegotext~$M_2$ must satisfy $K(M_2) \geq K(M_1) + K(P) - O(\log n)$, where $K$ […]
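Kolmogorov complexity $K(\cdot)$ is uncomputable, but compressed length under a real compressor is a standard computable upper-bound proxy for it. The sketch below illustrates the spirit of the bound $K(M_2) \geq K(M_1) + K(P) - O(\log n)$ with that proxy: embedding a payload into a covertext makes the result harder to compress. The whitespace-based embedding scheme here is a deliberately crude illustration of my own, not the paper's construction, and `zlib` length only approximates $K$.

```python
import re
import zlib

def embed(cover, bits):
    """Toy scheme (illustrative only): hide one payload bit per word gap
    by choosing a single space (0) or a double space (1)."""
    words = cover.split(" ")
    out = [words[0]]
    for i, w in enumerate(words[1:]):
        sep = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(sep + w)
    return "".join(out)

def extract(stego, nbits):
    """Recover the payload by reading back the gap widths in order."""
    gaps = re.findall(r" +", stego)
    return "".join("1" if len(g) == 2 else "0" for g in gaps[:nbits])

cover = ("the model rewrote the sentence while keeping its meaning " * 8).strip()
payload = "1011001110001011"
stego = embed(cover, payload)

# Compressed length as a computable stand-in for K: the stegotext needs
# strictly more bits, since it must also describe the payload.
k_cover = len(zlib.compress(cover.encode()))
k_stego = len(zlib.compress(stego.encode()))
```

The payload round-trips exactly, and the stegotext's compressed length exceeds the covertext's, mirroring the theorem's claim that the payload's information cannot be hidden for free.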