AI & ML Paradigm Shift

Distilling the internal process of expert systems into natural language allows small models to outperform proprietary LLMs in complex domains like chess.

March 24, 2026

Original Paper

Grounded Chess Reasoning in Language Models via Master Distillation

Zhenwei Tang, Qianfeng Wen, Seth Grief-Albert, Yahya Elgabra, Blair Yang, Honghua Dong, Ashton Anderson

arXiv · 2603.20510

The Takeaway

Instead of distilling just the final output (the move), 'Master Distillation' captures the step-by-step logic of a specialized engine (Stockfish) as human-readable chain-of-thought (CoT) traces. This provides a blueprint for injecting deep domain expertise into compact LLMs.
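As a rough illustration of what one distillation step could look like, here is a minimal Python sketch that queries Stockfish (via the python-chess library) for its top candidate lines and renders them as a natural-language trace paired with the final move. The trace wording, the `distill_position` helper, the depth setting, and the Stockfish path are all illustrative assumptions; the paper's actual pipeline and prompt format are not shown in this excerpt.

```python
# Sketch: turn a chess engine's internal analysis into a natural-language
# chain-of-thought plus a final answer. Requires python-chess and a local
# Stockfish binary; everything below is illustrative, not the paper's code.
import chess
import chess.engine

STOCKFISH_PATH = "/usr/bin/stockfish"  # assumption: adjust to your install

def distill_position(fen: str, depth: int = 18, top_k: int = 3) -> dict:
    """Analyse one position and render the engine's reasoning as text."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    try:
        # multipv=top_k asks Stockfish for its top_k candidate lines.
        infos = engine.analyse(board, chess.engine.Limit(depth=depth),
                               multipv=top_k)
    finally:
        engine.quit()

    steps = []
    for rank, info in enumerate(infos, start=1):
        pv = info["pv"]
        move_san = board.san(pv[0])
        # Score from the side-to-move's point of view, in centipawns.
        cp = info["score"].pov(board.turn).score(mate_score=100000)
        line_san = board.variation_san(pv[:4])  # first few moves of the line
        steps.append(f"Candidate {rank}: {move_san} (eval {cp / 100:+.2f}), "
                     f"main line {line_san}.")

    best_move = board.san(infos[0]["pv"][0])
    cot = " ".join(steps) + f" The strongest continuation is {best_move}."
    return {"fen": fen, "cot": cot, "move": best_move}

if __name__ == "__main__":
    example = distill_position(chess.STARTING_FEN)
    print(example["cot"])
```

The key design choice the takeaway highlights is in the return value: the record keeps the full multi-candidate trace, not just `move`, so the eventual student model is trained on the reasoning rather than on bare labels.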

From the abstract

Language models often lack grounded reasoning capabilities in specialized domains where training data is scarce but bespoke systems excel. We introduce a general framework for distilling expert system reasoning into natural language chain-of-thought explanations, enabling compact models to acquire domain expertise and the ability to generate faithful, grounded explanations. Rather than distilling only final outputs, we capture the full reasoning process, transforming opaque expert computations into […]
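The abstract's claim that compact models "acquire domain expertise" implies a supervised fine-tuning step on the distilled traces. Below is a minimal sketch of how such traces might be packaged into chat-style training records; the `to_sft_record` helper, the prompt wording, and the JSONL schema are common conventions assumed here, not the paper's documented format.

```python
# Sketch: package distilled (position, CoT, move) triples as chat-style
# records for supervised fine-tuning of a compact model. Schema is assumed.
import json

def to_sft_record(example: dict) -> dict:
    """Wrap one distilled example as a user/assistant training record."""
    return {
        "messages": [
            {"role": "user",
             "content": ("You are a chess expert. Find the best move.\n"
                         f"FEN: {example['fen']}")},
            {"role": "assistant",
             # The target includes the full reasoning, not just the move, so
             # the model learns the grounded explanation with the answer.
             "content": f"{example['cot']}\nBest move: {example['move']}"},
        ]
    }

def write_dataset(examples: list[dict],
                  path: str = "distilled_chess.jsonl") -> None:
    """Write records as JSON Lines, one training example per line."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(to_sft_record(ex)) + "\n")
```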