Speculative Decoding Scaling Laws (SDSL) is a theoretical framework for predicting throughput-optimal hyperparameters of LLM inference systems before pre-training.
arXiv · March 13, 2026 · 2603.11053
Why it matters
Optimizing speculative decoding currently requires expensive trial-and-error training of draft models. This theory allows practitioners to analytically determine the best configuration (e.g., draft model size and lookahead) for a given target LLM, significantly reducing the cost of deploying low-latency inference pipelines.
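The paper's own analytic theory is not reproduced here, but the standard speculative-decoding throughput model (Leviathan et al., 2023) illustrates the trade-off being optimized: a longer lookahead amortizes more target-model passes but wastes draft compute on rejected tokens. In this sketch, `alpha` (token acceptance rate) and `c` (draft-to-target cost ratio) are illustrative inputs, not values from the paper:

```python
def expected_tokens(alpha: float, gamma: int) -> float:
    """Expected tokens emitted per target-model pass with lookahead gamma,
    assuming an i.i.d. per-token acceptance rate alpha (standard SD model)."""
    return (1 - alpha ** (gamma + 1)) / (1 - alpha)

def speedup(alpha: float, gamma: int, c: float) -> float:
    """Walltime improvement over vanilla decoding; c is the cost of one
    draft-model pass relative to one target-model pass."""
    return expected_tokens(alpha, gamma) / (gamma * c + 1)

def best_gamma(alpha: float, c: float, max_gamma: int = 16) -> int:
    """Lookahead that maximizes the modeled speedup, by exhaustive search."""
    return max(range(1, max_gamma + 1), key=lambda g: speedup(alpha, g, c))

# Example: with alpha = 0.8 and a draft model ~5% the cost of the target,
# the model favors a fairly deep lookahead.
print(best_gamma(0.8, 0.05))
```

The point of SDSL is to go one step further and predict quantities like `alpha` from pre-training hyperparameters analytically, so that this kind of search never requires training candidate draft models.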
From the abstract
Speculative decoding is a technique that uses multiple language models to accelerate inference. Previous works have used an experimental approach to optimize the throughput of the inference pipeline, which involves LLM training and can be costly. This study of speculative decoding proposes a theory that analytically connects the key hyperparameters of pre-trained LLMs to the throughput efficiency of a downstream SD-based inference system. The theory allows the prediction of throughput-o…