Provides a strictly controlled comparison of autoregressive vs. masked diffusion language models under an identical compute budget.
March 24, 2026
Original Paper
Autoregressive vs. Masked Diffusion Language Models: A Controlled Comparison
arXiv · 2603.22075
The Takeaway
The study reveals a fundamental trade-off: autoregressive models converge faster and generate more fluent text, while masked diffusion models produce substantially more diverse narratives. This gives practitioners a concrete basis for choosing a generation paradigm depending on whether they prioritize consistency or creativity.
From the abstract
We present a controlled empirical comparison between autoregressive (AR) and masked diffusion (MDLM) language models. Both models are trained on identical data (50M tokens from TinyStories), identical compute budget (20,000 steps, batch size 32, sequence length 512), and identical hardware (NVIDIA H100 80GB), isolating the generation paradigm as the sole variable. We report three findings. First, both paradigms achieve comparable training throughput (~50K tokens/second), with MDLM requiring only …
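To make the controlled setup concrete, here is a minimal PyTorch sketch of the two training objectives over one shared batch shape. It is an illustration, not the paper's code: `TinyLM`, the vocabulary size, and the 1/t linear-schedule weighting for the diffusion loss are assumptions; only the batch size and sequence length come from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared configuration mirroring the paper's controlled setup
# (batch size 32, sequence length 512); the vocabulary size is an assumption.
BATCH_SIZE, SEQ_LEN, VOCAB_SIZE = 32, 512, 8192
MASK_ID = VOCAB_SIZE  # extra [MASK] token id, used only by the diffusion model


class TinyLM(nn.Module):
    """Stand-in for either transformer backbone; only the loss wiring matters."""

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size + 1, dim)  # +1 slot for [MASK]
        self.head = nn.Linear(dim, vocab_size)        # predicts real tokens only

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.emb(x))                 # (B, L, V)


def ar_loss(model: nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """Autoregressive objective: next-token cross-entropy at every position."""
    logits = model(tokens[:, :-1])                    # predict token i+1 from <= i
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )


def mdlm_loss(model: nn.Module, tokens: torch.Tensor) -> torch.Tensor:
    """Simplified masked-diffusion objective (an assumed linear schedule, not
    necessarily the paper's exact weighting): sample a noise level t ~ U(0, 1),
    mask each token independently with probability t, then score cross-entropy
    on the masked positions, weighted by 1/t."""
    b, l = tokens.shape
    t = torch.rand(b, 1).clamp(min=1e-3)              # per-sequence noise level
    masked = torch.rand(b, l) < t                     # positions to hide
    logits = model(tokens.masked_fill(masked, MASK_ID))
    ce = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens.reshape(-1), reduction="none"
    ).reshape(b, l)
    return (ce * masked / t).sum() / masked.sum().clamp(min=1)


tokens = torch.randint(0, VOCAB_SIZE, (BATCH_SIZE, SEQ_LEN))
model = TinyLM(VOCAB_SIZE)
print(ar_loss(model, tokens).item(), mdlm_loss(model, tokens).item())
```

The sketch highlights what the comparison isolates: the same data, batch shape, and backbone can drive either causal next-token prediction or bidirectional denoising of masked positions, with only the loss wiring differing.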