FlashHead is a drop-in replacement for the LM classification head that delivers a 1.75x inference speedup by treating vocabulary selection as a retrieval problem.
March 17, 2026
Original Paper
FlashHead: Efficient Drop-In Replacement for the Classification Head in Language Model Inference
arXiv · 2603.14591
The Takeaway
The output head can account for up to 60% of parameters and 50% of inference compute in small models. FlashHead's hardware-friendly clustering and multiprobe retrieval remove this bottleneck without retraining the base model.
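The core idea can be sketched as inverted-file-style maximum-inner-product search: cluster the output head's vocabulary rows offline, then at decode time probe only the few clusters whose centroids score highest against the hidden state and compute exact logits over just those tokens. This is an illustrative numpy sketch under assumed sizes and a plain k-means, not the paper's actual algorithm; all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: hidden dim 64, vocab 10,000, 64 clusters, probe 4.
d, V, n_clusters, n_probe = 64, 10_000, 64, 4
W = rng.standard_normal((V, d)).astype(np.float32)  # output-head weight rows

# Offline: cluster vocabulary rows with a minimal k-means (Euclidean assignment).
centroids = W[rng.choice(V, n_clusters, replace=False)].copy()
for _ in range(10):
    # Assign each token row to its nearest centroid.
    d2 = ((W[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = np.argmin(d2, axis=1)
    for c in range(n_clusters):
        members = W[assign == c]
        if len(members):
            centroids[c] = members.mean(axis=0)
buckets = [np.flatnonzero(assign == c) for c in range(n_clusters)]

def retrieval_argmax(h, n_probe=n_probe):
    """Approximate argmax over logits: rank clusters by centroid inner
    product, then compute exact logits only for tokens in the probed ones."""
    probe = np.argsort(h @ centroids.T)[-n_probe:]
    cand = np.concatenate([buckets[c] for c in probe])
    return cand[np.argmax(W[cand] @ h)]

h = rng.standard_normal(d).astype(np.float32)
print(retrieval_argmax(h), np.argmax(W @ h))  # approximate vs. exact argmax
```

Because the buckets partition the vocabulary, probing all clusters recovers the exact dense argmax; the speed/accuracy trade-off is controlled entirely by `n_probe`.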
From the abstract
Language models are increasingly adopting smaller architectures optimized for consumer devices. In this setting, inference efficiency is the primary constraint. Meanwhile, vocabulary sizes continue to grow rapidly, making the classification head a critical bottleneck that accounts for up to 60% of model parameters, and 50% of inference compute. We introduce FlashHead, the first efficient drop-in replacement for the dense classification head that is training-free and hardware-friendly. FlashHead …