AI & ML: Random Perturbations Break an Assumption About RL Post-Training

Finds that task-specific experts lie so densely around pretrained weights that simple random parameter perturbations can compete with complex gradient-based RL methods like PPO.

arXiv · March 13, 2026 · 2603.12228

Yulu Gan, Phillip Isola

Why it matters

This challenges the conventional wisdom that sophisticated gradient-based RL is the only way to align large models. If ensembling simple random perturbations of specialized models matches state-of-the-art performance, the barrier to entry for post-training alignment drops significantly.
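To make the idea concrete, here is a minimal toy sketch of random-perturbation search followed by weight ensembling. This is not the paper's actual method or scale; the task, model, and hyperparameters below are illustrative assumptions, using a linear regression objective as a stand-in for a downstream reward.

```python
# Toy sketch (illustrative, not the paper's method): sample random
# perturbations around "pretrained" weights, keep the best, and
# ensemble them by averaging their parameter vectors.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for a downstream objective.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true

# "Pretrained" weights: near the task optimum, but not specialized to it.
w0 = w_true + rng.normal(scale=0.5, size=5)

def reward(w):
    # Negative mean squared error, playing the role of a task reward.
    return -np.mean((X @ w - y) ** 2)

# Sample random parameter perturbations of the pretrained weights.
candidates = [w0 + rng.normal(scale=0.1, size=5) for _ in range(500)]

# Keep the top-scoring perturbed "experts".
top = sorted(candidates, key=reward, reverse=True)[:10]

# Ensemble the experts by averaging their weights.
w_ens = np.mean(top, axis=0)

print(f"pretrained reward: {reward(w0):.4f}")
print(f"ensembled reward:  {reward(w_ens):.4f}")
```

Because the reward here is concave in the weights, averaging perturbations that each beat the pretrained baseline yields an ensemble that also beats it; the paper's claim is that something like this remains competitive at much larger scale.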

From the abstract

Pretraining produces a learned parameter vector that is typically treated as a starting point for further iterative adaptation. In this work, we instead view the outcome of pretraining as a distribution over parameter vectors, whose support already contains task-specific experts. We show that in small models such expert solutions occupy a negligible fraction of the volume of this distribution, making their discovery reliant on structured optimization methods such as gradient descent. In contrast […]