AI & ML · Breaks Assumption

Proves that standard acquisition functions such as the Upper Confidence Bound (UCB) suffice for asynchronous Bayesian optimization, debunking the need for complex diversity-enforcing strategies.

arXiv · March 17, 2026 · 2603.13501

Ben Riegler, James Odgers, Vincent Fortuin

The Takeaway

Many practitioners use complex heuristics (such as the 'Constant Liar' method) to enforce query diversity in parallel hyperparameter optimization. This paper shows that standard acquisition functions, combined with intermediate posterior updates as results arrive, are not only sufficient but often superior, which significantly simplifies the implementation of parallelized optimization workflows.
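The contrast above can be made concrete with a toy sketch. The code below is not the paper's implementation; it is a minimal numpy-only illustration (hypothetical objective, kernel, and scheduling) of the simpler recipe: each time a worker returns, refit the Gaussian-process posterior on completed evaluations only and issue the next query by plain UCB, with no Constant-Liar fantasy values for still-pending points.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-D arrays of inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean/std at candidate points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def ucb_query(X_done, y_done, candidates, beta=2.0):
    # Plain UCB on the posterior over *completed* evaluations only:
    # no fantasy/liar values are injected for pending queries.
    mu, sd = gp_posterior(np.asarray(X_done), np.asarray(y_done), candidates)
    return candidates[np.argmax(mu + beta * sd)]

# Toy asynchronous run: a hypothetical 1-D objective and 3 parallel workers.
f = lambda x: np.sin(3 * x) * (1 - x)
rng = np.random.default_rng(0)
cands = np.linspace(0.0, 1.0, 201)
pending = list(rng.uniform(0.0, 1.0, size=3))  # jobs already running
done_x, done_y = [], []
for step in range(12):
    xf = pending.pop(0)                 # some worker returns a result
    done_x.append(xf)
    done_y.append(f(xf))
    # Intermediate posterior update happened implicitly: the next UCB
    # query is computed on the posterior that includes the new result.
    pending.append(ucb_query(done_x, done_y, cands))
best = max(done_y)
```

Because the posterior changes after every completed evaluation, consecutive UCB maximizers naturally land in different places; no explicit diversity mechanism is needed.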

From the abstract

Asynchronous Bayesian optimization is widely used for gradient-free optimization in domains with independent parallel experiments and varying evaluation times. Existing methods posit that standard acquisitions lead to redundant and repeated queries, proposing complex solutions to enforce diversity in queries. Challenging this fundamental premise, we show that methods, like the Upper Confidence Bound, can in fact achieve theoretical guarantees essentially equivalent to those of sequential Thompson sampling …
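For context, the GP-UCB rule the abstract refers to selects the next query by maximizing posterior mean plus scaled posterior uncertainty; the notation below is the common one from the GP-bandit literature, not necessarily the paper's own:

```latex
x_{t+1} = \arg\max_{x \in \mathcal{X}} \; \mu_t(x) + \sqrt{\beta_t}\,\sigma_t(x)
```

where $\mu_t$ and $\sigma_t$ are the Gaussian-process posterior mean and standard deviation after $t$ completed observations, and $\beta_t$ controls the exploration–exploitation trade-off.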