AI & ML Nature Is Weird

Massive AIs aren't actually geniuses at everything; they're just a giant pile of tiny specialists that each know one specific thing.

April 3, 2026

Original Paper

The Expert Strikes Back: Interpreting Mixture-of-Experts Language Models at Expert Level

Jeremy Herbst, Jae Hee Lee, Stefan Wermter

arXiv · 2604.02178

The Takeaway

AI 'brains' are more modular than we thought, with individual parts dedicated to tiny tasks like closing a bracket or enforcing a specific grammar rule. Understanding this lets us see how complexity emerges from millions of simple, specialized operations.

From the abstract

Mixture-of-Experts (MoE) architectures have become the dominant choice for scaling Large Language Models (LLMs), activating only a subset of parameters per token. While MoE architectures are primarily adopted for computational efficiency, it remains an open question whether their sparsity makes them inherently easier to interpret than dense feed-forward networks (FFNs). We compare MoE experts and dense FFNs using $k$-sparse probing and find that expert neurons are consistently less polysemantic,
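The sparsity the abstract describes comes from a router that sends each token to only a handful of expert sub-networks. A minimal sketch of that top-k routing idea in plain Python (the names `moe_layer` and `router_weights` and the toy dot-product router are illustrative assumptions, not details from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, router_weights, k=2):
    """Route one token vector to its top-k experts and mix their outputs.

    Only k of the experts run for this token -- the sparsity the
    abstract refers to. All names here are hypothetical; real MoE
    layers use learned routers and neural-network experts.
    """
    # Router: one score per expert (here a toy dot product with the token).
    scores = [sum(w * x for w, x in zip(wr, token)) for wr in router_weights]
    # Keep only the k highest-scoring experts.
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    gate = softmax([scores[i] for i in top])
    # Weighted sum over the k selected experts only; the rest never run.
    out = [0.0] * len(token)
    for g, i in zip(gate, top):
        y = experts[i](token)
        out = [o + g * yi for o, yi in zip(out, y)]
    return out, top

# Toy setup: four "experts" that just scale the input by different factors.
experts = [lambda t, f=i + 1: [f * x for x in t] for i in range(4)]
router_weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [-1.0, 0.0]]
out, selected = moe_layer([1.0, 0.0], experts, router_weights, k=2)
```

With this toy router, the token `[1.0, 0.0]` activates experts 0 and 2 and ignores the other two entirely, which is what makes each expert's behavior separately inspectable.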