AI & ML · Nature Is Weird

AI keeps a specific "room" in its brain just for your grandma, settling a 50-year-old argument about how our own memories work.

April 3, 2026

Original Paper

Friends and Grandmothers in Silico: Localizing Entity Cells in Language Models

Itay Yona, Dan Barzilay, Michael Karasik, Mor Geva

arXiv · 2604.01404

The Takeaway

Researchers found that activating just one specific neuron is enough to make an AI recall facts about a person. This suggests that complex knowledge in AI can sometimes be concentrated in single switches rather than scattered across the entire network.

From the abstract

Language models can answer many entity-centric factual questions, but it remains unclear which internal mechanisms are involved in this process. We study this question across multiple language models. We localize entity-selective MLP neurons using templated prompts about each entity, and then validate them with causal interventions on PopQA-based QA examples. On a curated set of 200 entities drawn from PopQA, localized neurons concentrate in early layers. Negative ablation produces entity-specific […]
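The two-step recipe in the abstract, localize selective neurons from templated prompts, then causally intervene by ablation, can be sketched with synthetic data. Everything below is a toy illustration, not the paper's code: the activations are random numbers, and the "entity neurons" are planted by hand so the localization step has something to find.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 64

# Toy stand-in for one MLP layer's activations: most neurons behave the
# same on every prompt, but two planted "entity neurons" (hypothetical
# indices, chosen for this demo) fire strongly only on prompts about the
# target entity.
entity_neurons = [3, 17]

def activations(about_entity: bool) -> np.ndarray:
    act = rng.normal(0.0, 1.0, n_neurons)
    if about_entity:
        act[entity_neurons] += 5.0
    return act

# Step 1 (localization): average activations over many templated prompts
# about the entity vs. control prompts, then rank neurons by how much
# more they fire for the entity.
entity_mean = np.mean([activations(True) for _ in range(50)], axis=0)
control_mean = np.mean([activations(False) for _ in range(50)], axis=0)
selectivity = entity_mean - control_mean
top = sorted(np.argsort(selectivity)[::-1][:2])

# Step 2 (causal intervention): "negative ablation" zeroes the localized
# neurons before the rest of the forward pass, to test whether recall of
# that entity's facts degrades.
def ablate(act: np.ndarray, neurons) -> np.ndarray:
    out = act.copy()
    out[list(neurons)] = 0.0
    return out

print(top)           # localization recovers the planted neurons
print(ablate(activations(True), top)[top])  # their signal is zeroed out
```

In a real model the activations would come from forward hooks on an MLP layer rather than a random generator, but the selection-by-contrast and zero-ablation logic is the same shape.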