Researchers are using 'digital lobotomies' on AI to figure out how the human brain manages multiple languages.
April 14, 2026
Original Paper
Computational Lesions in Multilingual Language Models Separate Shared and Language-specific Brain Alignment
arXiv · 2604.10627
The Takeaway
By intentionally breaking specific parts of multilingual AI models, scientists found evidence that the human brain uses a shared 'backbone' for all languages while keeping language-specific specializations in separate modules. The lesioning method acts as a kind of surgical probe for the internal architecture of our own minds.
From the abstract
How the brain supports language across different languages is a basic question in neuroscience and a useful test for multilingual artificial intelligence. Neuroimaging has identified language-responsive brain regions across languages, but it cannot by itself show whether the underlying processing is shared or language-specific. Here we use six multilingual large language models (LLMs) as controllable systems and create targeted "computational lesions" by zeroing small parameter sets that are i
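The core idea of a "computational lesion" is simple to sketch: take a trained model's parameters, zero out a small, targeted subset, and compare behavior before and after. The toy snippet below illustrates this on a tiny two-layer linear model; the parameter names, masks, and model are illustrative assumptions for this sketch, not the paper's actual setup or models.

```python
import numpy as np

def apply_lesion(params, lesion_spec):
    """Return a copy of a parameter dict with selected weights zeroed.

    params: dict mapping parameter names to numpy arrays.
    lesion_spec: dict mapping parameter names to boolean masks
                 (True = zero that weight). Names/shapes here are
                 illustrative, not taken from the paper.
    """
    lesioned = {name: arr.copy() for name, arr in params.items()}
    for name, mask in lesion_spec.items():
        lesioned[name][mask] = 0.0
    return lesioned

# Toy two-layer linear "model" standing in for an LLM.
rng = np.random.default_rng(0)
params = {
    "layer1.weight": rng.normal(size=(4, 3)),
    "layer2.weight": rng.normal(size=(2, 4)),
}

def forward(params, x):
    h = params["layer1.weight"] @ x
    return params["layer2.weight"] @ h

# Lesion a small parameter subset: one row of layer1's weights.
mask = np.zeros((4, 3), dtype=bool)
mask[0, :] = True
lesioned = apply_lesion(params, {"layer1.weight": mask})

x = np.ones(3)
print("intact:  ", forward(params, x))
print("lesioned:", forward(lesioned, x))
```

Comparing the two outputs (here, model activations; in the paper, alignment with brain recordings) is what lets a lesion reveal which computations a given parameter subset supports.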