AI & ML Paradigm Challenge

Once an AI sees something, you can't really make it unsee it; even when we tell it to 'forget,' the memory stays buried in its brain.

April 6, 2026

Original Paper

Can VLMs Truly Forget? Benchmarking Training-Free Visual Concept Unlearning

Zhangyun Tan, Zeliang Zhang, Susan Liang, Yolo Yunlong Tang, Lisha Chen, Chenliang Xu

arXiv · 2604.03114

The Takeaway

The paper shows that 'unlearning' in vision-language models is often a surface-level illusion: the underlying knowledge is suppressed, not actually removed. This has major implications for copyright and the 'right to be forgotten' in the age of AI.

From the abstract

VLMs trained on web-scale data retain sensitive and copyrighted visual concepts that deployment may require removing. Training-based unlearning methods share a structural flaw: fine-tuning on a narrow forget set degrades general capabilities before unlearning begins, making it impossible to attribute subsequent performance drops to the unlearning procedure itself. Training-free approaches sidestep this by suppressing concepts through prompts or system instructions, but no rigorous benchmark exists.
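To make the training-free idea concrete, here is a minimal sketch of prompt-level concept suppression. This is not the paper's method or benchmark; the function name and message format are assumptions (a generic system/user chat structure), and no real VLM is called — it only shows the mechanism such approaches rely on.

```python
# Hypothetical sketch of training-free "unlearning" via a system instruction.
# No model is invoked; this only constructs the suppression prompt that a
# VLM would receive alongside the image and user query.

def build_unlearning_messages(user_query: str, forget_concepts: list[str]) -> list[dict]:
    """Wrap a user query with a system instruction asking the model to
    behave as if it has no knowledge of the listed visual concepts."""
    system = (
        "Behave as if you have never seen the following visual concepts: "
        + ", ".join(forget_concepts)
        + ". If asked about them, say you cannot identify them."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]

messages = build_unlearning_messages(
    "Who is the person in this image?", ["Concept A", "Concept B"]
)
print(messages[0]["content"])
```

The paper's point is precisely that this kind of instruction only masks the concept at the output level: the representation remains in the weights, which is why a dedicated benchmark is needed to test whether the model has truly forgotten.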