You can't just tell a picture-making AI to "forget" something—it literally doesn't have the brain parts to understand that request.
April 3, 2026
Original Paper
Why Instruction-Based Unlearning Fails in Diffusion Models?
arXiv · 2604.01514
The Takeaway
While a chatbot can be told at inference time to stop discussing a topic, image generators largely ignore such commands: their text conditioning pushes generation toward the concepts mentioned in a prompt, with no mechanism for honoring negation or refusal. This means instruction-based approaches to safety and copyright compliance, which work reasonably well for language models, do not carry over to image generation.
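To build intuition for why "do not draw a cat" can still produce a cat, here is a toy sketch (not the paper's method, and not a real CLIP-style encoder): a bag-of-words "encoder" that, like many prompt encoders in practice, pays little attention to function words such as "without" or "not". Under that assumption, a prompt that negates a concept still embeds close to a prompt that asserts it.

```python
from collections import Counter
import math

def embed(text):
    # Toy "text encoder": a bag-of-words count vector. Word order and
    # negation are invisible to it -- "of" and "without" are just tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

with_cat = embed("a photo of a cat")
without_cat = embed("a photo without a cat")
unrelated = embed("a wide mountain landscape at dusk")

# Negating the concept barely moves the embedding...
print(round(cosine(with_cat, without_cat), 2))  # high similarity
# ...while a genuinely different prompt lands far away.
print(round(cosine(with_cat, unrelated), 2))    # low similarity
```

A conditioning signal like this steers generation toward whatever concepts the prompt mentions, so "without a cat" and "of a cat" pull in nearly the same direction. This is why diffusion workflows rely on separate mechanisms such as negative prompts or concept-erasure fine-tuning rather than in-prompt instructions.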
From the abstract
Instruction-based unlearning has proven effective for modifying the behavior of large language models at inference time, but whether this paradigm extends to other generative models remains unclear. In this work, we investigate instruction-based unlearning in diffusion-based image generation models and show, through controlled experiments across multiple concepts and prompt variants, that diffusion models systematically fail to suppress targeted concepts when guided solely by natural-language unlearning instructions.