AI & ML Efficiency Breakthrough

RAZOR is a lightweight framework for targeted unlearning in Transformer and diffusion models that requires no retraining.

March 17, 2026

Original Paper

RAZOR: Ratio-Aware Layer Editing for Targeted Unlearning in Vision Transformers and Diffusion Models

Ravi Ranjan, Utkarsh Grover, Xiaomin Lin, Agoritsa Polyzou

arXiv · 2603.14819

The Takeaway

The method identifies and edits the specific layers and attention heads that encode sensitive information, erasing it while preserving model utility. It is significantly faster than conventional unlearning methods and offers a scalable path toward model safety compliance and 'right to be forgotten' requests.
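The "ratio-aware" selection step can be illustrated with a small sketch. Assuming each layer is scored by its sensitivity to the forget set relative to the retain set (e.g. per-layer gradient norms), the layers with the highest ratio are the cheapest to edit without harming utility. The scoring criterion and function names below are hypothetical stand-ins, not the paper's exact procedure:

```python
import numpy as np

def select_layers_to_edit(forget_scores, retain_scores, k=2):
    """Rank layers by the ratio of forget-set sensitivity to
    retain-set sensitivity and return the top-k layer indices.
    (Hypothetical criterion; the paper's exact score may differ.)"""
    ratios = np.asarray(forget_scores) / (np.asarray(retain_scores) + 1e-8)
    # Highest ratio first: strong influence on the forget set,
    # weak influence on the retain set.
    return np.argsort(ratios)[::-1][:k].tolist()

# Toy per-layer sensitivities (e.g. gradient norms) for a 6-layer model.
forget = [0.9, 0.1, 0.8, 0.2, 0.05, 0.3]
retain = [0.3, 0.5, 0.2, 0.4, 0.5, 0.3]
layers = select_layers_to_edit(forget, retain, k=2)  # layers 2 and 0
```

Editing only these top-ranked layers is what keeps the update lightweight compared to full retraining.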

From the abstract

Transformer-based diffusion and vision-language models have achieved remarkable success, yet efficiently removing undesirable or sensitive information without retraining remains a central challenge for model safety and compliance. We introduce Ratio-Aware Zero/One-step Optimized Retentive unlearning (RAZOR), a lightweight, model-agnostic unlearning framework that generalizes forgetting updates to coordinated multi-layer and multi-head edits within transformer backbones. RAZOR identifies the mos [...]
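The "coordinated multi-head edits" mentioned in the abstract can be sketched as a direct modification of an attention output projection: the columns belonging to selected heads are scaled down (or fully zeroed), removing those heads' contribution to the residual stream. This is a minimal illustrative stand-in, not the paper's actual edit rule; `edit_heads` and `alpha` are assumed names:

```python
import numpy as np

def edit_heads(w_out, head_dim, heads_to_forget, alpha=1.0):
    """Scale down the output-projection columns of selected attention
    heads. w_out has shape (d_model, n_heads * head_dim); head h owns
    columns [h*head_dim, (h+1)*head_dim). alpha=1.0 fully erases the
    head's contribution; alpha<1.0 attenuates it."""
    w = w_out.copy()
    for h in heads_to_forget:
        cols = slice(h * head_dim, (h + 1) * head_dim)
        w[:, cols] *= (1.0 - alpha)
    return w

d_model, n_heads, head_dim = 8, 4, 2
w = np.ones((d_model, n_heads * head_dim))
# Erase heads 1 and 3 in one coordinated update.
edited = edit_heads(w, head_dim, heads_to_forget=[1, 3])
```

Because the edit touches only a few weight slices, it can be applied in a single step, which is consistent with the "zero/one-step" framing in the method's name.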