AI & ML · Breaks Assumption

This paper presents an exact federated unlearning protocol for foundation models that matches centralized retraining pointwise while communicating only fixed-size messages.

arXiv · March 16, 2026 · 2603.12977

Yijun Quan, Wentai Wu, Giovanni Montana

Why it matters

Challenges the assumption that federated unlearning must be either approximate or paid for with expensive retraining. For the common deployment of a frozen foundation model with a ridge regression head, it delivers a mathematically exact 'right to be forgotten': zero KL divergence from a model retrained from scratch on the retained data.

From the abstract

Foundation models are commonly deployed as frozen feature extractors with a small trainable head to adapt to private, user-generated data in federated settings. The "right to be forgotten" requires removing the influence of specific samples or users from the trained model on demand. Existing federated unlearning methods target general deep models and rely on approximate reconstruction or selective retraining, making exactness costly or elusive. We study this problem in a practically relevant b
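Why can unlearning be exact here at all? A ridge head on frozen features depends on the training data only through fixed-size sufficient statistics (the Gram matrix X^T X and the moment vector X^T y), so a sample's influence can be removed by downdating those statistics rather than retraining. The sketch below illustrates this general idea in a single-client view; it is not the paper's federated protocol, and all function names are illustrative assumptions:

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Ridge head on frozen features: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y
    return np.linalg.solve(A, b)

def unlearn_exact(X, y, forget_idx, lam=1.0):
    """Remove the forget set's influence by downdating the sufficient
    statistics. Because the ridge solution depends on the data only
    through X^T X and X^T y, the result is pointwise identical to
    retraining on the retained samples alone."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y
    Xf, yf = X[forget_idx], y[forget_idx]
    A -= Xf.T @ Xf   # fixed-size (d x d) downdate, independent of n
    b -= Xf.T @ yf   # fixed-size (d,) downdate
    return np.linalg.solve(A, b)
```

Note that the downdates have size d x d and d regardless of how many samples are forgotten, which is consistent with the paper's claim of fixed-size messages.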