AI & ML Breaks Assumption

Finds that both privacy vulnerability and utility are concentrated in a tiny fraction of 'critical weights', and that these weights matter because of their location in the network rather than their value.

arXiv · March 16, 2026 · 2603.13186

Xingli Fang, Jung-Eun Kim

Why it matters

Instead of retraining whole models or applying global DP-SGD, practitioners can selectively rewind and fine-tune a small weight subset to preserve membership privacy without sacrificing utility.
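A minimal sketch of the idea, assuming a simple "rewind" operation: given the final weights and an earlier checkpoint, pick a small fraction of coordinates as critical (here ranked by a hypothetical sensitivity proxy, the magnitude of change since the checkpoint, which is an illustrative stand-in for the paper's location-based criterion) and reset only those to their checkpoint values. The function name, proxy, and fraction are all assumptions, not the authors' method.

```python
import numpy as np

def rewind_critical(final_w, ckpt_w, frac=0.01):
    """Rewind the top `frac` fraction of weights back to their checkpoint
    values, leaving every other weight untouched.

    Critical weights are chosen by a hypothetical proxy: the absolute
    change since the checkpoint. The paper's actual selection criterion
    (location-based) is not reproduced here.
    """
    delta = np.abs(final_w - ckpt_w).ravel()
    k = max(1, int(frac * delta.size))
    # Indices of the k most-changed weights (unordered top-k).
    idx = np.argpartition(delta, -k)[-k:]
    out = final_w.copy().ravel()
    out[idx] = ckpt_w.ravel()[idx]      # selective rewind
    return out.reshape(final_w.shape), idx

# Toy demo: one weight drifts far from the checkpoint and gets rewound.
rng = np.random.default_rng(0)
ckpt = rng.normal(size=(4, 5))
final = ckpt + rng.normal(scale=0.1, size=(4, 5))
final[0, 0] += 5.0                      # simulate one highly "critical" weight
rewound, idx = rewind_critical(final, ckpt, frac=0.05)
```

In a real pipeline the rewound subset would then be fine-tuned rather than left frozen; the point of the sketch is only that the update touches a small, targeted set of coordinates instead of the whole network.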

From the abstract

Prior approaches to membership privacy preservation usually update or retrain all weights in a neural network, which is costly and can cause unnecessary utility loss, or even worsen the misalignment in predictions between training and non-training data. In this work, we observe three insights: i) privacy vulnerability exists in a very small fraction of weights; ii) however, most of those weights also critically impact utility; iii) the importance of weights stems from their location rather than their value.