AI & ML Breaks Assumption

Formalizes random cropping as a source of differential privacy, offering 'free' privacy amplification.

March 27, 2026

Original Paper

Amplified Patch-Level Differential Privacy for Free via Random Cropping

Kaan Durmaz, Jan Schuchardt, Sebastian Schmidt, Stephan Günnemann

arXiv · 2603.24695

The Takeaway

The paper proves that random cropping, a standard data augmentation, inherently provides patch-level privacy guarantees for spatially localized sensitive content (such as faces), letting practitioners claim stronger DP bounds without adding more noise or changing the training procedure.

From the abstract

Random cropping is one of the most common data augmentation techniques in computer vision, yet the role of its inherent randomness in training differentially private machine learning models has thus far gone unexplored. We observe that when sensitive content in an image is spatially localized, such as a face or license plate, random cropping can probabilistically exclude that content from the model's input. This introduces a third source of stochasticity in differentially private training with s […]
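To build intuition for the mechanism described above, here is a minimal sketch, not the paper's actual analysis: it computes the probability that a uniformly random square crop includes a localized sensitive patch, then plugs that inclusion probability into the classic amplification-by-subsampling bound ε' = log(1 + p·(e^ε − 1)) as an analogy. The function names and the use of that particular bound are illustrative assumptions; the paper's patch-level guarantee will differ in its details.

```python
import math

def crop_inclusion_prob(W, H, c, px, py, pw, ph):
    """Probability that a uniformly random c x c crop of a W x H image
    overlaps a sensitive patch with top-left (px, py) and size pw x ph.
    Assumes the crop's top-left corner is chosen uniformly at random."""
    nx, ny = W - c + 1, H - c + 1  # number of valid crop top-left positions
    # A crop at x overlaps the patch in the x-dimension iff the intervals
    # [x, x + c) and [px, px + pw) intersect; same reasoning for y.
    hits_x = sum(1 for x in range(nx) if x < px + pw and x + c > px)
    hits_y = sum(1 for y in range(ny) if y < py + ph and y + c > py)
    return (hits_x * hits_y) / (nx * ny)

def amplified_epsilon(eps, p):
    """Classic amplification-by-subsampling bound, used here only as an
    analogy: epsilon' = log(1 + p * (e^eps - 1)) for inclusion prob p."""
    return math.log(1.0 + p * (math.exp(eps) - 1.0))

# Example: 32x32 image, 16x16 random crop, 8x8 sensitive patch in a corner.
p = crop_inclusion_prob(32, 32, 16, 0, 0, 8, 8)
print(f"inclusion probability: {p:.3f}")                  # -> 0.221
print(f"eps=1.0 amplified to:  {amplified_epsilon(1.0, p):.3f}")
```

The takeaway from the sketch: the sensitive patch only enters the model's input in a fraction of crops, so, by the same logic that makes minibatch subsampling amplify privacy, the effective ε for that patch shrinks, with no extra noise added.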