Leverages model reprogramming as an 'active signal amplifier' to proactively audit privacy leakage in LLMs and Diffusion models.
April 1, 2026
Original Paper
ReproMIA: A Comprehensive Analysis of Model Reprogramming for Proactive Membership Inference Attacks
arXiv · 2603.28942
The Takeaway
Membership Inference Attacks (MIAs) are often too slow or too inaccurate for real-world auditing. By using model reprogramming to amplify the latent privacy footprints a model leaves on its training data, this framework achieves a major performance jump in the low false-positive-rate regime, making privacy-compliance auditing far more reliable.
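MIA performance in this regime is usually reported as the true-positive rate at a fixed, very low false-positive rate (TPR@FPR). A minimal sketch of that evaluation, assuming a generic score-thresholding attack where higher scores indicate membership (illustrative only, not the paper's method):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """TPR of a score-thresholding attack at a given FPR budget.

    Higher scores are assumed to indicate membership (e.g. negative loss).
    """
    nonmember_sorted = np.sort(nonmember_scores)[::-1]  # descending
    # Pick a threshold that lets at most target_fpr of non-members through.
    k = int(np.floor(target_fpr * len(nonmember_sorted)))
    if k == 0:
        threshold = nonmember_sorted[0] + 1e-12  # stricter than the max
    else:
        threshold = nonmember_sorted[k - 1]
    return float(np.mean(np.asarray(member_scores) > threshold))

# Synthetic scores: members slightly separated from non-members.
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
print(tpr_at_fpr(members, nonmembers, target_fpr=0.001))
```

At an FPR budget of 0.1%, even a modest TPR is meaningful for auditing, since a flagged sample is then very likely a true training member.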
From the abstract
The pervasive deployment of deep learning models across critical domains has concurrently intensified privacy concerns due to their inherent propensity for data memorization. While Membership Inference Attacks (MIAs) serve as the gold standard for auditing these privacy vulnerabilities, conventional MIA paradigms are increasingly constrained by the prohibitive computational costs of shadow model training and a precipitous performance degradation under low False Positive Rate constraints. […]