AI & ML Efficiency Breakthrough

Unifies leading membership inference attacks into a single framework and uses Bayesian variance inference to enable privacy auditing with 10x less compute.

arXiv · March 13, 2026 · 2603.11799

Rickard Brännvall

Why it matters

Privacy auditing is becoming a regulatory requirement, but current state-of-the-art attacks require training many 'shadow models', which is prohibitively expensive for large LLMs. BaVarIA stabilizes variance estimation at low budgets, making rigorous privacy testing practical for production-scale models.
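The summary does not spell out how BaVarIA's Bayesian variance inference works, but the generic mechanism it alludes to can be illustrated with a standard conjugate update: with only a few shadow-model scores, a plug-in variance estimate is noisy, while the posterior mean under an Inverse-Gamma prior shrinks it toward a prior scale. The prior parameters `alpha0` and `beta0` below are illustrative assumptions, not values from the paper.

```python
def shrunk_variance(samples, alpha0=2.0, beta0=1.0, mu=0.0):
    """Posterior-mean variance of a Gaussian with known mean mu under an
    Inverse-Gamma(alpha0, beta0) prior -- a textbook conjugate update,
    shown here only to illustrate Bayesian stabilization at low budgets."""
    n = len(samples)
    ss = sum((x - mu) ** 2 for x in samples)          # sum of squared deviations
    alpha_n = alpha0 + n / 2.0                        # posterior shape
    beta_n = beta0 + ss / 2.0                         # posterior scale
    return beta_n / (alpha_n - 1.0)                   # posterior mean of sigma^2

# With only two shadow-model scores the raw sample variance is unreliable;
# the posterior mean pulls the estimate toward the prior scale beta0/(alpha0-1).
few_scores = [0.1, -0.1]
print(shrunk_variance(few_scores))  # -> 0.505
```

As the number of shadow models grows, the data term dominates and the estimate converges to the usual empirical variance, so the prior only matters in exactly the low-compute regime the summary highlights.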

From the abstract

Membership inference attacks (MIAs) are becoming standard tools for auditing the privacy of machine learning models. The leading attacks -- LiRA (Carlini et al., 2022) and RMIA (Zarifzadeh et al., 2024) -- appear to use distinct scoring strategies, while the recently proposed BASE (Lassila et al., 2025) was shown to be equivalent to RMIA, making it difficult for practitioners to choose among them. We show that all three are instances of a single exponential-family log-likelihood ratio framework …
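The abstract's unifying object is a log-likelihood ratio over a model's per-example score. A minimal sketch of that core, assuming Gaussian member (IN) and non-member (OUT) score distributions as in LiRA; the specific means and variances are placeholder inputs, since estimating them is precisely where the attacks differ:

```python
import math

def gaussian_llr(score, mu_in, sigma_in, mu_out, sigma_out):
    """Log-likelihood ratio of an observed model score under a Gaussian
    IN (member) distribution versus a Gaussian OUT (non-member) one.
    Positive values are evidence that the example was in training data."""
    def logpdf(x, mu, sigma):
        return -0.5 * math.log(2 * math.pi * sigma ** 2) \
               - (x - mu) ** 2 / (2 * sigma ** 2)
    return logpdf(score, mu_in, sigma_in) - logpdf(score, mu_out, sigma_out)

# A score closer to the IN mean than the OUT mean yields a positive LLR.
print(gaussian_llr(2.5, mu_in=3.0, sigma_in=1.0,
                   mu_out=0.0, sigma_out=1.0))  # -> 3.0
```

Under this view, choosing among LiRA, RMIA, and BASE reduces to choosing how the IN/OUT distribution parameters are estimated from a shadow-model budget, which is what makes a unified treatment useful to practitioners.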