AI & ML Breaks Assumption

Disproves the common assumption that bottom models in Vertical Federated Learning effectively represent private labels.

March 20, 2026

Original Paper

Revisiting Label Inference Attacks in Vertical Federated Learning: Why They Are Vulnerable and How to Defend

Yige Liu, Dexuan Xu, Zimai Guo, Yongzhi Cao, Hanpin Wang

arXiv · 2603.18680

The Takeaway

The paper identifies the 'model compensation' phenomenon, showing that label inference attacks actually exploit feature-label distribution alignment rather than the bottom model's representations. This discovery enables a zero-overhead defense: simply adjusting the model's cut layer.
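To make the cut-layer idea concrete, here is a minimal sketch of how a VFL model is split between parties. All layer sizes and the `split_at_cut` helper are hypothetical illustrations, not taken from the paper: the point is only that moving the cut earlier shifts layers from the passive party's bottom model into the active party's top model, at no extra training or communication cost.

```python
import numpy as np

# Illustrative stack of dense layers: input -> hidden layers -> label logits.
LAYER_SIZES = [8, 16, 16, 8, 2]

def split_at_cut(weights, cut):
    """Bottom model (passive party) holds layers [0, cut); top model (active
    party) holds the rest. The cut index is the 'cut layer'."""
    return weights[:cut], weights[cut:]

rng = np.random.default_rng(0)
weights = [rng.standard_normal((a, b))
           for a, b in zip(LAYER_SIZES[:-1], LAYER_SIZES[1:])]

# A deep cut leaves the passive party holding most of the network.
bottom_deep, top_deep = split_at_cut(weights, 3)

# The zero-overhead defense described above: move the cut layer earlier,
# so more layers live on the active party's side.
bottom_shallow, top_shallow = split_at_cut(weights, 1)

print(len(bottom_deep), len(top_deep))        # 3 layers vs. 1 layer
print(len(bottom_shallow), len(top_shallow))  # 1 layer vs. 3 layers
```

Nothing about the forward pass changes; only where the network is partitioned between the two parties.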

From the abstract

Vertical federated learning (VFL) allows an active party with a top model and multiple passive parties with bottom models to collaborate. In this scenario, passive parties possessing only features may attempt to infer the active party's private labels, making label inference attacks (LIAs) a significant threat. Previous LIA studies have claimed that well-trained bottom models can effectively represent labels. However, we demonstrate that this view is misleading and exposes the vulnerability of existing…