AI & ML · Breaking a Long-Held Assumption

This study shows that even with a 'perfect' noise transition matrix, statistically consistent noise-correction methods still suffer performance collapse.

arXiv · March 16, 2026 · 2603.12997

Chen Feng, Zhuo Zhi, Zhao Huang, Jiawei Ge, Ling Xiao, Nicu Sebe, Georgios Tzimiropoulos, Ioannis Patras

Why it matters

It dismantles a decade-old assumption in learning with noisy labels, showing that the failure is rooted in microscopic optimization dynamics rather than merely in estimation error, steering researchers away from a dead-end theoretical direction.

From the abstract

Statistically consistent methods based on the noise transition matrix ($T$) offer a theoretically grounded solution to Learning with Noisy Labels (LNL), with guarantees of convergence to the optimal clean-data classifier. In practice, however, these methods are often outperformed by empirical approaches such as sample selection, and this gap is usually attributed to the difficulty of accurately estimating $T$. The common assumption is that, given a perfect $T$, noise-correction methods would rec…
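For context, the "statistically consistent methods based on $T$" the abstract refers to include forward loss correction, where the model's clean-class posterior is mapped through $T$ before computing the loss against the noisy label. A minimal NumPy sketch of this idea (illustrative only; the function name and toy values are ours, not the paper's code):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_corrected_nll(logits, noisy_label, T):
    """Forward loss correction: push the predicted clean-class
    posterior p through the noise transition matrix, where
    T[i, j] = P(noisy label = j | clean label = i), and take the
    negative log-likelihood of the observed noisy label under T^T p.
    With the true T, minimizing this loss is statistically
    consistent for the clean-data classifier."""
    p_clean = softmax(logits)
    p_noisy = T.T @ p_clean  # distribution over noisy labels
    return -np.log(p_noisy[noisy_label])

# Toy example: 2 classes with 20% symmetric label noise.
T = np.array([[0.8, 0.2],
              [0.2, 0.8]])
loss = forward_corrected_nll(np.array([2.0, -1.0]), noisy_label=0, T=T)
```

The study's point is that even when `T` here is exact, the optimization dynamics of training under this corrected loss can still collapse in practice.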