AI & ML Breaks Assumption

LACE enables continual learning models to automatically expand their own capacity by monitoring loss signals during training.

March 31, 2026

Original Paper

LACE: Loss-Adaptive Capacity Expansion for Continual Learning

Shivnath Tathe

arXiv · 2603.28611

The Takeaway

The paper challenges the need for fixed-width architectures in lifelong learning, proposing an online mechanism that detects domain shifts with 100% precision (as reported by the authors) and expands the model only when necessary.

From the abstract

Fixed representational capacity is a fundamental constraint in continual learning: practitioners must guess an appropriate model width before training, without knowing how many distinct concepts the data contains. We propose LACE (Loss-Adaptive Capacity Expansion), a simple online mechanism that expands a model's representational capacity during training by monitoring its own loss signal. When sustained loss deviation exceeds a threshold - indicating that the current capacity is insufficient for …
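The core idea — expand only when the loss stays elevated, not on a single noisy spike — can be sketched as a small trigger that tracks a loss baseline and fires after a sustained deviation. This is an illustrative sketch, not the paper's exact rule: the threshold, patience window, and EMA baseline are all assumptions made for the example.

```python
class ExpansionTrigger:
    """Hedged sketch of a loss-based capacity-expansion trigger.

    Fires when the training loss stays `threshold` above a running
    baseline for `patience` consecutive steps. Parameter names and the
    EMA baseline are illustrative assumptions, not from the paper.
    """

    def __init__(self, threshold=0.5, patience=10, alpha=0.99):
        self.threshold = threshold  # allowed deviation above the baseline
        self.patience = patience    # steps the deviation must persist
        self.alpha = alpha          # EMA smoothing for the baseline
        self.baseline = None
        self.streak = 0

    def update(self, loss):
        """Feed one training-step loss; return True when expansion is due."""
        if self.baseline is None:
            self.baseline = loss
            return False
        if loss - self.baseline > self.threshold:
            self.streak += 1
        else:
            # Loss looks in-distribution: reset the streak and keep
            # tracking the baseline with an exponential moving average.
            self.streak = 0
            self.baseline = self.alpha * self.baseline + (1 - self.alpha) * loss
        if self.streak >= self.patience:
            # Signal expansion, then re-estimate the baseline afresh
            # once the (now wider) model resumes training.
            self.streak = 0
            self.baseline = None
            return True
        return False
```

In use, the trigger is queried once per optimizer step; when a new domain arrives and the loss jumps and stays high, it fires exactly once, after which the caller would widen the model and training continues:

```python
trigger = ExpansionTrigger(threshold=0.5, patience=3)
losses = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 1.0]  # loss jump at a domain shift
fired = [trigger.update(l) for l in losses]
# fires once, on the third sustained high-loss step
```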