AI & ML Breaks Assumption

Shows that a simple pruned adaptation module (PAM) outperforms complex SOTA foundation-model-based continual learning methods.

March 24, 2026

Original Paper

Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models

Elif Ceren Gok Yildirim, Murat Onur Yildirim, Joaquin Vanschoren

arXiv · 2603.21170

The Takeaway

PAM challenges the recent trend toward increasingly complex continual learning architectures. By freezing most of a pre-trained model and adding sparse, task-specific adaptation layers, it reduces parameters by 6x while delivering better results, setting a new "true" baseline for the field.
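The mechanics can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the real PAM operates on a pre-trained foundation model, while this toy version stands in a fixed random linear "backbone" for the frozen network and a binary pruning mask for the sparse task-specific layer. The names (`make_pruned_adapter`, `forward`) and the 20% keep ratio are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: fixed weights shared
# across all tasks and never updated (here, just random values).
D_IN, D_FEAT = 64, 32
backbone_W = rng.standard_normal((D_IN, D_FEAT))

def make_pruned_adapter(d_feat, n_classes, keep_ratio=0.2, seed=0):
    """Build a task-specific linear head sparsified by a fixed binary
    mask that keeps roughly `keep_ratio` of the weights (hypothetical
    pruning scheme; the paper's criterion may differ)."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((d_feat, n_classes))
    mask = r.random((d_feat, n_classes)) < keep_ratio  # prune ~80%
    return W * mask, mask

def forward(x, adapter_W):
    feats = np.maximum(x @ backbone_W, 0.0)  # frozen features (ReLU)
    return feats @ adapter_W                 # sparse task-specific head

# One adapter per task; only its unpruned weights would be trained.
adapter_W, mask = make_pruned_adapter(D_FEAT, n_classes=10, seed=1)
x = rng.standard_normal((4, D_IN))
logits = forward(x, adapter_W)

dense_params = adapter_W.size
kept_params = int(mask.sum())
print(logits.shape, kept_params, "/", dense_params, "weights kept")
```

The point of the sketch is the parameter accounting: per task, only the masked-in adapter weights are trainable, so storage and compute for task-specific parameters shrink in proportion to the keep ratio while the backbone stays untouched.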

From the abstract

The continual learning literature has rapidly shifted from traditional class incremental learning (CIL) techniques to foundation model (FM)-based CIL methods without a clear understanding of how these newer approaches compare to strong, lightweight convolutional baselines. This abrupt transition has created a substantial methodological gap, making it difficult to assess whether recent FM-based CIL progress reflects genuine advances or merely the absence of rigorous baselines. To address this gap …