AI & ML New Capability

SCAN enables reliable sequential knowledge editing in LLMs for up to 3,000 edits without the catastrophic forgetting or model collapse seen in current methods.

arXiv · March 17, 2026 · 2603.15226

Yuhuan Liu, Haitian Zhong, Xinyuan Xia, Qiang Liu, Shu Wu, Liang Wang

The Takeaway

SCAN moves from dense parameter intervention to mechanism-aware manipulation, using Sparse Transcoders to isolate the knowledge circuits that encode a given fact. This allows practitioners to update a model's facts continuously over time while preserving general reasoning capability on benchmarks like MMLU.
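A minimal sketch of that idea in PyTorch: a (hypothetical, untrained) sparse encoder stands in for a transcoder, its most active features for a target fact serve as anchors, and the weight update is masked so only the anchored rows change. The module names, dimensions, and top-k threshold are illustrative assumptions, not the paper's implementation.

```python
import torch

torch.manual_seed(0)

d_model, d_sparse = 64, 512

# Stand-in for a trained sparse transcoder: an encoder mapping dense
# activations into a wide, mostly-inactive feature space. (Hypothetical;
# a real transcoder would be trained, not randomly initialized.)
encoder = torch.nn.Linear(d_model, d_sparse)

def active_features(activation: torch.Tensor, top_k: int = 8) -> torch.Tensor:
    """Return indices of the k most active sparse features for one activation."""
    codes = torch.relu(encoder(activation))
    return torch.topk(codes, top_k).indices

# Dense activation recorded at the edited layer for the target fact.
fact_activation = torch.randn(d_model)
anchors = active_features(fact_activation)

# A dense edit would apply the full update to every row of the weight
# matrix; the anchor-restricted edit masks it to the features the
# transcoder attributes to this fact, leaving all other rows untouched.
W = torch.randn(d_sparse, d_model)
full_update = 0.01 * torch.randn_like(W)

mask = torch.zeros(d_sparse, 1)
mask[anchors] = 1.0
W_edited = W + mask * full_update  # only anchored rows change

print(f"rows modified: {int(mask.sum().item())} / {d_sparse}")
```

The mask provides the invariant that dense methods lack: every non-anchored row is bit-for-bit unchanged after the edit.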

From the abstract

Large Language Models (LLMs) often suffer from catastrophic forgetting and collapse during sequential knowledge editing. This vulnerability stems from the prevailing dense editing paradigm, which treats models as black boxes and relies on coarse-grained parameter interventions that inevitably disrupt preserved knowledge. To address this, we propose SCAN (a sparse editing framework based on Sparse Circuit Anchored Neuron), which transforms editing into a mechanism-aware manipulation by constructing …
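The abstract's contrast can be made concrete with a toy numerical sketch (synthetic weights, not the paper's method or models): apply 3,000 sequential random edits either densely or masked to a few anchored rows, then measure how far outputs on unrelated probe inputs drift from their pre-edit values.

```python
import torch

torch.manual_seed(0)
d = 128
W_dense = torch.eye(d)
W_sparse = torch.eye(d)
probe = torch.randn(32, d)             # stands in for unrelated, preserved knowledge
baseline = probe @ W_dense.T           # pre-edit behavior on the probes

for _ in range(3000):
    update = 0.001 * torch.randn(d, d)
    W_dense += update                  # dense edit: perturbs every parameter
    mask = torch.zeros(d, 1)
    mask[torch.randint(d, (4,))] = 1.0
    W_sparse += mask * update          # anchored edit: perturbs ~4 rows

def drift(W: torch.Tensor) -> float:
    """Relative change in probe outputs after all edits."""
    return ((probe @ W.T - baseline).norm() / baseline.norm()).item()

print(f"dense drift:  {drift(W_dense):.3f}")
print(f"sparse drift: {drift(W_sparse):.3f}")
```

The dense variant accumulates interference across every parameter, while the masked variant touches only about 4 of 128 rows per step, so its probe outputs drift far less over the editing sequence.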