AI & ML Paradigm Shift

Uses Sparse Autoencoders (SAEs) to disentangle and modulate bias-relevant features in Vision-Language Models without retraining.

March 20, 2026

Original Paper

SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

Quentin Guimard, Federico Bartsch, Simone Caldarella, Rahaf Aljundi, Elisa Ricci, Massimiliano Mancini

arXiv · 2603.19028

The Takeaway

Instead of applying blunt projections in the dense embedding space, which degrade model performance, SEM identifies and modulates specific 'bias neurons' in a sparse latent space. This represents a major shift toward using mechanistic interpretability tools for practical, post-hoc model debiasing and safety interventions.
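To make the mechanism concrete, here is a minimal sketch of what SAE-based modulation of a CLIP embedding might look like in PyTorch. The architecture, dimensions, and bias-feature indices below are illustrative assumptions, not the paper's implementation; in practice the SAE would be pretrained on CLIP embeddings and the bias-linked features identified separately (e.g., by their correlation with a protected attribute).

```python
# Hypothetical sketch of sparse-space debiasing; not the authors' code.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete SAE: dense embedding -> sparse code -> reconstruction."""
    def __init__(self, d_embed: int = 512, d_sparse: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_embed, d_sparse)
        self.decoder = nn.Linear(d_sparse, d_embed)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps the code non-negative and sparse.
        return torch.relu(self.encoder(x))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)

def modulate(embedding: torch.Tensor,
             sae: SparseAutoencoder,
             bias_feature_idx: list[int],
             scale: float = 0.0) -> torch.Tensor:
    """Suppress (scale=0) or dampen (0<scale<1) bias-linked sparse features,
    then map back to the dense embedding space. CLIP itself is untouched."""
    z = sae.encode(embedding)
    z[..., bias_feature_idx] *= scale   # modulate only the flagged features
    return sae.decode(z)

if __name__ == "__main__":
    sae = SparseAutoencoder()               # assume pretrained weights loaded here
    clip_embedding = torch.randn(1, 512)    # placeholder for a real CLIP embedding
    debiased = modulate(clip_embedding, sae, bias_feature_idx=[12, 847])
    print(debiased.shape)                   # torch.Size([1, 512])
```

Because sparse features are trained to be far more monosemantic than raw embedding dimensions, attenuating a handful of them can, in principle, remove bias-linked information with less collateral damage than projecting out an entire subspace of the dense space.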

From the abstract

Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc…