AI & ML New Capability

Ensures safe Vision-Language Model generation without over-refusal by steering activations within the null-space of benign inputs.

March 24, 2026

Original Paper

Principled Steering via Null-space Projection for Jailbreak Defense in Vision-Language Models

Xingyu Zhu, Beier Zhu, Shuo Wang, Junfeng Fang, Kesen Zhao, Hanwang Zhang, Xiangnan He

arXiv · 2603.22094

The Takeaway

The method addresses a common failure mode of safety steering: degraded performance on benign prompts (over-refusal). By projecting the steering vector into the null space of the subspace spanned by benign activations, it isolates the 'harmful' directions from the 'benign' ones, preserving utility while robustly defending against jailbreaks.
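The core idea can be sketched in a few lines of numpy: estimate a rank-k benign subspace from activations collected on benign prompts (via SVD), then project the steering vector onto its orthogonal complement. This is an illustrative sketch, not the paper's exact recipe; the rank and the random stand-in data are assumptions.

```python
import numpy as np

def null_space_projector(benign_acts: np.ndarray, rank: int) -> np.ndarray:
    """Projector onto the orthogonal complement of the top-`rank`
    principal directions of the benign activations."""
    # benign_acts: (n_samples, d) hidden states collected on benign prompts
    _, _, vt = np.linalg.svd(benign_acts, full_matrices=False)
    v_k = vt[:rank].T                 # (d, rank) basis of the benign subspace
    return np.eye(benign_acts.shape[1]) - v_k @ v_k.T

rng = np.random.default_rng(0)
benign = rng.normal(size=(128, 64))   # stand-in for real benign activations
steer = rng.normal(size=64)           # stand-in for a raw refusal steering vector

P = null_space_projector(benign, rank=16)
steer_safe = P @ steer                # component orthogonal to the benign subspace
```

Adding `steer_safe` (instead of the raw vector) to activations leaves the dominant benign directions untouched by construction, which is the mechanism behind the claimed utility preservation.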

From the abstract

As vision-language models (VLMs) are increasingly deployed in open-world scenarios, they can be easily induced by visual jailbreak attacks to generate harmful content, posing serious risks to model safety and trustworthy usage. Recent activation steering methods inject directional vectors into model activations during inference to induce refusal behaviors and have demonstrated effectiveness. However, a steering vector may both enhance refusal ability and cause over-refusal, thereby degrading model […]
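The activation steering the abstract refers to amounts to adding a scaled direction vector to the hidden states at some layer during inference. A minimal numpy sketch, where the steering direction and scale `alpha` are illustrative assumptions (in practice the direction is often estimated as a mean difference between harmful and benign activations):

```python
import numpy as np

def steer_activations(hidden: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Add a scaled, unit-normalized steering direction to every token's hidden state."""
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit      # broadcast the same shift across all tokens

rng = np.random.default_rng(1)
hidden = rng.normal(size=(8, 64))     # (seq_len, d) hidden states at one layer
refusal_dir = rng.normal(size=64)     # stand-in refusal direction

steered = steer_activations(hidden, refusal_dir, alpha=4.0)
```

Because the same shift is applied regardless of whether the input is harmful or benign, an unprojected vector can push benign prompts toward refusal as well, which is exactly the over-refusal problem the paper's null-space projection is designed to avoid.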