AI & ML Paradigm Shift

Introduces modular, composable safety alignment via learnable control tokens rather than static parameter-level tuning.

March 18, 2026

Original Paper

MOSAIC: Composable Safety Alignment with Modular Control Tokens

Jingyu Peng, Hongyu Chen, Jiancheng Dong, Maolin Wang, Wenxi Li, Yuchen Li, Kai Zhang, Xiangyu Zhao

arXiv · 2603.16210

The Takeaway

Current safety alignment is rigid and difficult to adjust for different regions or use cases. MOSAIC lets developers compose multiple safety constraints at inference time by activating the corresponding control tokens, preserving base-model utility while enabling context-dependent safety policies.
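To make the composition idea concrete, here is a minimal string-level sketch. The token names and the compose_prompt helper are illustrative assumptions, not from the paper; in MOSAIC each control token would be a learned embedding the model was trained to associate with one safety behavior, whereas this sketch only shows the inference-time composition step.

```python
# Hypothetical control tokens, one per safety policy.
# Names are invented for illustration; the paper's actual tokens may differ.
SAFETY_TOKENS = {
    "self_harm": "<CTRL:SELF_HARM>",
    "medical": "<CTRL:MEDICAL>",
    "minors": "<CTRL:MINORS>",
}

def compose_prompt(user_prompt: str, active_policies: list) -> str:
    """Prepend the control token for each active safety policy.

    Activating a different subset of tokens yields a different composed
    safety policy at inference time, with no parameter updates.
    """
    controls = "".join(SAFETY_TOKENS[p] for p in active_policies)
    return controls + user_prompt

# A deployment serving minors in a medical context activates two policies:
prompt = compose_prompt("How do I treat a burn?", ["medical", "minors"])
print(prompt)  # <CTRL:MEDICAL><CTRL:MINORS>How do I treat a burn?
```

The design point this illustrates is that policy selection happens per request, so the same base model can serve different regions or applications by swapping which tokens are activated.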

From the abstract

Safety alignment in large language models (LLMs) is commonly implemented as a single static policy embedded in model parameters. However, real-world deployments often require context-dependent safety rules that vary across users, regions, and applications. Existing approaches struggle to provide such conditional control: parameter-level alignment entangles safety behaviors with general capabilities, while prompt-based methods rely on natural language instructions that provide weak enforcement.