This paper proposes a method to align and personalize LLMs directly from raw user interactions using self-distillation, bypassing the need for explicit human labels or RLHF.
arXiv · March 16, 2026 · 2603.12273
Why it matters
It shifts the alignment paradigm from static, expensive preference datasets to continuous learning from deployment logs. It demonstrates that a model's ability to correct itself in multi-turn chats can be distilled back into its base policy for improved instruction following and personalization.
From the abstract
Multi-turn user interactions are among the most abundant data produced by language models, yet we lack effective methods to learn from them. While typically discarded, these interactions often contain useful information: follow-up user messages may indicate that a response was incorrect, failed to follow an instruction, or did not align with the user's preferences. Importantly, language models are already able to make use of this information in context. After observing a user's follow-up, the sa…
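The abstract's core idea, mining deployment logs for follow-ups that signal a bad response and distilling the model's own in-context correction back into the single-turn policy, can be sketched as a data-construction step. This is a minimal illustration, not the paper's actual pipeline: the `Turn` structure, the keyword-based feedback heuristic, and all function names are assumptions for the example.

```python
# Hypothetical sketch of the data-construction step implied by the abstract:
# find turns where a user's follow-up signals a problem, then pair the
# original prompt with the model's self-corrected reply as supervised
# training data for the base policy. The keyword heuristic below is a
# crude stand-in for whatever feedback signal the paper actually uses.
from dataclasses import dataclass

@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str

# Illustrative cues that a follow-up is corrective (assumed, not from the paper).
NEGATIVE_CUES = ("that's wrong", "not what i asked", "too long", "no,")

def follow_up_is_corrective(msg: str) -> bool:
    m = msg.lower()
    return any(cue in m for cue in NEGATIVE_CUES)

def mine_distillation_pairs(log: list[Turn]) -> list[tuple[str, str]]:
    """Scan a conversation for (original prompt, corrected reply) pairs.

    Pattern: user asks -> assistant answers -> user pushes back ->
    assistant self-corrects in context. The corrected reply becomes the
    training target for the original prompt, so the in-context fix can
    be distilled back into the single-turn policy.
    """
    pairs = []
    for i in range(len(log) - 3):
        a, b, c, d = log[i:i + 4]
        if (a.role, b.role, c.role, d.role) == ("user", "assistant", "user", "assistant") \
                and follow_up_is_corrective(c.text):
            pairs.append((a.text, d.text))  # train: prompt -> corrected reply
    return pairs

log = [
    Turn("user", "Summarize this in two sentences."),
    Turn("assistant", "Here is a five-paragraph summary..."),
    Turn("user", "Too long, not what I asked for."),
    Turn("assistant", "A concise two-sentence summary."),
]
print(mine_distillation_pairs(log))
# -> [('Summarize this in two sentences.', 'A concise two-sentence summary.')]
```

The mined pairs would then feed an ordinary fine-tuning loop, which is what lets the approach run continuously on deployment logs without separate preference labels.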