Identifies that the direction of log-probability change matters more than its magnitude for improving LLM reasoning via RL.
March 24, 2026
Original Paper
On the Direction of RLVR Updates for LLM Reasoning: Identification and Exploitation
arXiv · 2603.22117
The Takeaway
The paper introduces a test-time extrapolation method that improves reasoning accuracy by amplifying specific policy directions identified during RL training. It shifts the focus from simply increasing probability to targeting the most 'meaningful' directional updates.
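The summary above doesn't spell out the mechanics, but test-time extrapolation of this kind is commonly implemented by amplifying the gap between the RL-tuned and base policies at each decoding step. Here is a minimal sketch under that assumption; the function name, the linear form of the amplification, and the coefficient `alpha` are all illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def extrapolated_log_probs(base_logits: torch.Tensor,
                           rl_logits: torch.Tensor,
                           alpha: float = 1.5) -> torch.Tensor:
    """Amplify the RL-induced update direction at decode time.

    alpha = 1.0 recovers the RL policy; alpha > 1.0 extrapolates
    further along the direction the RL update moved each token's
    log-probability. (Hypothetical sketch, not the paper's method.)
    """
    base_lp = F.log_softmax(base_logits, dim=-1)
    rl_lp = F.log_softmax(rl_logits, dim=-1)
    # Step alpha times the signed log-prob difference beyond the base
    # policy, then renormalize so the result is a valid distribution.
    extrapolated = base_lp + alpha * (rl_lp - base_lp)
    return F.log_softmax(extrapolated, dim=-1)
```

At each decoding step one would then sample from `extrapolated_log_probs(...)` instead of the RL policy's own distribution, at the cost of a second forward pass through the base model.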
From the abstract
Reinforcement learning with verifiable rewards (RLVR) has substantially improved the reasoning capabilities of large language models. While existing analyses identify that RLVR-induced changes are sparse, they primarily focus on the magnitude of these updates, largely overlooking their direction. In this work, we argue that the direction of updates is a more critical lens for understanding RLVR's effects, which can be captured by the signed, token-level log probability differences.
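Spelled out, the quantity the abstract points to would be (a reconstruction from the abstract's wording; the labels for the post- and pre-RLVR policies are my own):

```latex
\Delta_t \;=\; \log \pi_{\mathrm{RL}}(y_t \mid x, y_{<t}) \;-\; \log \pi_{\mathrm{base}}(y_t \mid x, y_{<t})
```

The sign of \Delta_t gives the direction of the update for token y_t (promoted or suppressed), while |\Delta_t| gives its size, the quantity prior analyses focused on.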