Enables reinforcement learning for robots on long-horizon tasks across diverse settings without manual reward engineering.
April 2, 2026
Original Paper
Generalizable Dense Reward for Long-Horizon Robotic Tasks
arXiv · 2604.00055
The Takeaway
By combining VLM-based task-progress recognition with policy self-certainty rewards, this framework lets robots learn from failure in the real world without human-coded reward functions, improving success rates by 56%.
From the abstract
Existing robotic foundation policies are trained primarily via large-scale imitation learning. While such models demonstrate strong capabilities, they often struggle with long-horizon tasks due to distribution shift and error accumulation. Reinforcement learning (RL) can finetune these models, but it does not work well across diverse tasks without manual reward engineering. We propose VLLR, a dense reward framework combining (1) an extrinsic reward from Large Language Models (LLMs) and Vision-La
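The core idea, combining an extrinsic task-progress score with an intrinsic policy self-certainty term into one dense reward, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `vlm_progress` stands in for a real VLM-based progress scorer, self-certainty is modeled here as the negative entropy of the policy's action distribution, and the weights `alpha`/`beta` are hypothetical.

```python
import math

def vlm_progress(frame: dict) -> float:
    # Stand-in for a VLM that scores task progress in [0, 1];
    # here a dummy that reads a precomputed "progress" field.
    return float(frame["progress"])

def self_certainty(action_probs: list[float]) -> float:
    # Intrinsic term: negative entropy of the policy's action
    # distribution -- larger (closer to 0) when the policy is
    # confident, more negative when it is uncertain.
    return sum(p * math.log(max(p, 1e-8)) for p in action_probs)

def dense_reward(frame: dict, action_probs: list[float],
                 alpha: float = 1.0, beta: float = 0.1) -> float:
    # Weighted sum of the extrinsic progress signal and the
    # intrinsic self-certainty term; weights are illustrative.
    return alpha * vlm_progress(frame) + beta * self_certainty(action_probs)
```

At the same task progress, a confident policy (peaked action distribution) receives a higher combined reward than an uncertain one, which is the shaping effect the self-certainty term is meant to provide.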