DreamerAD accelerates imagination-based training for autonomous driving by 80x, compressing 100-step diffusion sampling down to a single step.
March 26, 2026
Original Paper
DreamerAD: Efficient Reinforcement Learning via Latent World Model for Autonomous Driving
arXiv · 2603.24587
The Takeaway
This is the first latent world model to make high-frequency RL interaction practical for driving. Policies train entirely in a high-fidelity 'imagined' latent space that yields physically plausible trajectories, achieving SOTA performance on NavSim v2 with significantly less compute.
From the abstract
We introduce DreamerAD, the first latent world model framework that enables efficient reinforcement learning for autonomous driving by compressing diffusion sampling from 100 steps to 1, achieving an 80x speedup while maintaining visual interpretability. Training RL policies on real-world driving data incurs prohibitive costs and safety risks. While existing pixel-level diffusion world models enable safe imagination-based training, they suffer from multi-step diffusion inference latency (2s/frame).
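The reported numbers are consistent with simple latency arithmetic: at 2s per frame over 100 denoising steps, each step costs about 20ms, and a one-step sampler with a small fixed per-frame overhead lands near an 80x (rather than a naive 100x) speedup. A minimal sketch, where the overhead value is an illustrative assumption and not from the paper:

```python
# Back-of-envelope: why collapsing 100 diffusion steps to 1 yields
# roughly 80x (not 100x) per imagined frame. Overhead is assumed.
STEPS_BASELINE = 100        # denoising steps in the baseline world model
FRAME_LATENCY = 2.0         # seconds per frame reported for the baseline
per_step = FRAME_LATENCY / STEPS_BASELINE   # ~0.02 s per denoising step

# With one-step sampling, fixed per-frame costs (encoding, policy
# inference, etc.) start to matter; 5 ms is a hypothetical figure.
overhead = 0.005                            # assumed fixed cost (s)
one_step_latency = per_step + overhead      # ~0.025 s per frame

speedup = FRAME_LATENCY / one_step_latency
print(f"{speedup:.0f}x")                    # -> 80x under these assumptions
```

Any residual gap between the step-count ratio and the measured speedup comes from such fixed costs, which one-step sampling cannot remove.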