AI & ML Efficiency Breakthrough

Scales imitation learning data efficiency by recording each expert trajectory with multiple synchronized camera views, turning a single demonstration into many without extra human effort.

April 2, 2026

Original Paper

Multi-Camera View Scaling for Data-Efficient Robot Imitation Learning

Yichen Xie, Yixiao Wang, Shuqi Zhao, Cheng-En Wu, Masayoshi Tomizuka, Jianwen Xie, Hao-Shu Fang

arXiv · 2604.00557

The Takeaway

Data collection is the primary bottleneck in robotics; this paper shows that camera-view scaling provides 'free' diversity that significantly improves policy generalization. It allows single-view policies to achieve multi-view robustness with no additional human effort.
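The core idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a demonstration recorded by K synchronized cameras is split into K single-view training samples that share the same action labels, multiplying the dataset without any additional teleoperation.

```python
# Hypothetical sketch of camera-view scaling: one expert trajectory
# recorded by K synchronized cameras becomes K single-view samples.

def scale_views(demos):
    """demos: list of dicts with shared 'actions' and per-camera 'views'.
    Returns one single-view training sample per (demo, camera) pair."""
    samples = []
    for demo in demos:
        for cam_id, frames in demo["views"].items():
            samples.append({
                "camera": cam_id,
                "frames": frames,           # image sequence from this view
                "actions": demo["actions"], # action labels shared across views
            })
    return samples

# One trajectory seen from three cameras yields three training samples.
demo = {
    "actions": ["reach", "grasp", "lift"],
    "views": {
        "front": ["f0", "f1", "f2"],
        "left":  ["l0", "l1", "l2"],
        "wrist": ["w0", "w1", "w2"],
    },
}
print(len(scale_views([demo])))  # → 3
```

Each resulting sample looks like an ordinary single-view demonstration to the policy, which is what lets a single-view policy absorb multi-view diversity at training time.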

From the abstract

The generalization ability of imitation learning policies for robotic manipulation is fundamentally constrained by the diversity of expert demonstrations, while collecting demonstrations across varied environments is costly and difficult in practice. In this paper, we propose a practical framework that exploits inherent scene diversity without additional human effort by scaling camera views during demonstration collection. Instead of acquiring more trajectories, multiple synchronized camera pers