AI & ML Paradigm Shift

Infrastructure-taught 3D perception uses static roadside sensors as unsupervised teachers for moving vehicles, eliminating the need for manual labels.

March 18, 2026

Original Paper

When the City Teaches the Car: Label-Free 3D Perception from Infrastructure

Zhen Xu, Jinsu Yoo, Cristian Bautista, Zanming Huang, Tai-Yu Pan, Zhenzhen Liu, Katie Z Luo, Mark Campbell, Bharath Hariharan, Wei-Lun Chao

arXiv · 2603.16742

The Takeaway

This shifts the annotation burden to the environment itself: the consistency of a fixed roadside viewpoint is exploited to generate pseudo-labels for ego vehicles. It offers a scalable answer to the cold-start problem of deploying autonomous fleets in new, unmapped cities without expensive human annotation.
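One way fixed-viewpoint consistency can yield labels for free is background subtraction: from a static roadside sensor, anything occupying the same space across many frames is scenery, and transient points are candidate traffic participants. The paper's actual pipeline is not detailed here, so the sketch below is an illustrative assumption, not the authors' method; the function name, voxel size, and threshold are all hypothetical.

```python
import numpy as np

def pseudo_labels_from_rsu(frames, voxel=0.5, bg_ratio=0.8):
    """Illustrative background subtraction for one static RSU viewpoint.

    frames: list of (N_i, 3) point clouds captured over time.
    Voxels occupied in >= bg_ratio of frames are treated as static
    background; the remaining points in each frame are candidate moving
    objects, returned as per-frame boolean foreground masks that could
    serve as pseudo-labels.
    """
    def occupied_voxels(pts):
        return {tuple(v) for v in np.floor(pts / voxel).astype(int)}

    # Count how often each voxel is occupied across the whole sequence.
    counts = {}
    for pts in frames:
        for k in occupied_voxels(pts):
            counts[k] = counts.get(k, 0) + 1

    threshold = bg_ratio * len(frames)
    background = {k for k, c in counts.items() if c >= threshold}

    # Points falling outside persistent voxels are flagged as foreground.
    masks = []
    for pts in frames:
        cells = np.floor(pts / voxel).astype(int)
        masks.append(np.array([tuple(c) not in background for c in cells]))
    return masks
```

Because the sensor never moves, no registration or ego-motion compensation is needed, which is precisely what makes infrastructure a cheap teacher compared to a moving vehicle's sensors.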

From the abstract

Building robust 3D perception for self-driving still relies heavily on large-scale data collection and manual annotation, yet this paradigm becomes impractical as deployment expands across diverse cities and regions. Meanwhile, modern cities are increasingly instrumented with roadside units (RSUs), static sensors deployed along roads and at intersections to monitor traffic. This raises a natural question: can the city itself help train the vehicle? We propose infrastructure-taught, label-free 3D …