AI & ML New Capability

Enables zero-shot monocular metric depth estimation across any camera type (fisheye, 360°, ERP) using a single unified model.

March 31, 2026

Original Paper

UniDAC: Universal Metric Depth Estimation for Any Camera

Girish Chandar Ganesan, Yuliang Guo, Liu Ren, Xiaoming Liu

arXiv · 2603.27105

The Takeaway

Existing depth models are highly sensitive to camera intrinsics and usually fail on non-standard optics. By decoupling relative depth from spatially varying scale and using distortion-aware embeddings, this framework allows reliable depth sensing for AR/VR and robotics across diverse hardware without per-camera retraining.
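The core idea, decoupling a camera-agnostic relative depth from a spatially varying metric scale, can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the function names are hypothetical, and a simple pinhole ray map stands in for the paper's distortion-aware camera embedding (which would also cover fisheye and ERP optics).

```python
import numpy as np

def pixel_ray_embedding(h, w, fx, fy, cx, cy):
    """Per-pixel unit ray directions from camera intrinsics.

    A pinhole stand-in for a distortion-aware camera embedding: each pixel
    is mapped to the 3D viewing ray it observes, so the network sees the
    camera geometry rather than raw pixel coordinates.
    """
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.stack(
        [(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)],
        axis=-1,
    )
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

def metric_depth(relative_depth, scale_map):
    """Decoupled prediction: metric depth = relative depth x per-pixel scale."""
    return relative_depth * scale_map

# Toy 4x6 image with made-up predictions.
h, w = 4, 6
rel = np.full((h, w), 0.5)       # hypothetical relative-depth output in [0, 1]
scale = np.full((h, w), 10.0)    # hypothetical spatially varying scale (meters)
depth = metric_depth(rel, scale)
rays = pixel_ray_embedding(h, w, fx=500.0, fy=500.0, cx=w / 2, cy=h / 2)
```

The separation is what makes per-camera retraining unnecessary: the relative-depth branch stays camera-agnostic, while the scale branch and the ray embedding absorb the intrinsics and distortion of whatever optics are in use.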

From the abstract

Monocular metric depth estimation (MMDE) is a core challenge in computer vision, playing a pivotal role in real-world applications that demand accurate spatial understanding. Although prior works have shown promising zero-shot performance in MMDE, they often struggle with generalization across diverse camera types, such as fisheye and 360° cameras. Recent advances have addressed this through unified camera representations or canonical representation spaces, but they require either includi…