AI & ML · New Capability

Achieves high-quality 3D reconstruction and camera pose estimation from sparse views without any pre-trained priors or ground-truth annotations.

March 31, 2026

Original Paper

From None to All: Self-Supervised 3D Reconstruction via Novel View Synthesis

Ranran Huang, Weixun Luo, Ye Mao, Krystian Mikolajczyk

arXiv · 2603.27455

The Takeaway

Unlike previous methods that rely on poses from structure-from-motion (SfM) pipelines or on priors from large-scale pre-trained foundation models, NAS3R uses a self-supervised photometric loss to jointly optimize 3D Gaussians and camera parameters. This enables 3D reconstruction from truly unconstrained, uncalibrated data where such priors are unavailable.
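To make the joint-optimization idea concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a toy soft-splatting renderer lets a single L1 photometric loss send gradients to both the Gaussian scene (means, colors) and the camera pose. The renderer, the translation-only camera, and all names and shapes are simplifying assumptions for illustration.

```python
import torch

H, W, N = 32, 32, 256            # assumed image size and number of Gaussians
focal = 30.0                     # assumed pinhole focal length in pixels

# Learnable scene: 3D means and RGB colors of isotropic Gaussians.
means  = torch.nn.Parameter(torch.randn(N, 3) * 0.3 + torch.tensor([0., 0., 2.]))
colors = torch.nn.Parameter(torch.rand(N, 3))

# Learnable camera: translation only, for brevity (a real system would also
# optimize rotation and, potentially, intrinsics).
cam_t = torch.nn.Parameter(torch.zeros(3))

def render(means, colors, cam_t, sigma=1.5):
    """Differentiable soft point-splatting onto an H x W image (toy renderer)."""
    p = means - cam_t                        # world -> camera (identity rotation)
    u = focal * p[:, 0] / p[:, 2] + W / 2    # pinhole projection to pixel coords
    v = focal * p[:, 1] / p[:, 2] + H / 2
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    # Gaussian footprint of every point at every pixel: (H, W, N)
    d2 = (xs[..., None] - u) ** 2 + (ys[..., None] - v) ** 2
    w = torch.exp(-d2 / (2 * sigma ** 2))
    # Normalized weighted blend of point colors per pixel: (H, W, 3)
    return (w[..., None] * colors).sum(dim=2) / (w.sum(dim=2, keepdim=True) + 1e-8)

# A fixed "observed" target view; in NAS3R this role is played by a real 2D image.
target = render(means.detach() + 0.1, colors.detach(), torch.tensor([0.05, 0., 0.]))

opt = torch.optim.Adam([means, colors, cam_t], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = (render(means, colors, cam_t) - target).abs().mean()  # L1 photometric loss
    loss.backward()   # gradients flow to BOTH the Gaussians and the camera
    opt.step()
```

The sketch only shows the supervision pathway: one 2D photometric loss updating geometry and pose together. The full method would use a proper 3D Gaussian rasterizer and a complete camera model.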

From the abstract

In this paper, we introduce NAS3R, a self-supervised feed-forward framework that jointly learns explicit 3D geometry and camera parameters with no ground-truth annotations and no pretrained priors. During training, NAS3R reconstructs 3D Gaussians from uncalibrated and unposed context views and renders target views using its self-predicted camera parameters, enabling self-supervised training from 2D photometric supervision. To ensure stable convergence, NAS3R integrates reconstruction and camera […]
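For the "self-predicted camera parameters" above to be trainable from 2D photometric supervision alone, the pose must be expressed in a differentiable form. A common parameterization, assumed here for illustration (the paper may use another, such as quaternions or a dedicated pose head), is axis-angle rotation plus translation, mapped to a rigid transform via Rodrigues' formula:

```python
import torch

def axis_angle_to_matrix(aa: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: (3,) axis-angle vector -> (3, 3) rotation matrix.
    Built with torch.stack so gradients flow back to the predicted pose."""
    theta = aa.norm().clamp(min=1e-8)
    kx, ky, kz = aa / theta                     # unit rotation axis
    zero = torch.zeros((), dtype=aa.dtype, device=aa.device)
    K = torch.stack([torch.stack([zero,  -kz,   ky]),
                     torch.stack([ kz,  zero,  -kx]),
                     torch.stack([-ky,   kx,  zero])])   # cross-product matrix
    I = torch.eye(3, dtype=aa.dtype, device=aa.device)
    return I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def world_to_camera(points: torch.Tensor, aa: torch.Tensor, t: torch.Tensor):
    """Apply a predicted pose (rotation aa, translation t) to (N, 3) points."""
    return points @ axis_angle_to_matrix(aa).T + t

# Gradients from a downstream photometric loss would reach the pose like this:
aa = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)  # hypothetical pose output
t  = torch.zeros(3, requires_grad=True)
world_to_camera(torch.randn(8, 3), aa, t).sum().backward()
```

Whatever the exact representation, the key requirement is the same: the world-to-camera transform must be differentiable end to end so rendering errors can update the pose predictor without any ground-truth poses.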