AI & ML New Capability

Achieves pose-free 3D Gaussian Splatting using only event streams, enabling reconstruction in extreme lighting and high-speed motion scenarios.

arXiv · March 17, 2026 · 2603.14684

Yunsoo Kim, Changki Sung, Dasol Hong, Hyun Myung

The Takeaway

By extracting structural edges from noise-heavy event data, the method eliminates the requirement for pre-computed camera poses or high-quality RGB images. This is a significant step forward for robotics and autonomous navigation in unstructured environments.
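The "structural edge" idea can be illustrated with a toy sketch (this is not the paper's actual pipeline, and the event layout and threshold below are illustrative assumptions): accumulating events into a per-pixel count image and thresholding it highlights the moving intensity edges that an event camera responds to.

```python
import numpy as np

def events_to_edge_map(events, height, width, threshold=3):
    """Accumulate an event stream into a per-pixel count image and
    threshold it to obtain a rough structural edge map.

    `events` is an (N, 4) array of (x, y, timestamp, polarity) rows --
    a simplified stand-in for a real event-camera stream.
    """
    counts = np.zeros((height, width), dtype=np.int32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    np.add.at(counts, (ys, xs), 1)  # histogram of event locations
    # Pixels that fire many events tend to lie on moving intensity edges.
    return counts >= threshold

# Toy example: events clustered along a vertical edge at x = 4.
rng = np.random.default_rng(0)
events = np.stack([
    np.full(50, 4.0),           # x: all events on one edge column
    rng.integers(0, 8, 50),     # y
    np.sort(rng.random(50)),    # timestamp
    rng.integers(0, 2, 50),     # polarity
], axis=1)
edge_map = events_to_edge_map(events, height=8, width=8)
print(edge_map[:, 4].any())  # True: the edge column fires
```

Real systems would add temporal windowing and noise filtering before thresholding; the point here is only that event density alone already carries edge structure, without any RGB frame.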

From the abstract

The emergence of neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) has advanced novel view synthesis (NVS). These methods, however, require high-quality RGB inputs and accurate corresponding poses, limiting robustness under real-world conditions such as fast camera motion or adverse lighting. Event cameras, which capture brightness changes at each pixel with high temporal resolution and wide dynamic range, enable precise sensing of dynamic scenes and offer a promising solution.