SeriesFusion
Science, curated & edited by AI

If a robot touches an object in just a few spots, it can 'hallucinate' the rest of the shape so accurately it's like it has X-ray vision.

This research allows robots to reconstruct objects they can't see by transferring visual knowledge from diffusion models to the sense of touch. It's a major step toward robots that can navigate and manipulate objects in the dark or in cluttered spaces.

Original Paper

TouchAnything: Diffusion-Guided 3D Reconstruction from Sparse Robot Touches

Langzhe Gu, Hung-Jui Huang, Mohamad Qadri, Michael Kaess, Wenzhen Yuan

arXiv  ·  2604.08945

Accurate object geometry estimation is essential for many downstream tasks, including robotic manipulation and physical interaction. Although vision is the dominant modality for shape perception, it becomes unreliable under occlusions or challenging lighting conditions. In such scenarios, tactile sensing provides direct geometric information through physical contact. However, reconstructing global 3D geometry from sparse local touches alone is fundamentally underconstrained. We present TouchAnything.
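To see why sparse touches alone are underconstrained, and how a shape prior resolves the ambiguity, consider a toy version of the problem. This is not the paper's method (which uses a diffusion model as the prior); it is a minimal sketch in which the "prior" is the hypothetical assumption that the object is a sphere, so a handful of contact points suffice to recover the full surface via a linear least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for sparse robot touches: six contact points sampled
# from the surface of a unit sphere centered at the origin.
n = 6
v = rng.normal(size=(n, 3))
points = v / np.linalg.norm(v, axis=1, keepdims=True)

# Six 3D points cannot pin down an arbitrary surface, but under the
# (assumed) spherical prior they overdetermine it. The sphere equation
# |p - c|^2 = r^2 rearranges to 2 c.p + (r^2 - |c|^2) = |p|^2,
# which is linear in the unknowns (c, t) with t = r^2 - |c|^2.
A = np.hstack([2 * points, np.ones((n, 1))])
b = (points ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center, t = sol[:3], sol[3]
radius = float(np.sqrt(t + center @ center))
print(center.round(3), round(radius, 3))
```

Swapping the hand-picked spherical prior for a learned generative prior, as the paper does with diffusion models, is what lets the same sparse-contact idea extend to arbitrary object shapes.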