Aligns visual motion embeddings with physics simulations to predict fall injury risk without requiring human-labeled injury data.
March 17, 2026
Original Paper
Bridging the Visual-to-Physical Gap: Physically Aligned Representations for Fall Risk Analysis
arXiv · 2603.13410
The Takeaway
Most fall detection models rely on sparse, noisy injury labels. PHARL instead regularizes visual representations with simulation-derived contact mechanics, yielding an interpretable, zero-shot injury-severity structure that outperforms traditional supervised baselines.
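The paper does not spell out its loss here, but the idea of regularizing visual embeddings with simulation-derived mechanics can be sketched as a simple alignment penalty: project motion embeddings into a physics-feature space (e.g., simulated impact quantities) and penalize the distance. The function name, projection matrix, and feature choices below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def physics_alignment_loss(visual_emb, physics_feat, W):
    """Hypothetical regularizer: pull linearly projected visual
    embeddings toward simulation-derived contact-mechanics features.

    visual_emb:   (n, d) motion embeddings from a video encoder
    physics_feat: (n, k) simulated quantities (e.g., peak impact force)
    W:            (d, k) learnable projection into physics space
    """
    projected = visual_emb @ W              # map visual space -> physics space
    diff = projected - physics_feat
    return float(np.mean(np.sum(diff ** 2, axis=1)))  # mean squared distance

rng = np.random.default_rng(0)
vis = rng.normal(size=(4, 8))      # 4 fall clips, 8-dim motion embeddings
phy = rng.normal(size=(4, 3))      # 3 simulated contact-mechanics features
W = rng.normal(size=(8, 3)) * 0.1  # small random projection (would be learned)
loss = physics_alignment_loss(vis, phy, W)
print(loss >= 0.0)
```

In training, a term like this would be added to the encoder's objective so that embedding geometry reflects physical outcome similarity rather than appearance similarity alone, which is what enables severity structure to emerge without injury labels.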
From the abstract
Vision-based fall analysis has advanced rapidly, but a key bottleneck remains: visually similar motions can correspond to very different physical outcomes because small differences in contact mechanics and protective responses are hard to infer from appearance alone. Most existing approaches handle this by supervised injury prediction, which depends on reliable injury labels. In practice, such labels are difficult to obtain: video evidence is often ambiguous (occlusion, viewpoint limits), and true …