Derives a variational ELBO for the Joint-Embedding Predictive Architecture (JEPA), unifying it with generative modeling.
March 23, 2026
Original Paper
Var-JEPA: A Variational Formulation of the Joint-Embedding Predictive Architecture -- Bridging Predictive and Generative Self-Supervised Learning
arXiv · 2603.20111
The Takeaway
The variational formulation removes the need for the ad-hoc heuristics and anti-collapse regularizers that typically plague JEPAs, provides a principled way to perform uncertainty quantification in representation space, and improves downstream performance.
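To make the headline concrete: the bound in question is, schematically, a conditional ELBO taken in representation space rather than observation space. The form below is a generic sketch in our own notation ($s_y$ for the target-encoder embedding, $z$ for a latent, $x$ for the context); the paper's exact objective may differ:

$$
\log p_\theta(s_y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(s_y \mid z, x)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\big)
$$

Read this way, maximizing the bound trains the predictor as a learned conditional and the encoder pair as a variational posterior, which is the structural parallel the abstract draws.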
From the abstract
The Joint-Embedding Predictive Architecture (JEPA) is often seen as a non-generative alternative to likelihood-based self-supervised learning, emphasizing prediction in representation space rather than reconstruction in observation space. We argue that the resulting separation from probabilistic generative modeling is largely rhetorical rather than structural: the canonical JEPA design (coupled encoders with a context-to-target predictor) mirrors the variational posteriors and learned conditionals […]
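As an illustration of that correspondence, here is a minimal PyTorch sketch (our construction, not the paper's code) in which the predictor outputs a diagonal Gaussian over the target embedding, so the usual JEPA regression loss becomes a negative log-likelihood in representation space. All class and variable names are hypothetical:

```python
import torch
import torch.nn as nn

class VarJEPASketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        # In practice the target encoder is often an EMA copy of the
        # context encoder; here it is a separate, stop-gradient module.
        self.target_encoder = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        # The predictor plays the role of the learned conditional: it maps
        # the context embedding to the mean and log-variance of a diagonal
        # Gaussian over the target embedding.
        self.predictor = nn.Linear(dim, 2 * dim)

    @torch.no_grad()
    def encode_target(self, y):
        return self.target_encoder(y)

    def loss(self, x, y):
        s_x = self.context_encoder(x)   # context embedding
        s_y = self.encode_target(y)     # target embedding (no gradient)
        mu, log_var = self.predictor(s_x).chunk(2, dim=-1)
        # Gaussian negative log-likelihood of s_y under the predictive
        # distribution (up to an additive constant). With a fixed unit
        # variance this reduces to the usual JEPA L2 prediction loss.
        nll = 0.5 * (log_var + (s_y - mu).pow(2) / log_var.exp()).sum(dim=-1)
        return nll.mean()

model = VarJEPASketch(dim=256)
x, y = torch.randn(8, 256), torch.randn(8, 256)  # toy context/target pairs
print(model.loss(x, y).item())
```

The learned variance is what turns the point-prediction loss into a likelihood and enables the uncertainty quantification in representation space mentioned in the takeaway.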