Laya is the first EEG foundation model built on the Joint Embedding Predictive Architecture (JEPA), and it outperforms traditional reconstruction-based models.
March 18, 2026
Original Paper
Laya: A LeJEPA Approach to EEG via Latent Prediction over Reconstruction
arXiv · 2603.16281
The Takeaway
Traditional self-supervised learning for neural signals relies on signal reconstruction, which often overfits to high-variance artifacts (noise). By predicting latent representations instead, this model learns more transferable features, offering a new standard for processing complex biological time-series data.
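The contrast between the two objectives can be illustrated with a toy NumPy sketch. This is not the paper's method or architecture; the linear "encoder", the signal, and the noise level are all hypothetical, chosen only to show why a loss computed in a low-dimensional latent space can be dominated less by independent, high-variance artifact noise than a loss computed on the raw signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG-like trace: a clean 10 Hz oscillation plus high-variance artifact noise.
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 2.0 * rng.standard_normal(256)

# Hypothetical linear "encoder": a random projection into 8 latent dimensions,
# scaled like an average so independent per-sample noise tends to cancel.
W = rng.standard_normal((8, 256)) / 256.0

def encode(x):
    return W @ x

# Reconstruction-style objective: match the raw signal, artifact noise included.
recon_loss = np.mean((noisy - clean) ** 2)

# JEPA-style objective: predict the latent representation of the clean target.
latent_loss = np.mean((encode(noisy) - encode(clean)) ** 2)

print(f"reconstruction loss: {recon_loss:.3f}")
print(f"latent loss:         {latent_loss:.5f}")
```

In this toy setup the latent loss is orders of magnitude smaller than the reconstruction loss, because the projection averages out the independent noise while preserving the smooth component; a reconstruction objective, by contrast, spends most of its capacity chasing the artifact variance.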
From the abstract
Electroencephalography (EEG) is a widely used tool for studying brain function, with applications in clinical neuroscience, diagnosis, and brain-computer interfaces (BCIs). Recent EEG foundation models trained on large unlabeled corpora aim to learn transferable representations, but their effectiveness remains unclear; reported improvements over smaller task-specific models are often modest, sensitive to downstream adaptation and fine-tuning strategies, and limited under linear probing. We hypot…