Incorporating PDE residuals into fine-tuning allows pre-trained physics foundation models to adapt to new tasks without requiring ground-truth solutions.
arXiv · March 17, 2026 · 2603.15431
The Takeaway
Physics-informed fine-tuning enables neural PDE solvers to adapt in data-scarce regimes by using physical consistency as the supervision signal. It effectively bridges the gap between purely data-driven foundation models and physics-informed neural networks.
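To make the idea concrete, here is a minimal sketch (not the paper's implementation) of fine-tuning with a PDE residual as the only loss: a surrogate u(x, t) is adapted to the 1D heat equation u_t = α·u_xx with no ground-truth solutions, using automatic differentiation to form the residual at randomly sampled collocation points. The MLP stand-in, the diffusivity value, and the sampling scheme are all illustrative assumptions; in practice the network would be initialized from pre-trained foundation-model weights.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
alpha = 0.1  # assumed diffusivity of the downstream task (hypothetical)

# Stand-in for a pre-trained foundation model; in practice you would load
# pre-trained weights here and fine-tune them (possibly only a subset).
model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    # Sample collocation points (x, t) in the unit space-time domain.
    xt = torch.rand(256, 2, requires_grad=True)
    u = model(xt)

    # First derivatives u_x and u_t via autograd.
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]

    # Second derivative u_xx.
    u_xx = torch.autograd.grad(
        u_x, xt, torch.ones_like(u_x), create_graph=True
    )[0][:, 0:1]

    # PDE residual of the heat equation; its mean square is the loss,
    # so no ground-truth solution data is needed.
    residual = u_t - alpha * u_xx
    loss = residual.pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that a pure-residual loss like this admits trivial solutions (e.g., constant fields); in a realistic setup it would be combined with initial/boundary-condition terms or anchored by the pre-trained weights.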
From the abstract
Foundation models for partial differential equations (PDEs) have emerged as powerful surrogates pre-trained on diverse physical systems, but adapting them to new downstream tasks remains challenging due to limited task-specific data and distribution shifts. While fine-tuning has proven transformative in natural language processing, best practices for adapting PDE foundation models remain underexplored. Although physics-informed training has successfully trained accurate solvers across a wide range …