AI & ML Breaks Assumption

Neural PDE solvers are not learning general operators; instead, they learn a family of solutions indexed by the boundary conditions seen during training.

arXiv · March 18, 2026 · 2603.01406

Lennon J. Shikhman

The Takeaway

The paper proves that standard operator training leads to non-identifiability outside the training boundary distribution. This is a critical warning for those building 'foundation models' for PDEs: current architectures cannot generalize across boundary conditions without modeling them explicitly.

From the abstract

Neural PDE solvers are often described as learning solution operators that map problem data to PDE solutions. In this work, we argue that this interpretation is generally incorrect when boundary conditions vary. We show that standard neural operator training implicitly learns a boundary-indexed family of operators, rather than a single boundary-agnostic operator, with the learned mapping fundamentally conditioned on the boundary-condition distribution seen during training. We formalize this perspective […]
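The failure mode is easy to reproduce in miniature. Below is a toy sketch (not the paper's construction): a 1D Poisson problem solved by finite differences, with a linear least-squares fit standing in for a neural operator. The forcing family, grid size, and least-squares "operator" are all assumptions for illustration. Because training data uses zero Dirichlet boundary values throughout, the fitted map is exact in-distribution but silently bakes in those boundary values, so it is wrong the moment the boundary condition changes.

```python
import numpy as np

n = 64
x = np.linspace(0, 1, n + 2)[1:-1]          # interior grid points
h = x[1] - x[0]
# tridiagonal finite-difference matrix for -u'' with Dirichlet boundaries
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def solve(f, a=0.0, b=0.0):
    """Solve -u'' = f on (0,1) with u(0)=a, u(1)=b."""
    rhs = f.copy()
    rhs[0] += a / h**2
    rhs[-1] += b / h**2
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
# training data: random sinusoidal forcings, boundary values FIXED at zero
F = np.stack([np.sin((k % 5 + 1) * np.pi * x) * rng.standard_normal()
              for k in range(200)])
U = np.stack([solve(f) for f in F])

# "learned operator": least-squares linear map f -> u
# (a stand-in for a trained neural operator)
G, *_ = np.linalg.lstsq(F, U, rcond=None)

f_test = np.sin(2 * np.pi * x)
err_in = np.abs(f_test @ G - solve(f_test)).max()           # same zero BCs
err_out = np.abs(f_test @ G - solve(f_test, a=1.0)).max()   # new BC u(0)=1
print(f"in-distribution error:  {err_in:.2e}")   # tiny
print(f"new-boundary error:     {err_out:.2e}")  # O(1)
```

The learned map is only the zero-boundary member of the operator family; nothing in the inputs tells it the boundary values changed, so no amount of extra forcing data fixes the out-of-distribution error.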