Uncovers that neural operator digital twins are acutely vulnerable to sparse adversarial perturbations to boundary conditions that bypass standard anomaly detection.
March 25, 2026
Original Paper
Adversarial Vulnerabilities in Neural Operator Digital Twins: Gradient-Free Attacks on Nuclear Thermal-Hydraulic Surrogates
arXiv · 2603.22525
The Takeaway
As digital twins become core to safety-critical infrastructure (nuclear, energy), this paper highlights a catastrophic robustness gap: even models with high validated accuracy can be driven to total failure by perturbing less than 1% of their inputs.
From the abstract
Operator learning models are rapidly emerging as the predictive core of digital twins for nuclear and energy systems, promising real-time field reconstruction from sparse sensor measurements. Yet their robustness to adversarial perturbations remains uncharacterized, a critical gap for deployment in safety-critical systems. Here we show that neural operators are acutely vulnerable to extremely sparse (fewer than 1% of inputs), physically plausible perturbations that exploit their sensitivity to boundary conditions. …
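To make the threat model concrete, here is a minimal sketch of a gradient-free, sparse attack of the kind the abstract describes: perturb only a handful of the surrogate's inputs, within a small "physically plausible" bound, using random search rather than gradients. The surrogate below is a toy stand-in (a fixed random map), not the paper's neural operator, and all names and bounds are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a neural-operator surrogate: maps n boundary/sensor
# readings to a reconstructed field. Hypothetical, NOT the paper's model.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 100)) / 10.0

def surrogate(x):
    # Placeholder "operator": a fixed random linear map plus a nonlinearity.
    return np.tanh(W @ x)

def sparse_random_attack(x, k=1, eps=0.05, iters=2000):
    """Gradient-free (random-search) attack: perturb only k of the n inputs,
    each by at most eps (a stand-in for a physical-plausibility bound), and
    keep the candidate that most distorts the surrogate's prediction."""
    clean = surrogate(x)
    best_x, best_err = x, 0.0
    n = x.size
    for _ in range(iters):
        idx = rng.choice(n, size=k, replace=False)   # sparse support: k inputs
        delta = np.zeros(n)
        delta[idx] = rng.uniform(-eps, eps, size=k)  # bounded perturbation
        err = np.linalg.norm(surrogate(x + delta) - clean)
        if err > best_err:                            # keep the worst case found
            best_err, best_x = err, x + delta
    return best_x, best_err

x = rng.uniform(0.0, 1.0, 100)        # nominal sensor readings
x_adv, err = sparse_random_attack(x, k=1)
print(f"perturbed {np.count_nonzero(x_adv - x)} of {x.size} inputs, "
      f"output error = {err:.3f}")
```

Even this naive search needs no access to gradients or model internals, which is why query-based attacks of this shape are hard to rule out for deployed surrogates; the paper's point is that well-chosen sparse perturbations can have an outsized effect on the reconstructed field.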