Graph Foundation Models (GFMs) with fixed architectural backbones are shown to fail to generalize, motivating a new approach: inference-time architecture adaptivity.
March 25, 2026
Original Paper
Can Graph Foundation Models Generalize Over Architecture?
arXiv · 2603.22984
The Takeaway
The paper shows that no single message-passing regime can generalize across all graph types and scales. The proposed framework lets a GFM discover and mix task-specific operators at inference time, enabling zero-shot generalization across heterogeneous graph tasks without retraining.
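The summary above describes mixing task-specific operators at inference time but not the mechanism. As a purely illustrative sketch (the operator set, gating scheme, and all names here are assumptions, not the paper's actual method), one way to mix message-passing operators is a softmax-gated combination of standard aggregations:

```python
import numpy as np

def mean_agg(X, A):
    """Mean over neighbors; A is a dense adjacency (n x n), X node features (n x d)."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.maximum(deg, 1.0)

def sum_agg(X, A):
    """Sum over neighbors."""
    return A @ X

def max_agg(X, A):
    """Elementwise max over each node's neighbors (zeros for isolated nodes)."""
    n, d = X.shape
    out = np.zeros((n, d))
    for i in range(n):
        nbrs = np.nonzero(A[i])[0]
        if len(nbrs):
            out[i] = X[nbrs].max(axis=0)
    return out

def mix_operators(X, A, gate_logits):
    """Hypothetical inference-time mixture: softmax-gate a fixed pool of
    aggregation operators and blend their outputs per layer."""
    ops = [mean_agg, sum_agg, max_agg]
    w = np.exp(gate_logits - gate_logits.max())  # stable softmax
    w = w / w.sum()
    return sum(wk * op(X, A) for wk, op in zip(w, ops))
```

In such a scheme, the gate logits would be produced per input graph (e.g., by a lightweight context encoder), so a near-one-hot gate effectively selects a single operator while softer gates interpolate between regimes without retraining the backbone.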
From the abstract
Graph foundation models (GFMs) have recently attracted interest due to the promise of graph neural network (GNN) architectures that generalize zero-shot across graphs of arbitrary scales, feature dimensions, and domains. While existing work has demonstrated this ability empirically across diverse real-world benchmarks, these tasks share a crucial hidden limitation: they admit a narrow set of effective GNN architectures. In particular, current domain-agnostic GFMs rely on fixed architectural backbones.