AI & ML Efficiency Breakthrough

A cross-graph, tuning-free prompting framework for GNNs that delivers substantial accuracy gains on unseen graphs without retraining.

April 2, 2026

Original Paper

A Cross-graph Tuning-free GNN Prompting Framework

Yaqi Chen, Shixun Huang, Ryan Twemlow, Lei Wang, John Le, Sheng Wang, Willy Susilo, Jun Yan, Jun Shen

arXiv · 2604.00399

The Takeaway

Enables GNNs to serve as plug-and-play inference engines for new graph tasks and domains with zero parameter tuning. The framework achieves an average 30.8% accuracy gain over current SOTA prompting methods, allowing immediate deployment on novel graphs.

From the abstract

GNN prompting aims to adapt models across tasks and graphs without requiring extensive retraining. However, most existing graph prompting methods still require task-specific parameter updates and struggle to generalize across graphs, limiting their performance and undermining the core promise of prompting. In this work, we introduce a Cross-graph Tuning-free Prompting Framework (CTP), which supports both homogeneous and heterogeneous graphs and can be directly deployed to unseen graphs without parameter tuning.
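To make the general idea concrete, here is a minimal sketch of feature-space graph prompting with a frozen GNN. This is a generic illustration of the prompting paradigm the abstract refers to, not CTP's actual mechanism: all function names, shapes, and the simple GCN layer below are assumptions for the sake of the example. The key point is that the pretrained weights `W` are never updated; only a prompt vector added to the node features carries the adaptation.

```python
import numpy as np

# Hypothetical illustration (NOT the paper's method): feature-space
# graph prompting adapts a *frozen* GNN to a new graph by adding a
# prompt vector to every node's input features, so the pretrained
# weights W are never touched.

def gcn_layer(A, X, W):
    """One frozen GCN layer: mean-aggregate neighbors (with
    self-loops), project with fixed weights W, then apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)    # row-normalize
    return np.maximum(D_inv * (A_hat @ X) @ W, 0.0)

def prompted_forward(A, X, W, prompt):
    """Broadcast-add the prompt to node features; W stays fixed."""
    return gcn_layer(A, X + prompt, W)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # 3-node path graph
X = rng.normal(size=(3, 4))              # node features of a new graph
W = rng.normal(size=(4, 2))              # frozen "pretrained" weights
p = np.zeros(4)                          # prompt vector; a tuning-free
                                         # method would construct this
                                         # rather than learn it
H = prompted_forward(A, X, W, p)         # node embeddings, shape (3, 2)
```

A zero prompt recovers the frozen model's original behavior; adaptation to a new graph or task amounts to choosing a different prompt vector, with no gradient updates to `W`.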