GSR-GNN achieves 30x training speedups and 87% memory reduction for deep Graph Neural Networks on circuit graphs.
March 31, 2026
Original Paper
GSR-GNN: Training Acceleration and Memory-Saving Framework of Deep GNNs on Circuit Graph
arXiv · 2603.27156
The Takeaway
Scaling GNNs to large-scale EDA (Electronic Design Automation) tasks has been blocked by GPU memory limits; this framework enables 100+ layer GNNs to train on massive circuit graphs with negligible accuracy loss. This is a major step forward for AI-driven hardware design.
From the abstract
Graph Neural Networks (GNNs) show strong promise for circuit analysis, but scaling to modern large-scale circuit graphs is limited by GPU memory and training cost, especially for deep models. We revisit deep GNNs for circuit graphs and show that, when trainable, they significantly outperform shallow architectures, motivating an efficient, domain-specific training framework. We propose Grouped-Sparse-Reversible GNN (GSR-GNN), which enables training GNNs with up to hundreds of layers while reducing …
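The "Reversible" in GSR-GNN's name points at the standard reversible-residual trick (as in RevNets and reversible GNNs): because each layer's inputs can be reconstructed exactly from its outputs, intermediate activations need not be stored during training, which is how activation memory can stay near-constant even at 100+ layers. Below is a minimal NumPy sketch of that idea, not the paper's actual implementation; the functions `f` and `g` are hypothetical stand-ins for the per-layer message-passing blocks.

```python
import numpy as np

# Hypothetical toy transforms standing in for GNN message-passing blocks.
def f(x):
    return np.tanh(x)

def g(x):
    return 0.5 * x

def reversible_forward(x1, x2):
    """One reversible block: features are split into two halves (x1, x2)."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    """Recover the block's inputs exactly from its outputs, so
    activations can be recomputed in the backward pass instead of stored."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

# Demo: node features for 4 nodes, 8 channels per half.
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

In a deep reversible network only the final layer's outputs are kept; each block's inputs are rebuilt on the fly during backpropagation, trading a modest amount of recomputation for activation memory that no longer grows with depth.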