AI & ML · Breaks Assumption

Challenges the foundations of Spectral Graph Neural Networks, arguing that their success stems from implementation quirks rather than spectral theory.

March 20, 2026

Original Paper

Position: Spectral GNNs Are Neither Spectral Nor Superior for Node Classification

Qin Jiang, Chengjia Wang, Michael Lones, Dongdong Chen, Wei Pang

arXiv · 2603.19091

The Takeaway

The paper argues that the commonly used "graph Fourier bases" are not classical Fourier bases, and that Spectral GNN performance often stems from implementation details that reduce these models to standard message-passing neural networks (MPNNs). This forces a re-evaluation of why they work and suggests practitioners can achieve comparable results with simpler, more robust architectures.

From the abstract

Spectral Graph Neural Networks (Spectral GNNs) for node classification promise frequency-domain filtering on graphs, yet rest on flawed foundations. Recent work shows that graph Laplacian eigenvectors do not in general have the key properties of a true Fourier basis, but leaves the empirical success of Spectral GNNs unexplained. We identify two theoretical glitches: (1) commonly used "graph Fourier bases" are not classical Fourier bases for graph signals; (2) (n-1)-degree polynomials (n = number of nodes) […]
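To make the abstract's two ingredients concrete, here is a minimal sketch (an illustration assumed from standard spectral graph theory, not code from the paper) of the "graph Fourier basis" — the eigenvectors of the graph Laplacian L = D - A — and of the fact that a degree-(n-1) polynomial in L can reproduce any spectral filter exactly, since an (n-1)-degree polynomial interpolates all n eigenvalues:

```python
import numpy as np

# Adjacency matrix of a 4-node path graph (toy example, chosen for brevity).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A  # combinatorial graph Laplacian

# Eigendecomposition: the columns of U play the role of a "graph Fourier basis".
eigvals, U = np.linalg.eigh(L)

# Spectral filtering of a node signal x: transform, scale each frequency, invert.
x = np.array([1.0, 0.0, 0.0, 0.0])
h = lambda lam: np.exp(-lam)              # example low-pass frequency response
x_filt = U @ (h(eigvals) * (U.T @ x))

# A degree-(n-1) polynomial in L matches this filter exactly, because a
# polynomial of degree n-1 can interpolate h at all n (distinct) eigenvalues.
n = L.shape[0]
coeffs = np.polyfit(eigvals, h(eigvals), n - 1)  # highest-degree first

# Evaluate the matrix polynomial via Horner's scheme.
P = np.zeros_like(L)
for c in coeffs:
    P = P @ L + c * np.eye(n)
x_poly = P @ x  # identical to x_filt up to floating-point error
```

This identity is why polynomial filters (as in ChebNet-style models) are usually described as "spectral": they act on the Laplacian spectrum without ever computing the eigendecomposition. The position paper's point is that this framing inherits the flaws of treating U as a Fourier basis in the first place.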