AI & ML Paradigm Shift

Graph2Video reframes dynamic graph learning as a video modeling problem, allowing the use of video foundation models to capture long-range temporal dependencies in networks.

March 17, 2026

Original Paper

Graph2Video: Leveraging Video Models to Model Dynamic Graph Evolution

Hua Liu, Yanbin Wei, Fei Xing, Tyler Derr, Haoyu Han, Yu Zhang

arXiv · 2603.13360

The Takeaway

Instead of relying on specialized dynamic-graph architectures, this approach treats temporal subgraphs as video frames. Practitioners can thereby leverage the massive pre-training of video foundation models for graph tasks such as link prediction and recommendation.
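To make the frame analogy concrete, here is a minimal sketch (not the paper's actual pipeline) that stacks a sequence of graph snapshots into a video-like tensor: each snapshot's adjacency matrix becomes one "frame", giving a (time, height, width) array that a video-style model could consume. The function name and undirected-graph assumption are illustrative choices, not from the paper.

```python
import numpy as np

def snapshots_to_video(edge_lists, num_nodes):
    """Stack graph snapshots into a video-like (T, N, N) tensor.

    Illustrative sketch of the 'temporal subgraphs as frames' idea:
    each timestep's adjacency matrix plays the role of one frame.
    Assumes an undirected graph for simplicity.
    """
    frames = []
    for edges in edge_lists:
        adj = np.zeros((num_nodes, num_nodes), dtype=np.float32)
        for u, v in edges:
            adj[u, v] = 1.0
            adj[v, u] = 1.0  # symmetric: undirected edges
        frames.append(adj)
    return np.stack(frames)  # shape: (T, num_nodes, num_nodes)

# Three timesteps of a 4-node graph
video = snapshots_to_video(
    [[(0, 1)], [(0, 1), (1, 2)], [(2, 3)]],
    num_nodes=4,
)
print(video.shape)  # (3, 4, 4)
```

In practice the frames would carry node features rather than raw adjacency, but the shape intuition is the same: time becomes the frame axis, so long-range temporal dependencies become long-range dependencies across frames.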

From the abstract

Dynamic graphs are common in real-world systems such as social media, recommender systems, and traffic networks. Existing dynamic graph models for link prediction often fall short in capturing the complexity of temporal evolution. They tend to overlook fine-grained variations in temporal interaction order, struggle with dependencies that span long time horizons, and offer limited capability to model pair-specific relational dynamics. To address these challenges, we propose \textbf{Graph2Video},