Knowledge graph models are 25 percent dumber at remembering things than previous benchmarks suggested.
April 24, 2026
Original Paper
Revisiting Catastrophic Forgetting in Continual Knowledge Graph Embedding
arXiv · 2604.19401
The Takeaway
Academic benchmarks suggested steady progress on preventing AI from forgetting old information. A major flaw in how these models are tested has been hiding a massive performance gap. This error stems from ignoring entity interference, which occurs when new facts overwrite old ones in a way the test doesn't detect. When this interference is accounted for, the supposed progress in the field largely disappears. AI developers must rethink how they measure memory if they want to build systems that truly learn over time.
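The mechanism is easy to see in miniature. Below is a toy sketch (not the paper's method) of how entity interference arises in a TransE-style embedding model: an old fact is learned, then naive continual training on a new fact that shares an entity drifts that shared embedding and silently degrades the old fact's score. All entity and relation names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

def score(h, r, t):
    # TransE-style plausibility: higher (closer to 0) is better
    return -np.linalg.norm(h + r - t)

# Old task: learn the fact (paris, capital_of, france)
paris, france = rng.normal(size=dim), rng.normal(size=dim)
capital_of = rng.normal(size=dim)
for _ in range(200):  # gradient steps on ||h + r - t||^2
    grad = 2 * (paris + capital_of - france)
    paris -= 0.05 * grad
    france += 0.05 * grad

old_score_before = score(paris, capital_of, france)

# New task: a new entity linked to an OLD one, (eiffel, located_in, paris).
# Naive continual training also moves the shared embedding `paris`
# to fit the new fact -- this is the interference.
eiffel, located_in = rng.normal(size=dim), rng.normal(size=dim)
for _ in range(200):
    grad = 2 * (eiffel + located_in - paris)
    eiffel -= 0.05 * grad
    paris += 0.05 * grad  # shared entity drifts away from the old fact

old_score_after = score(paris, capital_of, france)
print(old_score_before, old_score_after)
```

Running this shows the old fact's score dropping sharply even though that triple was never retrained; a test suite that only checks old triples against old-task metrics computed before the update would miss the gap entirely.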
From the abstract
Knowledge Graph Embeddings (KGEs) support a wide range of downstream tasks over Knowledge Graphs (KGs). In practice, KGs evolve as new entities and facts are added, motivating Continual Knowledge Graph Embedding (CKGE) methods that update embeddings over time. Current CKGE approaches address catastrophic forgetting (i.e., the performance degradation on previously learned tasks) primarily by limiting changes to existing embeddings. However, we show that this view is incomplete. When new entities a…