AI & ML Paradigm Shift

A new curriculum learning method identifies 'transitional problems', whose difficulty is measured directly relative to a model's current competence rather than by static proxy scores.

March 17, 2026

Original Paper

Level Up: Defining and Exploiting Transitional Problems for Curriculum Learning

Zhenwei Tang, Amogh Inamdar, Ashton Anderson, Richard Zemel

arXiv · 2603.13761

The Takeaway

The work moves curriculum learning away from universal 'easy-to-hard' rankings toward learner-specific progressions. It demonstrates that training on problems at the edge of a model's current competence is the most efficient way to 'level up' the model to the next tier of performance in complex domains such as chess and mathematics.
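The core idea, selecting problems at the edge of current competence, can be sketched as a simple filter: estimate the model's empirical solve rate on each candidate problem, then keep only those it solves sometimes but not reliably. This is an illustrative sketch, not the paper's implementation; the `model.solve` interface, the `attempts` budget, and the (0.2, 0.6) band are all hypothetical assumptions.

```python
def estimate_solve_rate(model, problem, attempts=8):
    """Empirical success rate of the current model on one problem.
    Assumes a hypothetical interface where model.solve(problem)
    returns True on success, False on failure."""
    return sum(bool(model.solve(problem)) for _ in range(attempts)) / attempts

def select_transitional(model, problems, low=0.2, high=0.6):
    """Keep problems in the 'transitional' band: solved occasionally
    but not reliably by the current model. The (low, high) thresholds
    are illustrative placeholders, not values from the paper."""
    return [
        p for p in problems
        if low <= estimate_solve_rate(model, p) <= high
    ]
```

Because the difficulty estimate comes from the learner's own behavior, the selected set shifts as the model improves, giving the learner-specific progression the summary describes.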

From the abstract

Curriculum learning, ordering training examples in a sequence to aid machine learning, takes inspiration from human learning, but has not gained widespread acceptance. Static strategies for scoring item difficulty rely on indirect proxy scores of varying quality and produce curricula that are not specific to the learner at hand. Dynamic approaches base difficulty estimates on gradient information, requiring considerable extra computation during training. We introduce a novel method for measuring …