AI & ML Efficiency Breakthrough

RNNs can be trained online without Jacobian propagation, matching BPTT performance with 1000× less memory.

March 31, 2026

Original Paper

Temporal Credit Is Free

Aur Shalev Merin

arXiv · 2603.28750

The Takeaway

This paper challenges the necessity of Backpropagation Through Time (BPTT) and Real-Time Recurrent Learning (RTRL) for recurrent architectures. By showing that immediate derivatives with proper normalization are sufficient, it enables high-performance online learning on streaming data with massive compute and memory savings.

From the abstract

Recurrent networks do not need Jacobian propagation to adapt online. The hidden state already carries temporal credit through the forward pass; immediate derivatives suffice if you stop corrupting them with stale trace memory and normalize gradient scales across parameter groups. An architectural rule predicts when normalization is needed: β2 is required when gradients must pass through a nonlinear state update with no output bypass, and unnecessary otherwise. Across ten architectures, […]