A simple Python script, discovered by an AI, beats billion-parameter networks at predicting how complex systems evolve.
April 20, 2026
Original Paper
EVIL: Evolving Interpretable Algorithms for Zero-Shot Inference on Event Sequences and Time Series with LLMs
arXiv · 2604.15787
The Takeaway
EVIL uses LLM-guided evolutionary search to find compact, human-readable programs for event-sequence and time series prediction. These pure Python/NumPy programs match or beat the accuracy of large deep learning models when applied zero-shot to new datasets. Instead of memorizing patterns inside a black box, the search surfaces interpretable rules that reflect a system's underlying dynamics. Practitioners can solve problems with a few lines of code that previously demanded GPU clusters, a sign that interpretable algorithms are catching up to the opaque neural networks we spent a decade building.
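To make "a few lines of code" concrete, here is a hypothetical example (mine, not taken from the paper) of the kind of compact, interpretable program such a search might produce: a zero-shot next-event predictor in pure NumPy. The exponential-decay rule and its constant are illustrative stand-ins for whatever formula the search would actually discover.

```python
import numpy as np

def predict_next_event(timestamps, decay=0.7):
    """Predict the next event time from a sorted array of past timestamps.

    Uses an exponentially weighted average of the inter-event gaps, so the
    rule stays readable: recent gaps simply count more than older ones.
    """
    gaps = np.diff(timestamps)
    # Newest gap gets weight 1.0; each older gap is discounted by `decay`.
    weights = decay ** np.arange(len(gaps) - 1, -1, -1)
    expected_gap = np.sum(weights * gaps) / np.sum(weights)
    return timestamps[-1] + expected_gap

events = np.array([0.0, 1.0, 2.1, 2.9, 4.0])
print(predict_next_event(events))  # roughly 5.0 for this evenly spaced history
```

Because the whole model is one short formula, a practitioner can read it, audit it, and port it anywhere NumPy runs, with no training loop or GPU involved.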
From the abstract
We introduce EVIL (EVolving Interpretable algorithms with LLMs), an approach that uses LLM-guided evolutionary search to discover simple, interpretable algorithms for dynamical systems inference. Rather than training neural networks on large datasets, EVIL evolves pure Python/NumPy programs that perform zero-shot, in-context inference across datasets. We apply EVIL to three distinct tasks: next-event prediction in temporal point processes, rate matrix estimation for Ma
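The abstract's "LLM-guided evolutionary search" can be sketched as a standard evolutionary loop in which the mutation operator is an LLM rewriting candidate programs. The sketch below is my reading, not the paper's code: candidates are reduced to a single tunable parameter, and `llm_propose_variant` is a placeholder that nudges it randomly where the real system would prompt an LLM to rewrite the program source.

```python
import random

random.seed(0)

def make_program(weight):
    # A toy candidate "program": blend the last gap with the mean gap.
    # In EVIL, candidates are full Python/NumPy sources, not one parameter.
    return lambda gaps: weight * gaps[-1] + (1 - weight) * sum(gaps) / len(gaps)

def fitness(weight, history, target):
    # Score a candidate by prediction error on held-out data (higher is better).
    return -abs(make_program(weight)(history) - target)

def llm_propose_variant(weight):
    # Placeholder for the LLM mutation step: perturb the parent candidate.
    return min(1.0, max(0.0, weight + random.uniform(-0.2, 0.2)))

def evolve(history, target, generations=30, pop_size=8):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, refill with LLM-proposed variants of them.
        population.sort(key=lambda w: fitness(w, history, target), reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [llm_propose_variant(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=lambda w: fitness(w, history, target))

best = evolve(history=[1.0, 1.1, 0.8, 1.1], target=1.05)
```

The key design point the abstract highlights survives even in this toy: the artifact that comes out of the loop is a short, readable program, so the search cost is paid once and the result can be inspected and reused zero-shot.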