AI & ML New Capability

Meta-Harness automates the engineering of the 'code' surrounding LLMs, improving RAG and agent performance by optimizing retrieval and context management logic.

March 31, 2026

Original Paper

Meta-Harness: End-to-End Optimization of Model Harnesses

Yoonho Lee, Roshen Nair, Qizheng Zhang, Kangwook Lee, Omar Khattab, Chelsea Finn

arXiv · 2603.28052

The Takeaway

Most optimizers target only model weights or prompts; Meta-Harness instead treats the surrounding system architecture (the harness) as the optimization target. It significantly outperforms hand-engineered context management and RAG systems across coding and math tasks.

From the abstract

The performance of large language model (LLM) systems depends not only on model weights, but also on their harness: the code that determines what information to store, retrieve, and present to the model. Yet harnesses are still designed largely by hand, and existing text optimizers are poorly matched to this setting because they compress feedback too aggressively. We introduce Meta-Harness, an outer-loop system that searches over harness code for LLM applications. It uses an agentic proposer that […]
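To make the "outer loop over harness code" idea concrete, here is a minimal sketch of the pattern the abstract describes: candidate harnesses (functions that decide what context to retrieve and present) are scored on a small eval set, and the best one is kept. All names, the toy corpus, and the stub scoring metric are hypothetical illustrations, not the paper's actual implementation, which searches over harness *code* with an agentic proposer.

```python
from typing import Callable, List, Tuple

# A "harness" maps (query, corpus) -> the context string shown to the model.
Harness = Callable[[str, List[str]], str]

def harness_all(query: str, corpus: List[str]) -> str:
    # Naive baseline: present the entire corpus as context.
    return "\n".join(corpus)

def harness_keyword(query: str, corpus: List[str]) -> str:
    # Simple retrieval: keep only documents sharing a keyword with the query.
    terms = set(query.lower().split())
    hits = [d for d in corpus if terms & set(d.lower().split())]
    return "\n".join(hits[:3])

def evaluate(harness: Harness,
             evalset: List[Tuple[str, str]],
             corpus: List[str]) -> float:
    # Stub metric: reward contexts containing the gold answer, with a small
    # penalty per extra document (standing in for token cost / distraction).
    # A real system would instead run the LLM and score its outputs.
    total = 0.0
    for query, gold in evalset:
        ctx = harness(query, corpus)
        total += (1.0 if gold in ctx else 0.0) - 0.01 * ctx.count("\n")
    return total / len(evalset)

def outer_loop(candidates: List[Harness],
               evalset: List[Tuple[str, str]],
               corpus: List[str]) -> Harness:
    # The outer loop: score every candidate harness, keep the best.
    return max(candidates, key=lambda h: evaluate(h, evalset, corpus))

corpus = ["paris is the capital of france", "the sky is blue"]
evalset = [("capital of france", "paris")]
best = outer_loop([harness_all, harness_keyword], evalset, corpus)
```

Here the keyword harness wins because it assembles a shorter context that still contains the answer. The paper's system goes further: rather than selecting among fixed candidates, its agentic proposer rewrites the harness code itself between iterations.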