SEVerA enables the synthesis of self-evolving agents with formal guarantees by combining LLM planning with first-order logic rejection samplers.
March 27, 2026
Original Paper
SEVerA: Verified Synthesis of Self-Evolving Agents
arXiv · 2603.25111
The Takeaway
SEVerA addresses the safety/correctness bottleneck in self-improving agents. By wrapping each generative call in a 'Formally Guarded Generative Model', it ensures that autonomously generated code or actions always satisfy a hard contract.
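The guard pattern described above can be sketched as a rejection sampler: draw candidates from a generative model and return only those that pass a hard contract check. This is a minimal illustration, assuming a callable sampler and a boolean contract predicate; the function and parameter names here are hypothetical, and SEVerA's actual guards use first-order logic checkers rather than an arbitrary Python predicate.

```python
import random
from typing import Callable, TypeVar

T = TypeVar("T")

def guarded_generate(
    sample: Callable[[], T],
    contract: Callable[[T], bool],
    max_tries: int = 100,
) -> T:
    """Rejection-sample from a generative model until the output
    satisfies a hard contract. Hypothetical interface, not SEVerA's API:
    in the paper, `sample` would be an LLM call and `contract` a
    first-order logic check."""
    for _ in range(max_tries):
        candidate = sample()
        if contract(candidate):
            return candidate  # every returned value satisfies the contract
    raise RuntimeError("no candidate satisfied the contract")

# Toy stand-in: the "model" emits integers; the contract requires
# a positive even number.
rng = random.Random(0)
out = guarded_generate(
    lambda: rng.randint(-10, 10),
    lambda x: x > 0 and x % 2 == 0,
)
assert out > 0 and out % 2 == 0
```

The key property is that a caller never observes a contract-violating output: failures surface as an explicit exception rather than as unsafe generated code or actions.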
From the abstract
Recent advances have shown the effectiveness of self-evolving LLM agents on tasks such as program repair and scientific discovery. In this paradigm, a planner LLM synthesizes an agent program that invokes parametric models, including LLMs, which are then tuned per task to improve performance. However, existing self-evolving agent frameworks provide no formal guarantees of safety or correctness. Because such programs are often executed autonomously on unseen inputs, this lack of guarantees raises […]