Vision-Language Models (VLMs) can outperform specialized learning-based placers in chip floorplanning through visual evolutionary optimization.
March 31, 2026
Original Paper
See it to Place it: Evolving Macro Placements with Vision-Language Models
arXiv · 2603.28733
The Takeaway
The paper demonstrates that the spatial reasoning of off-the-shelf VLMs can be applied to complex physical-design problems in EDA. The framework achieves wirelength reductions of over 32%, showing that general-purpose foundation models can tackle hard combinatorial optimization tasks in engineering.
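The wirelength being reduced here is conventionally measured as half-perimeter wirelength (HPWL), the standard placement proxy: for each net, the half-perimeter of the bounding box enclosing its pins. A minimal sketch (the pin coordinates are illustrative, not from the paper):

```python
def hpwl(nets):
    """Sum over nets of the half-perimeter of the pins' bounding box."""
    total = 0.0
    for pins in nets:  # each net: list of (x, y) pin coordinates
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two toy nets: a 2-pin net and a 3-pin net.
nets = [[(0, 0), (4, 3)], [(1, 1), (5, 1), (3, 6)]]
print(hpwl(nets))  # 7.0 + 9.0 = 16.0
```

A 32% reduction in this metric across a full netlist is substantial, since HPWL correlates closely with routed wirelength and timing.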
From the abstract
We propose using Vision-Language Models (VLMs) for macro placement in chip floorplanning, a complex optimization task that has recently shown promising advancements through machine learning methods. Because human designers rely heavily on spatial reasoning to arrange components on the chip canvas, we hypothesize that VLMs with strong visual reasoning abilities can effectively complement existing learning-based approaches. We introduce VeoPlace (Visual Evolutionary Optimization Placement), a novel […]
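The "evolutionary optimization" in the name suggests an iterative propose-and-select loop over candidate placements. A minimal sketch under stated assumptions: the swap mutation and the distance-based fitness below are stand-ins for illustration only, with the fitness function taking the place of the VLM's visual judgment (the actual VeoPlace operators are not specified in this excerpt):

```python
import random

def mutate(placement):
    """Swap the positions of two macros -- one plausible evolutionary move."""
    child = dict(placement)
    a, b = random.sample(list(child), 2)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(placement, fitness, generations=200):
    """Greedy (1+1) evolution: keep a mutant whenever fitness improves."""
    best, best_fit = placement, fitness(placement)
    for _ in range(generations):
        child = mutate(best)
        f = fitness(child)
        if f < best_fit:  # lower is better, e.g. estimated wirelength
            best, best_fit = child, f
    return best

# Toy run: drive each macro toward a hypothetical target slot.
random.seed(0)
placement = {"m0": (0, 0), "m1": (5, 5), "m2": (2, 3)}
targets = {"m0": (5, 5), "m1": (0, 0), "m2": (2, 3)}
def fit(p):
    return sum(abs(p[m][0] - targets[m][0]) + abs(p[m][1] - targets[m][1])
               for m in p)
result = evolve(placement, fit, generations=50)
print(fit(result) <= fit(placement))  # True: fitness never worsens
```

In the paper's framework, the selection signal would come from a VLM reasoning over a rendered image of the chip canvas rather than from a hand-written scalar fitness.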