AI & ML · Nature Is Weird

LLMs don't actually 'see' the story in your data; they're just reading a spreadsheet back to you in a different order.

April 15, 2026

Original Paper

How Do LLMs See Charts? A Comparative Study on High-Level Visualization Comprehension in Humans and LLMs

arXiv · 2604.08959

The Takeaway

We assume that Vision-LLMs interpret charts like humans do—by finding trends and narratives. This study shows a fundamental cognitive gap: humans look for the 'why,' while LLMs default to structural enumeration (comparing point A to point B). Even when prompted to be 'human-like,' the models stick to a rigid, numerical range approach regardless of the chart type. This means if you want a true 'narrative' from an AI analyst, you're currently getting a glorified list of comparisons, not a synthesis. It highlights a critical limitation in using AI for high-level data storytelling.
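To make the enumeration-versus-synthesis distinction concrete, here is a toy heuristic of my own (not from the paper, and no substitute for its human evaluation): it counts comparison-style phrases against causal, narrative-style phrases to flag which register a chart description falls into.

```python
import re

# Hypothetical phrase lists (illustrative only): enumeration-style
# descriptions lean on explicit comparisons and numbers; synthesis-style
# descriptions lean on causal and interpretive language.
ENUMERATION = [r"\bhigher than\b", r"\blower than\b", r"\bcompared to\b", r"\d+(?:\.\d+)?%?"]
SYNTHESIS = [r"\bbecause\b", r"\bsuggests?\b", r"\bdriven by\b", r"\blikely\b"]

def description_style(text: str) -> str:
    """Classify a chart description as 'enumeration' or 'synthesis'
    by comparing counts of the two phrase families."""
    t = text.lower()
    enum_hits = sum(len(re.findall(p, t)) for p in ENUMERATION)
    synth_hits = sum(len(re.findall(p, t)) for p in SYNTHESIS)
    return "enumeration" if enum_hits >= synth_hits else "synthesis"

# A point-A-versus-point-B reading vs. a 'why'-oriented reading:
llm_like = description_style("Sales in Q3 were higher than Q2, 40% compared to 25%.")
human_like = description_style("Sales likely rose because the promotion drove demand.")
```

On these two toy inputs the heuristic returns "enumeration" and "synthesis" respectively, which is roughly the gap the study describes: the model output reads like the first string, the human summary like the second.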

From the abstract

Designers often create visualizations to achieve specific high-level analytical or communication goals. These goals require people to extract complex and interconnected data patterns. Prior perceptual studies of visualization effectiveness have focused on low-level tasks, such as estimating statistical quantities, and have only recently begun to explore high-level comprehension of visualizations. Despite the growing use of Large Language Models (LLMs) as visualization interpreters, how their interpretations r…