Research & Papers

How Do LLMs See Charts? A Comparative Study on High-Level Visualization Comprehension in Humans and LLMs

New research shows LLMs like GPT-4 analyze charts by listing comparisons, while humans synthesize trends into stories.

Deep Dive

A research team led by Hyotaek Jeon of KAIST, with collaborators, has published a study, accepted to EuroVis 2026, that systematically compares how Large Language Models (LLMs) and humans understand data visualizations. The study, titled 'How Do LLMs See Charts?', moves beyond low-level data extraction tasks to investigate high-level comprehension: the ability to grasp the communicative goals and complex patterns intended by a chart's designer. The researchers conducted a qualitative analysis using three common chart types (line graphs, bar graphs, and scatterplots) and tested LLMs under three different prompt conditions to map their reasoning pathways.
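
To make the design concrete, the minimal Python sketch below scripts such a chart-type-by-prompt-condition grid. It is an illustration only: the condition names and prompt wordings are assumptions, since the article does not reproduce the paper's actual materials.

    from itertools import product

    # Hypothetical stand-ins for the study's materials: the article names the
    # three chart types but not the exact prompt conditions or wordings.
    CHART_TYPES = ["line graph", "bar graph", "scatterplot"]
    PROMPT_CONDITIONS = {
        "open-ended": "Describe what this {chart} communicates.",
        "goal-oriented": "What is the main takeaway the designer of this {chart} intends?",
        "constrained": "Summarize this {chart} in one sentence, focusing on overall trends.",
    }

    def build_trials():
        """Enumerate the 3 x 3 grid of (chart type, prompt condition) trials."""
        trials = []
        for chart, (condition, template) in product(CHART_TYPES, PROMPT_CONDITIONS.items()):
            trials.append({
                "chart_type": chart,
                "condition": condition,
                "prompt": template.format(chart=chart),
            })
        return trials

    if __name__ == "__main__":
        for t in build_trials():
            print(f"[{t['chart_type']} / {t['condition']}] {t['prompt']}")

Each of the nine resulting prompts would then be paired with the corresponding chart image and sent to the model under test, with responses collected for qualitative coding.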

The key finding is a stark divergence in interpretive strategy. Humans naturally synthesize data into cohesive, trend-centric narratives, focusing on the 'big picture' story the chart tells. In contrast, LLMs consistently default to a methodical 'structural enumeration' approach, systematically listing comparisons between data points and describing numerical ranges without forming a synthesized narrative. This strategy remained rigid and unchanged across the different prompt constraints, suggesting it is a fundamental characteristic of current LLM architectures. The study concludes that while LLMs can describe charts, their comprehension mechanism is fundamentally distinct from human intuition, creating both critical challenges for reliable automated analysis and new opportunities for designing visualizations optimized for AI interpretation.
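
The contrast can be made concrete with a toy sketch. The function below generates an enumeration-style description of a small invented series, mirroring the comparison-listing, range-stating pattern the study attributes to LLMs, alongside a hand-written narrative summary of the kind humans tend to produce. The data and phrasings are illustrative assumptions, not examples from the paper.

    # Toy illustration of the two strategies; data and wording are invented.
    sales = {"2019": 40, "2020": 55, "2021": 58, "2022": 90}

    def enumeration_style(series):
        """Structural enumeration: pairwise comparisons plus the numeric range,
        with no synthesized trend (the pattern attributed to LLMs)."""
        years = list(series)
        parts = []
        for a, b in zip(years, years[1:]):
            relation = "higher than" if series[b] > series[a] else "lower than"
            parts.append(f"{b} ({series[b]}) is {relation} {a} ({series[a]})")
        parts.append(f"values range from {min(series.values())} to {max(series.values())}")
        return "; ".join(parts) + "."

    # A trend-centric narrative of the kind humans tend to produce instead.
    narrative_style = "Sales climbed steadily, then surged in 2022 to more than double the 2019 level."

    print("Enumeration:", enumeration_style(sales))
    print("Narrative:  ", narrative_style)

Both descriptions are factually grounded in the same data; the difference the study highlights is that only the second integrates the points into a single communicative story.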

Key Points
  • LLMs use a rigid 'structural enumeration' strategy for charts, listing comparisons and ranges, unlike human narrative synthesis.
  • The study tested three visualization types (line graphs, bar graphs, and scatterplots) and found the LLMs' strategy stayed consistent across all prompt conditions.
  • The research highlights a core reasoning gap, posing challenges for using LLMs as trustworthy, automated data interpreters in professional settings.

Why It Matters

Professionals relying on AI for data analysis must understand its non-human reasoning to avoid misinterpretation of automated chart insights.