Research & Papers

Generics in science communication: Misaligned interpretations across laypeople, scientists, and large language models

A new study reveals a dangerous communication gap between scientists, the public, and AI.

Deep Dive

Scientists and AI models often communicate findings through broad, unquantified statements called 'generics' (e.g., 'statins reduce heart events'). A study comparing interpretations across laypeople, scientists, and LLMs like ChatGPT-5 found systematic mismatches: laypeople, and especially the AI models, judged these statements to be more generalizable and more credible than the scientists intended. This points to a twofold risk: scientists' language choices can be misread by the public, and AI-generated summaries may systematically overgeneralize research findings.

Why It Matters

Left unaddressed, these interpretation gaps can spread scientific misinformation and erode public trust in research.