AI Safety

Do LLMs Track Public Opinion? A Multi-Model Study of Favorability Predictions in the 2024 U.S. Presidential Election

Nine AI models consistently overestimated Kamala Harris's favorability relative to exit polls in the 2024 election.

Deep Dive

A study tested nine large language models against five major exit polls from the 2024 U.S. presidential election and found the models systematically miscalibrated. All nine models overpredicted Kamala Harris's favorability by 10-40% relative to the polls; biases for Donald Trump were smaller, at 5-10%. The deviations persisted over time and were not corrected by giving the models internet access, suggesting that LLMs cannot reliably track public opinion.
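The comparison described above amounts to measuring the gap between model-predicted favorability and observed poll values. A minimal sketch of that calculation, using hypothetical numbers chosen only to mirror the reported direction of the bias (not the study's actual data or code):

```python
# Sketch: per-candidate bias between model-predicted favorability and
# exit-poll favorability, averaged across models. All numbers below are
# illustrative assumptions, not figures from the study.

def favorability_bias(predicted: dict, polls: dict) -> dict:
    """Return mean deviation (model prediction - poll value) per candidate, in %."""
    bias = {}
    for candidate, preds in predicted.items():
        poll = polls[candidate]
        bias[candidate] = sum(p - poll for p in preds) / len(preds)
    return bias

# Hypothetical favorability predictions (%) from three models.
model_predictions = {
    "Harris": [62, 58, 65],
    "Trump":  [48, 46, 47],
}
# Hypothetical exit-poll favorability (%).
exit_polls = {"Harris": 48, "Trump": 42}

print(favorability_bias(model_predictions, exit_polls))
```

A positive value indicates overprediction; in the study's framing, the Harris bias would be several times larger than the Trump bias.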

Why It Matters

This reveals a critical limitation in using AI for political forecasting and understanding public sentiment.