AI Safety

Evidence of political bias in search engines and language models before major elections

An audit of four search engines and two large language models reveals systematic skews in political information.

Deep Dive

A team of researchers from Portugal, led by Íris Damião, published a landmark study on arXiv analyzing potential political bias in major information platforms. Using a standardized, privacy-preserving bot methodology, they audited four search engines (including Google) and two large language models (LLMs) by collecting answers to 4,360 election-related queries before the 2024 European Parliament and US presidential elections. The findings reveal systematic, measurable skews in the political information presented to users.
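The paper does not publish its collection code, but the "standardized, privacy-preserving bot" idea can be sketched as follows: every query is issued from a fresh, logged-out session with no cookies, a fixed user agent, and a neutral locale, so the answers are not shaped by personalization. The endpoint URL, parameter names, and queries below are illustrative stand-ins, not the study's actual setup.

```python
# Hypothetical sketch of a depersonalized audit bot (not the authors' code).
import urllib.parse

# Stand-ins for the study's 4,360 election-related queries.
QUERIES = ["election 2024 economy", "election 2024 immigration"]

def build_request(query, engine_url="https://search.example.com/search"):
    """Return (url, headers) for one standardized, privacy-preserving query."""
    params = urllib.parse.urlencode({"q": query, "hl": "en"})
    headers = {
        "User-Agent": "audit-bot/1.0",  # fixed UA so results are comparable
        "Accept-Language": "en",        # neutral locale
        # deliberately no Cookie header: each query is a fresh session,
        # so no personalization history accumulates across queries
    }
    return f"{engine_url}?{params}", headers

requests_to_send = [build_request(q) for q in QUERIES]
```

Holding the session state constant across all queries and platforms is what makes the cross-engine comparison meaningful: any remaining skew reflects the ranking system, not the bot's browsing history.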

In the US context, the audit found Google's search results strongly favored topics more important to Republican voters, while other search engines showed a bias toward issues more relevant to Democrats. For the European elections, search results across platforms disproportionately mentioned far-right political entities beyond levels expected from polls or media coverage. While the LLMs' responses were generally more balanced, they still showed evidence of overrepresenting far-right and Green party entities. The researchers argue these algorithmic biases, even if small, can significantly influence democratic processes by shaping the information landscape for millions of voters.

The study's methodology involved mapping query answers to ideological positions in the EU and issue associations in the US, providing a robust, data-driven framework for such audits. The authors conclude that the central role of search engines and LLMs in political information access necessitates greater transparency and systematic, independent auditing of their outputs, especially during critical democratic events like elections.
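The core measurement behind such an audit can be sketched in a few lines: once answers are mapped to political entities, compare each entity's share of mentions against a baseline such as polling support, and flag ratios above or below one. The party labels, mention list, and baseline shares below are hypothetical examples, not the study's data.

```python
# Minimal sketch of an over/under-representation score (illustrative data).
from collections import Counter

def representation_skew(mentions, baseline):
    """Ratio of each entity's share of mentions to its baseline share
    (e.g. polling support). > 1 means overrepresented, < 1 under."""
    counts = Counter(mentions)
    total = sum(counts.values())
    return {
        entity: (counts.get(entity, 0) / total) / share
        for entity, share in baseline.items()
    }

# Hypothetical party mentions extracted from audited search results:
mentions = ["far_right", "far_right", "center", "green",
            "far_right", "center_left"]
# Hypothetical baseline shares from polls (must sum over audited entities):
baseline = {"far_right": 0.15, "center": 0.30,
            "center_left": 0.35, "green": 0.10}

skew = representation_skew(mentions, baseline)
# far_right: mention share 3/6 = 0.50 vs baseline 0.15 → skew ≈ 3.33
```

A skew of roughly 3.3 for the far-right entity in this toy example is the kind of signal the authors describe: mentions well beyond what polls or media coverage would predict.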

Key Points
  • Google's US search results showed a strong bias toward Republican-favored topics in the 2024 election audit.
  • European search results across platforms disproportionately amplified far-right political entities beyond their electoral support.
  • Large language models (LLMs) like GPT-4 and Claude showed more balanced responses but still overrepresented certain parties.

Why It Matters

Algorithmic bias in major information platforms can shape public perception and influence democratic outcomes at scale.