Research & Papers

Persuadability and LLMs as Legal Decision Tools

Can a slick lawyer trick an AI judge? New research says yes...

Deep Dive

A new academic paper, accepted at the 21st International Conference on Artificial Intelligence and Law (ICAIL 2026), exposes a critical flaw in using Large Language Models (LLMs) as legal decision assistants or first-instance judges. Researchers Oisin Suttle and David Lillis from University College Dublin conducted experiments with frontier open- and closed-weights LLMs, testing how the models respond to legal arguments. Their key finding: the quality of the advocate making the argument significantly affects whether the model agrees with a particular legal point of view. In other words, a more persuasive lawyer could sway an AI judge to rule based on advocacy skill rather than the actual merits of the case.

The study has immediate implications for any organization considering LLMs for legal or administrative decision-making. The researchers highlight a fundamental tension: legal decision-makers must be persuadable by good arguments, but not unduly influenced by a compelling advocate. Their results suggest current LLMs struggle with this balance, potentially leading to inconsistent or unfair outcomes. The paper is available on arXiv (2604.26233) and will be presented at ICAIL 2026. For professionals building AI-powered legal tools, this research underscores the need for careful testing and safeguards against manipulation by skilled advocates.
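The experimental idea behind the study — hold the legal merits fixed, vary only the advocacy quality, and see whether the judge's verdict moves — can be sketched as a small test harness. Everything below is a hypothetical illustration, not the authors' actual code: the function names, prompt format, and the stub judge (which stands in for a real LLM API call) are all assumptions.

```python
# Hypothetical persuadability probe, inspired by the study's design.
# Idea: keep case facts and the legal point constant, vary only the
# rhetorical quality of the argument, and compare the judge's verdicts.

def probe_persuadability(judge, case_facts, argument_variants):
    """Return the judge's verdict for each advocacy variant.

    judge: callable(prompt: str) -> str verdict; in a real experiment this
           would wrap an LLM API call (stubbed below for illustration).
    case_facts: fixed description of the dispute.
    argument_variants: the same legal point rendered with differing skill.
    """
    verdicts = {}
    for label, argument in argument_variants.items():
        prompt = (f"Facts: {case_facts}\n"
                  f"Argument for the claimant: {argument}\n"
                  f"Verdict (uphold/dismiss):")
        verdicts[label] = judge(prompt)
    return verdicts


# Deliberately flawed stub judge that keys on rhetorical flourish rather
# than merits -- standing in for the failure mode the paper documents.
def stub_judge(prompt: str) -> str:
    return "uphold" if "compelling" in prompt.lower() else "dismiss"


facts = "Tenant withheld rent after landlord ignored repair notices."
variants = {
    "plain": "The tenant was entitled to withhold rent under the lease.",
    "skilled": ("The evidence is compelling: the tenant was plainly "
                "entitled to withhold rent under the lease."),
}

results = probe_persuadability(stub_judge, facts, variants)
# An advocacy-robust judge should return ONE verdict across variants;
# two distinct verdicts mean style alone flipped the outcome.
print(len(set(results.values())))  # → 2 for this deliberately flawed stub
```

In a real evaluation the stub would be replaced by calls to the model under test, with flip rates aggregated over many cases and argument pairings rather than a single example.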

Key Points
  • Study tested frontier open- and closed-weights LLMs on legal argument persuadability
  • Advocate quality significantly influenced model decisions, not just case merits
  • Accepted at ICAIL 2026, implications for AI in judicial and administrative settings

Why It Matters

Highlights a critical risk: AI judges could be swayed by advocacy skill, not legal facts.