"Make It Sound Like a Lawyer Wrote It": Scenarios of Potential Impacts of Generative AI for Legal Conflict Resolution
New research compares how EU and US experts foresee AI transforming legal battles, with stark regional differences.
A new academic paper titled "Make It Sound Like a Lawyer Wrote It" explores the profound but uncertain future of generative AI in the legal sector. Authored by researchers Kimon Kieslich, Natali Helberger, and Nicholas Diakopoulos, the study employs a scenario-writing methodology, surveying legal experts and ordinary citizens in the European Union and the United States to envision how tools like GPT-4 and Claude could be integrated into legal processes, from document review to dispute resolution. The core finding is a landscape defined by a critical trade-off: AI promises dramatic efficiency improvements and broader access to justice, but it simultaneously introduces severe risks of "hallucinated" or inaccurate legal advice and challenges to the perceived legitimacy of AI-assisted judicial outcomes.
The research reveals a significant divergence in anticipated impacts depending on regional regulatory frameworks. Participants from the EU, writing under the prescriptive EU AI Act, envisioned more controlled, human-in-the-loop implementations focused on risk mitigation. US participants, reflecting a largely self-regulatory industry environment, described faster, more autonomous AI adoption with correspondingly higher risks. Qualitative analysis of the narratives surfaced the prevalent themes and a central conclusion: the ultimate societal impact, positive or negative, will depend less on the technology itself than on how it is implemented and governed by the legal professionals and institutions navigating this new terrain.
- Study used a scenario-writing survey of EU/US legal experts & citizens to map AI's future in law.
- Found a central trade-off: substantial efficiency gains vs. risks of inaccurate advice and delegitimized decisions.
- EU narratives favored controlled use under AI Act, while US scenarios predicted faster, riskier autonomous adoption.
Why It Matters
For the legal tech sector, this research offers a roadmap for navigating the ethical and practical challenges of AI integration in high-stakes domains.