Can LLMs Synthesize Court-Ready Statistical Evidence? Evaluating AI-Assisted Sentencing Bias Analysis for California Racial Justice Act Claims
Open-source tool uses LLMs to generate court-ready statistical evidence from massive prison datasets.
A new research paper accepted to ACM CHI 2026 demonstrates how AI can help bridge the 'second-chance gap' in criminal justice reform. Researcher Aparna Komarla developed an open-source platform that processes 95,000 prison records obtained through California Public Records Act requests to generate court-ready statistical evidence of racial bias in sentencing. This addresses implementation challenges of California's 2020 Racial Justice Act, which allows defendants to challenge convictions based on statistical disparities but has seen limited practical application due to the complexity of analyzing massive datasets.
The platform employs LLMs as an interpretive layer that synthesizes results from statistical methods such as odds ratios, relative risk, and chi-square tests into cohesive legal narratives. These narratives include context essential for prima facie and discovery motions: confidence intervals, sample sizes, and data limitations. Using an LLM-as-a-Judge evaluation framework, the research found that AI can serve as a powerful descriptive assistant for real-time evidence generation when ethically incorporated into analysis pipelines, potentially helping identify hundreds of overlooked resentencing opportunities in California's justice system.
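The paper's own pipeline code isn't reproduced here, but a minimal sketch can illustrate the division of labor it describes: deterministic statistics computed first, LLM narration layered on top. The 2x2 counts, group labels, and prompt wording below are all hypothetical, and the confidence intervals use standard Wald approximations rather than whatever method the platform actually employs.

```python
# Minimal sketch of a statistics-then-narration pipeline (assumed, not the
# paper's actual code): compute odds ratio, relative risk, and a chi-square
# test from a 2x2 contingency table, then assemble the results into a prompt
# an LLM could turn into narrative text. All counts are illustrative.
import math
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = defendant group, columns = outcome
#                 enhanced sentence   no enhancement
# group A                a                  b
# group B                c                  d
a, b, c, d = 240, 760, 150, 850
n = a + b + c + d

# Odds ratio with a 95% Wald confidence interval on the log scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
or_ci = tuple(math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))

# Relative risk with a 95% confidence interval (Katz log method).
rel_risk = (a / (a + b)) / (c / (c + d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
rr_ci = tuple(math.exp(math.log(rel_risk) + z * se_log_rr) for z in (-1.96, 1.96))

# Chi-square test of independence on the same table.
chi2, p_value, dof, _ = chi2_contingency([[a, b], [c, d]])

# Package the statistics, sample size, and caveats into an LLM prompt.
prompt = (
    f"Summarize these sentencing-disparity statistics for a legal motion, "
    f"noting sample size and data limitations:\n"
    f"- N = {n} records\n"
    f"- Odds ratio = {odds_ratio:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})\n"
    f"- Relative risk = {rel_risk:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f})\n"
    f"- Chi-square = {chi2:.1f} (dof = {dof}, p = {p_value:.2g})"
)
print(prompt)
```

Keeping the arithmetic outside the LLM and handing it only finished numbers to narrate is what makes the "descriptive assistant" framing defensible: the model never generates the evidence, only the prose around it.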
- Platform processes 95,000 California prison records to identify racial bias in sentencing
- Uses LLMs to synthesize statistical results (odds ratio, relative risk, chi-square) into legal narratives
- Research accepted to ACM CHI 2026 shows AI can ethically assist with real-time evidence generation
Why It Matters
The platform could help identify hundreds of overlooked resentencing cases and make Racial Justice Act claims more accessible.