FASE: A Fairness-Aware Spatiotemporal Event Graph Framework for Predictive Policing
A new AI framework for crime prediction enforces fairness constraints on patrol allocation, yet a 3.5-percentage-point detection gap persists.
A team of researchers led by Pronob Kumar Barman has introduced FASE, a novel AI framework designed to address the critical issue of racial bias in predictive policing systems. Traditional systems that allocate patrols based purely on predicted crime risk can create a feedback loop, where over-policing in certain areas leads to more reported crime, which in turn justifies further policing. FASE tackles this by integrating two core components: a sophisticated crime prediction module and a fairness-constrained resource allocation optimizer.
The prediction module models Baltimore as a graph of 25 ZIP code areas, using 139,982 Part 1 crime incidents from 2017 to 2019. It employs a spatiotemporal graph neural network combined with a multivariate Hawkes process to capture the spatial dependencies and self-exciting temporal dynamics of crime. Outputs are modeled with a Zero-Inflated Negative Binomial distribution, suited to the overdispersed, zero-heavy nature of crime counts, achieving a test loss of 0.4857.
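The two statistical ingredients can be illustrated with a minimal sketch: a univariate Hawkes intensity with an exponential kernel, and the ZINB log-probability. All parameter values here (`mu`, `alpha`, `beta`, `pi`, `r`, `p`) are hypothetical placeholders rather than values from the paper, and the actual FASE module couples these components with a graph neural network over the ZIP-code graph.

```python
import numpy as np
from scipy.stats import nbinom

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.5, beta=1.0):
    """Hawkes conditional intensity with an exponential kernel:
    lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past events,
    so each past crime temporarily raises the expected rate of new ones."""
    past = event_times[event_times < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def zinb_log_pmf(y, pi, r, p):
    """Zero-Inflated Negative Binomial log-probability: with probability pi
    the count is a structural zero; otherwise it follows NB(r, p)."""
    nb = nbinom.pmf(y, r, p)
    pmf = np.where(y == 0, pi + (1 - pi) * nb, (1 - pi) * nb)
    return np.log(pmf)
```

With no past events the intensity reduces to the baseline `mu`, and the ZINB mass at zero is `pi + (1 - pi) * NB(0; r, p)`, which is what makes the distribution a natural fit for zero-heavy crime counts.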
The framework's key innovation is its closed-loop simulator. After prediction, patrol allocation is formulated as a linear optimization problem that maximizes risk-weighted coverage while enforcing a strict Demographic Impact Ratio constraint, with deviation from parity bounded by 0.05. Across six simulated deployment cycles, the system maintained fairness metrics between 0.9928 and 1.0262, with coverage ranging from 0.876 to 0.936. However, the research revealed a persistent and critical finding: a detection rate gap of approximately 3.5 percentage points remained between minority and non-minority areas. This shows that fairness constraints on patrol allocation alone are insufficient to prevent bias from seeping back into the training data during retraining cycles. The finding underscores the need for fairness interventions across the entire AI pipeline, not just at the allocation stage.
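The allocation step can be sketched as a small linear program with `scipy.optimize.linprog`. The zone risks, minority flags, and patrol budget below are invented for illustration; only the 0.05 deviation bound comes from the paper, and the ratio constraint shown is one plausible linearization of a Demographic Impact Ratio, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 6 zones: predicted risk and a minority-area flag.
risk     = np.array([0.9, 0.7, 0.6, 0.5, 0.4, 0.3])
minority = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
budget   = 3.0   # total patrol units to spread across zones
eps      = 0.05  # allowed DIR deviation, as in the paper

n = len(risk)
m_avg = minority.astype(float) / minority.sum()          # mean over minority zones
o_avg = (~minority).astype(float) / (~minority).sum()    # mean over other zones

# linprog minimizes, so negate the risk-weighted coverage objective.
c = -risk

# Equality constraint: allocate exactly the budget.
A_eq = np.ones((1, n))
b_eq = [budget]

# Fairness: mean minority coverage within (1 +/- eps) of non-minority mean,
# written as two linear inequalities so the ratio bound stays an LP.
A_ub = np.vstack([m_avg - (1 + eps) * o_avg,   # DIR <= 1 + eps
                  (1 - eps) * o_avg - m_avg])  # DIR >= 1 - eps
b_ub = np.zeros(2)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n, method="highs")
x = res.x  # per-zone patrol allocation in [0, 1]
```

Without the two fairness rows, the solver would simply saturate the highest-risk zones; the DIR bound forces it to trade a little risk-weighted coverage for near-parity between the two groups, which is exactly the tension the paper's simulation explores.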
- FASE combines a spatiotemporal graph neural network with a Hawkes process on 139,982 crime incidents, achieving a test loss of 0.4857.
- Its fairness-constrained patrol allocation optimizer kept demographic impact ratios between 0.9928 and 1.0262 across six simulation cycles.
- A persistent 3.5-percentage-point detection rate gap between minority and non-minority areas shows allocation fairness alone doesn't stop feedback bias, requiring full-pipeline interventions.
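Under one plausible reading of the reported metrics, the two headline numbers can be computed as follows; the function names and the data in the test are illustrative, not taken from the paper.

```python
import numpy as np

def demographic_impact_ratio(coverage, minority_mask):
    """Ratio of mean patrol coverage in minority vs non-minority zones;
    values near 1.0 indicate parity (the paper reports 0.9928-1.0262)."""
    return coverage[minority_mask].mean() / coverage[~minority_mask].mean()

def detection_rate_gap(detected, occurred, minority_mask):
    """Gap in detection rate between non-minority and minority zones,
    in percentage points (the paper reports roughly 3.5)."""
    rate = lambda mask: detected[mask].sum() / occurred[mask].sum()
    return 100 * (rate(~minority_mask) - rate(minority_mask))
```

The point of tracking both is that the first can sit at parity while the second stays nonzero, which is the paper's central observation about allocation-only fairness.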
Why It Matters
This research provides a crucial blueprint for building less-biased public safety AI and exposes the limits of simple fairness fixes, pushing for systemic solutions.