Talk page
Title:
Plenary Talk: Sources and consequences of algorithmic bias
Speaker:
Abstract:
In this talk I will first provide a taxonomy of different sources of bias in machine learning algorithms. I will then present novel results on the effect of differential victim crime reporting on predictive policing systems (FAccT’21). Previous research on fairness in predictive policing has concentrated on the feedback loops that occur when models are trained on discovered crime data, but it has limited implications for models trained on victim crime reporting data. We demonstrate how differential victim crime reporting rates across geographical areas can lead to outcome disparities in common crime hot spot prediction models, which may result in misallocations in the form of both over-policing and under-policing. I will conclude the talk by discussing paths forward for research on algorithmic fairness, arguing that reliable assessment and design require us to center AI-assisted decisions, rather than AI predictions, as the locus of evaluation.
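
As a rough sketch of the reporting-rate mechanism described above, the short Python script below simulates two areas with identical underlying victimization but different victim reporting rates, and then applies a naive count-based hot spot ranking. All area names, incident counts, and reporting rates are hypothetical assumptions chosen for illustration; they are not figures or methods from the FAccT’21 paper.

import random

random.seed(0)

# Hypothetical inputs: two areas with identical underlying victimization but
# different victim crime reporting rates (illustrative numbers only).
TRUE_INCIDENTS = {"Area A": 100, "Area B": 100}
REPORTING_RATE = {"Area A": 0.9, "Area B": 0.5}

def simulate_reported_counts(true_incidents, reporting_rate):
    """Each incident is independently reported with its area's reporting rate."""
    reported = {}
    for area, n in true_incidents.items():
        p = reporting_rate[area]
        reported[area] = sum(1 for _ in range(n) if random.random() < p)
    return reported

reported = simulate_reported_counts(TRUE_INCIDENTS, REPORTING_RATE)

# A naive hot spot model that ranks areas by reported counts places Area A
# above Area B even though underlying crime is identical -- one way that
# over-policing of A and under-policing of B can arise.
ranking = sorted(reported, key=reported.get, reverse=True)
print("Reported counts:", reported)
print("Hot spot ranking:", ranking)

The point of the sketch is only that the disparity in the resulting ranking traces back entirely to the reporting rates, not to any difference in underlying crime.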
Link:
Workshop: