Race-Blind Charging

Our work explores the problem of redacting racial information from the free-text police incident narratives that prosecutors use to make charging decisions. In addition to describing the incident leading to an arrest or citation, these reports often contain the suspect's race and physical description. Recent studies give reason for concern that the judgments prosecutors make using these reports may suffer from explicit or implicit racial bias. In this paper, we apply several deep learning approaches to the problem of obfuscating a suspect's race through redaction. We make use of pre-trained models to mitigate data availability issues, and ultimately show that unsupervised pre-trained models fine-tuned on downstream tasks, such as named entity recognition, are competitive with the performance of past algorithms designed for this problem and, notably, do not require labeled data or additional human input.
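
The sketch below illustrates the general idea of redaction via a pre-trained model fine-tuned for named entity recognition: tag spans in a narrative, then splice placeholders over spans that could reveal identifying information. It is a minimal illustration, not the paper's actual pipeline; the model checkpoint (`dslim/bert-base-NER`), the entity categories targeted, and the `redact` helper are all assumptions for demonstration, and a real race-blind system would need a model trained to flag race-indicating spans such as physical descriptors.

```python
# Minimal sketch: NER-based redaction with a pre-trained transformer.
# The model name, entity categories, and threshold are illustrative
# assumptions, not the configuration described in the paper.
from transformers import pipeline

# Publicly available token-classification model fine-tuned for NER.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",
)

def redact(narrative: str, score_threshold: float = 0.5) -> str:
    """Replace spans tagged as person or location entities with placeholders."""
    entities = ner(narrative)
    # Splice right-to-left so earlier character offsets remain valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        if ent["score"] < score_threshold:
            continue
        if ent["entity_group"] in {"PER", "LOC"}:
            placeholder = f"[{ent['entity_group']}]"
            narrative = narrative[:ent["start"]] + placeholder + narrative[ent["end"]:]
    return narrative

if __name__ == "__main__":
    report = "Officers contacted John Doe near 5th and Main in Oakland."
    print(redact(report))
```

Because the model is only fine-tuned on a downstream tagging task, this style of approach needs no labeled redaction data or additional human annotation, which is the property the abstract highlights.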