Can Algorithms Lessen the Bias in the Criminal Justice System?
By Kasondrea Thomas
Human decision-making and discretion in the criminal justice system have resulted in “appalling levels of mistreatment of disadvantaged groups.”1 Artificial Intelligence (AI) offered hope that removing discretionary decisions made by law enforcement, prosecutors, and judges might reduce the effect of bias in the criminal justice system. The United States imprisons a larger percentage of its population than any other country in the world.2 Black and Latino people make up an estimated 13% and 18% of the U.S. population, respectively, yet they are disproportionately represented among inmates: Black inmates make up 30% of the incarcerated population and Latinos account for 22%.3 Discretion plays a critical role at each step of the criminal justice system, which can lead to disparate treatment.4 Discretionary decisions begin with law enforcement officers, even before an initial arrest occurs. Prosecutors then determine plea offers and sentencing recommendations, and judges have discretion over bail requirements and sentences.5 Compared with algorithms, judges have been shown to use their discretion to detain more people than is necessary to achieve low crime rates.
AI has the capacity to increase the effectiveness of predictive and proactive policing.6 Algorithms that analyze patterns of previous behavior can predict crime in specific geographical areas and time frames with accuracy.7 AI can also create risk assessments and predict the probability that a person will appear for court hearings or will commit another crime.8 The growth in the collection of information about people has made it possible for policing algorithms to become predictive. The use of AI to monitor closed-circuit television (CCTV) and alert law enforcement has already been implemented in Australia.9 It can detect irregular behavior such as running, loitering, punching, and certain aggressive stances that may signal hostility. This can alert law enforcement to criminal activity in real time and keep the number of patrols lower in certain areas.10 Algorithms can also inform bail decisions. Courts have turned to these algorithms because, unlike judges, they can be more consistent and fairer in the process.11 The use of algorithms in New Jersey led to a “16 percent drop in its pretrial jail population, again with no increase in crime.”12 A similar study in New York City showed that an algorithm’s risk assessment would outperform a judge’s record.13 Consistent and transparent sentencing is vital to the law, and it is imperative that similar cases be treated similarly. AI offers an opportunity to keep the implicit biases of judges out of the sentencing phase of a trial.14
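To make concrete what a pretrial risk-assessment algorithm of this kind computes, the following is a minimal, purely illustrative sketch in Python. The feature names, weights, and cutoffs are hypothetical and do not correspond to any tool actually in use.

```python
import math

# Hypothetical weights for a simplified pretrial risk model. Real tools derive
# such weights from historical case data; these numbers are invented.
WEIGHTS = {
    "prior_failures_to_appear": 0.9,
    "prior_arrests": 0.35,
    "pending_charges": 0.6,
    "age_under_23": 0.4,
}
INTERCEPT = -2.5

def failure_to_appear_probability(defendant):
    """Estimate P(failure to appear) with a logistic model over case-history features."""
    score = INTERCEPT + sum(w * defendant.get(f, 0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))

def risk_category(p):
    """Collapse the probability into the coarse label a judge would actually see."""
    if p < 0.2:
        return "low"
    return "moderate" if p < 0.5 else "high"

defendant = {"prior_failures_to_appear": 1, "prior_arrests": 3, "pending_charges": 1}
p = failure_to_appear_probability(defendant)
print(f"Estimated probability of failure to appear: {p:.2f} ({risk_category(p)})")
```

The point of the sketch is that the recommendation a judge receives is a statistical summary of recorded history; which features enter the formula, and whose histories were recorded, determine what the score actually measures.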
With all the hope that AI would offer a more just way of taking the bias of discretion out of the criminal justice system, there have been continual shortcomings suggesting this may not be the route we need. Critics have argued that there are risks in using algorithms.15 Using seemingly neutral traits in algorithms, such as education level, socioeconomic background, or address, may “exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”16 When formulas include prior arrests or an individual’s legal history, the algorithm can repeat past discrimination.17 When algorithms assign individuals a threat score, it influences whom the police target and how they handle those interactions. These algorithms have gained support because they have accurately predicted geographical locations with higher rates of gun violence based on profiles.18 However, when police encounter a high threat score, their decision-making becomes distorted; this increases the rates at which they use force and leads to disproportionate monitoring of minorities.19
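A tiny simulation can illustrate the dynamic the critics describe; every number and label in it is invented for illustration. Two groups have identical underlying behavior, but one has historically been policed more heavily, so more of its conduct is recorded as arrests. A facially "race-neutral" score built only on prior arrests then reproduces the enforcement disparity.

```python
import random

random.seed(0)

# Invented numbers: both groups offend at the same rate, but historical policing
# intensity differs, so offenses are recorded as arrests at different rates.
OFFENSE_RATE = 0.10
ARRESTS_PER_OFFENSE = {"heavily_policed": 0.9, "lightly_policed": 0.3}

def recorded_prior_arrests(group, years=10):
    """Only arrests that were actually recorded ever reach the algorithm."""
    arrests = 0
    for _ in range(years):
        offended = random.random() < OFFENSE_RATE
        if offended and random.random() < ARRESTS_PER_OFFENSE[group]:
            arrests += 1
    return arrests

def threat_score(prior_arrests):
    """A facially neutral score: race is never an input, only prior arrests."""
    return min(10.0, 2.0 * prior_arrests)

for group in ARRESTS_PER_OFFENSE:
    scores = [threat_score(recorded_prior_arrests(group)) for _ in range(10_000)]
    print(group, "average threat score:", round(sum(scores) / len(scores), 2))
```

In this toy setup the heavily policed group's average score comes out roughly three times higher despite identical behavior, which is the pattern the critics characterize as past discrimination being repeated by the algorithm.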
Facial recognition has also been problematic. Databases have limitations that lead to the misidentification of people in certain groups.20 Research has shown divergent error rates across demographics, “with the poorest accuracy consistently found in subjects who are female, Black and 18-30 years old.”21 Additionally, discriminatory law enforcement practices have led to an overrepresentation of Black individuals in mugshot data, which is then used to make predictions.22 Facial recognition technology with ingrained bias can misidentify suspects and further increase the incarceration of innocent Black Americans.23 Facial recognition can also be limited and can reinforce bias depending on how it is applied. For example, Project Green Light (PGL), a model surveillance program, was enacted in 2016. High-definition cameras were installed throughout Detroit with a direct stream to the Detroit Police Department.24 These cameras used facial recognition and compared faces against criminal databases and state identification photos.25 However, PGL camera stations were not equally distributed across the city; the surveillance correlated with majority-Black residential areas while avoiding White and Asian neighborhoods.26 Therefore, whether AI technology is inherently biased because of technological deficits or is prejudicially applied, it can lead to further disparate treatment of already disadvantaged groups in the criminal justice system.
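The divergent error rates described above are typically reported in accuracy audits as per-group false match rates. The sketch below shows that computation with invented counts standing in for a real audit's labeled test pairs.

```python
# Invented audit counts: how often the matcher wrongly declared two different
# people to be the same person, broken out by demographic group.
audit = {
    "women, Black, 18-30": {"false_matches": 34, "non_match_trials": 1000},
    "men, white, 18-30": {"false_matches": 3, "non_match_trials": 1000},
}

for group, counts in audit.items():
    false_match_rate = counts["false_matches"] / counts["non_match_trials"]
    print(f"{group}: false match rate = {false_match_rate:.1%}")
```

In a law-enforcement setting, a higher false match rate for a group translates directly into more innocent members of that group being flagged as suspects.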
Bail algorithms have shortcomings as well; judges show their mistrust of AI by overruling the system's recommendation a substantial proportion of the time. A study of Virginia, which adopted the use of algorithm-based risk assessment in 2002, showed that racial disparities increased in the circuits that relied most on the risk assessments.27 In Kentucky specifically, a study has shown that since its risk assessment tool was introduced, white defendants have been offered no-bail release at a much higher rate than Black defendants.28 These risk assessment tools may have led to more defendants on bail, but “white defendants were the ones to benefit.”29 Accordingly, when a defendant has a racially unequal past, any prediction made on the basis of that past will continually produce racially unequal recommendations.30
Racial inequality is now widely understood to be unacceptable in the criminal justice system, and there have been calls to learn how to use these algorithms in ways that do not exacerbate the disparity.31 Criminal justice institutions must decide whether to adopt the risk-assessment tools and, “if so, what measure of equality to demand those tools fulfill.”32 Racial-justice advocates have demanded that race, and facts correlating with race, be excluded as input factors that predict future behavior or risk.33 There has also been a “call for ‘algorithmic affirmative action’ to equalize adverse predictions across racial lines.”34 Critics also argue that if algorithmic risk assessments cannot be made race-neutral, the criminal justice system must reject them.35 AI has thus been shown to encounter the same biases and issues to which humans are susceptible. These algorithms must continue to be monitored and improved to work toward a more equitable criminal justice system.
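For concreteness, the sketch below illustrates one version of the "algorithmic affirmative action" idea referenced above: choosing a separate score cutoff for each group so that adverse predictions (for example, a recommendation to detain) occur at the same rate across groups. The scores, group labels, and target rate are invented, and this is only one of several competing definitions of equality.

```python
def equalized_thresholds(scores_by_group, adverse_rate):
    """Pick a per-group cutoff so the same share of each group gets the adverse prediction."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores)
        cutoff_index = int(len(ranked) * (1 - adverse_rate))
        thresholds[group] = ranked[min(cutoff_index, len(ranked) - 1)]
    return thresholds

# Invented risk scores for two groups; "detain" is recommended at or above the cutoff.
scores_by_group = {
    "group_a": [2.1, 3.4, 5.0, 6.2, 7.8, 8.9, 9.1, 9.5],
    "group_b": [1.0, 1.5, 2.2, 2.9, 3.3, 4.1, 5.7, 6.0],
}
print(equalized_thresholds(scores_by_group, adverse_rate=0.25))
# Different cutoffs per group, but the same 25% adverse-prediction rate in each.
```

Whether equalizing adverse prediction rates in this way is the right measure of equality, or an impermissible use of race, is precisely the question the advocates and critics cited above are contesting.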
References:
1 Mirko Bagaric, Jennifer Svilar, Melissa Bull, Dan Hunter & Nigel Stobbs, The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent and Fair Artificial Intelligence, 59 AM. CRIM. L. REV. 95, 98 (Winter 2022).
2 Judge Juan Villasenor & Laurel Quinto, Judges on Race: The Power of Discretion in Criminal Justice, LAW360 (Jan. 10, 2021).
3 Id.
4 Id.
5 Id.
6 Bagaric et al., supra note 1, at 109.
7 Id.
8 Id.
9 Id.
10 Id. at 114.
11 Id.
12 Id.
13 Id. at 117.
14 Id. at 119.
15 Crystal S. Yang & Will Dobbie, Equal Protection Under Algorithms: A New Statistical and Legal Framework, 119 MICH. L. REV. 291, 295 (November 2020).
16 Id. at 296.
17 Id.
18 Bagaric et al., supra note 1, at 112.
19 Id.
20 Id.
21 Alex Najibi, Racial Discrimination in Face Recognition Technology, Harvard University Science in the News Blog, Special Edition: Science Policy and Social Justice (Oct. 24, 2020), https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/ (last visited Nov. 11, 2023).
22 Id.
23 Id.
24 Id.
25 Id.
26 Id.
27 Tom Simonite, Algorithms Were Supposed to Fix the Bail System. They Haven’t, WIRED (Feb. 19, 2020), https://www.wired.com/story/algorithms-supposed-fix-bail-system-they-havent/.
28 Id.
29 Bagaric et al., supra note 1, at 117.
30 Sandra G. Mayson, Bias In, Bias Out, 128 YALE L.J. 2218, 2224 (June 2019).
31 Id. at 2223.
32 Id.
33 Id.
34 Id. at 2224.
35 Id.