Sentencing 2.0: Navigating Recidivism with Algorithms

By Kennedy Ginaitt

The American public has always found solace in the notion that those “who do the crime will do the time.” However, judges who impose criminal sentences consider far more than the crime itself. Through pretrial interviews, a comprehensive “scoresheet” is generated, proposing a recommended sentence range for each unique defendant. While this conventional approach has long been relied upon, the advent of Artificial Intelligence (AI) raises questions about the fairness and accuracy achievable through algorithmic systems. Critics of AI’s use in sentencing argue that the profound question of whether a person’s liberty ought to be taken away is one best answered by humans.

Criminal sentencing recommendation scoresheets are calculated based on the factors set out in 18 U.S.C. § 3553.1 The main factors that sentencing guidelines consider are the charged crime as well as the extent and severity of any past offenses.2 In addition, most defendants undergo an extensive presentence interview process that reveals information regarding the offender’s “family data, physical condition, mental and emotional health, and substance abuse.”3 The interview will also highlight any mitigating factors suggesting that a departure from the guidelines is appropriate, such as the presence, or absence, of acceptance of responsibility or acknowledgment of guilt.4 In whole, the current sentencing guidelines present the judge with a complete and meticulous review of the defendant’s life and recommend a sentence range that appropriately addresses the individual. The judge can then use his discretion to determine where in the range the defendant fits, or whether an upward or downward variance is appropriate.5
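
To make the mechanics concrete, the following is a minimal toy sketch of how a scoresheet might combine an offense level, criminal-history points, and a mitigating factor into a recommended range. The point values and formula are invented for illustration and do not reproduce the actual Guidelines table.

```python
# Toy illustration of a sentencing "scoresheet" - the cutoffs and formula
# below are invented and are NOT the real U.S. Sentencing Guidelines table.

def criminal_history_category(points: int) -> int:
    """Map criminal-history points to a category from I (1) to VI (6)."""
    thresholds = [0, 1, 3, 6, 9, 12]  # hypothetical cutoffs
    return sum(points > t for t in thresholds) or 1

def recommended_range(offense_level: int, history_points: int,
                      accepts_responsibility: bool) -> tuple[int, int]:
    """Return a (low, high) range in months, with a mitigation discount."""
    category = criminal_history_category(history_points)
    base = offense_level * 4 + category * 6      # hypothetical formula
    if accepts_responsibility:                   # mitigating factor
        base = int(base * 0.85)
    return (base, base + 12)

low, high = recommended_range(offense_level=20, history_points=5,
                              accepts_responsibility=True)
print(f"Recommended range: {low}-{high} months")  # Recommended range: 83-95 months
```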

However, sentencing recommendation scoresheets can contain inaccuracies, producing unjustified variances in the resulting sentence. In United States v. Smith, the sentencing guidelines characterized the defendant’s criminal history as “bad,” but this characterization failed to fully account for his numerous financial crimes, homicide, and assault, leading the judge to perceive the guideline range as overly lenient.6 Given the failure of the sentencing recommendation to accurately capture the defendant’s criminal profile, the judge used his discretion to impose an upward variance on the sentence. This serves as just one example of how scoresheets compiled by humans can fall short in accurately describing criminal conduct and how improper sentences can be imposed as a result. These errors, however, are not exclusive to humans; the same shortfalls can also be found in sophisticated algorithms.

Predictive AI analyzes data inputs to anticipate distinct outcomes. In the context of criminal sentencing, AI assists judges by “provid[ing] a prediction based on a comparison of information about the individual to a similar data group.”7 Essentially, the algorithm considers the rates of recidivism following other sentences and identifies the optimal duration of incarceration that effectively deters crime while maintaining a balance with liberty interests.
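
As a rough illustration of this prediction-by-comparison idea, the sketch below scores a new defendant by looking at outcomes among the most similar past cases, in the spirit of a nearest-neighbors model. The features and records are entirely hypothetical.

```python
# Minimal sketch of prediction-by-comparison: score a new defendant by
# the recidivism outcomes of the k most similar past cases.
# All features and records here are hypothetical.
import math

# (age, prior_offenses, recidivated) for past cases - invented data
history = [
    (22, 4, 1), (45, 0, 0), (31, 2, 1), (52, 1, 0),
    (19, 3, 1), (38, 0, 0), (27, 1, 0), (24, 5, 1),
]

def predict_recidivism(age: float, priors: float, k: int = 3) -> float:
    """Return the fraction of the k nearest past cases that recidivated."""
    by_distance = sorted(
        history,
        key=lambda rec: math.hypot(rec[0] - age, rec[1] - priors),
    )
    nearest = by_distance[:k]
    return sum(rec[2] for rec in nearest) / k

print(predict_recidivism(age=25, priors=3))  # 0.67: 2 of 3 similar cases re-offended
```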

In recent years, several American jurisdictions have implemented the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system. The system introduces a novel algorithmic program aimed at aiding sentencing decisions by analyzing a defendant’s profile and responses to 137 questions.8 The program assesses the likelihood of recidivism and the appropriateness of a sentence, assigning a decile score on a scale of one to ten, with higher scores indicating a greater predicted risk of recidivism.9
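
COMPAS’s internals are proprietary, but a decile ranking of this kind can be illustrated with a simple sketch: rank a raw risk estimate against a reference population and report which tenth it falls into. The reference scores below are invented.

```python
import math

# Illustration of a decile score: rank a raw risk estimate against a
# reference population of scores. All values here are invented;
# COMPAS's actual scoring internals are proprietary.

def decile_score(raw_risk: float, reference: list[float]) -> int:
    """Return 1-10: the decile of raw_risk within the reference scores."""
    at_or_below = sum(r <= raw_risk for r in reference)
    percentile = at_or_below / len(reference)
    return min(10, max(1, math.ceil(percentile * 10)))

reference_population = [i / 100 for i in range(1, 101)]  # toy raw scores
print(decile_score(0.83, reference_population))  # -> 9 (high risk)
```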

While the program demonstrates considerable accuracy in predicting recidivism, it also exhibits implicit biases, notably ranking black defendants with a disproportionately higher perceived risk of recidivism than their white counterparts. For example, non-recidivating black defendants were “incorrectly predicted to re-offend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%.”10 Through testing, the software demonstrated an accuracy of 65% in predicting recidivism.11 While this may be a result of the inherent complexity of human behavior, it is crucial to acknowledge that the algorithm draws its information primarily from prior judicial decisions. Consequently, the implicit biases of past judges are absorbed by the algorithm.
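
For clarity, the 44.9% and 23.5% figures are false positive rates: among defendants who did not re-offend, the fraction nonetheless predicted to re-offend. The sketch below shows how such a rate is computed, with invented counts chosen only to reproduce rates of that size.

```python
# How a false positive rate is computed: among defendants who did NOT
# re-offend, what fraction were predicted to re-offend anyway?
# The counts below are invented for illustration.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN), over the non-recidivating population only."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical non-recidivating defendants, split by predicted label:
black_fpr = false_positive_rate(false_positives=449, true_negatives=551)
white_fpr = false_positive_rate(false_positives=235, true_negatives=765)
print(f"{black_fpr:.1%} vs {white_fpr:.1%}")  # 44.9% vs 23.5%
```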

To evaluate the apparent shortcomings of the COMPAS algorithm against human perception, the algorithm was tested alongside a group of randomly selected individuals with minimal to no criminal justice experience. Both the algorithm and the human participants were given a criminal dataset from Broward County and asked to predict rates of recidivism. “A one-sided t test reveals that the average of the 20 median participant accuracies of 62.8% [and a standard deviation (SD) of 4.8%] is, just barely, lower than the COMPAS accuracy of 65.2% (P = 0.045).”12 Therefore, “in the end, the results from these two approaches appear to be indistinguishable.”13 The fact that individuals with no familiarity with the justice system could predict recidivism at a level comparable to a sophisticated algorithm underscores the argument that a judge, drawing on years of experience, would possess an even more robust ability to assess a defendant’s likelihood to recidivate. Consequently, crafting tailored and effective sentences appears more attainable through human judgment than through artificial intelligence.
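
For readers unfamiliar with the quoted statistic, a one-sided, one-sample t test asks whether the participants’ average accuracy is reliably below the fixed COMPAS benchmark. The sketch below runs such a test on invented per-participant accuracies centered near 62.8%; the study’s actual per-participant data would be needed to reproduce P = 0.045 exactly.

```python
# Sketch of the quoted comparison: a one-sided, one-sample t test of
# participant accuracies against the fixed COMPAS accuracy of 65.2%.
# The 20 accuracy values are invented stand-ins for the study's data.
from scipy import stats

participant_accuracies = [  # hypothetical, centered near 62.8%
    0.60, 0.66, 0.59, 0.63, 0.68, 0.57, 0.64, 0.61, 0.70, 0.62,
    0.58, 0.65, 0.67, 0.60, 0.63, 0.69, 0.56, 0.62, 0.64, 0.61,
]
COMPAS_ACCURACY = 0.652

# H0: mean participant accuracy >= COMPAS accuracy; H1: it is lower.
result = stats.ttest_1samp(participant_accuracies,
                           popmean=COMPAS_ACCURACY,
                           alternative="less")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.3f}")
```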

The examination of algorithmic systems such as COMPAS, alongside the comparison with the predictive abilities of individuals unfamiliar with the justice system, sheds light on the intricate nature of sentencing decisions. While algorithms offer a structured approach to prediction, their inherent biases and limitations become evident. The nuanced understanding, contextual insight, and reasoning skills embedded in judicial decision-making stand out as indispensable, and a comprehensive assessment of a defendant’s profile remains the optimal method. Judges and the current sentencing practice remain a superior means of assessing recidivism rates and determining applicable sentences for individuals within the criminal justice system. While algorithms can offer valuable tools for streamlining sentencing and aiding judicial decision-making, they should be used sparingly. Regardless of technological advancements, the role of human judgment should not be overlooked, and the significance of human rationality ought not be negated by an algorithm.

References:

1 18 U.S.C. § 3553.

2 Gerald E. Rosen, Why the Criminal History Category Promotes Disparity—and Some Modest Proposals to Address the Problem, 9 Fed. Sent’g Rep. 205 (1997).

3 Alan Ellis, Federal Presentence Investigation Report, 29 Crim. Just. 49 (2014).

4 Id.

5 Rosen, supra note 2, at 206.

6 United States v. Smith, No. 22-11830, 2023 WL 8643276 (11th Cir. Dec. 14, 2023).

7 State v. Loomis, 371 Wis. 2d 235, 245–46 (2016).

8 Isaac Taylor, Justice by Algorithm: The Limits of AI in Criminal Sentencing, 42 Crim. Just. Ethics 193 (2023).

9 Id.

10 Julia Dressel & Hany Farid, The Accuracy, Fairness, and Limits of Predicting Recidivism, 4 Sci. Adv. eaao5580 (2018).

11 Id.

12 Id.

13 Id.
