
The Evolution of the Judiciary

By Ryan Rahilly,
The Gavel, Contributor
J.D. Candidate, Class of 2025

An ever-increasing number of areas of law are adopting Artificial Intelligence (“AI”). Although these AI systems are adopted to assist lawyers and judges, they can also have dangerous consequences. One area experiencing both the benefits and the risks of AI is the judicial system. Today, many judges use AI systems to assist them in making sentencing decisions, which can lead judges to simply rely on the AI’s determination instead of making their own.

The primary benefit AI brings to this area of the legal system is its unparalleled ability to compile and compare a wealth of information.1 AI systems can examine numerous factors that may affect an individual’s risk of recidivism, such as family situation, place of residence, and current employment.2 Without these systems, judges would simply not have the time or resources to examine all of these factors. AI therefore allows judges to make more informed decisions regarding the sentencing of defendants.

Another benefit of AI systems is that they can eliminate and ignore factors that are not relevant.3 This feature matters because, according to some studies, judges were more likely to hand down harsher sentences right before lunch, when they were hungry, than after lunch.4 Judges also often make credibility and trustworthiness determinations based on how a defendant dresses and appears.5 An AI system does not have a specific worldview and looks only at the facts and data it is given,6 whereas a judge views everything through his or her own worldview.7 Although judges are supposed to be unbiased and to set aside any potential biases, it is sometimes impossible for a judge to be aware of every bias he or she holds.8 For example, some judges are more inclined than others to hand down harsher sentences for particular crimes in otherwise identical situations.9 AI systems, by contrast, can ignore the emotional factors that tend to influence human decisions and consider only the factors that bear on recidivism risk.10 These systems also promise to produce more consistent sentences than judges do.11

Based on these benefits, AI systems have also begun to be used in deciding whether to charge an individual with a crime. In San Francisco, the District Attorney has begun using AI to help make charging decisions.12 There has even been a push to implement AI systems that make credibility determinations, much like a lie detector. Yet for all the potential benefits of AI systems, numerous disadvantages threaten to cause injustice.

The first disadvantage of AI is the trust that people put in it. People are inclined simply to follow the determinations of an AI system rather than use it as a guide in reaching their own decisions.13 A second problem is that AI learns, which means it can learn stereotypes.14 AI also depends on the data that is input into it: if biased data goes into the system, the results will be biased as well.15 As one commentator observes, “As Justice Cuéllar notes, it is only because we can reverse engineer the situation that we can understand the bias. The danger of not knowing how the machines reach their conclusions could lead to misappropriations of justice.”16 Therefore, to ensure that AI systems make unbiased determinations, we need to know what data is being used and be able to trust that data.

Some argue, however, that AI systems should never be used in sentencing determinations at all.17 Their objection is that a computer is making a mathematical determination of how long to imprison someone based on the chance that he or she might commit a crime in the future. On this view, defendants receive longer sentences not for crimes they have committed, but for crimes they might someday commit.18

It is true that these systems are used to make recidivism-risk determinations that inform sentences. But the criticism that this amounts to punishment for potential future crimes overlooks an underlying principle of the criminal justice system: rehabilitation.19 The system must also balance rehabilitation against protecting society.20 If someone is more likely to commit another crime, there is a fundamental interest in protecting society from that person. For judges to reach just decisions, they must be able to weigh these competing interests, which is something AI is incapable of doing. Moreover, AI cannot make decisions based on compassion or mercy, which true justice sometimes requires.21

Ultimately, if used properly, AI can help judges make sentencing determinations based on a more holistic view of the person. Judges, however, must keep the risks of AI in mind. They need to be able to trust the data without relying entirely on the AI’s determination, because judges must balance the underlying interests of the criminal justice system: protecting society, rehabilitating the offender, punishing the offender, and deterring him and others from offending again. Furthermore, true justice and equity sometimes require a judge to show mercy and understanding, because people are more than just data sets.22 AI, if used properly, can be a useful tool in the execution of justice, but it cannot be a replacement for the judge.

References:

1 Cris Goodman, Symposium: Lawyering in the Age of Artificial Intelligence: AI/Esq.: Impacts of Artificial Intelligence in Lawyer-Client Relationships, 72 Okla. L. Rev. 149, 150 (2019).

2 Katia Schwerzmann, Abolish! Against the Use of Risk Assessment Algorithms at Sentencing in the US Criminal Justice System (Nov. 23, 2021), https://link.springer.com/article/10.1007/s13347-021-00491-2#author-information.

3 Id. at 175.

4 Natalie Salmanowitz, Unconventional Methods for a Traditional Setting: The Use of Virtual Reality to Reduce Implicit Racial Bias in the Courtroom, 15 U.N.H. L. Rev. 117, 118.

5 Id. at 119.

6 Cris Goodman, supra note 1, at 176.

7 Id. at 167.

8 Mark B. Bear, Injustice at the Hands of Judges and Justices, Psych. Today (Apr. 15, 2017), https://www.psychologytoday.com/us/blog/empathy-and-relationships/201704/injustice-the-hands-judges-and-justices.

9 Id.

10 Cris Goodman, supra note 1, at 169.

11 Id. at 172.

12 All Things Considered: San Francisco DA Looks to AI to Remove Potential Prosecution Bias (National Public Radio broadcast June 15, 2019), https://www.npr.org/2019/06/15/733081706/san-francisco-da-looks-to-ai-to-remove-potential-prosecution-bias.

13 Cris Goodman, supra note 1, at 169.

14 Id. at 174.

15 Id. at 172.

16 Id. at 173.

17 Katia Schwerzmann, supra note 2.

18 Id.

19 Doris Layton Mackenzie, Sentencing and Corrections in the 21st Century: Setting the Stage for the Future (July 2001), https://www.ojp.gov/sites/g/files/xyckuh241/files/archives/ncjrs/189106-2.pdf.

20 Id.

21 See Aristotle, Nicomachean Ethics bk. 5, ch. 10 (J.O. Urmson et al. eds., Oxford University Press 1980) (equity is a correction of legal justice when the universal statement of the law is not correct).

22 Id.
