Navigating the Impact of AI on Legal Practice:
Opportunities, Challenges, and Ethical Considerations
By Chris Tyler,
Smith Business Law Fellow
J.D. Candidate, Class of 2025
The rise of Artificial Intelligence (AI) in the legal profession has posed many challenges for businesses and their general counsel. LexisNexis, for example, recently released Lexis+ AI, a generative AI research tool.[1] These developments may leave business professionals asking questions like: “My job includes transactional activities like drafting contracts, settlement agreements, and recruiting new employees.[2] Will those tasks be taken over by AI tools like ChatGPT, since they can do a faster and more efficient job than I can?”[3]; “Can my company use AI tools to select whom to interview and hire without showing bias against a protected group of workers?”[4]; and “Will generative AI be able to solve legal problems internally that we previously had to outsource to expensive law firms, such as writing pleadings, answers, and motions?”[5]
First, there is little doubt that the traditional tasks of a transactional attorney will be reduced. AI tools now allow lawyers to draft contracts tailored to their industry, including all essential terms and conditions.[6] These tools are still flawed, but even so, they can eliminate a substantial share of the work. Gone are the days of finding superb secondary sources manually or sifting through case law to find the best, most enforceable contract language available.[7] Additionally, AI tools can consider recent court cases and incorporate those decisions into the contracts they draft.[8]
Second, a problem does exist in relying on AI for searching and decision-making.[9] Two areas in which AI is already being used are resume review and candidate selection. These uses raise questions about hiring discrimination, bias, and other potential workplace protection violations.[10] The developers of AI tools currently do not disclose how they retrieve data, how they test their models, or what data those models are trained on. If the tools rely on historical data, past biases concerning race, sex, familial status, and other markers may greatly skew the AI’s decision-making.[11] Additionally, when AI is used for hiring decisions, there is no iterative process to reevaluate an employee after hiring and assess whether the decision was a good one.[12] This means there is no updated training data for the AI to draw on when the next hire comes around.[13] The model can only “look back” at the data it was given; it cannot reevaluate its past decisions on the fly.[14] Any new certification, license, or degree that the first AI-selected hire obtained would likewise be overlooked by the model, since it has no access to that data.
Third, the responsibility, accountability, and privacy implications of AI tools present a complex issue.[15] Who will be responsible when an AI tool discriminates against job candidates, or when a lawyer relies on erroneous recommendations of outdated law? As of now, the business itself will be primarily responsible. This is evident in Mata v. Avianca, Inc.[16] In that case, a group of lawyers used ChatGPT to write a reply brief, titled “Affirmation in Opposition,” which cited fake cases, fake opinions, and fake judges.[17] The lawyers did not research or Shepardize the cited cases before submitting them to the court.[18] Most of the opinions they relied on were entirely fabricated by the AI to fit the legal issue at hand.[19] Additionally, before submitting their brief, they requested and received an extension, which they claimed they needed because they were going on vacation and wanted more time to read Avianca’s documents.[20] In reality, they did not go on vacation; the claim was merely a pretext to gain more time, one of several dishonest statements and practices.[21] The judge sanctioned the lawyers under Federal Rule of Civil Procedure 11, imposing a $5,000 penalty and ordering them to apologize to each of the judges they had misquoted and to send a copy of the sanctions opinion to the Plaintiff.[22] The court was not primarily concerned with how the lawyers obtained the false materials; rather, it cared that they had relied on them without conducting any further research or Shepardizing. This holding makes clear that the courts will not lay blame on the AI, but on the people who use it negligently.[23]
There are, however, real benefits to using AI in the practice of law. The main benefit is efficiency.[24] AI tools can scan a vast number of documents very quickly, summarizing the state of the law and giving the lawyer a strong starting point for research.[25] This increased efficiency can substantially reduce legal fees. The cost savings can be passed on to clients, including businesses, and may influence whether general counsel decides to pursue litigation or to settle.[26]
In summary, AI tools are already being implemented in the legal field and will inevitably affect businesses small and large. These tools, like any others, have limitations and must be used correctly. By understanding the capabilities and limitations of AI, lawyers can increase their overall efficiency, gain valuable research insights, and analyze complex legal issues with greater accuracy. It is crucial to understand, however, that AI is not perfect: it may reduce work opportunities for transactional lawyers, enable employment discrimination, and, in some cases, lead to Rule 11 sanctions for those who fail to use it with due diligence.
[1] Dana Greenstein, LexisNexis Launches Lexis+ AI, a Generative AI Solution with Linked Hallucination-Free Legal Citations, LexisNexis, https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-launches-lexis-ai-a-generative-ai-solution-with-hallucination-free-linked-legal-citations (last visited Mar. 23, 2024).
[2] James Devaney, Partner, Shook, Hardy & Bacon, AI Applications in the Practice of Law (Jan. 19, 2024).
[3] Id.
[4] Id.
[5] Id.
[6] Greenstein, supra note 1.
[7] Devaney, supra note 2.
[8] Greenstein, supra note 1.
[9] Devaney, supra note 2.
[10] Id.
[11] Id.
[12] Id.
[13] Id.
[14] Id.
[15] Id.
[16] Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263, at *1 (S.D.N.Y. June 22, 2023).
[17] See id. at 2, 14-26.
[18] Id. at 38-39.
[19] Id.
[20] Id. at 10-11.
[21] Id.
[22] Id. at 45.
[23] Id. at 1.
[24] Devaney, supra note 2.
[25] Id.
[26] Id.