
Artificial Intelligence and the Legal World – Will More Class Actions Result?

By Josette Nelson,
The Gavel, Contributor
J.D. Candidate, Class of 2024

Since Artificial Intelligence (AI) entered the legal scene, many questions have surrounded how it will impact the legal profession as a whole. AI’s ability to process vast amounts of data and predict legal outcomes based on information it gathers from recent and past history has the potential to streamline legal processes and aid attorneys in making informed decisions. However, the question arises: can AI truly interpret the complexities of legal analysis, or is that analysis being oversimplified, potentially leading to inaccurate legal conclusions? Predictive analytics is one of the key areas where AI is making its mark in the legal field.1 By analyzing past legal cases, court decisions, and other relevant data, AI algorithms can forecast potential outcomes of current cases. However, issues have arisen in class action cases where AI misses certain problems, such as disclosure of information and nuanced questions of law.2

Complications have arisen in the legal sphere, specifically in the form of class-action suits.3 Notably, GitHub, Microsoft, and OpenAI have been sued over “GitHub’s Copilot tool.”4 In a class action lawsuit against OpenAI LP, OpenAI Incorporated, OpenAI GP LLC, OpenAI Startup Fund I, LP, OpenAI Startup Fund GP I, LLC, and Microsoft Corporation, the plaintiffs accused GitHub of copying and republishing content data while failing to “provide attribution.”5

The recent class-action lawsuit filed against GitHub, Microsoft, and OpenAI sheds light on the challenges that arise when cutting-edge AI tools clash with established legal frameworks. At the center of the controversy is GitHub’s Copilot tool, a groundbreaking program that leverages machine learning to assist programmers in generating code snippets based on their existing work. While it has been hailed as a “game-changer in software development,” Copilot now finds itself embroiled in a legal battle over allegations of copyright infringement and data mishandling.6

The plaintiffs in the case against OpenAI LP and GitHub asserted that Copilot’s predictive code generation feature unlawfully appropriates and reuses code from GitHub repositories without adhering to the requirements of open-source licenses. Central to the dispute is the issue of attribution – a fundamental principle in the open-source community that ensures proper credit is given to the creators of shared code.7 Moreover, the complaint extends beyond copyright concerns, alleging that GitHub failed to adequately safeguard personal data and information entrusted to its platform by users.

The inclusion of fraud claims further complicates the legal tussle, highlighting the broader implications of AI ethics and accountability in the digital age. The legal saga, unfolding since the complaint was filed in November 2022, has seen Microsoft and GitHub vigorously contest the allegations and seek to have the case dismissed.

Their efforts to sidestep legal repercussions underscore the high stakes involved in navigating the murky waters of AI regulation and liability. At the same time, OpenAI, a pivotal player in AI, faces its own legal issues. In Tremblay v. OpenAI, Inc. and Silverman et al. v. OpenAI, Inc., further copyright complexities continue to unfold.8

As the legal battles continue, the outcomes of these lawsuits will undoubtedly shape the future trajectory of AI development and regulation. The clash between technological innovation and legal compliance reminds those in the legal field of the need for robust frameworks that balance innovation with the ethical considerations that accompany legal responsibilities. In a world where AI continues to push the boundaries of what is permissible in the workplace, the GitHub, Microsoft, and OpenAI lawsuits serve as a cautionary tale: progress must be accompanied by accountability, transparency, and a steadfast commitment to upholding the rights and interests of those involved.

The recent surge in class action court cases involving OpenAI has brought to light the intricate legal challenges posed by emerging technologies and artificial intelligence. These cases have not only tested the boundaries of existing laws and regulations but have also underscored the evolving role of the legal profession in navigating the complexities of AI-related litigation. As legal professionals grapple with the nuances of AI ethics, accountability, and liability, the OpenAI lawsuits have served as a wake-up call, highlighting the need for a deeper understanding of the legal implications of advanced technologies. The rapid pace of innovation in the AI space necessitates a proactive approach to legal frameworks that can adapt to the changing nature of technological advancements.

Despite advances in AI technology, the legal complexities surrounding AI-related litigation cannot be fully grasped or interpreted by AI alone.9 While AI tools can aid in legal research and analysis, the intricacies of human judgment, ethical considerations, and contextual understanding remain essential components of the legal profession. The nuances of legal reasoning, interpretation, and application require human expertise and experience that cannot be replicated by algorithms or machine learning models.10

In conclusion, the recent class action court cases concerning OpenAI have not only influenced the legal profession’s approach to AI-related matters but have also highlighted the limitations of AI in fully comprehending the multifaceted legal issues at play. As legal professionals continue to navigate the intersection of law and technology, it is imperative that they remain vigilant, adaptable, and informed to effectively address the legal challenges posed by the ever-evolving landscape of AI.11 

References:

1 See Rinkesh Joshi, Reinforcement Learning for GitHub Pull Request Predictions: Analyzing Development Dynamics

2 Joe Panettiere, Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic and More (March 1, 2024), https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/

3 Ben Lutkevich, AI Lawsuits Explained: Who’s Getting Sued?, https://www.techtarget.com/whatis/feature/AI-lawsuits-explained-Whos-getting-sued

4 Sherry Tseng, The Suit Against Copilot and What It Means for Generative AI (January 2023), https://georgetownlawtechreview.org/the-suit-against-copilot-and-what-it-means-for-generative-ai/GLTR-01-2023/

5 Doe v. GitHub, Inc., 2023 U.S. Dist. LEXIS 214939, at *3

6 James Vincent, The Lawsuit That Could Rewrite the Rules of AI Copyright (November 8, 2022), https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data (Microsoft, GitHub, and OpenAI are being sued for allegedly violating copyright law by reproducing open-source code using AI, but the suit could have a huge impact on the wider world of artificial intelligence.)

7 David De Cremer et al., How Generative AI Could Disrupt Creative Work, Harv. Bus. Rev. (April 13, 2023), https://hbr.org/2023/04/how-generative-ai-could-disrupt-creative-work

8 Tremblay v. OpenAI, Inc., 2024 U.S. Dist. LEXIS 24618, at *3

9 Joshua P. Davis, Law Without Mind: AI, Ethics, and Jurisprudence

10 Nicole Yamane, Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands (2020)

11 Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation
