Navigating the Federal Governance Framework for Emerging Technology

By Eva Thompson,
The Gavel, Contributor
J.D. Candidate, Class of 2024

Amid global advancements in artificial intelligence (“AI”) law and policymaking, a comprehensive federal AI governance framework is emerging in the U.S. The President, Congress, and federal entities, including the Federal Trade Commission, the Consumer Financial Protection Bureau, and the National Institute of Standards and Technology, are introducing AI-related initiatives, legislation, and policies.1 However, many experts suggest that public companies ultimately bear the responsibility for regulating and monitoring AI.2

One example of a federal initiative aimed at reining in AI’s rapid acceleration is the Block Nuclear Launch by Autonomous AI Act.3 The Act aims to ensure “that no matter what happens in the future, a human being has control over the employment of a nuclear weapon – not a robot.”4 In the same vein, Elon Musk signed an open letter in March 2023 urging others in the tech industry to “immediately pause for at least 6 months the training of AI systems.”5 The letter calls for a federal moratorium and advises that companies and the public evaluate the potential consequences of AI, stating that “[s]uch decisions must not be delegated to unelected tech leaders.”6

Apart from federal initiatives, states are rapidly moving to regulate AI services and products: the introduction of AI-related bills increased by forty-six percent (46%) between 2021 and 2022.7 State legislators are also establishing task forces to explore the need for AI-specific regulation. Louisiana established a technology and cybersecurity committee to examine the influence of AI on state operations, procurement, and policy.8 Texas instituted an AI advisory council to investigate and oversee AI systems developed, utilized, or acquired by state agencies.9 Similarly, North Dakota and West Virginia are in the process of establishing advisory bodies to scrutinize and monitor AI systems within their respective state agencies.10

Although legislators at the federal and state levels aspire to create a regulatory framework, experts are skeptical of the reach and limits of government intervention and ultimately contend that companies should assume responsibility themselves.

As such, AI companies are actively working on self-regulation in the hope of setting a precedent for others. For example, the Frontier Model Forum was created by ChatGPT developer OpenAI, Anthropic, Microsoft, and Google, the owner of the UK-based DeepMind. The forum’s members state that its main objectives are to promote research in AI safety, such as developing standards for evaluating models; to encourage responsible deployment of advanced AI models; to discuss trust and safety risks in AI with politicians and academics; and to help develop positive uses for AI, such as combating the climate crisis and detecting cancer.11

A key takeaway from the rise of AI-related governance initiatives is that generative AI demands self-regulation; at the same time, crafting effective bipartisan regulation remains a challenge for state and federal legislators. Despite compliance experts’ skepticism about depending on government action alone, companies must develop regulatory policies in collaboration with the government to ensure safer AI use for the public.

References:

1 On April 25, 2023, the FTC and officials from three other federal agencies (the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission) released a joint statement pledging to “uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as ‘artificial intelligence’ or ‘AI,’ become increasingly common in our daily lives—impacting civil rights, fair competition, consumer protection, and equal opportunity.” See FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI, Federal Trade Commission (Apr. 25, 2023), available at https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.

2 See Victor Li, What could AI regulation in the US look like?, American Bar Association (June 14, 2023), available at https://www.americanbar.org/groups/journal/podcast/what-could-ai-regulation-in-the-us-look-like/.

3 See Block Nuclear Launch by Autonomous Artificial Intelligence Act, H.R. 2894, 118th Cong. (2023). 

4 Id.

5 See Pause Giant AI Experiments: An Open Letter, Future of Life Institute, available at https://futureoflife.org/open-letter/pause-giant-ai-experiments/. 

6 Id.

7 See Legislation Related to Artificial Intelligence, National Conference of State Legislatures (Jan. 31, 2023), available at https://www.ncsl.org/technology-and-communication/legislation-related-to-artificial-intelligence.

8 S. Con. Res. 49, 2023 Leg., Reg. Sess. (La. 2023).

9 H.B. 2060, 88th Leg., Reg. Sess. (Tex. 2023).

10 Illinois’ Artificial Intelligence Video Interview Act applies to all employers and requires disclosure of the use of an AI tool to analyze video interviews of applicants for positions based in Illinois. New York City’s AI Law (Local Law 144) regulates employer use of automated employment decision tools in hiring and promotions. Vermont’s H.B. 410 created an Artificial Intelligence Commission.

11 See Todd Ehret, Where AI will play an important role in governance, risk & compliance programs, Thomson Reuters (Aug. 24, 2023), available at https://www.thomsonreuters.com/en-us/posts/corporates/ai-governance-risk-compliance-programs/.
