Striking a Balance:

The Integration of Artificial Intelligence in Immigration Practices and What it Means for Individual Human Rights

By Adriana Caceros,
Smith Business Law Fellow
J.D. Candidate, Class of 2025

As the realm of immigration practices continues to grapple with evolving complexities and an ever-growing backlog of cases, the integration of artificial intelligence (AI) has emerged as a promising solution, offering the potential to streamline processes such as case evaluations and enhance overall efficiency. While AI technologies promise accelerated decision-making and improved resource allocation, concerns have surfaced regarding the potential trade-off between expediency and the meticulous assessment required to safeguard the rights of individuals seeking refuge or opportunities in new lands, as well as the individual human right to privacy.

AI refers to “machine-based operations that mimic human intelligence.”1 A subset of AI is machine learning, whereby applications or programs use extensive collections of data to improve their accuracy over time.2 These applications or programs engage in “prediction”: they use available information, often referred to as “data,” to generate information previously unknown to the user.3 Prediction, however, is not the same as understanding.4 This distinction is crucial because, despite its advancement, AI cannot currently replicate human intelligence, however much it attempts to do so. Deep learning, for example, is one way AI attempts to mimic the human brain: it uses algorithms that draw inspiration from the structure and function of the human brain, called “artificial neural networks,” which progressively enhance the model’s grasp of the correlation between input and output data.5
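To make the distinction between prediction and understanding concrete, consider the minimal sketch below (written in Python with the scikit-learn library and entirely invented data, not any system discussed in this article). The program “learns” a statistical pattern from a handful of historical examples and then predicts an outcome for a new input; at no point does it comprehend what the numbers represent.

```python
# Minimal sketch: machine learning as prediction from data, not understanding.
# All values below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: each row is a pair of numeric attributes
# extracted from past cases (purely made-up values).
X_train = [
    [1.0, 0.2],
    [0.9, 0.4],
    [0.3, 0.8],
    [0.2, 0.9],
]
# Outcomes the model learns to associate with those inputs (1 = favorable, 0 = not).
y_train = [1, 1, 0, 0]

# "Training" simply fits a statistical boundary between the two outcome groups.
model = LogisticRegression()
model.fit(X_train, y_train)

# The fitted model now "predicts" an outcome for a new, unseen input.
new_case = [[0.8, 0.3]]
print(model.predict(new_case))        # predicted label for the new input
print(model.predict_proba(new_case))  # estimated probability of each label
```

The output is only as good as the correlations in the data the model was trained on, which is precisely why the composition of that data matters so much in the immigration context discussed below.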

As it relates to immigration, the Department of Homeland Security has created its “Artificial Intelligence Use Case Inventory,” which provides a list of current “non-classified and non-sensitive AI use cases.”6 Notable on this list is the I-539 Approval Prediction, which “attempts to train and build a machine learning throughput analysis model” to perform what has been a human part of the application process: predicting when an I-539 (application to extend or change nonimmigrant status) case will be approved.7 This should raise some red flags, as the I-539 is not a standard form like a green card renewal (where there is no change of status), but rather a complex form for people who wish to extend their stay or change to another nonimmigrant status. Among these classes of immigrants are F-1 academic students, ambassadors, J-1 exchange visitors, and T nonimmigrants, the last of whom are victims of severe forms of trafficking in persons.8 With the growing interest in using AI to streamline processes, the risk of bias increases and can cause life-altering ramifications for certain immigrants. In Canada, the Immigration Minister has stated that the technology implemented in visa approvals is used exclusively as a “sorting mechanism,” with immigration officers always making the final decision about whether to deny a visa.9 Yet AI has been shown to have a “problematic track record” concerning both race and gender. Simply stated, AI is not neutral: it follows a preprogrammed “recipe,” and if the recipe itself is biased, then the decisions the algorithm makes based on that recipe will ultimately be biased as well.10 The discretionary nature of immigration should make it among the last areas to be subjected to such technology, because the nuance and ethical judgment required in legal proceedings remain, without a doubt, skills that rest within the realm of human abilities.11
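The “biased recipe” problem described above can be illustrated with another hedged sketch (again in Python with scikit-learn and wholly invented data, not any government system). Here, the historical decisions the model learns from happen to deny applicants from one group more often; no rule ever says to treat that group differently, yet the learned model reproduces the disparity for new applicants with identical qualifications.

```python
# Sketch of a biased "recipe": the model is trained on invented historical
# decisions in which membership in "group B" was disproportionately associated
# with denials. Nothing explicitly encodes bias, but the model learns it anyway.
from sklearn.linear_model import LogisticRegression

# Each row: [qualification_score, is_group_b]  (hypothetical, invented values)
X_train = [
    [0.9, 0], [0.8, 0], [0.7, 0], [0.6, 0],   # group A applicants
    [0.9, 1], [0.8, 1], [0.7, 1], [0.6, 1],   # group B applicants, same scores
]
y_train = [1, 1, 1, 1,   # group A: historically approved
           0, 1, 0, 0]   # group B: historically denied more often

model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical qualification scores, differing only by group:
applicant_a = [[0.75, 0]]
applicant_b = [[0.75, 1]]
print(model.predict_proba(applicant_a))  # higher estimated approval probability
print(model.predict_proba(applicant_b))  # lower, solely because of the learned pattern
```

Even when a human officer makes the final call, as the Canadian minister describes, that officer is still working from a sorting or score shaped by whatever recipe the model learned.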

Now, cue in the suppliers of the massive amounts of data needed for AI implementations such as those mentioned above. Under one administration, U.S. Immigration and Customs Enforcement (ICE) signed “several key data and analytics contracts” with big names such as Thomson Reuters and Palantir, a company founded by one of the co-founders of PayPal.12 Under another administration, both ICE and Customs and Border Protection (CBP) bought nearly $1.3 million worth of licenses from Venntel, a third-party company that tracks cellphone location data and then sells it.13 What this means for individuals is that the federal government, through agencies such as ICE, CBP, and DHS, is effectively circumventing the Supreme Court’s landmark 2018 decision in Carpenter v. United States, which held that law enforcement must generally obtain a warrant before acquiring seven days of an individual’s historical cell-site location information from cell phone companies.14 CBP spokesmen have said that the information the agency uses does not include cellular phone tower data, but they make no mention of the third-party data the agency has purchased.15 The federal government, in essence, is carving out an exception to the special protection that the Supreme Court has afforded this type of data, at the expense of individual privacy rights.

The use of third-party data, predictive programming, and AI in general is not exclusive to immigration proceedings. As such, any American with a concern for privacy should take heed. To what extent should we be comfortable exchanging our rights for efficiency? The hill is steep once we allow our rights to be diminished in comparison to governmental goals, whether or not those goals strive for efficiency. Despite well-intentioned efforts to implement AI as an efficient solution to rising legal problems, like backlogs in immigration proceedings, good intentions do not excuse jeopardizing individual human rights such as the right to privacy.

References:

1 Lucia Nalbandian, An Eye for an “I”: A Critical Assessment of Artificial Intelligence Tools in Migration and Asylum Management, 10 Comparative Migration Studies (2022).

2 Id.

3 Ajay Agrawal, Joshua Gans & Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press 2018).

4 Lucia Nalbandian, An Eye for an “I”: A Critical Assessment of Artificial Intelligence Tools in Migration and Asylum Management, 10 Comparative Migration Studies (2022).

5 Id.

6 Department of Homeland Security, Artificial Intelligence Use Case Inventory, (September 20, 2023), https://www.dhs.gov/data/AI_inventory.

7 Id.

8 Id.

9 Teresa Wright, Federal Use of A.I. in Visa Applications Could Breach Human Rights, Report Says, The Canadian Press, (September 26, 2018, 7:52 AM), https://www.cbc.ca/news/politics/human-rights-ai-visa-1.4838778.

10 Id.

11 Rachel Immione, The Dual-Edged Sword of AI: Implications for Immigration Lawyers and Visa-Sponsoring Companies, AI for Immigration Law, (June 18, 2023), https://www.immione.com/the-dual-edged-sword-of-ai-implications-for-immigration-lawyers-and-visa-sponsoring-companies.

12 McKenzie Funk, How ICE Picks Its Targets in the Surveillance Age, The New York Times Magazine, (October 2, 2019), https://www.nytimes.com/2019/10/02/magazine/ice-surveillance-deportation.html.

13 Byron Tau, Federal Agencies Use Cellphone Location Data for Immigration Enforcement, The Wall Street Journal, (February 7, 2020, 7:30 AM), https://www.wsj.com/articles/federal-agencies-use-cellphone-location-data-for-immigration-enforcement-11581078600.

14 Tyler Gerstein, The Government Has Your Phone Location Data: It Might Be Legal, But What Does It Mean for Your Privacy?, Georgetown Law Technology Review, (March 2020), https://georgetownlawtechreview.org/the-government-has-your-phone-location-data-it-might-be-legal-but-what-does-it-mean-for-your-privacy/GLTR-03-2020.

15 Byron Tau, Federal Agencies Use Cellphone Location Data for Immigration Enforcement, The Wall Street Journal, (February 7, 2020, 7:30 AM), https://www.wsj.com/articles/federal-agencies-use-cellphone-location-data-for-immigration-enforcement-11581078600.
