The Changing Landscape of the Medical Professional Standard of Care

By Emily Feyerabend,
The Gavel, Associate Editor
J.D. Candidate, Class of 2025

In recent years, surgeries, treatment options, data collection, and diagnostics have rapidly evolved with the rise of artificial intelligence (“AI”) technology in healthcare. Although the benefits of these advancements far outweigh the costs, medical professionals should nonetheless be wary of the legal ramifications of AI development. Medical malpractice suits arise when a medical professional breaches a legal duty owed to the plaintiff, resulting in an injury to the plaintiff.1 What constitutes a breach of that legal duty is changing due to developments in robotic surgeries and machine learning systems.

Some of the most prevalent, well-established AI technologies are minimally invasive robotic surgery devices.2 For example, the newest da Vinci robotic surgery system boasts the ability to manually guide robotic arms with a remote control, reposition the patient mid-surgery, and insert imaging scopes on robotic arms to increase the range of visibility.3 Additionally, robotic devices can track the movements of instruments in three-dimensional space during procedures, using “haptics” to alert medical professionals if the instruments approach the border of “safe zones” and warn them of the possibility of error.4
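For illustration, the “safe zone” warning described above can be understood as a continuous boundary check on the instrument’s position. The simplified Python sketch below conveys the basic idea; every name, coordinate frame, and threshold in it is hypothetical and is not drawn from any actual surgical system’s software.

```python
import math

# Hypothetical illustration of a "safe zone" proximity warning.
# The zone is modeled as a simple sphere; real systems would use
# far more sophisticated, anatomy-aware models.

SAFE_ZONE_CENTER = (0.0, 0.0, 0.0)   # meters, hypothetical frame
SAFE_ZONE_RADIUS = 0.05              # 5 cm working envelope (invented)
WARNING_MARGIN = 0.005               # warn within 5 mm of the border

def check_instrument(tip_position):
    """Return a status string for the instrument tip's current position."""
    d = math.dist(tip_position, SAFE_ZONE_CENTER)
    if d > SAFE_ZONE_RADIUS:
        return "ALERT: instrument outside safe zone"
    if d > SAFE_ZONE_RADIUS - WARNING_MARGIN:
        return "WARNING: instrument approaching safe-zone border"
    return "OK"

# Example: a tip 4.7 cm from center triggers the border warning.
print(check_instrument((0.047, 0.0, 0.0)))
```

In a real device, the zone would be modeled on the patient’s anatomy and the alert delivered as haptic feedback through the controls, but the liability question remains the same: what happens when a check like this silently fails?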

Naturally, these evolving surgical systems carry complex legal ramifications. A medical professional’s decision to rely or not rely on robots during surgery could affect their potential liability should the patient suffer harm. For example, an injury-inducing “haptic” malfunction that fails to warn a surgeon of the possibility of error leads to a convoluted surgical liability standard.5 The question then presented is whether the surgeon acted as a reasonable surgeon would in a similarly situated AI-assisted operation.

Overly relying on an automated or semi-automated system may expose a medical professional to liability where she was qualified to perform the task herself but deferred to a device without full knowledge of the potential ramifications.6 Conversely, acting without regard to a robot’s recommendations, or refusing to learn how to correctly operate a robotic system necessary for an operation, may result in a jury finding a breach of the standard of care.7

With commonly used semi-automated systems, the defendant surgeon may bear the burden of proving, by a preponderance of the evidence, that she acted reasonably to prevent system malfunctions or, upon learning of a malfunction, exercised her own professional skill in a reasonable way to avert the injury.8 In Florida, meeting such a burden of proof requires expert testimony.9 Experts must be well-versed in both the type of operation performed and the AI system involved. How much deference should be given to the system’s recommendations versus the surgeon’s personal choices, however, is likely to become a near-future problem for experts and juries to dissect.

In addition to robotic surgeries, one of the most widely utilized and fastest-growing types of AI in the medical profession is machine learning. Machine learning collects data through complex cognitive computing algorithms to predict patterns; “present[s] doctors with treatment options; and recommend[s] drugs and instructions for administration.”10 Machine learning functions by feeding “reams of information” on a particular matter into vast computing systems, and the technology spits out a recommended treatment pattern based on the inputted factors.11
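To make that mechanism concrete, the following deliberately simplified Python sketch, using the scikit-learn library, shows the “feed data in, get a recommendation out” pattern the literature describes; the features, labels, and data are invented for illustration and bear no relation to any clinical system.

```python
# Hypothetical sketch of pattern-based treatment recommendation.
# All data, features, and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [age, systolic_bp, biomarker_level] -- the "reams of
# information" fed into the system, reduced to three toy features.
training_data = [
    [54, 150, 2.1],
    [61, 142, 3.4],
    [47, 128, 1.2],
    [70, 160, 4.0],
]
# Historical treatment choices the model learns patterns from.
training_labels = ["drug_A", "drug_B", "drug_A", "drug_B"]

model = DecisionTreeClassifier().fit(training_data, training_labels)

# A new patient's inputted factors produce a recommended treatment.
new_patient = [[58, 147, 2.8]]
print("Recommended treatment:", model.predict(new_patient)[0])
```

A small decision tree like this one can at least be inspected; the deep learning systems discussed below replace it with layered neural networks whose internal reasoning is far harder to audit, which is precisely the “black box” concern.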

Data collection through these algorithms threatens the transparency of the physician-patient relationship, a requisite element of the legal duty owed in medical malpractice suits.12 Transparency concerns arise in significant part because the machines supplying information to medical centers are produced by massive multinational technology corporations such as IBM. These corporations supply the machine learning systems with the information they will be able to compute, and from there, physicians utilize the data in the systems to provide patients with medical care.13

This phenomenon is known as “black-box medicine” or “deep learning” because these algorithmic systems cannot be “explicitly understood.”14 The “black box” is often created by different developers “not working tightly in conjunction” with one another, and no one person is responsible for controlling the data computed by these machines.15 Instead, the machines are developed over time, in different locations, and without a uniform code system that would bind them to a single path to liability.16 There is no common nucleus from which decisions or outputs are generated.17 Thus, medical professionals relying on these systems are unaware of the sources from which the systems derive their information, which can lead to issues of privacy, transparency, misdiagnoses, and complex litigation.

Training and experience are crucial to protecting the medical professional from liability. Physicians must be open to understanding how these devices and machines work and how best to prevent errors and misuse. The standard of care of the medical professional will continue to evolve rapidly for years to come. AI will soon be integrated into our conceptualization of the physician’s standard of care, and experts will be able to assist judges and juries in adequately deciding AI-based medical malpractice claims. Until that day comes, patients, physicians, and lawyers alike must remain vigilant in equipping themselves with the knowledge required to tackle these complex issues.

References:

1 Fla. Stat. Ann. § 766.102.

2 Tokio Matsuzaki, Ethical Issues of Artificial Intelligence in Medicine, 55 CAL. W. L. REV. 255, 264 (2018).

3 Da Vinci Xi, INTUITIVE, da-vinci-xi-system-brochure.pdf, intuitive.com (last visited Nov. 21, 2023).

4 Frank Griffin, Artificial Intelligence and Liability in Health Care, 31 HEALTH MATRIX 65, 70 (2021).

5 Id. at 71.

6 Rajadhar Reddy, Patrick Ross & Kathryn Spates, Initial Steps to Develop AI Regulations/Guidances: Security and Safety Issues to Consider, AM. BAR ASS’N (Sept. 28, 2019), https://www.americanbar.org/groups/health_law/publications/aba_health_esource/2019-2020/september-2019/ai/.

7 Id. 

8 D. Shama, AI in Medicine: A Futuristic Insurgence, 5 INT’L J.L. MGMT. & HUMAN. 1438 (2022).

9 Fla. Stat. Ann. § 766.106.

10 Scott J. Schweikart, Who Will Be Liable for Medical Malpractice in the Future? How the Use of Artificial Intelligence in Medicine Will Shape Medical Tort Law, 22 MINN. J.L. SCI. & TECH. 1, 4 (2021).

11 Id.

12 Id.

13 Id.

14 Id. at 6.

15 Id.

16 Id.

17 Id. 
