Health Care’s AI Experience Draws Roadmap for Other Industries

July 18, 2024, 8:30 AM UTC

The health-care industry has relied on artificial intelligence for a long time. Along the way, the sector has grappled with how perpetually cash-strapped, thin-margin hospitals can devote the necessary resources to AI use. These tools may require so much oversight and verification that efficiencies and value are lost, yet emerging standards of technological medical care may necessitate their use.

Health care’s experience with the basic questions of AI acquisition and deployment should be instructive to organizations across industries that are evaluating whether and why they should deploy an AI tool.

These questions include whether the tool is necessary or helpful to the organization, whether the organization has the expertise to evaluate and test it, and whether it has the resources and will to monitor this still-unproven technology for quality control.

Rules-based AI tools—those that make predictions based on defined data sets comprising medical records and other clinical data—have been in the clinician’s toolkit for many years.

They can aid in diagnostic decision-making by:

  • Analyzing the probabilities of certain diseases or conditions
  • Providing guidance for robotic and laser-assisted surgery
  • Reviewing sonography, mammography, and other radiologic images
  • Generating clinical safety prompts in the computerized provider order entry (CPOE) system that alert clinicians to possible medication errors when an order is entered into a patient’s electronic medical record

Acquisition due diligence for such tools would have included how to benchmark the tool, including evaluating the pertinence of the data used to train the algorithms; rigorous testing before deployment; and designation of an individual or committee to monitor output, specifically looking for errors and ambiguities and assessing how much human oversight the tool would require.

Does the tool ultimately save time and add measurable value, or must it be treated like a first-year nurse or medical resident whose every decision should be second-guessed?

Experience with CPOE and diagnostic decision-making tools provides a good roadmap for the type of evaluation that should be conducted on an AI tool in any industry to secure approval and funding. In a CPOE system, the clinician enters a medication order along with certain patient information, such as gender, age, and a diagnostic code for the patient’s primary condition.

The CPOE can issue an alert stating that the dosage is too high or low for a patient of that profile, based on data used to train the CPOE algorithm. Similarly, a diagnostic decision-making tool provides the clinician with probabilities for diagnoses based on symptoms documented in the electronic medical record, compared with symptom data from medical records used for the algorithm.
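As a rough illustration of the rules-based logic such a system applies, the Python sketch below flags an order whose dosage falls outside the range expected for a patient of that profile. The drug names, dosage ranges, and patient fields are hypothetical stand-ins; a real CPOE would derive its reference ranges from the clinical data used to build the algorithm.

```python
# Minimal sketch of a rules-based CPOE dosage alert.
# Drug names, dosage ranges, and age bands are hypothetical examples,
# standing in for ranges derived from the records used to train a real system.

DOSAGE_RANGES = {
    # (drug, age_band): (min_mg, max_mg)
    ("amoxicillin", "adult"): (250, 875),
    ("amoxicillin", "pediatric"): (125, 500),
}

def age_band(age_years: int) -> str:
    """Map a patient's age to a coarse band used to look up reference ranges."""
    return "pediatric" if age_years < 18 else "adult"

def dosage_alert(drug: str, dose_mg: float, age_years: int):
    """Return an alert message if the dose falls outside the expected range, else None."""
    band = age_band(age_years)
    expected = DOSAGE_RANGES.get((drug, band))
    if expected is None:
        return f"No reference range for {drug} ({band}); manual review required."
    low, high = expected
    if dose_mg < low:
        return f"Dose {dose_mg} mg is below the expected range of {low}-{high} mg."
    if dose_mg > high:
        return f"Dose {dose_mg} mg is above the expected range of {low}-{high} mg."
    return None  # dose within expected range, no alert issued

if __name__ == "__main__":
    # Example: a 1,000 mg order for a 45-year-old triggers an above-range alert.
    print(dosage_alert("amoxicillin", 1000, age_years=45))
```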

While these tools can be helpful, neither involves the hands-on evaluation of a patient that is necessary to reach a diagnosis or order the appropriate medication. Some critical symptoms and markers may not appear in the medical records the tool draws on, making significant human oversight necessary.

There are some cases where the degree of necessary human oversight could exceed the usefulness of the AI tool. A generative AI tool could theoretically analyze a patient’s medical record entries and generate documents such as history and physical notes and discharge summaries, which are traditionally written by clinicians.

The catch is that generative AI tools are infamous for mistakes known as “hallucinations,” in which the tool essentially makes up facts. Lives may be at stake in the accuracy of such entries, making human review of the tool’s output crucial, yet few workers are more time-strapped than hospital clinicians.

This raises the question: Would human review of AI-generated documents require more time than document preparation without the tool?

AI use in health care is also subject to rather strict, but confusing, liability standards that enter the decision matrix for AI tool acquisition and deployment. Some questions to consider include:

  • Do standards of care require use of the available AI tools so that a clinician could be liable for not using them? Or could the clinician rely too much on the tool?
  • If a clinician verifies or rejects a medical decision suggested by the tool, should that decision be supported by contemporaneous documentation?
  • How much time does that documentation require?
  • Was the data used to train the algorithm timely, sufficiently comprehensive, and pertinent to the local patient population?

Businesses in any industry that is highly regulated and prone to litigation must factor in liability exposure when deciding whether to acquire and deploy a new AI tool.

Because past is often prologue, organizations in other industries can learn from health care’s years of experience with AI. Asking the right questions at or before acquisition, or engaging experts who can pose those questions, can make AI use smoother and reduce economic and legal exposures.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Kenneth Rashbaum is a partner at Barton and advises multinational corporations, financial services organizations, and life sciences organizations.

To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Daniel Xu at dxu@bloombergindustry.com
