By Cam Sivesind
Tue | May 14, 2024 | 12:36 PM PDT

Artificial intelligence is rapidly reshaping many industries, and healthcare is no exception. AI's growing capabilities are sparking an intense debate over whether it will eventually replace human clinicians altogether or emerge as an invaluable collaborative partner.

Leading healthcare providers and companies are avidly adopting advanced generative AI tools to drive operational efficiencies and improve patient care. AI-powered virtual assistants can quickly triage non-critical cases, schedule appointments, provide follow-up instructions, and deliver medication reminders.

Aaron Weismann, CISO at Main Line Health, questions how valuable AI-powered virtual assistants really are.

"Every EMR already does that without AI, so what is AI really bringing to the equation?" Weismann asks. "I kind of think that's another problem with AI solutions; they're begging for a problem to solve in a more complex way than present solutions for very fractional improvements."

On the clinical side, generative AI can summarize patient records into concise overviews for doctors, suggest evidence-based treatment plans, and even draft medical notes and documentation. However, experts caution that off-the-shelf generative AI models require careful customization and oversight to prevent potentially dangerous errors or biases when deployed in real-world healthcare settings.

Rebecca Herold, The Privacy Professor and CEO of Privacy & Security Brainiacs, offered this advice for healthcare providers evaluating AI resources.

"It is important before purchasing, implementing, and using these types of tools to ask the vendor some important questions:

  • Was actual patient data used to train the AI tools? If yes:
    • Did the associated patients provide consent to have their PHI used for such purposes?
    • What protections were built into the AI algorithms to prevent real data from being exposed within the AI tool results?
  • Were the tools trained with a wide enough range of data to prevent bias in the results? Ask them to explain their answer by describing how they included a wide enough breadth of training data, not just provide a yes or no.
  • Will the vendor be incorporating the patient data you use with their tool into their pool of data they will be using to continually train their AI tool? You should have a choice whether or not to do this. If you don't have a choice and they are going to use your data, don't use the tool; it would create several privacy risks that you'd then need to spend time mitigating in some way."

Can the human element be completely removed from the equation? Most practitioners in both cybersecurity and healthcare think not.

"The human touch of clinical care is irreplaceable due to common unpredictability of the human condition, much like in cybersecurity—the human factor will always be our biggest challenge," said Krista Arndt, CISO at United Musculoskeletal Partners. "To effectively implement AI in a clinical setting, physicians and practitioners must remain involved in the care process, using AI to augment care, but not to become wholly reliant on it. AI is meant to reduce administrative burden to free up staff for more strategic roles in clinical care, not to replace them."

"In a space where costs are rising and staff are often overloaded, implementing AI will allow these experts to do what they do best, to improve quality of care through better time management. Additionally, it will help the patient care process with better ease of access to providers, quicker scheduling, easier communication, and reference for things like discharge instructions, etc."

One area where AI is already making its mark is in clinical documentation and medical coding. Advanced natural language processing can automatically transform dictated notes and transcripts into standardized patient records with accurate diagnostic codes. This automation eliminates countless hours of manual labor and reduces costly billing errors.

Organizations utilizing AI coding solutions report significant cost savings by streamlining operations and optimizing coding accuracy. Still, the complexities of medical documentation mean a hybrid human-AI approach is required for now. Doctors must validate the AI's outputs to ensure equitable, unbiased care delivery across all patient populations.
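To make the hybrid workflow concrete, here is a minimal, purely illustrative Python sketch. The phrase-to-code dictionary and the suggest_codes helper are hypothetical stand-ins for a trained NLP model and a full ICD-10 dataset; the point is only to show how AI-suggested codes can be kept pending until a clinician validates them.

```python
# Purely illustrative sketch of a hybrid human-AI coding workflow.
# The phrase-to-code mapping below is a toy stand-in for a real NLP
# model; production systems use trained models and complete ICD-10 data.

TOY_CODE_MAP = {
    "type 2 diabetes": "E11.9",   # Type 2 diabetes without complications
    "hypertension": "I10",        # Essential (primary) hypertension
    "acute bronchitis": "J20.9",  # Acute bronchitis, unspecified
}

def suggest_codes(note: str) -> list[dict]:
    """Return candidate diagnostic codes for a dictated note.

    Every suggestion starts unvalidated, reflecting the hybrid approach
    described above: the AI proposes, a human clinician confirms.
    """
    note_lower = note.lower()
    return [
        {"phrase": phrase, "icd10": code, "validated_by_clinician": False}
        for phrase, code in TOY_CODE_MAP.items()
        if phrase in note_lower
    ]

if __name__ == "__main__":
    note = "Patient presents with poorly controlled Type 2 diabetes and hypertension."
    for suggestion in suggest_codes(note):
        print(suggestion)  # a clinician would review and confirm each code
```

In a real deployment the matching step would be a statistical or large language model rather than string lookup, but the validation flag captures the governance point the experts make here: no code reaches the billing system until a human signs off.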

"AI tools certainly have the possibility of helping to make code more accurate. However, all AI tools should still be thoroughly tested prior to use," Herold said. "Not only tested by the AI tool creator, but also by AI tool customers prior to using within their organization, which will likely have unique characteristics within their organization's digital ecosystems that may not have been considered in the manufacturer's testing. If such testing does not occur, privacy breaches could occur, security incidents could result, and patient safety could be  compromised."

As AI's decision support capabilities rapidly expand, many healthcare leaders envision a future of collaborative "augmented intelligence." In this paradigm, AI acts as a real-time resource that synthesizes data, surfaces relevant information, and proposes guided recommendations. But human clinicians would retain final decision-making authority while benefiting from vastly enhanced speed and analytical capacity.

"The 'augmented intelligence' concept really resonates with me. I think large language and image processing models have come a long way, but they are still susceptible to drift and hallucinations," Weismann said. "What's worse, when they're wrong, they're very confidently wrong and don't self-correct. Most models can't or won't because they're effectively using very complex wrote recall based on association probability."

"So in order to make those models 'independently functional,' there needs to be error alerting, monitoring and validation, corrections, etc.," Weisman said. "While that might save clinician hours on the front end, it generates lots of work on the back end, especially because getting something wrong potentially means diminished patient outcomes up to and including death. Since the clinician is ultimately responsible for their patients' care, it behooves them to validate the AI output anyway. At no point is the model truly independent."

Proponents argue the symbiosis of human expertise and artificial intelligence will usher in a new era of highly personalized, predictive care that maximizes positive outcomes for patients worldwide.

Of course, not everyone is enthusiastic about AI's ascendance in healthcare. Critics warn that deploying overly ambitious AI systems could displace human medical professionals, leading to job losses, reduced personalized care, and overreliance on potentially flawed AI outputs.

"Rapid innovation models are antithetical in the healthcare provider space. With AI, the Silicon Valley approach to 'move fast and break things' can have very real, and negative, consequences on cybersecurity and patient safety," said Esmond Kane, CISO at Steward Health Care. "Add to that, the macroeconomic pressure to drive value and efficiency which can conflict with the ability to invest and innovate, to build a resilient enterprise. 'Bleeding edge' means something entirely different in healthcare!"

"The potential benefits offered by AI must be balanced by the potential downsides," Kane continued. "Good governance, accountability, and education is essential for AI to be safe and successful. Adoption of AI requires organizations to bake in resiliency from the start, to scale agility with safety, to empower the workforce with sandboxes, due diligence to sanction specific models, and careful engineering to increase fidelity and reduce hallucinations. We should not go it alone; strategic partnerships must be a part of the foundational planning."

Whether AI ultimately serves as a force multiplier that allows healthcare workers to achieve more, or instead becomes a threat to the human medical workforce, remains to be seen. Responsible governance and objective third-party auditing will play a crucial role in navigating AI's ethical implementation.

"The potential for AI use in healthcare to be a force multiplier is real. However, the potential for the overzealous use of AI in healthcare to wreak havoc or cause injury or death is also real," said Richard Halm, Senior Attorney at Clark Hill PLC. "Unintended consequences when using AI are real; there have been instances where AI has been used in financial decision making, resulting in unintended discrimination. When rolling out an AI product, proper controls and testing have to be in place, more so when the consequences possibly include bodily harm."

The future of AI in healthcare holds both immense promise and challenging implications to be carefully managed. As with previous technological revolutions, proactive leadership will help steer these powerful capabilities toward maximizing positive patient outcomes and minimizing unintended negative impacts.

Shawn Tuma had this to add:

"Given the exponential advancements we have seen with AI in just a couple of years, we are all rightly concerned about whether one day AI will replace humans in virtually all industries, including healthcare. But we are not there yet, and none of us really know if or when that time may come. For now, AI is an incredibly powerful tool that all of us need to embrace and learn to use as a tool—and only a tool—in a responsible manner."

"When it comes to healthcare, there are two major risks that immediately leap to mind: first, the catastrophic consequences of it making a mistake, and second, the sensitivity of healthcare data and potential for privacy breaches. The key to minimizing these risks is to first begin by developing and adhering to a comprehensive governance strategy built upon a strong understanding of the objectives for the processing, what processing will occur, what information will be processed, what tools will be used, how transparency will be achieved, how the processing will be measured, and what will occur when the AI is not performing as intended. When AI is used in this manner, as a tool and not as the final solution, it can be an incredibly powerful tool for improving the quality and efficiency of healthcare."

Here are some additional comments from Justin Armstrong, vCISO and Founder of Armstrong Risk Management, LLC:

"As with any new technology, AI is both exciting and terrifying. The Czech play which coined the term 'robot' also expressed mankind's fear of artificial intelligence. The robots ultimately kill off mankind and take over—a theme that is repeated over and over again in science fiction. It is not going to be an easy journey, but well-managed AI shows tremendous promise in healthcare. Clinicians are extremely challenged; they must assimilate a tremendous amount of information about a patient in short order and deliver care. They are always against the clock. AI can prove tremendously helpful in digesting information and helping clinicians to document their work."
