The damage AI hallucinations can do – and how to avoid them
Zeb (left) and his best friend, Dr. Jay Anders, chief medical officer at Medicomp Systems

"Even if these systems are right 80% of the time, that still means they're wrong 20% of the time," says tech CMO Dr. Jay Anders, who describes the risks of artificial intelligence errors and outlines some protection strategies for providers.

Health systems are embracing artificial intelligence tools that help their clinicians simplify the creation of chart notes and care plans, saving them precious time every day. 

But what's the impact on patient safety if AI gets the facts wrong?

Even the most casual users of ChatGPT and other large language model-based generative AI tools have experienced errors – often called "hallucinations."

An AI hallucination occurs when an LLM doesn't know the correct answer or can't locate the appropriate information and, rather than acknowledging that uncertainty, simply fabricates a response.

These fabricated responses are particularly problematic because they're often very convincing. Hallucinations can be very difficult to distinguish from factual information, depending on what's being asked. If an LLM can't find the right medical code for a particular condition or procedure, for example, it might simply invent a number.

The core issue is that LLMs are designed to predict the next word and produce a response, not to acknowledge when they lack sufficient information. That creates a fundamental tension between the technology's drive to be helpful and its tendency to generate plausible-sounding but inaccurate content when faced with uncertainty.
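
One practical safeguard is to treat anything the model asserts, such as a suggested medical code, as unverified until it is checked against an authoritative source. The short Python sketch below is an illustration only: KNOWN_CODES is a tiny made-up sample rather than a real code table, and validate_suggestion is a hypothetical helper, not part of any vendor's product.

# Minimal guardrail sketch: treat a model-suggested code as unverified input.
# KNOWN_CODES is a tiny illustrative sample, not a complete ICD-10 table.

KNOWN_CODES = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

def validate_suggestion(suggested_code: str) -> dict:
    """Check an LLM-suggested code against an authoritative lookup
    instead of trusting a plausible-looking answer."""
    description = KNOWN_CODES.get(suggested_code)
    if description is None:
        # Possibly a hallucinated code: route it to a human reviewer.
        return {"code": suggested_code, "status": "needs_review"}
    return {"code": suggested_code, "status": "verified", "description": description}

print(validate_suggestion("I10"))      # found in the lookup -> verified
print(validate_suggestion("ZZZ.99"))   # not in the lookup -> flagged for review

The point of the pattern is simply that nothing the model produces reaches the chart or the claim without passing a check a human or an authoritative system controls.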

For some further perspective on AI hallucinations and their potential impact on healthcare, we spoke recently with Dr. Jay Anders, chief medical officer at Medicomp Systems, a vendor of evidence-based, clinical AI-powered systems designed to make data usable for connected care and enhanced decision making. He plays a key role in product development and acts as a liaison to the healthcare community.

CLICK HERE TO READ THE COMPLETE STORY

Kevin Petrie, Practical Data and AI Perspectives:

Bill Siwicki, good stuff. I think retrieval-augmented generation (RAG) can help by injecting pre-approved, trustworthy documents into the user prompt to increase the odds that GenAI models generate an accurate answer. But as your article points out, humans are the best governance control.
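
A minimal illustration of the RAG pattern Kevin Petrie describes follows. Every identifier here is an assumption made for the example (TRUSTED_DOCS, retrieve_trusted_passage, build_prompt), not a reference to any particular product, and a real system would use vector search rather than keyword matching.

# Minimal retrieval-augmented generation (RAG) sketch. TRUSTED_DOCS stands in
# for a vetted document store; the resulting prompt would be sent to whichever
# model API a system actually uses.

TRUSTED_DOCS = {
    "hypertension guideline": "Approved internal guidance on managing hypertension ...",
    "diabetes guideline": "Approved internal guidance on managing diabetes ...",
}

def retrieve_trusted_passage(question: str) -> str:
    """Naive keyword retrieval over pre-approved documents; real systems
    typically use vector search, but the governance idea is the same."""
    for name, text in TRUSTED_DOCS.items():
        if any(word in question.lower() for word in name.split()):
            return text
    return ""

def build_prompt(question: str) -> str:
    """Inject retrieved, pre-approved text into the prompt so the model
    answers from vetted context instead of improvising."""
    context = retrieve_trusted_passage(question)
    if not context:
        return "Answer only if you are certain; otherwise say you do not know. " + question
    return ("Using ONLY the approved context below, answer the question.\n\n"
            f"Context: {context}\n\nQuestion: {question}")

print(build_prompt("What does our hypertension guideline recommend?"))

Grounding the model in pre-approved text lowers the odds of a hallucinated answer, but as the article notes, human review remains the final control.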
