Patient Safety Concerns: #1 – AI Diagnostics

March 11, 2026

In a recent conversation, a colleague revealed that her mother had undergone a transcatheter aortic valve replacement (TAVR) back in October. TAVR is a relatively routine, minimally invasive procedure to replace a diseased aortic valve in the heart. But there were complications: the catheter had apparently nicked the heart, causing internal bleeding. Despite heroic efforts to control the situation, the patient ultimately died. “We lost mom.”

This took place at one of the finest medical centers in America. Nevertheless, it stands as a stark reminder that medical mistakes can and do occur in our nation’s hospitals, and they occur often. In fact, a 2016 Johns Hopkins study asserted that “medical errors should be recognized as the third leading cause of death in the United States.”

Enter ECRI (founded as the Emergency Care Research Institute), an independent healthcare research nonprofit based in Pennsylvania whose mission is “improving the safety, quality and cost-effectiveness of care across all healthcare settings worldwide.” The organization has just released its list of the biggest patient safety concerns for 2026. This article begins a new series in which we will discuss those concerns over the next several weeks, starting with artificial intelligence (AI) in the diagnostic context.

Creating a Monster?

Many healthcare organizations are turning to AI in the belief that it will improve diagnostic accuracy and efficiency. A survey of nearly 1,200 physicians found that approximately 66% reported using AI in 2024. These providers believe the technology will reduce the risk of incorrect, missed or delayed diagnoses, among other benefits. “AI has the potential to improve diagnostic accuracy by automating data retrieval, decreasing cognitive load, reducing cognitive biases and providing clinicians with information to help guide their decisions,” according to ECRI researchers.

As we have pointed out in previous articles, however, AI systems are only as good as the algorithms they use and the data on which they are trained. According to ECRI, “the potential for errors remains a significant concern.” For example:

  • Tested machine learning models failed to recognize 66% of critical or deteriorating health conditions and injuries in synthesized cases.
  • Popular generative AI models diagnosed genetic conditions more accurately from textbook-like descriptions, but their accuracy dropped significantly when prompts were based on a conversation with a simulated patient, suggesting that AI models struggle with open-ended diagnostic reasoning.

From ECRI’s perspective, “AI is an evolving technology that raises issues related to reliability, transparency, privacy, liability and ethics; and users should not treat it as a replacement for clinical expertise.”

Reducing the Risks

Given how widely AI is now used in clinical settings, and given that the technology is far from infallible, ECRI offers several recommendations to help hospitals and clinics navigate the use of AI in patient diagnosis:

  • Ensure that staff are trained on the proper use of AI systems, particularly those that assist in diagnosis, and inform clinicians of the systems’ capabilities and limitations.
  • Require staff to document instances in which AI was used for diagnostic purposes and how it affected the clinical diagnostic process.
  • Utilize human factors engineering principles to evaluate the usability of AI tools.
  • Carefully evaluate the business case for AI diagnostic tools against the costs related to preventable harm.
  • Disclose the use of AI to patients and obtain informed consent before using generative AI in patient diagnosis or uploading patient information to an AI system. Include opt-out clauses in consent agreements.
  • Monitor staff satisfaction and user experience with systems that incorporate AI.
  • Emphasize that AI is a tool and that clinicians should defer to their own clinical judgment and seek second opinions when questioning clinical decisions or diagnoses aided by AI.
  • Train staff on how to identify and report adverse events attributable to AI. Ensure that such events are properly investigated, and work with AI system manufacturers and developers to prevent future issues.

AI isn’t going away, and it will improve over time. But until the technology can reliably and accurately determine a patient’s diagnosis, hospitals and providers would do well to implement clear guidelines that guard against errors, both human and machine.