Fall 2025
September 29, 2025
Murky Waters: Where the Medical Record Meets AI

BY JUSTIN VAUGHN, MDIV, Vice President of Anesthesia Compliance, Coronis Health, Pineville, LA

Down in bayou country, where the maze of marshlands meets the open gulf, there are bodies of water with inconsistent properties. These are lakes and inlets and bays that are essentially hybrid in nature. Known as brackish waters, they are part salty and part fresh. What should one make of such an amalgam? What kind of aquatic creatures are fitted by nature to survive that kind of mix?

With the advent of artificial intelligence (AI) and the realization of all its enormous potential, some medical providers have undertaken to use AI in the creation of their medical records. This brackish-waters approach, combining human intelligence with machine learning, raises some important questions as to overall propriety and bottom-line legality. Is the use of AI in the completion of a detailed medical record or a limited progress note appropriate from a compliance perspective? Can a medico-legal document, whose contents are partly authored by a machine’s logic and linguistic acuity, be deemed acceptable from a clinical, regulatory or facility-bylaw perspective? It’s all a bit murky.

THE LEGAL LANDSCAPE

According to a recent article from Johns Hopkins Medicine, “the regulatory frameworks needed to govern clinical implementation of AI in safe, ethical and equitable ways are still being constructed.” The federal government and certain states are in the beginning stages of assessing the situation from a legal and regulatory standpoint. As of this writing, several states have enacted laws that cover the use of AI in the medical context. Here is a brief sampling:

  • Texas passed HB 149 in 2025, which mandates disclosure to patients when providers leverage AI for healthcare services or treatments.
  • Utah enacted legislation that includes provisions on mental health chatbots, requiring suppliers to disclose the use of AI and implement safeguards for personal information.
  • In Nevada, the legislature passed AB 406 into law, which prohibits AI systems from representing themselves as licensed providers of mental or behavioral healthcare. It also restricts licensed professionals from using AI to deliver therapy directly.
  • Illinois recently passed a law prohibiting AI from replacing the independent judgment of registered nurses in clinical decision-making.

While none of these laws directly speaks to the use of AI in the completion of the medical record, it is notable that various jurisdictions around the country are at least contemplating AI’s place in the medical space. Back in March, legislation was introduced in Arkansas (H 1816) that would have restricted the use of AI in providing healthcare services or generating medical records unless specific conditions were met. However, the bill was ultimately pulled, so the question remains open in the Land of Opportunity.

According to multiple sources, other states are actively engaged in shaping the legal and regulatory landscape surrounding AI in healthcare generally. But, again, this author has yet to find a law that directly addresses the use of AI in the completion of medical records. The presumption, then, is that using these systems for such purposes is not explicitly prohibited, at least in most jurisdictions.

IT WAS INEVITABLE

According to one national auditing firm, there is a growing number of clinical providers making use of AI in the completion of the medical record. One source asserts that AI-powered documentation tools are now being used in nearly 30% of physician practices. The way it typically works is that the physician dictates or types in certain keywords or shorthand phrases, and the AI app turns those terms into complete sentences. Since AI has the capacity to learn, becoming more sophisticated in its aggregation of information and more nuanced in its generated output, one might conclude that these AI insertions into the medical record are actually quite intuitive. But are they accurate?
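
To make the pattern concrete, here is a minimal sketch of that shorthand-to-narrative workflow. It is purely illustrative: the endpoint URL, payload shape and response format are assumptions invented for this example, not any particular vendor’s actual API.

```python
# A minimal sketch of the shorthand-to-narrative pattern described above.
# The endpoint, payload shape and response format are illustrative
# assumptions, not any specific vendor's API.
import requests

API_URL = "https://example-ai-vendor.com/v1/complete"  # hypothetical endpoint

def expand_shorthand(shorthand: str, api_key: str) -> str:
    """Turn a clinician's shorthand into draft narrative text."""
    prompt = (
        "Rewrite the following clinical shorthand as complete sentences "
        "suitable for a draft progress note. Do not add findings that are "
        f"not present in the input:\n\n{shorthand}"
    )
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 300},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response shape

# Example input: "pt c/o SOB x3d, lungs CTA, afeb" -> a few complete
# sentences that the clinician must still review before signing the note.
```

Whatever the tooling, the clinician remains the author of record; the generated draft still has to be reviewed, corrected and attested to before the note is signed.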

It’s clear that traditional (fully human-generated) medical notations are not always accurate. A 2023 Veterans Administration study found that 90% of handwritten notes contained at least one error. In the ER, “phantom exams” (documented but not done) were found in over 40% of records. So humans, even smart, well-educated humans, are going to make mistakes, and those mistakes are sometimes going to be transferred onto the medical record. It must be said, however, that since AI is designed by humans and deployed by humans, these systems will inevitably be prone to making mistakes as well. But to what degree? According to researchers at the RAND Corporation, “As flawed as AI notes may be, it’s entirely possible that human-generated medical notes could be more flawed.”

Despite their imperfections, these widely available applications do provide a level of convenience and speed, which is why it was only a matter of time before large numbers of clinicians began incorporating them into their practice. But are there hidden pitfalls in their usage?

A HIDDEN HITCH

It is the understanding of those familiar with these applications that some of the leading AI resources that medical providers use in the completion of their patient records store queries, as well as the answers they create, in “the cloud.” Rather than a billowy stratocumulus realm where winged horses effortlessly fly, here “the cloud” refers to an array of computer servers located worldwide and accessed over the internet. Having medical data reside in the cloud allows users to perform tasks without having to store their data or run programs on their own local servers. Using the cloud does have a couple of advantages: it allows more data to be stored, and with some applications it acts as a backup.

But here’s the catch, according to one prominent healthcare attorney: if a patient’s protected health information (PHI) is stored in the cloud, you must have a business associate agreement (BAA) with the cloud service provider. The problem is that some of the leading AI applications offer different plans, and not all of those plans include a BAA. Accordingly, if you are using such an app in the completion of your medical records, make sure that you execute a BAA with that cloud provider so that you remain HIPAA compliant.
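
For illustration only, here is a sketch of how a documentation pipeline might enforce that rule in software, assuming a hypothetical local registry of vendors and their BAA status (the vendor names and the registry itself are invented for this example):

```python
# Illustrative guard: refuse to transmit PHI to a cloud service unless an
# executed business associate agreement (BAA) is on file for it. The
# CLOUD_SERVICES registry and vendor names are hypothetical.
CLOUD_SERVICES = {
    "vendor-a": {"baa_executed": True},   # enterprise plan with a signed BAA
    "vendor-b": {"baa_executed": False},  # e.g., a consumer-tier plan
}

def send_phi(vendor: str, record: dict) -> None:
    """Forward a record containing PHI only to BAA-covered vendors."""
    config = CLOUD_SERVICES.get(vendor)
    if config is None or not config["baa_executed"]:
        raise PermissionError(
            f"No BAA on file for '{vendor}'; transmitting PHI to this "
            "service would put HIPAA compliance at risk."
        )
    # ...proceed with the vendor's documented upload mechanism...
```

The point is not the code itself but the compliance posture it encodes: the BAA is verified before any PHI leaves the organization’s control.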

It will be interesting to see if future legislation or rulemaking will clear up the current murkiness and provide greater guidance on the extent to which AI can be used in the creation of a patient’s medical record, such as a surgeon’s report or a nurse practitioner’s visit note. An ever-changing world means ever-changing responses from the rule makers. AI is changing, as well. In the words of a Rolling Stones ballad, where will it lead us from here?

Justin Vaughn, MDiv, serves as Vice President of Anesthesia Compliance for Coronis Health. Mr. Vaughn has over 20 years of experience in anesthesia compliance and has been a speaker at multiple national healthcare events. He has written two books on compliance-related issues and is the author of numerous articles relevant to the hospital space. Justin can be reached at justin.vaughn@coronishealth.com.