Inherited Concerns of Healthcare Accountability in a New Age of Automation
- Daniel Guglielmo

- Jan 7
Updated: Feb 4

For years, I have followed the journeys of parents trying to do the right thing for their children: calling provider after provider, navigating insurance denials, and sitting on waitlists that stretch for months or even years. I've watched families weigh whether they can afford out-of-pocket evaluations, delay care because of cost, or accept whatever appointment becomes available simply because waiting any longer feels impossible. By the time a family finally receives an evaluation, the document they are handed carries enormous weight. It is not just a report; it is often the gateway to services, understanding, and support.
What some families receive after all that waiting and expense, however, can be deeply concerning. Alongside frustrated and exhausted caregivers, I've helped review reports that arrive with repetitive language that doesn't align with the child being evaluated, generic, non-individualized narratives that could apply to almost anyone, and in many cases, the copy-and-paste kiss of death: another patient's name.
The immediate spark for this article came from a recent conversation with a parent who had carefully saved $2,900 to obtain an "express evaluation" for her child, marketed with a guaranteed 48-hour turnaround. After months of waiting within other systems, she believed the premium cost would translate into a thoughtful, individualized assessment of her son without the wait: a double positive, worth the cost. Instead, she received a largely generic report that bore little connection to her child, included references to settings, caregivers, and environments that did not reflect the family's reality, and in one section even contained another child's name and the last four digits of that child's Social Security number. Her experience is far from uncommon; it is yet another example of the long-standing disconnect between the urgency families feel, the financial burden they absorb, and the level of clinical responsibility reflected in the final product across many healthcare workflows.
A New, Convenient Villain
Since becoming the latest focus of public attention, artificial intelligence has quickly been cast as the culprit whenever documentation errors surface. As someone who has worked in the ABA field for years in both a clinical and executive capacity, I find that framing both inaccurate and incomplete.
This new push to blame AI for poor clinical documentation ignores a much older problem. Long before artificial intelligence entered our daily vocabulary, templated reports were commonplace. Wording was reused. Sections were routinely carried forward with minimal revision in the name of efficiency. None of this is new. What is new is how readily we seem willing to fault the tool rather than confront the real issue: who is responsible for ensuring accuracy, individualization, and clinical integrity before a report is placed in the hands of a family?
AI is no different from the standardized language banks and pre-built report structures that have existed in our medical system for decades, all adopted in the name of speed and productivity.
AI is Not Your Scapegoat
One of the most concerning trends I've seen in my own field, especially among Direct Service Professionals writing session notes and Behavior Analysts conducting assessments, is the diffusion of responsibility. When errors surface, the explanation often becomes, "The system generated it," "I didn't write that," or "That came from the template." Clinical reports are not internal drafts. They are not placeholders. They are not rough outlines. They are the documents families rely on to understand their children, evaluate the effectiveness of services, navigate insurance, and advocate in schools and medical systems. When medical files are sloppy, auto-assembled simply to move on to the next task, or insufficiently reviewed, the consequences are real even if the clinician never personally has to face them.
In Applied Behavior Analysis, we are trained to integrate data, observation, and professional judgment. No template, automated system, or AI model can replace that responsibility. Tools can support clinicians; they cannot replace the obligation to think critically, review carefully, and stand behind what is written.
Accountability Over Automation
I am not anti-technology. Like many professionals, I use structured frameworks (including AI; hi OpenEvidence please sponsor me!!!) to improve efficiency. Used appropriately, they can support clarity and consistency. But efficiency cannot come at the expense of accuracy, and automation cannot replace accountability.
If you've made it this far, I'll conclude with this: it ultimately does not matter whether an evaluation or service was completed with pen and paper under legacy healthcare workflows or supported digitally by modern artificial intelligence systems. Various forms of templates and streamlined workflows have existed for decades, often designed to prioritize speed and convenience over careful review. What has mattered, and will continue to matter, is the licensed professional's clinical judgment and accountability for quality of care, because that is what families and patients ultimately rely on. The real question, then, is this: how can we meet the efficiency demands of our workplaces without compromising the accuracy and accountability our clients deserve?
- D.
Disclaimer: Although specific identifying details have been intentionally omitted or modified to protect privacy, the parent referenced in this article gave explicit permission for her experience to be shared for educational and professional discussion purposes. And for what it is worth, the only AI used in this article was Apple's Image Playground, which generated the header image.