Ganesh Padmanabhan is the Founder and CEO of Autonomize AI and the host of the Stories in AI podcast.
For years, we’ve heard that artificial intelligence would revolutionize medicine. It would help doctors diagnose faster, uncover hidden conditions and even predict the onset of disease. We’ve invested heavily in the idea of AI as a clinical copilot—a tool that could reason, explain and guide decisions at the bedside.
But that dream keeps colliding with an inconvenient reality: Medicine is complex, high-stakes and deeply contextual. And AI, for all its fluency, still hallucinates.
A recent study from Microsoft Research stress-tested top LLMs on multimodal medical benchmarks and found that even when models generate confident, citation-style rationales, many explanations rest on fabricated reasoning or shortcut correlations rather than robust medical grounding. Models may still exceed chance when visual inputs are removed, or they may flip answers under trivial perturbations. These behaviors are incompatible with clinical standards.
And yet, many health systems continue to focus their AI strategies on clinical decision support (CDS), despite its complexity, liability risks and immature tooling.
It’s time for a strategic reset.
Why The Real Crisis In Healthcare Isn’t Diagnostic
The pressure point isn’t misdiagnosis. It’s administrative overload. Ask any frontline physician what’s burning them out, and the answer won’t be “I need AI to suggest treatment options.” It will likely be: “I need someone to help me with the paperwork.”
These tedious tasks include:
• The prior authorization (PA) that takes 20 minutes to submit
• The chart review that takes hours to compile
• The claims appeal letter that needs to cite five pieces of documentation
• The HEDIS report that gets kicked back because of missing context
These are not intellectually difficult tasks; they’re organizational ones. But they amount to “death by a thousand clicks.” They degrade morale, slow down care and increase cost. They are also the kinds of tasks AI can help address safely and at scale, often with immediate impact.
This is where domain-trained small language models (SLMs) and agentic AI workflows come in.
A Real-World Example: Automating Prior Authorization At Scale
Earlier this year, we worked with a national health plan serving over 2 million Medicare Advantage members. They faced a familiar problem: Their PA process was drowning in paperwork.
Each PA request required intake, classification, benefit verification, evidence collection and justification—all done manually by nurse reviewers and support staff. Turnaround times stretched beyond CMS guidelines. Provider complaints were rising. Internal teams were burning out. Audits loomed.
Rather than attempt to “diagnose” anything, the plan took a different path. They deployed a purpose-built agentic AI platform consisting not of a single model but of a network of domain-trained agents, each designed to handle an atomic task:
• One agent classified and routed incoming requests.
• Another parsed clinical notes to extract medical necessity evidence.
• A third agent drafted the initial determination package, citing guidelines.
• A final agent checked for compliance artifacts before submission.
These agents weren’t guessing at treatments. They were orchestrating documentation, freeing up nurses to focus on edge cases and patient outreach.
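To make that decomposition concrete, here is a minimal, hypothetical sketch in Python of how such a pipeline might be wired together. The agent names, data fields and matching rules are illustrative stand-ins for the domain-trained models described above, not the plan’s actual system.

```python
# Hypothetical sketch of an agentic prior-authorization pipeline.
# Agent names, fields and rules are illustrative, not a real product API.
from dataclasses import dataclass, field


@dataclass
class PARequest:
    request_id: str
    raw_text: str                      # intake text, already digitized
    category: str | None = None        # e.g., "imaging", "dme", "specialty_rx"
    evidence: list[str] = field(default_factory=list)
    determination_draft: str | None = None
    compliance_issues: list[str] = field(default_factory=list)


def classify_and_route(req: PARequest) -> PARequest:
    """Agent 1: assign a service category so the right policy set applies."""
    req.category = "imaging" if "MRI" in req.raw_text else "general"
    return req


def extract_evidence(req: PARequest) -> PARequest:
    """Agent 2: pull medical-necessity statements out of the clinical notes."""
    req.evidence = [line for line in req.raw_text.splitlines()
                    if "history of" in line.lower() or "failed" in line.lower()]
    return req


def draft_determination(req: PARequest) -> PARequest:
    """Agent 3: assemble a determination package for nurse review."""
    req.determination_draft = (
        f"Category: {req.category}\n"
        f"Cited evidence ({len(req.evidence)} items):\n" + "\n".join(req.evidence)
    )
    return req


def check_compliance(req: PARequest) -> PARequest:
    """Agent 4: flag missing artifacts before the package reaches a reviewer."""
    if not req.evidence:
        req.compliance_issues.append("No medical-necessity evidence found")
    return req


# The orchestrator simply chains atomic steps; a nurse approves or overrides
# the resulting package rather than the AI making the determination itself.
PIPELINE = [classify_and_route, extract_evidence, draft_determination, check_compliance]


def process(req: PARequest) -> PARequest:
    for step in PIPELINE:
        req = step(req)
    return req
```

The design point is that each step is narrow and inspectable; swapping in a better extraction model changes one function, not the whole workflow.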
The initiative led to a roughly 50% reduction in PA cycle time, reclaimed thousands of nurse hours each month, improved provider satisfaction and maintained fully auditable compliance trails.
Rather than headlines, the project delivered measurable operational results.
Automation Before Augmentation
There’s a pattern here. The most successful healthcare AI deployments today aren’t about diagnosing patients. They’re about removing the cognitive and operational tax surrounding patient care.
I call this shift automation before augmentation.
Instead of asking AI to assist in complex judgment (which risks adding confusion), you can ask it to eliminate repetitive, well-understood tasks. We’ve found that this strategy can lower risk, increase trust and create compounding value.
It also builds the foundation for future clinical AI. Once workflows are digitized, data is structured and processes are reliably orchestrated, you can begin to layer in smarter, more ambitious models. But starting with CDS is like putting a smart thermostat in a house with no electricity.
What Makes SLMs And Agentic Workflows Work
Why does this approach succeed where general-purpose AI often fails? A few factors stand out:
• Domain Grounding: These models are trained specifically on payer or provider operations: medical policies, claims data, EHR notes, audit checklists and HEDIS measures. They don’t try to emulate diagnosis; they emulate how a care coordinator or nurse reviewer thinks.
• Atomic Scope: Agentic workflows break problems into narrow, well-defined tasks. One agent extracts prior authorization details. Another compiles evidence. A third matches against policy. This modularity increases interpretability and reduces hallucinations.
• Human-In-The-Loop Design: Rather than remove humans, these systems augment them. Agents surface structured, pre-reviewed outputs that nurses or administrators can approve or override. This accelerates throughput without sacrificing safety.
• End-To-End Observability: Every decision, extraction and suggestion is logged and explainable—a must-have for audit and compliance in regulated environments.
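As an illustration of the last two points, the sketch below (hypothetical, in Python) shows one way an orchestrator could wrap each atomic agent so that every step is logged for audit and low-confidence outputs are diverted to a human reviewer. The confidence threshold, log schema and review hook are assumptions, not a specific vendor’s implementation.

```python
# Hypothetical sketch of human-in-the-loop gating with an audit trail.
# The threshold, log format and reviewer hook are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("pa_audit")
logging.basicConfig(level=logging.INFO)


def run_with_audit(agent_name, agent_fn, payload, confidence_threshold=0.85):
    """Run one atomic agent, write an auditable record, and route
    low-confidence outputs to a human reviewer instead of passing them
    downstream automatically. agent_fn returns (output, confidence)."""
    output, confidence = agent_fn(payload)

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_name,
        "input_summary": str(payload)[:200],    # truncate PHI-bearing payloads as needed
        "output_summary": str(output)[:200],
        "confidence": confidence,
        "routed_to_human": confidence < confidence_threshold,
    }
    audit_log.info(json.dumps(record))          # every decision is logged and explainable

    if record["routed_to_human"]:
        return request_human_review(agent_name, output)   # nurse approves or overrides
    return output


def request_human_review(agent_name, draft_output):
    """Placeholder for a review queue; in practice the draft would surface in
    the reviewer's existing worklist rather than block the pipeline."""
    return draft_output
```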
A Strategic Imperative For Healthcare Leaders
For CIOs, CMOs and COOs charting their AI strategy, the message is clear: The fastest, most scalable ROI lies in automating operational workflows, not mimicking clinical judgment.
Prioritize AI that lightens the load now, not models that might assist diagnosis years from now. That is where the practical, high-impact applications are today.
Healthcare’s AI future is inevitable, but it starts with infrastructure that works, integrates and delivers measurable impact today. That requires sharper questions:
• Have you stripped out the administrative drag slowing clinical teams?
• Can your AI systems process documentation and extract evidence reliably and at scale?
• Are you reclaiming hours and cost or just generating more reports?
If you can answer “yes” to these questions, you’re not waiting for the AI future. You’re already building it.
Closing Thought
The most profound application of AI in healthcare may not be the one that wins headlines. It will be the one that gives a clinician back 20 minutes of their day, that restores a patient’s confidence in timely care and that streamlines the business of care so we can refocus on the experience of it.
The path to an AI-enabled healthcare system doesn’t start with genius. It starts with relief.
