
Can regulations tame the beast?
An October 2023 executive order signed by then-President Biden called for the safe, secure and trustworthy development of AI. It included concerns around healthcare, specifically “when AI use deepens discrimination,” and “where mistakes by or misuse of AI could harm patients … including through bias or discrimination.”
This Biden executive order was supplanted by a brief one signed by President Trump in January 2025, which addressed societal harms around bias only in the context of “engineered social agendas” and did not specifically mention healthcare.
Although Congress has not passed any AI legislation, several bills are pending.
For its part, the American Pharmacists Association’s House of Delegates in 2024 passed a suite of policies regarding the “judicious” use of AI, ensuring pharmacist inclusion rather than replacement in pharmacy practice.
AI, concluded the policies, must elevate the pharmacist’s role, not replace it. Education, caution, collaboration and transparency will ensure AI supports both patient care and professional dignity.
These APhA policy prescriptions included:
- Involving pharmacists in designing AI programs.
- Using AI programs to elevate the practice and enhance patient care.
- Ensuring patient safety and privacy.
- Mitigating bias and misinformation.
- Training users in the lawful, ethical and clinical use of AI.
Bias and misinformation
Here’s what really concerns anyone using AI: bots get their information from the World Wide Web. Remember the old saying to trust only half of what you read?
That advice dates to the golden era of professional publications staffed by journalists, editors and fact-checkers. Now, large language models pull information from sources that are not always accurate. You might say they know just enough to be dangerous.
What may be worse than flagrant errors or occasional obvious mistruths are subtle forms of institutional bias. This bias is baked into the cake because much of the historical published clinical research was conducted on Caucasian men. That is starting to change for women, but less so for ethnic minorities and other groups not well represented in medical studies.
For instance, women and Black patients are more likely to suffer adverse drug effects than the white men who have been the study subjects for decades, and those study results inform AI systems. The emerging research documenting these disparities, however, is not always enough to update the systems that guide practitioners who prescribe medicines to patients.
“AI tools have to learn from data, and data are produced in the real world. That means they learn about what does happen in an inefficient, error-prone and unfair health system, which is often a far cry from what should happen,” said Ziad Obermeyer, MD, a physician and researcher who works at the intersection of machine learning and health. Time magazine called him one of the 100 most influential people in AI.
Obermeyer argues that AI tools must be developed and evaluated with the utmost caution. For pharmacies looking to use AI as an operational tool, one question every store manager should ask an AI vendor is what kind of evaluation has been conducted to make sure the algorithm works as expected.
“It is important to recognize that AI models do contain bias,” said Brigid Groves, vice president of professional affairs at the American Pharmacists Association. “So before the data is used to make decisions or process change, you need to understand the model that created it.”
That means work on the front end to make sure AI tools are aligned with ethical care standards. It means ensuring AI tools are not unintentionally delivering different care to different kinds of customers.
Pharmacies must scrutinize AI inputs and training data to avoid perpetuating health inequities.
“Collaborations between pharmacy professionals, AI developers and other healthcare professionals can enhance AI programs and tools to support pharmacists,” said Groves.
That fear around bias and misinformation can go straight to consumers as they engage with the healthcare system.
Already, insurance giant Cigna faces a class-action lawsuit alleging its AI algorithm, PxDx, rejects claims at an average of 1.2 seconds per claim. The original healthcare decisions are made by actual doctors, while the claim rejections come from bots. TikTok abounds with videos explaining how “AI denied my medical claim.” Good luck getting that claim denial overturned.