
Individuals with disabilities have long faced systemic barriers to health care, including misdiagnosis, limited access to medical services and bias in treatment decisions. Now, with the emergence of artificial intelligence in medicine, health care professionals worry these disparities may be exacerbated.
A recent correspondence published in Nature Medicine by Dr. Charles E. Binkley of Hackensack Meridian School of Medicine, Dr. Joel M. Reynolds of Georgetown University School of Medicine and Medical Center and Dr. Andrew G. Shuman of the University of Michigan Medical School highlights both the harms AI in health care can pose for disabled individuals and the benefits it can offer them.
In an interview with The Michigan Daily, Binkley, director of AI ethics and quality at Hackensack Meridian School of Medicine, explained the importance of training AI to recognize variations in patients’ physical and cognitive ability, in addition to socioeconomic status and insurance coverage.
“The models work best when applied to the typical patient,” Binkley said. “The people who fall outside of what we consider typical are the ones that I’m concerned about. My job as a physician, as an ethicist, as someone who works in clinical AI, is to try to think about what the unintended harms (are) and try to mitigate those in advance.”
Because AI models rely on historical clinical data, researchers claim the models tend to reinforce existing societal and clinical biases. Since disabled individuals make up a smaller proportion of the overall patient population, the data sets used to train AI models risk underrepresenting them. Binkley believes collecting inclusive and representative data may help AI models generate unique solutions and better serve patients from underrepresented demographics.
“There are the illness trajectories that most people will follow, but there may actually be subtypes,” Binkley said. “Doctors have traditionally looked at those subtypes and said, ‘Well, what explains this is (that) this someone’s non-compliant, or this someone has better access,’ and that may be part of the story, but there may actually be just different expressions … In my mind, the goal of an AI model is not necessarily to detect something, but to try to prevent it, to be able to predict that it might occur, and then to try to mitigate it very early on.”
Binkley underscored the significance of incorporating lived experience into the development of health AI systems to foster inclusivity and strengthen AI’s ability to provide tailored treatment recommendations.
“We (need to) think about how to use data, how to make sure that we either build specific models or recognize the heterogeneity of genetic expression, the heterogeneity of phenotype,” Binkley said. “It’s this interplay between biology and society, biology and environment, biology and epigenetics. That’s where you actually find heterogeneity. And I think what we have to do is really be cognizant of that heterogeneity and expression (and) find out how it may differ from other expressions.”
LSA junior Pooja Kannappan currently works on a research project collecting the opinions of individuals with disabilities to improve accessible design for self-driving vehicles. In an email to The Daily, Kannappan echoed Binkley’s call to recognize heterogeneity. She emphasized that inclusive AI design can help prevent the “disability dongle” effect, which occurs when technology is created to solve problems designers imagine people with disabilities face rather than ones they actually experience.
“(The technologies) are often well-intended but are created without including PWD in the design process, which leads to these ‘solutions’ not actually addressing the needs of target users with disabilities,” Kannappan wrote.
In an email to The Daily, Kinesiology junior Ruthie Price, who is interested in working with people with disabilities and improving their quality of life, highlighted how, although AI is constantly advancing to meet the evolving needs of disabled patients, it still falls short of fully supporting individuals.
“The article did a great job on mentioning that people with disabilities need to be brought into these conversations pertaining to the development of healthcare AI,” Price wrote. “With this being said, AI can be a great partner to any provider or caretaker. I use the word partner because I do not believe that AI has the capacity at this point in time to fully understand in working with these individuals.”
To improve AI models and extend their benefits to more patients, Kannappan wrote that she believes universities and research institutions should prioritize including individuals with disabilities in AI research projects.
“While AI-driven healthcare systems have the potential to improve health outcomes, they also harbor the risk of perpetuating harmful stereotypes against already vulnerable groups who are not well-represented in the data used to train these AI models,” Kannappan wrote. “Intentionally funding more disability-focused research projects and integrating more PWD voices in the creation of tools that are intended to help them would be a proactive way to address accessible concerns.”
Daily News Contributor Akshara Karthik can be reached at karthika@umich.edu.