HHS looks to balance use of clinical data in AI with safety, bias considerations

Leaders in the federal healthcare space discussed ongoing and future artificial intelligence use cases and policy at NVIDIA’s AI Summit in Washington, D.C., on Tuesday, emphasizing the benefits predictive software can have on health outcomes.

Much like other federal agencies, HHS is aiming to help spur and harness ongoing innovation with AI tools while applying appropriate guardrails, especially when leveraging these tools alongside sensitive clinical data, according to Micky Tripathi, the national coordinator for health IT and acting chief artificial intelligence officer at the U.S. Department of Health and Human Services.

“We hear more and more about people being concerned about getting too innovative in the healthcare space, and a desire for guardrails to help channel that innovation appropriately,” Tripathi said Tuesday.

HHS components, including the National Institutes of Health, the Advanced Research Projects Agency for Health, the Centers for Disease Control and Prevention, the Centers for Medicare and Medicaid Services, and the Food and Drug Administration, have found a wide array of AI use cases.

As at other agencies, internal chatbots that help employees sift through large volumes of diverse data are a popular use case at HHS, but Tripathi noted that major commercial models, such as Meta’s Llama and OpenAI’s ChatGPT, are not trained on clinical data.

Tripathi said HHS is looking to leverage the vast troves of clinical data across HHS and other agencies to create fine-tuned AI models tailored to healthcare uses.

“We have amazing amounts of clinical data; we haven’t even tapped into that yet,” he said. “How do we open that up in safe ways, but offer the ability to use these technologies on actual clinical data so that we can actually be much more specific as we think about diagnostics, as we think about reduction of medical errors, as we think about streamlining clinical decision-making, and as we think about the patient being more of a direct participant in their care?”

As for the challenges ahead, Tripathi noted that the quality of clinical data will make or break its usefulness in AI software.

“Data is inherently biased by the healthcare system that we live in today,” he said. “So those who are left out of the healthcare system today or have poor healthcare because they have less insurance or no insurance at all, that’s all reflected in the data, and machines are unfortunately going to pick that up.”

Belinda Seto, the deputy director of NIH’s Office of Data Science Strategy, underscored the need for a robust AI ethics perspective. She added that as her agency looks to leverage AI in mission areas like processing and analyzing disparate data types (such as data from genomic research and electronic health records), creating a culture of ethics in using AI tools is paramount.

“When we think about healthcare, it’s a trust relationship with the care provider and the patients,” Seto said. “And if we can think of ways to make sure that the trust is not undermined, and that the explainability of the AI that may be…a clinical decision support, we have to keep in mind not to undermine that trust.”

Seto and Tripathi’s comments on AI come as HHS’ strategic AI plan, due to be released in January 2025, is set to evaluate the role AI can play across the department’s mission areas.

“We’ve got a strategic plan now in each of those primary domain areas hard at work,” Tripathi said. “It’s both externally focused as well as internal.”
