
AI ethics: what are the limits in healthcare?

AI has worked its way into many industries over the past decade, with no end in sight. The world of healthcare has been no exception, but it is one of the spaces in which public reception of the technology has been most hesitant.

Research by the US-based Pew Research Center found the public generally split on the issue, with 60% of those surveyed as part of a nationwide study saying they would be somewhat or very uncomfortable if their healthcare provider were to rely on the technology for tasks such as diagnoses and treatment recommendations.

The survey also found that only 38% of Americans believe the use of AI in healthcare would lead to better outcomes, whilst 33% thought it would make them worse; the rest were ambivalent or did not know.

Despite these concerns, the global healthcare industry has pushed ahead with implementing the technology, from patient medical records and hospital management to drug discovery and surgical robotics. In the field of medical devices alone, research by GlobalData estimates that the market is set to be worth $477.6bn by 2030.

If AI is to become ubiquitous and involved in some of the most important decisions in a person’s life, what is the appropriate set of moral or ethical rules for it to adhere to? What are the ethical upper limits of AI, and where does it become unethical to implement the technology?

To find out more, Medical Device Network sat down with David Leslie, director of ethics and responsible innovation research at the UK’s Alan Turing Institute, to understand what rules should govern the application of AI in healthcare.

This interview has been edited for length and clarity.

David Leslie (DL): So, I started to really think more deeply about this during the Covid-19 pandemic, because the Turing Institute was working on a project that was supposed to be a rapid-response, data-scientific approach to asking and answering all kinds of medical and biomedical questions about the disease.

I got to write an article in the Harvard Data Science Review at the time called “Tackling Covid-19 through responsible AI innovation”, which went through some of the deeper issues around biases, the big-picture issues, and how these were manifesting in the pandemic. Another article was called “Does AI stand for augmenting inequality in the Covid-19 era of healthcare?”

Off the back of that, I was asked by the Department of Health and Social Care (DHSC) to support a rapid review into health equity in AI-enabled medical devices.


DL: I think we can even do it the other way, which is looking at how patterns of inequality and inequity in the world come to find their way into the technology. So first and foremost, we can talk about how social determinants of health play a role. The places where we live create different inequities: things like inequities in access to healthcare, inequities in admission and treatment, and unequal resource allocation based upon particular environments.

There are also biases that manifest in medical training, which privileges certain socio-economic groups. If you are being trained as a dermatologist, it is more likely that you will have deep knowledge of lighter skin tones but not so much knowledge of darker skin tones. So, all these components come to be elements that seep into the AI lifecycle.

For instance, inequitable access to medical services will lead to representational imbalances in data sets. You will have data distributions that may not include certain minority groups in the right way because they haven’t been taken up in electronic health records by virtue of gaps in the service.
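To make the point concrete, here is a minimal sketch of the kind of representational audit Leslie is describing: comparing a dataset’s subgroup shares against reference population shares. The records, group labels, and reference shares below are all hypothetical, and a real audit would use vetted population statistics rather than these toy numbers.

```python
# Illustrative only: checking a dataset's subgroup shares against
# hypothetical reference population shares.
from collections import Counter

# Toy stand-in for records drawn from electronic health records
records = [{"group": "A"}] * 4 + [{"group": "B"}]

# Hypothetical shares the dataset should roughly reflect
reference_shares = {"A": 0.6, "B": 0.4}

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    print(f"{group}: observed {observed:.0%}, expected {expected:.0%}, "
          f"gap {observed - expected:+.0%}")
```

Run on the toy records, this reports group A as over-represented and group B as under-represented relative to the assumed reference, which is exactly the kind of gap that unequal access to services can leave in health records.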

DL: So just think of the pathway AI takes. Let’s say I am a doctor and you come into my clinic for an assessment. In our interaction, my biases will manifest in my notes. So, when you aggregate that across so many doctors and so many notes, all those notes will be used to fine-tune a natural language processing system, one that may be used to support something like triage or suggestions for treatment pathways.

It would be illogical to think that when you move from biased clinical notes to the outputs of a natural language processing system, somehow, magically, the biases would disappear; they are baked into the data sets.

So, we must be very careful, I think. These are social practices first and foremost, and the baked-in biases will manifest across the data.
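As a toy illustration of that propagation, the sketch below fits a crude word-count model on synthetic “notes” in which a demographic token happens to correlate with the triage label. The fitted model then reproduces the bias for identical symptoms. The notes, tokens, and labels are all invented, and a real NLP system is far more complex, but the mechanism is the same.

```python
# Illustrative only: a toy model fitted on biased notes reproduces
# the bias at prediction time. All data here is synthetic.
from collections import Counter

# Synthetic "clinical notes": a demographic token correlates with
# the triage label because of biased note-taking, not symptoms.
notes = [
    ("groupA chest pain", "urgent"),
    ("groupA chest pain", "urgent"),
    ("groupB chest pain", "routine"),
    ("groupB chest pain", "routine"),
]

# "Fit": count how often each token co-occurs with each label
label_counts = {}
for text, label in notes:
    for tok in text.split():
        label_counts.setdefault(tok, Counter())[label] += 1

def predict(text):
    # Sum each token's label votes; the demographic token tips the balance
    votes = Counter()
    for tok in text.split():
        votes.update(label_counts.get(tok, Counter()))
    return votes.most_common(1)[0][0]

# Identical symptoms, different demographic token, different "triage"
print(predict("groupA chest pain"))  # -> urgent
print(predict("groupB chest pain"))  # -> routine
```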

DL: In a sense, we shouldn’t talk about eliminating inequity and bias, because these are operative in our world, and we always need to think about how we mitigate them in technology. How do we lessen the impact and improve the systems based on our attempts to mitigate discrimination and bias? We must be aware that bias will crop up just by humans being humans.

For me, the most important methodologies are those that are anticipatory, so that they incorporate reflective bias mitigation processes into the design of the systems.
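One simple, anticipatory check of the kind Leslie alludes to is disaggregated evaluation at design time: measuring a candidate system’s accuracy separately for each subgroup and flagging gaps before deployment. The sketch below is only illustrative; the validation triples and the tolerance threshold are hypothetical, and real practice would use richer fairness metrics.

```python
# Illustrative only: a design-time check that flags accuracy gaps
# between subgroups. Data and threshold are hypothetical.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples from a validation set
results = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 1, 0),
]

MAX_GAP = 0.10  # hypothetical tolerance for accuracy gaps between groups

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)
if gap > MAX_GAP:
    print(f"Accuracy gap of {gap:.0%} exceeds tolerance; revisit the design.")
```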

DL: I think we need to be very aware that the question of whether to use AI at all matters. When I say that, I mean that there is such a diverse range of AI-supported technologies available, but also there are certain problems of a complex and social nature that may not be as amenable to being supported by statistical systems. These are all computational, statistical systems, and such problems call for a little bit more practical judgment and common sense.

Do I think that there are other challenging cases? For instance, in insurance, we have glaring examples of how millions of people have been discriminated against. There is no one sickness score. The reason why we must be more contextually responsive to complex social problems is that when you’re processing social and demographic data, you are going to see more social biases and reverberations of patterns of harm in that data.

“AI ethics: what are the limits in healthcare?” was originally created and published by Medical Device Network, a GlobalData-owned brand.

 
