
It’s no surprise that women working in the AI sector are heavily outnumbered by men. The wider technology sector has struggled for decades to push the proportion of female workers beyond 25%, so why should AI be any different? The fact that women account for only 22% of AI talent, dropping further to just 14% at senior executive level, is therefore all too predictable.
Predictable it may be, but it should still give pause to any organization wanting to develop and use AI that is unbiased and fair to all.
But following events at the AI Action Summit in Paris, the prospect of equitable, unbiased AI seems to be moving further out of reach. The US and UK governments both declined to sign the Summit’s pledge of open, inclusive and ethical AI. Instead, Vice President Vance popped in to outline the White House’s predilection for AI that’s unrestricted, unregulated and unideological.
That approach could make it harder to banish bias from AI systems. Add in President Trump’s hatred of all things DEI and the shortage of women working in the AI industry, and the outlook for ethical AI looks frankly pretty bleak.
Focus
But while the US and UK governments aren’t keen on being tied down by such trivialities as safety, transparency and ethics, behind the scenes, large enterprises are very much focused on building AI systems around those key tenets, according to Heather Dawe, Chief Data Scientist, UK and Head of Responsible AI at digital transformation specialists UST.
There are plenty of examples of AI systems that fall short of the AI Action Summit’s principles. Dawe cites AI used to filter job applications, which favored men over women for software engineering roles due to bias in the machine learning model. She explains:
Simply because more men are software engineers, the AI assumes that that is more appropriate, that it should filter through men. AI is trained on data about society, and we have biases in society. And the AI is very objective in the sense that it doesn’t know what’s right or wrong. So if there’s bias in the data, there’ll be bias in the model and the AI will be biased unless you seek to manage and mitigate for that bias, which you can do. To seek to reduce or remove the bias is a key thing.
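Dawe’s point about measuring and managing bias can be made concrete. The sketch below is a minimal, hypothetical illustration rather than any real system: all data and column names are invented. It shows how a screening model can discriminate even when gender is excluded as a feature, because experience acts as a proxy; it then measures the disparate impact ratio and applies one common mitigation, reweighting training examples (the Kamiran and Calders reweighing approach) so group and outcome decouple.

```python
# Minimal, invented sketch of measuring and mitigating bias in a
# CV-screening model; all data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.choice(["male", "female"], size=n, p=[0.75, 0.25])
# Historically men accumulated more experience, so experience acts
# as a proxy for gender even when gender itself is excluded.
years = np.where(gender == "male",
                 rng.normal(6, 2, n), rng.normal(4, 2, n)).clip(0)
# Past hiring decisions also carry a direct bias towards men.
p_hire = np.clip(0.2 + 0.05 * years + 0.1 * (gender == "male"), 0, 1)
df = pd.DataFrame({"gender": gender, "years_experience": years,
                   "hired": (rng.random(n) < p_hire).astype(int)})

X = df[["years_experience"]]          # protected attribute is not a feature
model = LogisticRegression().fit(X, df["hired"])
df["screened_in"] = model.predict(X)

# Disparate impact: ratio of selection rates between groups
# (the "four-fifths rule" flags anything below 0.8).
rates = df.groupby("gender")["screened_in"].mean()
print("disparate impact ratio:", rates.min() / rates.max())

# One mitigation (Kamiran & Calders reweighing): weight each example by
# P(group) * P(label) / P(group, label) so group and outcome decouple.
n_g = df.groupby("gender")["hired"].transform("size")
n_y = df.groupby("hired")["hired"].transform("size")
n_gy = df.groupby(["gender", "hired"])["hired"].transform("size")
weights = (n_g * n_y) / (len(df) * n_gy)
fair_model = LogisticRegression().fit(X, df["hired"], sample_weight=weights)
```

The key design point is the one Dawe makes: the bias has to be actively sought out and managed; simply dropping the gender column does not remove it.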
Based on her experience in the field, Dawe maintains that no-one wants their AI to be seen as sexist, racist or prejudiced in any other way. Industries like automotive and healthcare particularly want their technology to be safe and free of risk. She adds:
Risk management is something that companies are taking very seriously, and we’re working with them to help manage that risk. We are seeing that companies are keen to not make mistakes and to be ethical with their AI. They don’t necessarily need to explain it to anyone but themselves. But they’re keen to be ethical.
With the AI workforce heavily dominated by men, Dawe argues that there is such a thing as developer bias: developers build AI in their own image, so the result is AI with a male bias. But applying AI governance helps organizations manage that risk. She says:
Because AI does carry risk; it hallucinates, it can have unintended consequences. AI being used to filter job applications and filter through more men because it carries bias towards men, that’s an unintended consequence. It wasn’t intended to happen, it happened.
The companies we work with – large retailers, large banks, consumer goods companies, large global enterprises – they don’t want things to go wrong. They don’t want to make mistakes with AI, obviously they don’t want to be seen to be making mistakes with AI. But they also want to do the right thing generally.
While Vice President Vance expressed concern that regulation could hobble the AI industry, Dawe takes the opposite view. She suggests:
One of my key messages is, you can develop AI ethically and innovate. The two aren’t mutually exclusive. I’ve innovated with AI through my career as ethically as I possibly can, it’s not mutually exclusive.
Regulations
There are already many regulated industries where regulation, rather than hampering innovation, simply ensures products and services are fair. In credit risk, when developers build machine learning models to predict the risk of someone defaulting on a loan based on various characteristics, they are already prohibited from including gender or ethnicity among those characteristics.
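In code terms, that constraint often amounts to making sure protected attributes never reach the model as inputs. The sketch below is a hypothetical illustration, with an invented dataset and column names, not any lender’s actual pipeline:

```python
# Minimal sketch of a credit-default model that withholds protected
# attributes from training; dataset and column names are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

PROTECTED = ["gender", "ethnicity"]   # recorded, but never model inputs
TARGET = "defaulted"

df = pd.read_csv("loans.csv")         # hypothetical loan history
features = [c for c in df.columns if c not in PROTECTED + [TARGET]]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[TARGET], test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Dropping the columns is necessary but not sufficient: proxies such as
# postcode can still encode ethnicity, so approval rates should also be
# audited across the withheld groups after training.
approval = pd.Series(model.predict(X_test) == 0, index=X_test.index)
print(approval.groupby(df.loc[X_test.index, "gender"]).mean())
```

Note that the protected columns are still kept in the data: they are needed after training to audit outcomes across groups, just never as model features.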
In her former role as an NHS Clinical Indicator Programme Manager, Dawe led the development of the Summary Hospital-level Mortality Indicator (SHMI) for the UK’s national healthcare service, a large-scale data-driven project. She recalls:
It was basically a mortality indicator for every English hospital. It was very political and quite a hot potato at the time.
The team processed millions of rows of NHS data and applied around 140 different machine learning models to produce the mortality indicator. Patient deprivation was deliberately excluded from the models, however, even though – to put it crudely – the more deprived a patient is, the more likely they are to die in hospital. Dawe notes:
We didn’t include that in the model because if you do, you’re almost saying, it’s okay that the more deprived you are, that your risk is higher of dying in hospital. Actually, it’s not okay. Everyone should receive equity of care. So there’s things you can do when you’re building the model to ensure they’re fair. Sometimes it’s not including variables in the model to explain the pattern, even though there’s an association, because it’s unethical to do so.
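The indicator Dawe describes follows a standard risk-adjustment pattern: fit a model of expected mortality from clinically justified case-mix variables, deliberately omitting deprivation, then compare each hospital’s observed deaths with what the model expects. The sketch below is a heavily simplified illustration of that pattern, not the actual SHMI methodology, and all field names are invented:

```python
# Simplified sketch of a SHMI-style indicator: observed deaths divided
# by model-expected deaths per hospital. The real indicator fits a
# separate model for each of roughly 140 diagnosis groups; field names
# here are invented, and each group is assumed to contain both outcomes.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("admissions.csv")    # hypothetical patient-level data

# Case-mix adjustment only: deprivation is deliberately excluded so that
# hospitals serving deprived populations are held to the same standard.
case_mix = ["age", "comorbidity_score", "emergency_admission"]

df["expected"] = 0.0
for diagnosis, group in df.groupby("diagnosis_group"):
    model = LogisticRegression(max_iter=1000)
    model.fit(group[case_mix], group["died_in_hospital"])
    df.loc[group.index, "expected"] = model.predict_proba(group[case_mix])[:, 1]

indicator = df.groupby("hospital").agg(
    observed=("died_in_hospital", "sum"),
    expected=("expected", "sum"))
indicator["smr"] = indicator["observed"] / indicator["expected"]
print(indicator.sort_values("smr", ascending=False))
```

Leaving deprivation out of the model is the ethical choice Dawe describes: including it would build the excess risk faced by deprived patients into the baseline, effectively treating it as acceptable.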
While companies are showing intent to develop fair and ethical AI even with male-dominated teams, that doesn’t remove the need to redress the gender imbalance in AI. Dawe notes:
We should be working towards a workforce that’s reflective of society. That’s what I’m working towards, that’s what UST is working towards, that’s what many of our peer companies are working towards. The fact of the matter is, I’m working in an industry that there’s a lower proportion of women, particularly in engineering roles. If you look at the engineering leadership roles like mine, it’s even lower than 20%. And it’s not only about gender, it’s also about ethnicity and sexuality and other things.
Before the AI Action Summit raised concerns over the safety and ethics of AI, the recent arrival of DeepSeek prompted similar worries, with several countries taking steps to ban the Chinese technology. Dawe, however, views China’s AI platform as showing the art of the possible on quite a small budget. She concludes:
We are working with clients – and I can’t stress the ethical enough, building AI governance in alongside – to do similar things. If you apply the AI governance to make sure it’s fair and ethical, I actually think it’s quite a progressive thing. We are working with clients now to build advanced Large Language Models in quite innovative ways, and we’re ensuring that we do so responsibly and ethically, so that the two can go together, it isn’t one or the other.