
Whatever happened to the Chief AI Ethics Officer? Missing in action, it seems

A year or so ago, everyone and their aunt was prognosticating that having a Chief AI Ethics Officer would be de rigueur. That hasn’t quite happened, has it?

Beyond those organizations, such as tech vendors, that need to visibly signal they take the issue seriously, few are tackling the question of AI ethics anywhere near adequately.

But the danger in many countries, including the US and UK, where few mandatory guardrails exist, is that these very tech vendors could end up filling the ethical vacuum themselves. This scenario risks taking society to a rather dark place.

To take the UK Government as an example of this lack of guardrails: a recent Freedom of Information request revealed that, despite the rapid adoption of AI across the public sector, no ministerial government department has employed an AI Ethics Officer.

Wyatt Mayham is Co-Founder and Chief Executive of Northwest AI Consulting. This situation has come about, he believes, because the country has “intentionally adopted a ‘light touch’, decentralized approach” to the issue:

The idea is to be ‘pro-innovation’ by letting existing regulators in different sectors handle AI ethics themselves. The implication, however, is a lack of consistency, accountability, and transparency. Without a central authority to set standards and oversee implementation, you get a fragmented approach, where ethical diligence can vary wildly from one department to another.

The US situation, on the other hand, he says, is:

Completely different – and frankly chaotic. The Biden administration was moving towards a centralized, risk-based framework with clear guardrails. President Trump’s Executive Orders have aggressively reversed this, prioritizing deregulation and speed to compete with China…The US’s push to dismantle guardrails may lead to faster innovation in the short term. But it also creates a higher risk of large-scale AI failures and could isolate the US from global partners who are prioritizing safety.

Looking into an ethics vacuum

Dr Christina Inge is an Instructor for the AI in Marketing Graduate Certificate at Harvard University. She is not so sure the US has ever had many safeguards in place anyway:

In the US, under President Trump’s current administration, there’s little appetite for guardrails. But it’s important to be accurate: he hasn’t dismantled federal AI regulation because we’ve never really had any. US AI governance has been largely voluntary, fragmented across agencies and light on enforcement. The lack of a central ethics lead reflects that broader vacuum.

Despite this scenario, Inge does not believe that the main issue in either country is active resistance to AI ethics. Instead, she says:

It’s inertia, lack of technical fluency, and the sheer speed of AI evolution outpacing slow-moving policy. Most government leaders still don’t fully understand AI systems, let alone how to oversee them ethically. So, we end up with hand-waving, vague commitments to ‘responsible AI’ and no clear lines of authority…The implication is that in both countries, we’re ceding critical oversight to private companies – not by design but by default.

But the problem with this approach, Inge warns, is that:

It’s a bit like letting oil companies write environmental policy or banks regulate themselves—what could go wrong, right? When oversight is left to private companies, ethics becomes a branding exercise. The incentives just don’t align: companies are rewarded for speed, innovation, and shareholder value, not long-term societal well-being. Without regulation, there’s nothing to stop a company from quietly deploying biased models, scraping personal data, or using opaque algorithms to make high-stakes decisions.

The potential repercussions of this scenario could be significant at a societal level, though, she says:

The danger isn’t just bad PR—it’s systemic harm. We’re talking about things like discriminatory hiring tools, surveillance tech with no accountability, and automated decision-making that can affect everything from loan approvals to parole decisions. Left unchecked, it creates a two-tiered society: one with access to fair, transparent AI—and one without.

Private sector skills shortfalls

But it is not just in the public sector where (Chief) AI Ethics Officers are thin on the ground – although tech vendors, such as Salesforce.com, are the exception to the rule here. In fact, McKinsey points out that only six per cent of employers have hired such specialists, even in the private sector. As Inge says:

The slow adoption of AI ethics roles isn’t because companies don’t care about ethics. It’s because they don’t want to be accountable for them. Creating an AI ethics role means formally acknowledging that AI can cause harm, and that’s a legal and reputational risk. It’s far easier to issue vague ‘responsible AI’ mission statements than to hire someone with the authority to say ‘no’ to a profitable but risky use case.

Mayham also points to the struggle many organizations have in connecting the dots between ethics and business value. There are three challenges in doing so, he believes. The first is a return on investment problem:

It’s historically been difficult to get budget approval for a role that prevents future, hypothetical harm. The value is in risk mitigation, which doesn’t show up as a line item in quarterly earnings.

The second issue is an operationalization one, which involves translating abstract principles into concrete actions. As Mayham says:

It’s one thing to have a high-level principle like ‘fairness’. It’s another to translate that into code and business processes…Everyone agrees that AI should be ‘fair’, but what does that mean mathematically when a decision could be fair to an individual but unfair to a group or vice versa? There are no universally agreed-upon definitions or technical standards, which leads to constant debate and difficulty in implementation.

The ethical debt problem

The third challenge is the “unicorn problem”. Mayham explains:

An effective AI ethics specialist needs to be fluent in technology, ethics, law, and business strategy. These people are incredibly rare, and the talent pipeline is still immature…The skillset is tough: technical fluency, legal knowledge, ethics expertise and executive communication. On the other side, companies delay creating these roles until ethics becomes a revenue or regulatory risk. That creates a chicken-and-egg problem.

Inge agrees that the best candidates often have hybrid backgrounds, such as law and tech, philosophy and data science, or policy and user experience. They also need to demonstrate both “clarity of principle and comfort with ambiguity”. But she acknowledges that such hybrid talent is rare:

You need someone who understands machine learning and public policy and corporate governance, and who can speak fluently to engineers, lawyers, execs, and other ethicists. That’s a unicorn.

However, most organizations either do not know how to find such people or are unwilling to invest the time or budget to grow their own. To make matters worse, she says:

There’s also confusion about what the role entails. Is it risk management? Is it governance? Is it a PR function? That lack of clarity makes it easy for organizations to kick the can down the road. The result? AI gets deployed without oversight, and we only talk about ethics after something goes wrong.

But this situation could well have serious repercussions. As Mayham warns:

Many organizations are accumulating massive amounts of ‘ethical debt’. They’re deploying systems with unexamined biases and risks, creating a ticking time bomb for a major reputational or legal crisis down the road.

Despite such concerns, Kjell Carlsson, VP Analyst on the Analytics and AI team at research and advisory firm Gartner, is not convinced that taking on an AI ethics lead is the right approach anyway:

They’re usually too removed from outcomes, which is why you tend to see responsibility for ethics being placed on the shoulders of the most senior AI leader. There’s an element of which leader is best placed to do most here in terms of impact and ethical consequences. But that approach has its risks as there are many other things they’re optimizing for too and ethics may not always be high on the priority list.

My take

If hiring an AI Ethics Officer is not the favored way forward, then what should organizations be looking to do? I’ll pick up on some possible alternatives in a second article tomorrow.
