
AI ethicists were supposed to be a booming job category. Now they’re scrounging for work

In October, Lisa Talia Moretti, an academic who specialises in the ethical dilemmas created by emerging technologies, found that jobs in her field had fallen off a cliff.

Based in the UK, she had been helping conglomerates and medium-sized businesses understand how to adopt AI in a humane and profitable manner. Or, more succinctly, Moretti had been working as an AI ethicist – someone who, as she puts it, helps businesses understand “what this technology is and what it can do.”

But as the AI arms race intensifies and tech giants lock into a battle to beat each other not only in creating the most sophisticated model but also in cornering the market, ethics is becoming an afterthought.

“Most companies are pushing so hard to get more people to use more AI, I don’t think (ethics) is even close to top of mind,” says a principal engineer at a major data company, who wasn’t allowed to comment publicly on the issue and asked to remain anonymous.

This is a problem, according to researchers who study the societal implications of AI tools and their rapid adoption across the corporate and consumer landscape. Fears of AI displacing workers and spreading disinformation are only growing, as developers ship new models at a rapid clip. The ramifications could be stark if certain checks aren’t imposed.

The World Economic Forum stressed the importance of hiring a chief AI ethics officer back in 2021. And last month, the New York Times reported that the rise in artificial intelligence would lead to the creation of a plethora of new roles, such as AI ethicist, trust authenticator, or trust director. But for many who pursued advanced degrees, the dearth of roles feels like a slap in the face.

“I’m having phone calls with maybe seven to 10 various people in this industry a week, and they are all saying, ‘it’s the worst (job market) it’s ever been,’” says Alice Thwaite, an AI ethicist who previously worked on staff at multinational firms and is trying to help her peers find work.

As AI ethicists struggle to find jobs, the technology continues to veer off its guardrails. Chatbots still regularly hallucinate; AI has directed teen users to self-harm; Grok, the AI built by Elon Musk’s team at xAI, has spiralled into full-blown Holocaust denialism and antisemitic tirades. Meanwhile, Google’s AI summaries have torpedoed traffic to news publishers; workforces have been cut in favour of automation; and cyber criminals are using the technology to steal millions through ever more sophisticated scams.

All of this is to say: AI is a powerful tool, but decision-makers – from the government to corporate boardrooms – appear to be doing little to protect against its potential downsides.

“Every single paper on AI always used to start off with the benefits of AI are going to be huge and numerous,” Thwaite says. “Yet everyone’s invested in that first part, but no one’s invested in the second part.”

On Wednesday, the White House released its AI action plan, outlining the steps to marry AI to the future of the US economy. Among the policy’s main pillars are “removing red tape and onerous regulation” and creating a “Request for Information from businesses and the public at large about current Federal regulations that hinder AI innovation and adoption.”

With help from a strategic government push, AI tools are poised to roll out and crowd the commercial landscape for the foreseeable future. But according to a recent study published in the Future Business Journal, companies are still struggling with concerns from employees about how an AI system will compromise their personal privacy, and whether certain AI may perpetuate biases based on race or gender. That distrust of AI is also growing among the general public.

Moretti says an AI ethicist can help businesses implement AI smartly while navigating those tricky ethical issues.

That can mean many things in practice, including showing leaders how they can critically examine an AI’s usefulness instead of believing the marketing fluff. “A lot of the time companies are faced with fast-talking marketing and sales people who show them a really fancy deck with a headline that says ‘generative AI is going to improve creativity within your organisation,’” Moretti says. “And there’s not a lot of substance behind some of those market messages.”

Consider AI agents, for example. Despite the hype swirling around them, a study published by Salesforce in May found that “leading LLM agents” achieved a roughly 58 percent success rate for single query tasks. The success rate slid to a paltry 35 percent when agents were queried for multiple tasks in a row.

An ethicist might save your company from licensing an AI agent that won’t actually get the job done. Moretti says she’d ask a client how the AI is actually performing. “We would run a lot of interviews, we would run usability testing sessions,” she says.

Ultimately, an ethics audit might help an organisation understand how its employees and customers actually benefit from the use of AI – if at all. “Since when does the tech industry build and sell stuff that fails at that kind of rate?” Moretti asks. “We would never accept that from email. We would never accept a website that just crashes 65 percent of the time.” – Inc./Tribune News Service
