AI and a question of ethics

Kathy Gibson reports from Gitex Africa Morocco 2024 – When it comes to using artificial intelligence (AI), ethics is a key issue, and organisations are advised to start formulating clear policies around trusted and responsible AI.

Rohan Patel, senior vice-president: engineering at Builder.ai in the UK, says it’s important to realise that there are two completely different types of AI systems.

The first relates to systems that are used exclusively internally, where companies are expected to self-manage the ethics. The second comprises systems that use personal data and could have an impact outside the organisation; regulation will soon take care of these considerations.

A common fear is that AI systems will soon start writing other systems, completely disintermediating humans from the process.

But Patel doesn’t see this as an ethical issue at all. “We already have models creating models, and models creating data sets. If it works for the company, why not?”

That’s assuming these models are for internal use, and are not systems that could have an impact outside the organisation.

Another way of keeping the models honest is to build “humble” AI systems, says Sebnem Erener, managing legal counsel at Klarna in Sweden.

“This involves programming humility into AI systems in the sense that, mathematically, they would never assume their predictions or pronouncements are definitive. They would thus continuously update the underlying preferences and values by taking constant feedback from human behaviour.”
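One way to read this mathematically is as Bayesian updating: the system holds a probability distribution over what humans prefer rather than a fixed answer, and each piece of feedback narrows that distribution without ever collapsing it to certainty. The minimal sketch below (Python, with hypothetical names; it illustrates the general technique Erener describes, not Klarna’s actual system) shows a model whose predictions are never exactly 0 or 1.

```python
# A minimal sketch of "humble" AI: belief about a human preference is
# a probability distribution, never a point estimate, and every piece
# of human feedback updates that belief. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class HumblePreferenceModel:
    # Beta(alpha, beta) posterior over "the user prefers option A".
    # Starting from Beta(1, 1) encodes maximum initial humility:
    # a uniform belief over all possible preference strengths.
    alpha: float = 1.0
    beta: float = 1.0

    def predict(self) -> float:
        """Posterior mean; strictly between 0 and 1, so the model
        never claims certainty about the preference."""
        return self.alpha / (self.alpha + self.beta)

    def uncertainty(self) -> float:
        """Posterior variance; shrinks with feedback but never hits zero."""
        n = self.alpha + self.beta
        return (self.alpha * self.beta) / (n * n * (n + 1))

    def observe(self, preferred_a: bool) -> None:
        """Bayesian update from one piece of human feedback."""
        if preferred_a:
            self.alpha += 1.0
        else:
            self.beta += 1.0

model = HumblePreferenceModel()
for feedback in [True, True, False, True]:  # observed human choices
    model.observe(feedback)

print(f"P(prefers A) = {model.predict():.2f}, "
      f"variance = {model.uncertainty():.4f}")
```

The design choice that makes the model “humble” is structural: because the pseudo-counts stay positive, no amount of feedback can push the predicted probability to exactly 0 or 1, so the system always leaves room to be corrected by further human behaviour.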

Although there are not yet any global initiatives to harmonise AI ethics, the EU is well along the road to setting up regulations for trusted AI, Erener adds. The new act, which deals with AI as well as privacy, will be promulgated in June and will come into force two years later.

What’s important is that the views around these issues are changing all the time, and regulations should be flexible enough to embrace the changes.

“What we protect and what we perceive needs to be protected is changing,” Erener points out. “For instance, we used to be really big on privacy and intellectual property (IP) issues. But we have realised that we shouldn’t be afraid of questioning this position.

“If the goal is for systems to be more accurate, you can’t treat privacy and IP as absolute rights; and the existing legal structures will need to be updated.”

Having said that, organisations need to be sensitive about these issues as they relate to AI, and should start formulating policies now.

“For instance, at Klarna we have started experimenting with an ethical AI framework so we have the opportunity to learn from consumer needs. Then, when the regulations hit, we won’t be reactive but will instead be part of the conversation in setting out the regulations.”

Is it possible to program ethics into the system? Possibly it is, but Dr Juergen Rahmel, a lecturer at the University of Hong Kong, believes companies should not try to tackle the biggest issues first.

“Leave the ethical questions to the humans that are being augmented,” he says. “Give the solvable issues to the machines and keep the difficult ethical issues for the humans.

“Then, as you make progress, you can start moving more of the value chain into the electronic models.”
