Discuss AI ethics? It’s like trying to nail porridge to a wall!

(© ColdSmiling – Canva.com)

Billed as a roundtable discussion on ethics and AI, it seemed like an interesting challenge. I recently joined a group tasked with figuring out the why, what, when and how of identifying and creating a workable ethical structure in which AI could at last be used effectively and safely. Held under the Chatham House Rule to protect the identities and reputations of those taking part, the discussion, it is not unfair to say, collectively failed to achieve that objective. It could be argued, however, that we nevertheless reached a more important question: can the objective be achieved at all?

An important theme quickly emerged: should we even expect AI to have ethics? Perhaps it's better to accept that AI is unlikely to have an ethical base and will instead have a billion biases. It's therefore perhaps safer to operate it on the basis of its potential for untrustworthiness; after all, that's similar to how we deal with most humans in practice. Humans have a tendency to be selective and 'flexible' about ethics, mainly thinking that they apply to 'you', while allowing a fair degree of flex when it comes to 'me'. It is quite possible that companies, organizations, nations and the rest will end up applying the same view of ethics to AI in practice.

'The Recommendation on the Ethics of Artificial Intelligence', published by UNESCO in November 2021, sets a higher bar:

The inviolable and inherent dignity of every human constitutes the foundation for the universal, indivisible, inalienable, interdependent and interrelated system of human rights and fundamental freedoms. Therefore, respect, protection and promotion of human dignity and rights as established by international law, including international human rights law, is essential throughout the life cycle of AI systems.

This suggests that the ethical standards expected of AI systems may be far higher than we expect of ourselves. The side-bar to this thought is that if AI does end up with much higher ethical standards than humans — which is quite possible over time — humans may well not like the result.

Who decides on ethics?

As with the emergence of other recent technology developments, it transpired during the afternoon’s conversation that there was still little agreement on what AI — and generative AI in particular — actually is. This simply demonstrated that AI is still far too new to be considered much more than an experimental toy. In the early days of automobiles, the law required someone to walk in front carrying the near-universal sign for danger, a red flag. AI is still at this ‘red flag’ stage of both its development and humans’ understanding of it. Perhaps chatbot and co-pilot services are today’s AI equivalents of the person carrying the red flag.

The difficulty of identifying a universal ethical structure in which the technology and its applications can sit is compounded by regional and cultural variations. Within groupings such as 'western European', 'sub-Saharan African', 'east Asian' and the rest, people tend to share broadly similar views on ethics and their application, as well as on how to respond to those who act in contravention of them. But there are significant differences over what constitutes ethics once the different groups are compared, which means a universal approach to AI ethics is liable to be a tough goal to achieve.

One contributor suggested that the global tech giants might be best placed to overcome these regional variations:

Actually, we're now in a world where you probably need to get engagement with the big tech companies, because they are arguably further ahead than the individual laws of individual countries or groups of countries could ever get. They have a very important role to play.

But as an example of the core problem, another added a counter-thought:

There's been a lot written recently about the question of whether the 'trillion-dollar bulge bracket' tech companies should actually be allowed to have a say. Because, at the end of the day, they're the shovel makers in the Gold Rush. They are making far more money than the actual gold miners themselves.

The conundrum then, of course, is that placing too much emphasis on the position of the major technology suppliers when it comes to both ethics and law is treading on seriously shaky ground. It is a bit like building ethical standards around the whims and fancies of kings and despots over the centuries. Can companies be trusted to reconcile ethical positions, such as doing good and not harming people, with their imperative to deliver shareholder value? As the designers and developers of the technologies in question, they are in a perfect position to look out for their self-interest with little restraint on their actions.

Finding ethical building blocks

One of the suggestions to emerge, therefore, was that rather than trying to pin down ethical practices per se, one could perhaps look to other areas for models to follow. For example, might the notion of 'common sense' serve as the basis for outlining, if not actually 'ethics', then some standard of commonly acceptable behavior in decision making and operations for AI systems? It is, after all, one of the few factors in life that most people agree exists.

But while most people agree common sense exists in some form, pinning down what that form might be stumbles over the same problems as 'ethics' itself. One person's 'common sense' is another's 'total irrelevance' and a third's 'blinding stupidity', and there can often be even less regional consistency in those opinions.

Instead, the roundtable found a good degree of agreement that there could be some justification in looking at the cultures and legal structures of those 'collectives' of similarly minded nations mentioned above. In short, could 'The Law' be such a basis? From that, it may be possible to build a set of ethical values within which AI systems can operate, at least within a geographically, culturally and politically similar area.

This will, of course, mean that future AI systems will need to be trained in the legal structure of the region as a starting point, for that at least is largely written down and accessible as a training source. To this then needs to be added the corpus of regulation, compliance and, yes, best practices. From all that, AI systems might then be able to deduce what a likely acceptable and day-to-day practical ethical structure might be.
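To make the shape of that idea a little more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the RegionalCorpus structure, the sample passages and the naive keyword overlap are invented stand-ins for what a real system would do with large-scale retrieval, fine-tuning and human review.

# A hypothetical sketch: treat a region's written law, regulation and best
# practice as the reference corpus an AI system consults before acting.
# Nothing here is a real system; it only illustrates the idea in the text.

from dataclasses import dataclass, field

@dataclass
class RegionalCorpus:
    """Written sources for one legal/cultural region (all samples invented)."""
    region: str
    statutes: list[str] = field(default_factory=list)
    regulations: list[str] = field(default_factory=list)
    best_practices: list[str] = field(default_factory=list)

    def all_sources(self) -> list[str]:
        return self.statutes + self.regulations + self.best_practices

def flag_concerns(corpus: RegionalCorpus, proposed_action: str) -> list[str]:
    """Return corpus passages that share terms with a proposed action.
    Naive word overlap stands in for real retrieval and legal analysis."""
    action_terms = set(proposed_action.lower().split())
    return [p for p in corpus.all_sources()
            if action_terms & set(p.lower().split())]

# Usage: an invented, EU-flavored corpus and a proposed automated decision.
eu = RegionalCorpus(
    region="EU",
    statutes=["automated decisions affecting individuals require human oversight"],
    regulations=["profiling of personal data must be disclosed to the individual"],
)
print(flag_concerns(eu, "use profiling data for automated credit decisions"))

The point of the sketch is only that written-down legal sources are machine-readable in a way that 'ethics' is not, which is what makes the law a plausible starting point.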

One speaker observed that ethics and responsibility have a close relationship, recalling a law professor's suggestion that AI could be treated as holding the position of an employee. They went on:

It wasn’t a perfect solution, but it seemed to be a workable one, that the AI was the employee of the person who switched the AI on, and therefore, if the AI did something bad, as if one of your servants did something bad, you became responsible for not controlling the actions. I think responsibility and accountability are huge areas.

Where the buck stops

This thinking is certainly going to have more relevance the more intelligent AI systems become. At the moment they provide more 'augmented' intelligence than artificial intelligence, so it can be said that, at least for now, a human has the responsibility. But does there come a point where humans abdicate that responsibility? One participant commented:

If there’s an absolute goal, is it the software developer? Is it the car manufacturer, or the individual driving the vehicle or just in the vehicle at that time? Those things are going to be tested in court, and then it won’t be that long before those who are responsible for initiating the action will be saying their job was to be guided by the AI system. Their only position is going to be that, ‘The AI system advised me to do this, I just push buttons.’ There’s already that sort of position in the American military.

This is certainly going to be where some of the ethics lie, for if the individual pushing the buttons feels they have no part in the decision made by an AI system, and no right to override or question it, then why not just automate the button-pushing process and have done with it? One person added:

There’s lots of people who don’t want to override a decision by an AI system, because they just feel that then they would be culpable.

But, as was then pointed out, that leads to the next level, which is that, certainly in English law, there is no case if you cannot enforce a judgment. And how do you enforce a judgment on an AI system? Pull the plug on it?

A precedent from history?

While searching for some guidance on where to pin the ethical responsibility, one of those present threw in a lesson from ancient history:

At university I went to a lecture on Roman law. It has taken me about 25 or 30 years to find a possible application for ever learning anything about Roman law. But bizarrely, I think there is an application for it in relation to artificial intelligence.

Put simply, because of the way Roman society was structured, a system grew up in which the head of a family, the paterfamilias, saw to it that those traders with whom the family did business in the marketplace would readily trade with the slaves of that family as its official agents. The top slaves, the most trusted, were understood to be acting for the paterfamilias and therefore in their place. Could AI agents be the modern equivalents of the ancient world's slaves?

Is that a way in which all of us (designers and developers, service providers, business users and the general public) can move towards building an AI-oriented ethical model? It might well be built on a mixture of English law, based as it is on case-stated precedents ultimately tried and established in the highest courts, and Roman law, which establishes the relationship between each AI system 'slave' and its 'master'. As the ultimate responsible party, the master, or paterfamilias, will inevitably end up being a human, and that is where the next set of great legal arguments will certainly arise.

But maybe this could provide a basis for building the ethical structure for AI use that we are certainly not going to pluck, neatly typed out, from some existing book somewhere.
