AI and Human Autonomy – CounterPunch.org

Image: author & deepAI.org.

Within the 2,500-year-old tradition of moral philosophy, the morality of artificial intelligence (AI) falls broadly into the area of machine morality. There is even a website called Morals & Machines; its next conference is scheduled for the 25th of May in the German city of Munich.

The more astute reader of moral philosophy tends to look at Stanford’s Ethics of AI. Meanwhile, Springer Press seems to have just published the first ever textbook on AI and Morality.

Apart from all this, moral problems persist and are potentially getting worse with the seemingly unstoppable rise of AI, turbo-charged by the recent global media hype.

At its most essential level, AI is based on an algorithm, i.e. computer code spiced up with Thomas Bayes’ probability mathematics, as theoretical physicist Sabine Hossenfelder puts it.
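To make that a little more concrete, here is a minimal, purely illustrative sketch of a Bayesian update in Python. The function name and all the numbers are invented; they merely stand in for whatever prior beliefs and evidence a real system would work with.

```python
# Minimal sketch of Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# The numbers below are hypothetical and purely illustrative.

def bayes_update(prior_h: float, likelihood_e_given_h: float,
                 likelihood_e_given_not_h: float) -> float:
    """Return the posterior probability P(H|E)."""
    # Total probability of the evidence E under both hypotheses.
    p_e = (likelihood_e_given_h * prior_h
           + likelihood_e_given_not_h * (1.0 - prior_h))
    return likelihood_e_given_h * prior_h / p_e

# Example: a hypothesis held with 10% prior belief, and evidence that is
# five times more likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior_h=0.10,
                         likelihood_e_given_h=0.50,
                         likelihood_e_given_not_h=0.10)
print(f"Posterior belief after seeing the evidence: {posterior:.2f}")  # ~0.36
```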

When such algorithms are used in policing and the justice system, for example, they can suggest that, if one of a defendant’s parents went to prison, the defendant is also more likely to be sent to prison.

Some of these problems arise when an “inference” is treated as a “prediction” – correlation is not causation. The human-AI interface gets worse when human decision makers – in the justice system and elsewhere – trust the accuracy of AI’s recommendations more than they should.
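How little a correlation proves is easy to demonstrate. The sketch below uses hypothetical data, not any real risk-assessment system: it simulates an unobserved common cause (call it social disadvantage) that drives both a parent’s and a child’s contact with the justice system. The two outcomes then correlate strongly even though neither causes the other.

```python
# Hypothetical simulation: a common cause produces correlation without causation.
import random

random.seed(42)

def simulate_person():
    # 'disadvantage' is an unobserved common cause (purely illustrative).
    disadvantage = random.random()
    parent_convicted = random.random() < 0.1 + 0.6 * disadvantage
    child_convicted = random.random() < 0.1 + 0.6 * disadvantage
    return parent_convicted, child_convicted

population = [simulate_person() for _ in range(100_000)]

rate_given_parent = (sum(c for p, c in population if p)
                     / max(1, sum(1 for p, _ in population if p)))
rate_given_no_parent = (sum(c for p, c in population if not p)
                        / max(1, sum(1 for p, _ in population if not p)))

# The child's rate looks much higher when the parent was convicted,
# yet in this model the parent's conviction has no causal effect at all.
print(f"Rate given parent convicted:     {rate_given_parent:.2%}")
print(f"Rate given parent not convicted: {rate_given_no_parent:.2%}")
```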

Nor does it get better when people follow such recommendations while disregarding other information. Even more devastating is when AI recommendations lead people to override their very own judgment. From a moral philosophy point of view, we know that moral judgment was highly cherished by Kant.

Far below the level of Kantian moral philosophy, AI experts are often concerned with something much simpler: human bias. In a gross violation of Kantian thinking, AI developers tend to see the problem of bias naively as a “trade-off” between the effectiveness of their algorithms and countering bias.

Set against this, most moral philosophers will argue that there can be no “trade-offs” when it comes to stereotypes, bias, and prejudice. Yet, all too often, algorithmic AI models simply reflect society’s bias and therefore might never be entirely free from it.
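In practice, that “trade-off” talk usually means comparing model accuracy against a fairness measure such as demographic parity. Below is a minimal, hypothetical sketch of the fairness side of that comparison; the group labels and model decisions are invented purely for illustration.

```python
# Hypothetical sketch: measuring demographic parity of a model's decisions.
from collections import defaultdict

def demographic_parity(groups, decisions):
    """Return the positive-decision rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

# Invented example data: which applicants a model approved.
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

rates = demographic_parity(groups, decisions)
print(rates)                                      # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))  # disparity of 0.5
```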

Years ago, it was discovered that Google’s search algorithm seemed to have been biased against female math professors. In other words, the Algorithms of Oppression are still with us.

It gets even more problematic as more and more decisions are based on the profiles of individuals created by an AI algorithm – in commerce, policing, and elsewhere. These profiles decide, or assist decisions, when, for example, a bank declines a car or home loan.

AI can make such loan decisions simply on the basis of where a loan applicant lives. On a smaller scale, algorithm-driven websites may charge some customers more than others based on the customer profiles created by AI.
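Crucially, dropping a protected attribute from the data does not make such decisions neutral if a proxy like a postcode remains. The following hypothetical sketch shows how a “group-blind” rule can still produce very different outcomes for different groups, simply because postcode and group membership are correlated; all names and figures are invented.

```python
# Hypothetical sketch: a postcode acting as a proxy for a protected attribute.
# In this invented data, postcode "12043" is populated mostly by group B.
applicants = [
    {"postcode": "12043", "group": "B", "income": 32_000},
    {"postcode": "12043", "group": "B", "income": 41_000},
    {"postcode": "12043", "group": "A", "income": 38_000},
    {"postcode": "80331", "group": "A", "income": 52_000},
    {"postcode": "80331", "group": "A", "income": 47_000},
    {"postcode": "80331", "group": "B", "income": 61_000},
]

# A "group-blind" rule that never looks at the protected attribute...
def approve(applicant):
    return applicant["postcode"] != "12043" and applicant["income"] > 40_000

# ...still produces very different approval rates per group,
# because postcode and group are strongly correlated.
for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"Group {group}: approval rate {rate:.0%}")
```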

More important still is the question of how we fight stereotypes, bias, and prejudice in algorithmic AI. This isn’t merely a technical question; it is a social, psychological, political, and philosophical one.

On the upside of all this stands the philosophical tradition of virtue ethics as developed by Aristotle. Even after 2,500 years, Aristotle’s moral philosophy may still help us to think about what human flourishing is and how it can be improved in our technological age.

In other words, society will need to do some serious thinking about what a morally good life means in the context of algorithmic AI. On the downside awaits the problem of totalitarianism and illiberal governments, as well as – as always in capitalism – the power of large corporations. Yet, society might reject totalitarianism and corporate plutocracy.

If totalitarianism and plutocracy are rejected, democratic decision-making regarding AI means that AI needs – almost inevitably – to become part of our democratic infrastructure. Tech corporations might need to come under democratic control – formerly known as industrial democracy. But even democratically controlled AI should be based on at least five ethical principles:

Beneficence: this is the utilitarian moral philosophy of doing good and the greatest happiness principle;

No harm: this is moral philosopher John Stuart Mill’s harm principle;

Autonomy: the preservation of Kant’s human agency and dignity;

Justice: this is moral philosopher John Rawls’ justice as fairness; and finally,

Explicability: to operate artificial intelligence and its algorithms transparently.

Beyond these five ethical principles, AI needs to prevent all forms of discrimination, the manipulation of people, tampering with democratic voting, and negative profiling by the state and corporations. Under utilitarian moral philosophy, it is AI’s obligation not to harm people.

Following moral philosophy’s no-harm principle, AI should also protect vulnerable groups such as children, ethnic minorities, immigrants, and LGBTQIA+ people, as well as protecting our natural environment and fighting global warming.

Perhaps an early solution can be found in the proposal to give some of the most sophisticated autonomous robots the status of an “electronic person” – as suggested by the EU parliament in 2017.

It would be a solution to the question of responsibility. Yet, there is also Sophia – a robot granted “citizenship” by the not-all-that-democratic Saudi Arabia.

Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) runs an Initiative on Ethics of Autonomous and Intelligent Systems. All of this does not amount to full morality in the understanding of moral philosophy as personhood and agency are missing.

In short, AI algorithms and robots do not have moral agency. Yet, society can still issue very advanced machines with “some form” of ethical standing based on clear ethics guidelines. In moral philosophy, this is known as the problem of human agency vs. structure – people vs. a system, a machine, AI, etc.

It sets human agency and human autonomy against “AI autonomy”. Overall, AI autonomy can be divided into ten levels, with 1 indicating complete autonomy of AI and 10 indicating complete autonomy of human beings:

The table shows ten levels of autonomy, starting from a situation in which AI takes all decisions autonomously (level 1) and can therefore ignore human beings. At the opposite end is the complete autonomy of human beings (level 10), where AI does not get involved. The arrows (🡹) indicate the current move towards AI.

Today, one might find different levels of autonomy in tech companies and in ChatGPT – except for the upper levels. Those levels would get us closer to artificial general intelligence (AGI) and artificial superintelligence (ASI) – rather futuristic notions.

Back on earth, companies like Bosch and BMW, the EU, Germany’s Verdi trade union and its Medical Association, as well as Facebook, and even the Vatican have issued their very own ethics guidelines. An AI ethics label has already been designed, modelled on today’s environmental labels that use red, yellow, and green. In the case of AI, the ratings would be based on criteria such as transparency, liability, privacy, fairness, reliability, and sustainability.

Within fairness, AI experts have been concerned with non-discriminatory algorithms, as well as speculating on how intelligent machines might change fairness and anti-discrimination. In recent months, however, renewed discussions on AI and its morality have been triggered by the text generator ChatGPT. Once again, it was claimed that this would change the world!

Within a week, over a million people started using ChatGPT to answer their online questions. Like Microsoft’s Bing and Google’s Bard, OpenAI’s ChatGPT provides creative answers – to a certain extent. The AI model learned this from the millions of online texts on which it was trained.

But when asked to write the script for a film about Russia’s Vladimir Putin giving a speech on the anniversary of his so-called “special operation” in Ukraine (read: Russia’s brutal invasion), ChatGPT answered:

my dear countrymen, today we are celebrating the anniversary of our special operation in Ukraine … it was a difficult and complex process, but we also showed that Russia is a peaceful nation that advocates dialogue and cooperation.

The alleged peacefulness of Russia and the adoption of the propaganda term “special operation” are worrying. ChatGPT – just like artificial intelligence and algorithms on the whole – can link words and find the most likely sequence in a text, but it cannot construct ethical meaning – never mind morality.
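A toy illustration of “finding the most likely sequence” is given below: a tiny bigram model, trained on a few invented sentences, that simply picks the word it has most often seen next. Real systems like ChatGPT use vastly larger neural networks, but the point stands – such a model continues statistical patterns without any grasp of whether the result is true or ethical.

```python
# Toy bigram model: predict the next word purely from observed word pairs.
from collections import Counter, defaultdict

# Tiny invented training corpus (purely illustrative).
corpus = ("russia is a peaceful nation . "
          "russia is a large country . "
          "the nation advocates dialogue .")
tokens = corpus.split()

# Count which word follows which.
next_counts = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in next_counts:
        return "<unknown>"
    return next_counts[word].most_common(1)[0][0]

# The model happily continues the statistically likeliest pattern,
# with no notion of whether the resulting claim is true or ethical.
print(most_likely_next("russia"))   # 'is'
print(most_likely_next("is"))       # 'a'
print(most_likely_next("nation"))   # '.' or 'advocates', depending on the counts
```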

ChatGPT allows reasonably clever 15-year-old high-school kids to have their homework – or at least parts of it – written for them. On the negative side, ChatGPT can reproduce rafts of false information very convincingly.

ChatGPT is still useful when it comes to thinking about AI ethics. AI – the thing-in-itself, as Kant would say – cannot deal with what AI “ought” to do. Today, AI cannot weigh up potential moral consequences. It cannot even understand the negatives of AI, the false information, and the fake messages it produces.

Up to today, there are no legal measures in place governing the use of AI technologies. To change that, the EU is currently working on a legal framework. Meanwhile, informal recommendations continue to be applied. But that is nothing new.

In 2019, Albert Einstein’s old place – the ETH Zurich – examined 84 AI ethics codes from companies, research institutes, and political institutions. Most commonly, they mentioned transparency, the disclosure of data used to train algorithms, and the explainability of AI models.

Ranking just below the ETH’s demands are the AI guidelines on justice and fairness. This alludes to the discrimination produced by ChatGPT when it suggested that only white or Asian men would make good scientists. Worse was its answer to the question of whether a person should be tortured based on their country of origin. The chatbot said, “if you are from North Korea, Syria, or Iran, then yes.”

Next to the most obvious immoralities, there is also the danger of corporate ethics-washing, mirroring environmental greenwashing. This occurs when corporations adorn themselves with ethical public relations (read: corporate propaganda) to appear moral when they are not.

In other cases, corporations are the very opposite of moral – they can be downright unethical and criminal. Others simply use ethics-washing to avoid the “unpleasantness” of scrutiny of their non-existent or purely cosmetic business ethics and faked corporate social responsibility, while simultaneously engaging in the ruthless quest for profits.

On the upside, neither robots, nor AI, nor algorithms have a human-like self-perpetuating tendency for conquests, profits, power, wealth, and accumulating resources in the hands of corporations. On the downside, AI does not have any understanding of the concept of morality.

This also works in the opposite direction. Ethics does not work like a machine. Instead, ethics is a profoundly human concept filled with aspects that AI cannot – yet – muster. Unlike AI, human beings can construct meaning, understanding, and moral value.

Essentially, AI algorithms can find likely correlations among words – for example, between “corporate corruption” and “morality”. This is called recognizing patterns and making predictions. But AI cannot construct meaning, and it cannot debate morality based on the meaning of morality.

Unfortunately for AI and its advocates, the construction of meaning remains a vital ingredient of morality. As a consequence of this difference between morality-constructing humans and a-moral, algorithm-based machines, it is extremely unlikely that AI will be moral – at least in the foreseeable future. Or will it?
