
Adela Cortina, political thinker: ‘I think it is extremely dangerous to say AI can solve everything’

Adela Cortina, 77, never imagined that she would write a book on artificial intelligence (AI). “Those of us who work in the field of ethics are very interested in the progress of knowledge. Science and technology, if well directed, are extraordinary for humanity,” says this native of Valencia, in eastern Spain. “The thing is that sometimes they are and sometimes they are not.” A discussion of this is the focus of her new book, ¿Ética o ideología de la inteligencia artificial? (Artificial Intelligence: Ethics or Ideology?, available in Spanish).

A professor of ethics and political philosophy at the University of Valencia, and recipient of honorary degrees from eight Spanish and foreign universities, Cortina is one of the best-known political thinkers in Spain. She was the first woman to join the Royal Academy of Moral and Political Sciences and is the author of more than 30 books, including Aporophobia: Why We Reject the Poor Instead of Helping Them and What Are Ethics Really For?, which won Spain’s National Essay Award in 2014. Now, Cortina is looking at the technology that has moved to center stage since ChatGPT burst onto the scene two years ago.

Q. Why are you concerned about AI?

A. I think [Karl-Otto] Apel and [Jürgen] Habermas were right when they said that there are three areas of interest when it comes to knowledge: the technical, the practical, and the emancipatory. When the technical interest is governed by the practical or moral interest, it leads to a true emancipation of society. AI is a scientific-technical knowledge that must be directed. If those who control it are large companies that want economic power or countries that want geopolitical power, then there is no guarantee that it will be used properly. If this technology affects all of humanity, it must benefit all of humanity.

Q. Do you think the AI debate is ideologized?

A. In the book, I focus on the two main positions: that of those who fear AI and believe it will be the source of all evil, and the enthusiast stance — that of the transhumanists and posthumanists, who believe that AI will help us reach an absolutely happy world. Some, like Ray Kurzweil [a director of engineering at Google], even give specific dates: he says that by 2048 we will have done away with death. That would be ideology, in the traditional sense of the term: a distorted vision of reality that is maintained to pursue certain objectives. That is not acting ethically.

It seems to me extremely dangerous to lie, to claim that we are going to solve everything with AI. The problem is that such a position rests on the authority that science wields, which makes people take it seriously. I am a follower of the Frankfurt School, which holds that science and technology are a marvel, but that when they slip into ideology, having become a productive force, the meaning of the matter changes.

Q. What ethical principles should govern reliable AI?

A. An emancipated society is one that is free of ideologies, and for that to be the case, it must be endowed with ethics. The basic principles are nonmaleficence (do no harm), beneficence (do good), autonomy, and justice. In addition to these are the principles of traceability and explainability, a complicated issue when dealing with algorithms, and accountability. And then there is the precautionary principle, which we Europeans take seriously.

The philosopher Adela Cortina at the Hotel AC Recoletos in Madrid. Photo: Pablo Monge

Q. Are we legislating properly on this issue?

A. We Europeans are often labeled as excessively normativist, but I think it is not wrong to be cautious when human lives are at stake. That should not prevent further research. We need an ethic of responsibility, and deciding how far research should go is not simple.

Q. In the book, you discuss whether AI can be an autonomous subject or whether it will always be a tool. Is it time to tackle that?

A. Specialists say that, to date, we have not reached general artificial intelligence, which would be comparable to that of human beings. For that to be the case, AI would have to have a biological body, because that is the way to have meaning, intentionality, and so on. What we do have are special artificial intelligences, capable of doing things extremely well, even better than us when it comes to certain tasks.

I think it is very important that we know where we are and that we start a serious debate about it. Right now, AI is a tool, and therefore it has to be used for one purpose or another, but it should never replace human beings. Neither teachers, nor judges, nor doctors can be replaced by algorithms. Algorithms do not make decisions; they provide results. Responsibility for the ultimate decision lies with a person. One of the worst consequences of AI would be if it turned machines into life’s protagonists. We have to be very careful because we tend to get complacent, and when we are offered a result, we may be tempted to adopt it without further ado.


Q. Does it make sense to talk about machine ethics?

A. For some time now, we have been trying to create ethical machines that have a series of values built into them. I think it is very interesting that these values can be embedded in some way, whether in the algorithms that drive vehicles or in the robots that care for the elderly, so that they can make decisions without a human guiding them at all times.

Q. You devote the last part of the book to the relationship between education and AI. Why?

A. Education is the key to society, and it is very neglected. In China, they are very focused on applying AI to education because, as they say, if we teach people yesterday’s things, we lose tomorrow. For me, the key issue in education is justice: there are a lot of people who don’t have access to these tools, and that makes the gap bigger and bigger [between the haves and the have-nots].

I am also concerned about autonomy. One of the great tasks of the Enlightenment is to shape people’s autonomy, so that they know how to direct their own lives. We have to educate so that there is a critical, mature public that moves by itself, by its own convictions; one that does not allow itself to be carried along like sleepwalkers, as Kant said, but is guided by its own reason. And that is very difficult in a world in which the platforms try to keep us on them for as long as possible so they can collect our data. They are making teaching more superficial and people less and less autonomous. I think autonomy is in danger, and that’s bad for democracy.


