The impacts and risks of artificial intelligence, along with its potential to reshape jobs and affect our lives in countless ways, have become a major topic of discussion.
These risks are diverse, ranging from misinformation and bias to threats that could undermine democracy, a philosopher and expert on AI ethics said recently.
Speaking to Daily Sabah, Mark Coeckelbergh, professor of philosophy of media and technology at the University of Vienna and the prolific author of more than a dozen books, evaluated the question of AI regulation and transparency while cautioning against the risks posed by this proliferating technology.
“I think it’s not so much that suddenly there will be a huge event, like an atomic bomb going off; that’s not how we should think about it. It’s more of a stacking up of all kinds of risks,” he said in an exclusive interview on the sidelines of the recent TRT World Forum, held in Istanbul.
Coeckelbergh, who is also a member of the High-Level Expert Group on Artificial Intelligence for the European Commission and co-founder of “Using AI for Good,” was one of the speakers at the panel on AI’s impact on politics and society.
Detailing the scope of potential risks, he said: “Some of them are misinformation, an important one, and also bias. Biases in the data can, through AI, lead to more discrimination and less inclusiveness. And there is also responsibility.”
“The systems have more autonomy, for example, if we make military machines that can do everything by themselves,” he added.
Citing employment as another risk, the professor said the one he has especially focused on since last year is democracy, pointing to his book “Why AI Undermines Democracy and What To Do About It.”
He explained that the book addresses issues such as “the influence on elections, and manipulations of elections, manipulations of voters basically through AI and social media.”
“But there is also a less visible kind of influence. We need democracy, we need knowledge and we need truth, and if we are not sure anymore what’s true or not, with a lot of fake news, misinformation, manipulation and so on, this creates the kind of environment where democracy becomes less likely to work,” he remarked.
Elaborating on concerns over misinformation and what more large tech companies and others could do to tackle it, he highlighted the need for transparency and regulation to mitigate its harmful effects.
“Yeah, I think we cannot trust big companies to tackle these things alone. We need regulation, and we can, for example, be more transparent about when AI is used to create fake videos, even fake personas,” he said.
He went on to say that it is now possible to create influencers and news readers that are not human at all.
‘Being transparent’
“On the one hand, it is exciting what this technology can do, but it is also scary if people no longer know whether things are real or not.”
“So I think we should focus on being transparent, we should monitor what’s going on, especially on these social media platforms, and we should not leave moderation and content only to these platforms,” Coeckelbergh said.
“So what happens now is that here in Europe, in Türkiye, we kind of have to take whatever comes from California and we don’t have much influence. So there is a geopolitical situation around technology now that is very imbalanced,” he pointed out.
“As McLuhan said, ‘the medium is the message,’ so it is important to regulate technology in such a way that it actually doesn’t have all these bad effects that we talked about,” he added.
Awareness, education
Moreover, he underscored the need to raise awareness among citizens and users “to make them aware that these systems, such as ChatGPT, for example, have their limitations.”
The professor offered a humorous but relevant example, saying that students should know they cannot simply copy and paste such a system’s output into an essay.
Answering a question on what could be done at the individual level, he acknowledged that while people can find all kinds of informal yet informative YouTube videos, for example, there is still a need for a more professional approach.
“I think it’s important still to have some steering from professional educators, like in schools, and also from parents. So we also need to educate teachers and parents about all this, so that they can somehow help students and children to integrate technology into their lives in a way that makes sense, that makes their lives meaningful,” he noted.
He also pointed to growing reliance on technology, noting that it always has some “unintended effects.”
Recalling the emergence of email, he said: “Thinking about email, when it appeared … at first sight, it seems ‘easy,’ we don’t have to write a whole letter, we can quickly type up something and send it,” but now we are left with full inboxes.
“So I think that shows how technology always has these unintended effects. And as a philosopher of technology, I think it is especially important to warn people that tools are not just tools; they are not just doing what they are meant to do. Whatever it is, automated driving, communication, having a conversation, they also have all these other effects which we might not foresee at the moment,” he highlighted.
“So that’s why I think it’s good for technical people to work together with ethics people and policy people, to try to make sure that technology is more ethical and more responsible,” he added.
AI bias, financial bias
Touching upon AI bias, and financial AI bias in particular, where AI systems could, for example, be used to decide who receives a loan, as well as the potential transformation of banking systems by AI in the future, the professor said this is “a very good example” of how the technology affects people’s lives and can make an existential difference.
“Not getting a loan makes a huge difference for a family for example,” he noted.
Explaining that banking and insurance systems, which analyze a lot of data, would also be transformed by AI, he suggested it is “important to regulate so that our private data are not constantly taken, that we are not constantly under surveillance.”
“People just go online and shop, and people have to fill in online forms for their insurance companies … so they don’t always realize that behind these are algorithms and also AI,” he said.
“So there again it’s important to create awareness but also to regulate all these sectors, maybe in different ways, but also in ways that do not hamper innovation. Because AI can help in some ways, but I think it needs to be done in an ethical and responsible way, in ways that do not undermine democracy and that lead to more good for society,” he concluded.