Shannon Vallor (left) on stage. Credit: Ainali/Wikimedia Commons
Professor Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh, and author of the new book, “The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking” (Oxford University Press).
You describe artificial intelligence (AI) as a mirror of the human mind. Can you explain that analogy?
Media and tech industry narratives often portray AI as a strange new kind of mind or entity, one that stands apart from people, and will soon surpass, threaten or outcompete us. But none of these things are true.
AI technologies are not intelligent minds. They are mathematical mirrors of our minds. Artificial “intelligence”, a term which must be used very loosely here, doesn’t stand apart from human intelligence, or even replicate it. AI tools are only reflections of it, laboriously fabricated by human beings from human-generated data…
Think of the surface of an AI mirror as its algorithm. The complex machine learning algorithm that gives a trained large language model [like ChatGPT and Gemini] its seemingly intelligent capabilities is just a set of mathematical instructions, which tell it how to generate a set of numbers from other numbers. The particular mathematical properties of an algorithm are like the physical properties of a glass mirror’s surface – they determine exactly how the mirror will reflect the “input” it gets. For a glass mirror, the input is light. For a large language model, the input is language data. When we initially train that model, we do it by feeding a ton of human language data – books, essays, blog posts, social media posts, you name it – into the algorithm. That’s shining a lot of light on it … The goal of training a language model is to make its algorithm better at reflecting in new outputs the same patterns found in the original data – in this case, the patterns that link words to other words …
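To make that concrete, here is a deliberately tiny sketch of the idea in code – a toy illustration only, nothing like a real large language model, which uses a neural network with billions of parameters rather than simple word counts. “Training” this toy just records which word follows which; “generation” just replays those recorded patterns:

```python
import random
from collections import defaultdict

# A toy "language model": training is nothing but recording which word
# follows which in the input text -- the patterns the model will mirror.
def train(text):
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

# Generation replays those recorded patterns: each word appears only
# because it followed the previous word somewhere in the training data.
def generate(follows, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat on"
```

Everything the toy outputs is a rearrangement of patterns already present in its input – a miniature version of the mirroring Vallor describes, though real models capture far richer patterns than adjacent word pairs.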
Just as a glass mirror is not a living body, even if it reflects the likeness of one, an AI mirror is not a mind even if it reflects the styles and patterns of thinking found in our data. And it’s definitely not a “superhuman” mind, one that understands and reasons better than we do, as some have claimed. AI mirrors don’t reason or understand at all.
This is why today’s AI tools can often solve complex logic problems yet be unable to reliably defend their solutions. It’s why they can give very impressive answers to complex questions that follow a familiar language pattern, while failing at much simpler reasoning tasks that use uncommon patterns. For example, there’s a well-known logic puzzle about a person who needs to take a wolf, a goat and a cabbage safely across a river in a boat. It takes several steps to solve, because the boat can carry only one item at a time and you can’t leave the wolf alone with the goat, or the goat with the cabbage, without one eating the other. But researchers noticed that these models were failing at a much simpler version of the task – namely, how do you take a goat across a river in a boat? Instead of saying “just put the goat in the boat and go”, even the latest AI models suggested many back-and-forth trips with nonsensical instructions. They were simply mirroring the complex pattern of the original puzzle, even though the result was illogical.
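For contrast, here is what actual step-by-step reasoning over that puzzle looks like in code – again a sketch of ours, not anything from the interview: a brute-force search that checks every legal move against the puzzle’s constraints. Given only a goat, it returns the single obvious trip; given all three items, it finds the classic seven-crossing solution, because it reasons over constraints instead of mirroring a familiar pattern:

```python
from collections import deque

# Pairs that cannot be left alone on a bank without the farmer present.
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]

def safe(bank):
    return not any(pair <= bank for pair in UNSAFE)

# Breadth-first search over the puzzle's legal moves, so the first
# solution found is also the shortest one.
def solve(items):
    all_items = frozenset(items)
    start = (all_items, "left")          # (items on the left bank, farmer's side)
    goal = (frozenset(), "right")
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer), path = queue.popleft()
        if (left, farmer) == goal:
            return path
        here = left if farmer == "left" else all_items - left
        other = "right" if farmer == "left" else "left"
        # The farmer crosses alone, or with one item from the current bank.
        for cargo in [None, *here]:
            new_left = set(left)
            if cargo:
                (new_left.remove if farmer == "left" else new_left.add)(cargo)
            new_left = frozenset(new_left)
            # The bank the farmer leaves behind must stay safe.
            left_behind = new_left if other == "right" else all_items - new_left
            state = (new_left, other)
            if safe(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [f"cross {other} with {cargo or 'nothing'}"]))
    return None

print(solve(["goat"]))                     # ['cross right with goat']
print(solve(["wolf", "goat", "cabbage"]))  # the classic seven-crossing solution
```

Because the search is breadth-first, the goat-only case comes back as a single move: the solver never invents extra trips, since each move must be legal and each state is checked against the goal.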
The point is: mirroring statistically common patterns is all that these systems ever do – even when they get the answer right. When we mistake a tool like ChatGPT for an intelligent, thinking mind, and treat it as a fearful new competitor for humans, we are not unlike a kitten that tries to fight its own reflection in a mirror. We have mistaken a reflection for the real thing.
Yet, you believe we are ceding power to AI.
That’s exactly right – we are ceding power whenever we rely on AI to take the place of our thinking and deciding. Or we have that power taken from us, not by AI but by the human institutions that decide to replace us with it. Now, there are circumstances where it’s OK to let go of a power … For example, we routinely use AI tools to take the burden of mundane decisions away from us. But what kind of power are we giving over to AI when we let it determine who lives or dies in war, who gets to be hired or healed or schooled, who goes to jail and who goes free? What kind of human power do we give up when we let AI tell us which information in a university lecture is important to remember, or what images of the future should look like?
A 1955 short story by Isaac Asimov, “Franchise”, anticipated the question of what kinds of power are too valuable to surrender to machines. Today that question is more vital than it has ever been.
Will this increasing reliance prevent progress?
All an AI mirror can reflect is the past, because that’s all the data it has. It has no ability to reflect our capacity – which we have used so many times in history and urgently need to use now – to do something we’ve never done, to remake ourselves anew. AI puts us in danger of forgetting that capacity at the very moment that we need it most.
What AI mirrors show us is not a window into the future, but reflections of a past that we need to move beyond … That’s why it’s so dangerous to rely on AI to replace our thinking, or plan our futures for us. That will only ensure that we stay on the very same paths we are on – culturally, economically, politically and environmentally. And we know that those paths are unstable and unsustainable for many reasons. We desperately need to carve new paths into the future, which is something that can only be done with human moral imagination and wisdom.
How do we move forward from here?
The good news is that AI is not the problem. AI doesn’t have to go away, although we need to use it much more selectively and wisely … I think so many of us are willing to believe in “superhuman” AI because we’ve lost our faith in human power and potential. We look around and see people doing increasingly self-destructive things, and corrupt leaders making the worst decisions for our future, and we think “maybe the machines will do better.”
The machines won’t do any better, because they are designed to reflect every decision pattern that brought us here. But if you look at human history, it’s not a story of staying the same. There are always wars, and short-sightedness, and cruelty and destruction, but there is also resistance, reinvention, reimagining, recreation everywhere. We don’t stay in the same place, in the same state. We don’t just make the same art and read only the old books, we don’t all stagnate and mindlessly regurgitate the ideas of our mothers and fathers … We tell new kinds of stories. We make new music. We propose new and better ways of living together.
My book is about reclaiming the future that for humans is always open – never closed, never finished, never hopelessly condemned to mirror the past. AI can be a part of that future, but it can’t draw us the map, and it can’t lead the way. If we can remember who and what we are, and what humans can do for ourselves and one another, that future can still be ours.
This article is a preview from New Humanist’s autumn 2024 edition.