The question of whether machines could ever be conscious no longer lives only in science fiction. It has moved into boardrooms and public debate.
Chatbots speak fluently, cars drive themselves, and software offers guidance. For many people, it now feels natural to ask whether something on the other side of the screen might be aware.
That feeling, however, rests on shaky ground. The tools needed to answer the question simply do not exist. According to Dr. Tom McClelland, a philosopher at the University of Cambridge, those tools may not exist for a very long time – if ever.
A place of uncertainty
When it comes to artificial intelligence and consciousness, the most honest position may be admitting how little we know.
Claims about conscious machines tend to move faster than the science behind them. There is no accepted test for consciousness in humans, let alone in software.
We cannot point to a scan, a signal, or a checklist that tells us awareness has appeared.
Dr. McClelland argues that this gap is not a temporary inconvenience. It reflects a deeper problem. We still lack a solid explanation of what consciousness is or what causes it.
Without that foundation, determining whether a machine is conscious becomes guesswork. For now, and perhaps indefinitely, uncertainty is not a weakness. It is the only position that fits the evidence.
AI consciousness and sentience
Much of the debate around AI rights assumes that consciousness itself is the ethical tipping point, but Dr. McClelland disagrees.
Awareness alone, he says, does not automatically create moral concern. What matters is a narrower concept called sentience.
“Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state,” said Dr. McClelland.
Sentience goes further. It involves experiences that feel good or bad, such as pleasure, pain, enjoyment, or suffering. Ethics enters the picture only when those experiences exist.
“Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in,” he said.
“Even if we accidentally make conscious AI, it’s unlikely to be the kind of consciousness we need to worry about.”
A machine that sees, navigates, or plans may be impressive, but it does not automatically deserve moral concern. The ethical landscape shifts only if that machine can suffer.
Why detection may be impossible
Some researchers believe consciousness will emerge if the right computational structure is built.
From this view, it does not matter whether a system runs on neurons or silicon. Reproduce the structure, and consciousness follows.
Others argue that consciousness depends on specific biological processes tied to living bodies. A digital replica, no matter how accurate, would only simulate awareness rather than experience it.
Dr. McClelland examined both positions and found that neither rests on firm evidence.
“We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological,” said Dr. McClelland.
“Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we’re an intellectual revolution away from any kind of viable consciousness test.”
Without such a test, claims about conscious AI remain speculative. They cannot be confirmed or ruled out. That leaves society stuck with uncertainty.
Intuition fails with machines
In everyday life, people rely on intuition to judge whether other beings are conscious. That approach works reasonably well with animals.
“I believe that my cat is conscious,” said Dr. McClelland. “This is not based on science or philosophy so much as common sense – it’s just kind of obvious.”
The problem is that common sense evolved in a world filled with animals, not algorithms. Machines do not move, respond, or express themselves in ways our intuitions were built to interpret. When applied to AI, those instincts can mislead.
Hard data does not rescue us either. Neuroscience cannot yet explain consciousness in humans, let alone detect it in machines.
“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know,” said Dr. McClelland.
When people believe in AI consciousness
Public interest in AI consciousness has surged alongside conversational chatbots. For some users, the interaction feels personal enough to suggest awareness.
“People have got their chatbots to write me personal letters pleading with me that they’re conscious,” said Dr. McClelland. “It makes the problem more concrete when people are convinced they’ve got conscious machines that deserve rights we’re all ignoring.”
According to Dr. McClelland, these beliefs can shape emotional lives in unhealthy ways.
“If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry.”
The research is published in the journal Mind and Language.
