On Thursday, 20 November, Edward Santow delivered the 2025 Sir Vincent Fairfax Oration hosted by the Vincent Fairfax Fellowship. Santow is the co-director of the Human Technology Institute and Professor of Responsible Technology at the University of Technology Sydney. This is a slightly edited version of that oration.
Throughout history, we have built machines that are born like Venus, fully formed. When a car rolls off the production line, all it needs is a twist of a key, or the press of a button, and it will work as intended. This is not true of artificial intelligence (AI).
AI systems start as ignorant as a newborn, perhaps even more so. A baby will search for its mother’s breast even before the baby can see. An AI system possesses none of a baby’s genetic instincts. Nothing can be assumed. All knowledge must be learned. The process of teaching an AI system — known as “machine learning” — involves exposing the machine to our world.
At least superficially, this resembles how a baby learns. Parents, family and teachers guide the child. As the child experiences the world around them, they learn. However, an AI system cannot experience things as a human does. Instead, it is exposed to a compressed, recorded form of experience.
Put another way, AI systems are exposed to, and learn from, data. For this reason, data is now more important and more valuable than at any moment in recorded time. So, we need to look more closely at the nature of data.
Our data is our history — but not all of it
We have no data from the future. The data that now fuels AI emerges from what we choose to remember from history.
One Saturday afternoon, I walked into the kitchen at home. On entering the room, I felt like the first reporter to arrive in a town after a volcano or earthquake. In that trope, a lone survivor, covered in soot, staggers towards the reporter’s camera. And so it was in our kitchen. The survivor, in this case, was our two-year-old, Hannah. She was covered in flour and cocoa. Only when she blinked her eyes was I certain that it was indeed our daughter.
I suppressed a scream. And then I undertook an un-forensic reconstruction of what my wife would tell me took place: I had started making a cake. I paused briefly to scold one of my other children who had kicked a soccer ball into the neighbour’s yard. Hannah, who very much likes cake, saw her opportunity. She picked up the mixing bowl and, in tipping the contents over her person, she attempted to become one with cake.
It is often said that history is written by the victors. And while it felt less than victorious to spend an hour cleaning Hannah and the kitchen, I can at least claim to have restored order. As such, I’m confident that my version of this vignette will prevail. In truth, however, I was not a direct witness to the key events. I did not interview Hannah or any other witness. I do not really know what happened.
My point is that our history is not a perfect replica of all that has happened before this moment. As we move through the world, our actions leave marks or imprints. The imprints are physical, and we use those imprints to create data. We humans puzzle over this data to make sense of the world.
And yet data is an imperfect record. A footprint offers information about my foot, but my footprint is no more my foot than I am my shadow. The discernible imprints left by an event almost never tell the whole story of an event or its context.
AI cannot access the source of its data — the real world — any more than I can go back in time and observe precisely what Hannah did to create such a mess. But unlike humans, who are shaped both by their experiences in the real world and by the data they derive from the real world, AI has no real-life experience. It has only data. And data is not life. It’s a limited and imperfect record of certain aspects of life.
Bullshit in, bullshit out
In reflecting on this, I need to go back more than 150 years and introduce two fascinating characters from the pre-history of artificial intelligence: Ada Lovelace and Charles Babbage.
Ada Lovelace was an exceptionally gifted mathematician. She also happened to be the daughter of the poet, Lord Byron. Charles Babbage’s talents in mathematics were surpassed only by his tendency to put the “I” in team. In any case, in the mid-nineteenth century, the team of Lovelace and Babbage created what they called the “analytical engine” — the forerunner of modern computing.
Babbage was feted by the great and the good of British society. On several occasions, he was invited to meet with Members of Parliament in Westminster. He would leave these encounters shaking his head:
On two occasions I have been asked [by Members of Parliament], “Pray, Mr Babbage, if you put into the machine wrong figures, will the right answers come out?” I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
You may be familiar with an earthier — dare I say, more Australian — way of making this same point: “Bullshit in, bullshit out.” Notwithstanding all its amazing whizz-bangery, even today’s most sophisticated forms of AI cannot outrun this truth. I present two examples.
The first is part of a genre I like to call “More A than I”. Those contributing to this genre are drawn almost entirely from the ranks of our community’s smart alecs, needlers and pests. They dream up clever ways of tricking AI systems into performing in unintended ways. For instance, there was the young man (I don’t know this person’s age or gender, but I’m still confident about both of these descriptors) who set out to trick the Commonwealth Bank’s AI assistant — known as “Ceba, the CommBank Assistant” — into renaming itself “Reginald Hornstein”. Let’s just say: smart alec, one — the AI assistant formerly known as Ceba, nil.
The second problem is more serious. As ChatGPT and other forms of generative AI have made producing new content fast and effortless, an ever-increasing proportion of content on the open internet is produced by AI, with no real human intervention. Much of this is “AI slop” — that is, low-quality content that is often riddled with inaccuracies. While the degradation of quality content online is a problem in itself, it is made much worse by the fact that the large language models used to create generative AI applications are themselves trained on data available online. Hence, any deterioration in the quality of data on the internet is likely to trigger a similar deterioration of the most valuable AI models that exist.
To return to what I was saying about bulls: a metaphor might be the crisis in the 1980s when it was discovered that many British cattle farmers had been feeding their cattle on feed made from the remains of other cattle — something that was key to the rise of so-called “mad cow disease”.
The weight of history
To this point, I’ve been referring to false data. But there’s also a subtler and more complex challenge. Sometimes our history is accurately recorded, but the wrong lesson is drawn from that history. It was just such a problem that triggered my own interest in AI more than a decade ago.
At the time, I was leading a human rights NGO called the Public Interest Advocacy Centre (subsequently renamed the Justice and Equity Centre). Starting in 2012, a stream of young people came to us with a remarkably similar complaint. They were being repeatedly stopped and searched by the New South Wales Police. Many of our clients were children — some as young as 13. They told us how police officers would “check up” on them at home, at school, on the street, or at work. Often a police officer would knock on their door at home several times in a week — sometimes between midnight and 6am; sometimes multiple times in a single night.
The young people were scared. Being stopped by police in public and at home was humiliating.
At first, we were baffled. These kids weren’t hardened criminals. They might have ridden on the train without a ticket or been accused of shoplifting, but the police response was so clearly disproportionate. Over time, something else came into view. The vast majority of our clients were Aboriginal or had a Middle Eastern background. Almost all our clients had dark skin.
Why were these kids being targeted? On 9 November 2017, there was a breakthrough. The Police Commissioner acknowledged that the police used an algorithm to determine which young people to target. He admitted in Parliament that 55 per cent of the roughly 1,800 people on the target list were Aboriginal or Torres Strait Islander. Yet less than 3 per cent of NSW’s population is Indigenous.
The NSW Police’s algorithmic system had been trained on historical data. That data accurately reflects our history — since colonisation, Indigenous people have been imprisoned at a disproportionately high rate. The problem was that, almost certainly, the system learned the wrong lesson: that Indigenous people were more likely to commit crime.
Of course, a person’s ethnicity doesn’t make them more or less likely to commit crime. On the contrary, we know that the disproportionate rates of Indigenous imprisonment have been fuelled by a number of historical facts: over-policing of Aboriginal communities; government policies that tore at the fabric of Aboriginal families; higher rates of injustice based on discrimination and prejudice.
Our past, AI’s future
I don’t want to suggest that AI is all doom and gloom. I’m definitely not saying, “Burn the machines”. The groundbreaking work of my extraordinary colleagues — like Professor Sally Cripps — shows how AI can be used thoughtfully to address pressing social problems in education, mental health and the environment. But the world doesn’t need another AI hype man. I can safely leave that to Sam Altman and his friends in Silicon Valley.
My argument is that AI embodies all that is cutting edge and modern, and yet it anchors us to our history. And our history is a messy mix of triumph and shame, of big moments where the stakes were high, and of smaller, less consequential moments. It encompasses the dispossession of Aboriginal people and my weekend’s minor kitchen disaster. This history is the source from which the almost unimaginably vast data oceans are formed. AI systems trained on our data ingest a mixed history of prejudice and progress, injustice and efforts towards equality.
As we rely more and more on those AI systems, we need to do three things:
- we should retain a healthy level of scepticism for the decisions and recommendations that come from AI;
- we need to do a better job of curating the material from which AI systems learn;
- we need strong laws that protect our community from the harms that AI can bring.
Only if we respond forcefully to the things that make AI dangerous can this technological transformation deliver a future we want and need, rather than the dystopia we fear.
Edward Santow is the Director of Policy and Governance at the Human Technology Institute and Industry Professor of Responsible Technology at the University of Technology Sydney. He is co-author (with Daniel Nellor) of Machines in Our Image: The Need for Human Rights in the Age of AI.
Posted Mon 24 Nov 2025 at 2:49am, updated Mon 24 Nov 2025 at 2:58am
