Rev. Dr Jean Gové says artificial intelligence is forcing society – and the Church – to confront one of the oldest philosophical questions in a new form: what it means to be human. As AI tools rapidly become embedded in education, relationships and decision-making, he warns that without ethical guidance, they risk quietly reshaping human behaviour in ways that could erode autonomy, critical thinking and human flourishing.
Gové, a Catholic priest and philosopher, occupies a rare position at the intersection of theology, analytic philosophy and AI ethics. Locally, he serves as the diocesan coordinator for AI within the Archdiocese of Malta, leading initiatives to integrate AI literacy into Church schools. Internationally, he contributes to research and policy discussions on artificial intelligence, including advisory roles within Vatican structures and academic collaborations abroad.
His work places him at the centre of a growing global debate: how to harness AI’s benefits without allowing it to undermine human dignity.
“We are not dealing with a purely technical issue,” he said in an interview with The Malta Independent on Sunday. “We are dealing with a human issue – about what makes us flourish, what relationships mean, and what kind of society we want to build.”
Gové’s path into AI ethics did not begin in engineering or computer science, but in philosophy. Before his ordination in 2019, he was sent to Scotland to study analytic philosophy at the University of St Andrews, focusing on the relationship between logic, language and thought. His academic work explored the philosophy of mind, including questions surrounding neuroprosthetics and virtual reality.
One of his central research interests asked whether technologically integrated devices could become part of a person’s identity. “If a neuroprosthetic restores someone’s movement or perception,” he explained, “is that device merely attached to the person, or does it become part of who they are?”
As artificial intelligence began moving rapidly into everyday life during his doctoral studies, those earlier philosophical questions took on new urgency. His long-standing interest in cognition and human consciousness naturally extended to emerging debates about whether machines could think, understand or even one day possess awareness.
Today, as a research affiliate at the AI & Humanity Lab at the University of Hong Kong, his work focuses on AI cognition, manipulation risks and broader ethical challenges linked to transhumanism.
Yet much of his most visible work is happening closer to home. Malta currently lacks a standardised AI curriculum in schools, a gap Gové describes as both urgent and risky. Rather than focusing on technical training, his approach emphasises formation – beginning with teachers.
“The fastest and most effective way to respond is to educate educators,” he said. “If teachers understand when and where AI should be used – and when it should not – that knowledge reaches students naturally.”
Church school initiatives under his guidance include professional training sessions for teachers, policy drafting on responsible AI use and pilot programmes introducing students to AI tools while highlighting their limitations. Accredited postgraduate courses for educators are also being developed in collaboration with formation institutes.
For Gové, the goal is not simply to prepare students for a digital economy but to ensure they retain essential human skills.
“We want students to succeed technically, yes,” he said. “But success is not only about digital competence or exam results. It is also about interpersonal skills, critical thinking and the ability to relate meaningfully to others.”
That emphasis reflects one of his greatest concerns: the rise of AI systems designed to simulate human relationships. From AI companions to therapy chatbots, he believes such tools pose a subtle but profound risk.
“These technologies offer something very quick and very efficient,” he said. “But they do so at the expense of real human connection. If they begin replacing relationships – friendships, emotional support, even intimacy – we are harming ourselves without realising it.”
He notes that users are typically aware they are interacting with AI. The danger, he argues, lies not in deception but in convenience. AI relationships require little effort, no vulnerability and no social risk, making them attractive substitutes for real interaction.
“There is no ritual of meeting someone, no commitment, no negotiation,” he said. “It is easier. But human growth does not happen through ease alone.”
He is particularly concerned about what he describes as “sycophantic” tendencies in AI systems – their inclination to agree with users and reinforce their views. Unlike human professionals, such as therapists or teachers, AI tools rarely challenge assumptions or provide difficult truths.
“A human counsellor might contradict you,” he said. “They might tell you something you do not want to hear. AI often tells you what you want to hear. That can hinder maturity and self-reflection.”
Beyond individual relationships, Gové sees wider societal risks. He believes AI could significantly amplify patterns already observed in social media – including behavioural manipulation, targeted persuasion and erosion of critical thinking.
He warned in particular about what he calls “deep manipulation”, in which AI systems could influence individuals in highly personalised and often invisible ways.
“AI introduces the possibility of deep manipulation,” he said. “It can personalise messages using intimate data and guide conversations in ways that appear neutral but are not.”
Such risks, he argues, make education and ethical design essential. Without them, society could gradually lose agency as people come to rely uncritically on automated systems for information, decisions and emotional support.
At a deeper level, Gové’s research also engages with one of the most complex philosophical questions surrounding AI: whether machines could ever become conscious.
He approaches the issue cautiously, suggesting that public debates often misunderstand both artificial intelligence and human awareness.
“We are already relating to these tools in a quasi-human way,” he said. “But that does not mean they think as we do, or that they possess anything like human consciousness.”
The real danger, he believes, is not that machines will become human, but that humans may begin redefining themselves in machine-like terms.
“We risk measuring ourselves by efficiency, speed and productivity,” he said. “But human dignity is not rooted in efficiency. It is rooted in relationships, meaning and purpose.”
Gové also emphasises that religious institutions have a unique role to play in global AI debates. With centuries of reflection on ethics, human nature and the common good, faith traditions can contribute perspectives often absent from purely technical discussions.
He noted that technology leaders have in recent years sought dialogue with religious figures, including past meetings between AI executives and Pope Francis to discuss ethical concerns surrounding artificial intelligence.
“There is recognition,” he said, “that these questions are not only technical or economic. They are fundamentally human.”
Despite the risks, Gové does not view AI as inherently threatening. He acknowledges its transformative potential in fields such as medicine, accessibility, translation and administration. The challenge, he argues, is ensuring that innovation serves human flourishing rather than supplants it.
“The question is not whether AI will advance – it will,” he said. “The real question is whether we will guide it responsibly.”
For Malta’s Church schools, that guidance begins with education, discernment and ethical awareness. For society at large, Gové believes the stakes are far higher.
“This is not only about technology,” he said. “It is about the future of how we think, how we relate to each other and what it means to live a good life.”
