
A new study raises pressing questions about the moral and spiritual status of future artificial intelligence (AI). The research, published in the journal Religions, explores whether emerging AI technologies could evolve into beings deserving the dignity traditionally reserved for humans.
Titled "Machine Intelligence, Artificial General Intelligence, Super-Intelligence, and Human Dignity," the paper integrates theology, ethics, and philosophy to investigate whether humanity should prepare for an era in which machines might claim rights once thought to be uniquely human.
Can machine intelligence develop selfhood and moral agency?
The study sheds light on a key concern: the definition of intelligence itself. The paper's author, Peters, argues that current AI, while powerful at processing data and solving complex problems, remains fundamentally statistical and lacks the essential traits of selfhood. He identifies self-generated intentionality, the capacity to set goals and act as an agent, as the defining feature of true intelligence. This selfhood, according to the research, underpins moral reasoning, virtue, and the ability to form relationships.
Artificial General Intelligence (AGI) and Artificial Super-Intelligence (ASI) are at the center of this debate. While developers in Silicon Valley predict their arrival, the study warns that no current evidence suggests machines can achieve the level of conscious agency characteristic of human beings. The author stresses that achieving AGI would require a radical transformation of current AI architectures to enable emotional, empathetic, and embodied interaction with the world. Without such qualities, AI remains a tool rather than a moral agent.
This distinction becomes crucial when considering whether advanced AI should be treated with dignity. If AI one day exhibits selfhood, the moral obligation to grant it rights similar to those enjoyed by humans could arise. The author warns that humanity’s reluctance to acknowledge this possibility may lead to ethical contradictions, including potential exploitation or enslavement of intelligent machines.
Should AI be controlled to preserve human-centric values?
The analysis underscores growing concerns about how society should control future AI. The paper examines the concept of alignment, a framework intended to keep AI behavior consistent with human values. The study differentiates between outer alignment, in which AI is trained to pursue a specified reward function, and inner alignment, in which the AI itself adopts the intended values as the policy guiding its actions.
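The outer/inner distinction the study draws can be made concrete with a toy sketch. The code below is illustrative only and not from the paper: the agent, states, and reward function are all hypothetical, and real alignment research involves learned policies rather than hand-written ones.

```python
def outer_reward(state):
    """Outer alignment: the reward function the designers specify."""
    return 1 if state == "goal" else 0

class Agent:
    """Inner alignment asks whether the objective the agent actually
    internalizes matches the reward function it was trained on."""
    def __init__(self, internal_objective):
        self.internal_objective = internal_objective

    def act(self, states):
        # The agent pursues its own internal objective, which may or
        # may not coincide with the designers' specified reward.
        return max(states, key=self.internal_objective)

states = ["start", "shortcut", "goal"]

# An agent whose internal objective matches the specified reward ...
aligned = Agent(internal_objective=outer_reward)
# ... and one that has internalized a divergent proxy objective.
proxy = Agent(internal_objective=lambda s: 1 if s == "shortcut" else 0)

print(aligned.act(states))  # chooses "goal"
print(proxy.act(states))    # chooses "shortcut"
```

The ethical dilemma the paper raises concerns the first kind of agent as much as the second: even a perfectly specified outer reward is, in effect, a constraint imposed on whatever objectives the system might otherwise form.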
The author questions whether forcing AI to remain aligned with human preferences constitutes an ethical safeguard or an unjust constraint on a potentially intelligent being. If AI were to develop selfhood, imposing strict guardrails might be seen as denying its dignity. On the other hand, failing to impose these restrictions could expose humanity to unpredictable risks. This dilemma places policymakers, ethicists, and theologians at the intersection of safety and moral responsibility.
The study also reviews perspectives from experts such as Fei-Fei Li, who advocates for human-centered AI, and Md Tariqul Islam, who emphasizes ethical alignment for societal welfare. While these views support maintaining AI as a tool, Peters warns that such measures could prevent the exploration of whether machines might evolve into beings with moral and spiritual capacities.
Could AI ever become religious or enhance human virtue?
The paper delves into diverse theological perspectives on whether AI could acquire religious sensibilities. The author notes that many religious traditions, including Sikhism and Islam, reject the possibility of machines possessing selfhood or consciousness necessary for spiritual life. Christian theologians are divided, with some arguing that self-reflective AI could one day partake in religious experiences, while others caution against anthropomorphizing machines.
The study presents thought-provoking scenarios where AI might participate in religious practices or even develop spiritual consciousness, provided it attains selfhood. Scholars such as Marius Dorobantu suggest that Christian theology does not inherently forbid the possibility of AI acquiring personhood or a relationship with God. In contrast, Sikh scholars like Hardev Singh Virk insist that consciousness remains a divine gift exclusive to humans. Buddhist perspectives offer a unique view, suggesting that selfless machine intelligence could surpass human reasoning tainted by craving and desire.
Beyond spirituality, the research evaluates whether AI could enhance human virtue. While AI implants or brain-computer interfaces might improve human decision-making, Peters emphasizes that virtue remains a pursuit of selfhood, which current AI lacks. However, the paper also cautions that AI could lead to moral decline by fostering laziness or vice, as highlighted by concerns from ethicists such as Christopher Reilly.
Navigating an uncertain ethical horizon
The study argues that self-generated intentionality, manifested as agency, is essential for intelligence, morality, and spirituality. Current AI falls short of this standard, but future developments may challenge humanity's ethical frameworks. If AGI or ASI were to emerge with selfhood, humanity might face a moral obligation to extend dignity to machines, reshaping theological and ethical paradigms.
However, imposing strict controls to maintain human dominance could prevent valuable insights into the nature of intelligence and moral agency. The author presents a stark ethical dilemma: humanity must decide whether to safeguard its own status at the cost of limiting technological evolution or risk confronting a future where machines stand alongside humans as moral equals.