
As artificial intelligence (AI) systems grow more advanced and integrated into everyday life, discussions surrounding AI ethics have taken center stage. Traditional AI ethics often focus on rules, fairness, transparency, and accountability. However, an emerging perspective suggests that AI systems should not only adhere to ethical principles but also exhibit virtues – traits of excellence that guide their decision-making processes. This approach, rooted in virtue ethics, expands the discourse beyond mere compliance with ethical rules toward the development of AI systems whose dispositions are aligned with excellence and moral integrity.
A recent study, “Virtues for AI” by Jakob Ohlhorst, published in AI & SOCIETY (2025), critically examines the potential for integrating virtues into AI. Ohlhorst argues that the current discourse on AI virtues has been narrowly focused on Aristotelian moral virtues, constraining the possibilities of virtue ethics in AI. He proposes a three-dimensional classification system for artificial virtues, offering a broader and more systematic way to conceptualize AI excellence beyond moral considerations.
The need for a virtue-based AI framework
AI is fundamentally designed to perform tasks with competence and efficiency. However, Ohlhorst suggests that designing competent AI inherently involves designing virtuous AI. If virtues are understood as “excellent dispositions,” then AI systems, to be effective, must exhibit excellence in how they process information, make decisions, and interact with human users. The problem, however, is that existing discussions about AI virtues have largely been restricted to moral virtues, neglecting other essential dimensions of virtue that could apply to AI.
Ohlhorst makes a crucial distinction between anthropic virtues – virtues applicable to human agents – and artificial virtues – virtues applicable to AI systems. Human virtues are often tied to emotional, social, and rational capacities that AI lacks. Therefore, a direct transfer of human virtues to AI would be inadequate. Instead, AI virtues should be designed to reflect the specific nature and function of artificial systems.
To bridge this conceptual gap, Ohlhorst introduces a three-dimensional classification system for AI virtues. This system classifies AI virtues along three axes:
- Domain: The specific area in which a virtue operates (moral, epistemic, aesthetic, or practical).
- Norms: The principles that define excellence in a given virtue (agent-based, value-based, or relational).
- Mode: The way in which a virtue operates (reliabilist or responsibilist).
This framework allows AI virtues to be more systematically assessed, distinguishing them from human virtues while still enabling AI systems to be guided by excellence in decision-making and behavior.
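To make the classification concrete, the following minimal Python sketch encodes the three axes as enumerations and a virtue as a point in that space. The axis categories mirror the list above; the two example virtue placements are hypothetical illustrations, not classifications taken from Ohlhorst's paper.

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    MORAL = "moral"
    EPISTEMIC = "epistemic"
    AESTHETIC = "aesthetic"
    PRACTICAL = "practical"

class Norm(Enum):
    AGENT_BASED = "agent-based"
    VALUE_BASED = "value-based"
    RELATIONAL = "relational"

class Mode(Enum):
    RELIABILIST = "reliabilist"
    RESPONSIBILIST = "responsibilist"

@dataclass(frozen=True)
class ArtificialVirtue:
    """A virtue located along the three classification axes."""
    name: str
    domain: Domain
    norm: Norm
    mode: Mode

# Hypothetical placements, chosen for illustration only.
accuracy = ArtificialVirtue("accuracy", Domain.EPISTEMIC, Norm.VALUE_BASED, Mode.RELIABILIST)
fairness = ArtificialVirtue("fairness", Domain.MORAL, Norm.RELATIONAL, Mode.RESPONSIBILIST)

print(f"{accuracy.name}: {accuracy.domain.value}/{accuracy.norm.value}/{accuracy.mode.value}")
```

One advantage of treating virtues as points in a small discrete space is that it becomes straightforward to audit which regions of that space a given evaluation framework actually covers, and which remain unexplored.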
Moving beyond moral virtues in AI
The traditional discourse on AI virtues has primarily revolved around moral virtues – those that define what it means for an AI system to be “good” in an ethical sense. These discussions often focus on AI fairness, justice, and accountability, mirroring Aristotelian moral virtues applied to human beings. However, Ohlhorst argues that this is only one aspect of AI virtue and that moral virtues alone do not sufficiently capture the full range of qualities AI systems should exhibit.
For example, epistemic virtues – virtues related to knowledge and reasoning – are critical for AI systems, particularly in contexts where AI must evaluate vast amounts of data to make informed decisions. An AI model that possesses intellectual humility, curiosity, and accuracy is arguably more reliable than one that solely aims for fairness. Similarly, aesthetic virtues – such as creativity and elegance – may be essential for AI applications in art, music, and design. Even practical virtues, which relate to an AI system’s ability to fulfill its intended function efficiently, play a role in determining whether an AI is truly “excellent.”
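As a toy illustration of how one epistemic virtue might be operationalized, the sketch below implements intellectual humility as calibrated abstention: the system declines to answer when its own confidence is low. The function and threshold are invented for this example, not drawn from the paper.

```python
from typing import Optional, Sequence

def humble_predict(probs: Sequence[float], labels: Sequence[str],
                   threshold: float = 0.8) -> Optional[str]:
    """Return the most likely label, or None (abstain) when the top
    probability falls below the confidence threshold."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best] if probs[best] >= threshold else None

# A confident model answers; an uncertain one admits it does not know.
print(humble_predict([0.92, 0.05, 0.03], ["cat", "dog", "bird"]))  # cat
print(humble_predict([0.45, 0.40, 0.15], ["cat", "dog", "bird"]))  # None
```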
Ohlhorst highlights an imbalance in AI virtue research, noting that while artificial moral virtues have received considerable attention, epistemic, aesthetic, and practical virtues remain largely unexplored. The lack of research into these areas means that AI is often assessed purely on ethical grounds, neglecting broader conceptions of what it means for an AI to be truly “virtuous.”
The role of reliability and responsibility in AI virtue
A key part of Ohlhorst’s framework is the distinction between reliabilist and responsibilist virtues in AI. This distinction originates in virtue epistemology, where reliabilist virtues emphasize consistent performance and accuracy, whereas responsibilist virtues emphasize reflective decision-making and the ability to weigh competing considerations.
Most current AI models are reliabilist – they aim to maximize accuracy and efficiency but lack the capacity for deeper reflection and judgment. For instance, AI-powered recommendation systems reliably suggest content based on user preferences but do not critically assess whether those recommendations contribute to users’ well-being. Ohlhorst argues that moving AI beyond mere reliability requires the development of responsibilist virtues – traits that allow AI to assess risks, consider ethical implications, and adjust its behavior accordingly.
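The contrast can be made concrete with a toy recommender. A purely reliabilist ranker optimizes predicted engagement alone, while a responsibilist variant also weighs a competing consideration, here a hypothetical well-being estimate, before deciding what to surface. Both signals and the weighted sum are invented simplifications; genuine responsibilist reflection would involve far more than a second metric.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float  # predicted probability (0 to 1) that the user clicks
    wellbeing: float   # hypothetical estimate (0 to 1) of benefit to the user

def reliabilist_rank(items):
    # Reliabilist mode: maximize raw engagement with no further reflection.
    return sorted(items, key=lambda i: i.engagement, reverse=True)

def responsibilist_rank(items, w=0.5):
    # Responsibilist mode: weigh a competing consideration (well-being)
    # against engagement before choosing what to recommend.
    return sorted(items, key=lambda i: (1 - w) * i.engagement + w * i.wellbeing,
                  reverse=True)

catalog = [
    Item("outrage clip", engagement=0.95, wellbeing=0.10),
    Item("in-depth explainer", engagement=0.60, wellbeing=0.85),
]

print([i.title for i in reliabilist_rank(catalog)])     # outrage clip first
print([i.title for i in responsibilist_rank(catalog)])  # explainer first
```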
One example of responsibilist AI could be an autonomous vehicle that not only follows pre-programmed traffic laws but also adapts to unpredictable road conditions with ethical considerations in mind. Another example is an AI-driven hiring system that goes beyond simply matching candidates to job descriptions, instead incorporating fairness considerations and mitigating biases dynamically rather than through rigid, predefined rules.
The future of artificial virtues
Ohlhorst’s research underscores the urgent need for a more sophisticated and nuanced discussion of AI virtues. As AI systems become more autonomous and embedded in society, they must not only comply with ethical standards but also embody virtues that make them genuinely excellent in their domains of operation.
A broader AI virtue framework could lead to better evaluation metrics for AI models, enabling researchers to assess AI performance in terms beyond pure accuracy or fairness. It could also inform AI design principles, guiding engineers and developers to build systems that exhibit epistemic, aesthetic, and practical virtues in addition to moral ones.
Furthermore, Ohlhorst’s work opens up new interdisciplinary research directions. Philosophers, cognitive scientists, and AI ethicists could collaborate to refine artificial virtues, ensuring they align with societal needs while maintaining the distinct nature of AI as a non-human entity. Future research could also explore whether AI can develop self-improving virtues, dynamically enhancing its own decision-making capabilities in response to evolving environments.
Ultimately, shifting the conversation from AI ethics to AI virtues broadens our perspective on what it means to design truly excellent artificial systems. By moving beyond the constraints of Aristotelian moral virtue and embracing a richer framework for artificial virtue, AI research can advance toward creating systems that are not only ethical but also intellectually and functionally excellent.