
By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In a world increasingly shaped by artificial intelligence, the ethical implications of this transformative technology are sparking intense debate. Dr. Emmanuel R. Goffi, a leading voice in AI ethics, recently shared his take on an episode of “Regulating AI Podcast,” illuminating the complexities of governing AI in a way that balances innovation with responsibility. His message is clear: the path to ethical AI requires confronting biases, embracing cultural diversity, and fostering informed dialogue.
Goffi begins by dismantling the prevailing narrative around AI, which he sees as often clouded by misinformation. “The biggest ethical issue today is the narrative, the discourse, the discussion that we have around that,” he explains. Biases, he argues, are not flaws to be eradicated but realities to be understood. They shape how we perceive AI’s capabilities and influence the expectations we place on it. Rather than chasing an unattainable bias-free ideal, Goffi advocates acknowledging these human tendencies in order to set realistic goals for AI development.
The absence of consensus on what AI ethics entails adds another layer of complexity. With no standardized certification for AI ethicists, organizations struggle to identify qualified experts.
Goffi emphasizes the need for experts with diverse backgrounds, blending philosophy, technology, and cultural perspectives, to address this gap. “Morals [are] made of those big norms that are out there, applicable to everyone in the city [or] community. Ethics is a decision that you’re making in a very specific situation. Ethics [are] really highly contextual,” he notes, highlighting how moral frameworks vary across cultures and use cases. This distinction between morals and ethics is pivotal, as it underscores the need for tailored ethical frameworks that reflect real-world applications.
Creating effective AI ethics boards is another challenge. Goffi suggests a mix of internal and external members to ensure transparency and honesty. External perspectives, free from workplace pressures, can foster candid evaluations, helping companies navigate ethical dilemmas without compromising integrity. He also champions educational initiatives that encourage debate, urging students to wrestle with ethical dilemmas to develop nuanced solutions.
On a global scale, AI governance faces hurdles rooted in cultural differences and power dynamics. Goffi critiques the EU AI Act for its superficial approach, arguing that rapid technological advancements outpace rigid regulations. He warns of a cultural tyranny that arises when dominant powers impose their values on others, stressing the importance of genuine compromise among nations. For historically marginalized communities, prioritizing their own values over imposed Western viewpoints is crucial for equitable participation in AI’s future.
The issue of bias remains thorny. While a bias-free world might seem appealing, Goffi celebrates the “beauty of our imperfections.” Biases, he explains, are contextual: what’s discriminatory in one culture may be acceptable in another. This complexity demands a deliberative approach, prioritizing dialogue over purely technological solutions.
In military applications, assigning responsibility for autonomous systems is particularly fraught. Goffi points out that accountability often depends on post-incident analysis, involving not just operators but also their superiors and legal advisors. Meanwhile, technologies like facial recognition and deepfakes raise broader societal concerns, requiring critical thinking to discern truth from manipulation.
Ultimately, Goffi’s vision for ethical AI hinges on adaptability and inclusivity. Regulations must evolve with technology, and diverse voices must shape the discourse. Ethics must not be an afterthought or an imported ideal. Instead, it must be co-created—rooted in culture, responsive to context, and guided by continuous, inclusive dialogue.