Caroline Petit, Global Director, Advertising and Promotions, Regulatory Affairs at Takeda.
From drug discovery to diagnostics to personalized treatment, AI promises to accelerate innovation. Regulators are catching up: the EU AI Act, the U.S. Executive Order on AI and state AI laws all recognize that ethics must play a central role in this transformation.
Yet something essential is lacking. In the conversations most companies and regulators hold about “AI ethics,” the term is used vaguely, unrooted in the one field that has spent over a century considering ethical questions in medicine: bioethics.
A Missing Discipline Within The AI Conversation
Bioethics, also known as medical ethics, has long helped doctors, researchers and hospitals navigate medicine’s moral gray areas. It doesn’t deal in abstractions but in real human decisions, such as when to prolong life, how to respect patient autonomy and how to act justly in the face of uncertainty.
Despite this discipline’s long history, it is largely absent from discussions about AI in healthcare and pharmaceuticals. Many business and technology leaders have never even heard of it. Instead, ethics is treated as a checkbox, something you demonstrate through compliance programs or public pledges. But bioethics offers something much deeper: a structured, time-tested framework for analyzing complex human and technological decisions.
That’s why I believe now is the moment to reintroduce bioethics into the AI conversation to help leaders see how its principles can ground innovation in responsibility and humanity.
Four Principles That Still Hold True In The Age Of AI
In 1979, Tom Beauchamp and James Childress defined the four guiding principles of bioethics: autonomy, beneficence, non-maleficence and justice.
• Autonomy upholds a patient’s right to make informed choices about their care.
• Beneficence encourages practitioners to act in the patient’s best interest.
• Non-maleficence (the Hippocratic “do no harm”) forbids causing unnecessary injury.
• Justice demands fairness in access, treatment and outcomes.
These are resoundingly practical principles. Physicians and hospital committees apply them daily in decisions that bear on life and death: balancing quality of life against survival, allocating scarce medical resources and assessing consent when patients are vulnerable.
And now, AI is presenting those same dilemmas in new forms. Machine-learning models can predict outcomes, optimize treatment protocols and expedite research, but they can also reinforce bias, obscure accountability and reshape clinical judgment.
From Theory To Practice
In medicine, ethical reflection never happens in a vacuum. Hospitals set up bioethics committees to work through complex cases. These committees bring together different perspectives, from doctors and nurses to psychologists, lawyers, regulators, policymakers, ethicists and sometimes even patients.
Now imagine applying that same idea to AI.
Let’s say an algorithm predicts that a patient with cancer has only a 5% chance of survival. Should the treatment team blindly accept that prediction and stop treatment, or keep going because hope is important? These are the questions doctors are dealing with, and AI doesn’t make them go away; it just makes them more complicated.
Current regulations touch on AI ethics but rarely delve into the core challenging medical dilemmas. That’s where bioethics comes in, asking questions like: What does it mean to “do no harm” if doing nothing might actually cause harm? How is a patient’s autonomy impacted when they’re making decisions based on AI predictions they don’t understand? Bioethics helps clarify decisions while balancing patients’ and families’ views with medical stakes.
Why AI Committees Need Bioethics At The Table
AI is fast and powerful. But it can only see what it’s been trained to see. Without human oversight, even the best models can miss important details, amplify biases or make recommendations that make sense statistically but just don’t feel right.
That’s why the bioethics committee model is so valuable. It’s based on the same collaborative spirit that the field itself was founded on: bringing people together from all different backgrounds (medicine, philosophy, law, psychology, mathematics) to talk through what they can and should be doing.
In clinical research, specific laws require experiments to obtain ethical approval under established guidelines. In everyday hospital settings, however, the committees that discuss patient cases and related ethical challenges are formed voluntarily, at the hospital’s discretion. Such committees should be envisioned more widely as a legal requirement now that AI is playing a more significant role in diagnosis, treatment and drug discovery.
Pharmaceutical leaders are starting to discuss the role of advisory boards and consultative committees in framing and monitoring the consequences of using AI in practice. Bioethics committees help ensure AI recommendations are filtered through the lens of human experience, with patients at the center of care.
Connecting Regulation And Reality
The EU AI Act and state laws in the U.S. mark critical milestones demonstrating that regulators recognize the urgency for ethical oversight. They remain fairly broad, however, focusing more on principles than on how those principles play out in practice.
In the U.S., Institutional Review Boards (IRBs) are well established for clinical trials, but their mandate does not extend to everyday patient care in hospitals. Europe follows a similar practice. That’s a gap we need to close.
We’re on the right track because ethics has entered the conversation. But now we must go deeper and embed bioethical thinking and frameworks into the everyday decision-making process.
That is, regulators, scientists and industry leaders must work in concert to help connect regulation to real-world practice and make the link between policy and patient within a sound bioethical framework.
Taking The First Steps
Pharmaceutical and healthcare leaders can help accelerate the process with the following steps:
1. Create multidisciplinary advisory groups similar to bioethics committees that review AI-driven decisions.
2. Run organizational-level awareness programs to help teams identify ethical blind spots and understand how bioethical reasoning applies to AI development and deployment.
3. Scale these activities regionally and globally by connecting hospitals with authorities, health networks, universities and industry partners to share case studies and frameworks for responsible AI.
By grounding AI governance in bioethical expertise, companies can move beyond merely navigating compliance toward genuine integrity and trust.
From Awareness To Alignment
This call is an invitation for collaboration among healthcare experts to include ethics in AI legislation.
To date, we have made progress on ethics and governance; now, these efforts must be anchored in the depth and precision that bioethics teaches.
AI holds the potential to guide many improvements in human health. But to be truly successful, it needs the ethical wisdom that has so long guided medicine. Bioethics provides the compass for navigating technology’s role in care and reminds us that every algorithm may be tied to a patient’s future, and that in every decision, there must be care.
