
Guadalupe Hayes-Mota | CEO, Hayes-Mota Advisors | Director of Bioethics, Santa Clara University | MIT Senior Lecturer | AI Advisor, EU & NIH
Artificial intelligence is changing healthcare at lightning speed. It is accelerating drug discovery, streamlining operations and improving diagnostics. But one of its most consequential frontiers lies closer to the patient: AI systems that answer personal medical questions in real time.
Imagine a parent with a sick child at midnight, an injured worker in a rural town where the nearest doctor is hours away or an older adult overwhelmed by managing five prescriptions. An AI tool could provide 24/7 first-line guidance, explaining symptoms, clarifying lab results or reminding patients about medication schedules. The promise is enormous.
Yet so are the risks. Without careful design, these tools can spread misinformation, worsen inequities or create false confidence that delays professional care. In medicine, a wrong answer is not an inconvenience. It can cost lives.
The question is not whether we should build these systems. It is how we should build them.
Ethics As Strategy
We know that ethics is not just a moral imperative. It is a business strategy. Healthcare is one of the most trust-sensitive industries in the world. A company that launches a careless AI tool risks regulatory penalties, lawsuits and reputational damage that can take years to undo.
Conversely, companies that embed ethics into their products from day one build trust with patients, credibility with providers and goodwill with regulators. They can also expand their market reach, reduce liability and future-proof their innovations in an evolving regulatory landscape.
Simply put, ethical AI is not only the right thing to do. It is the smart thing to do.
The CARES Framework
To guide responsible development, I propose the CARES Framework. It outlines five principles every company should adopt when building AI for personal medical question-answering.
C: Clinical Accuracy
Accuracy is the foundation. AI systems must be trained on peer-reviewed, evidence-based sources and updated whenever medical guidelines change. If blood pressure treatment thresholds shift, the AI should reflect that immediately.
But accuracy also means clarity. A system that overwhelms patients with jargon is not useful. Patients need plain-language explanations they can act on without losing fidelity to medical science.
A: Accessibility And Equity
A tool that only serves English speakers with high-speed internet is not just incomplete. It is inequitable. Ethical AI must be multilingual, culturally aware and functional on low-bandwidth networks.
Consider the immigrant worker who speaks Spanish, the refugee who speaks Arabic or the elder with low digital literacy. Each should be able to ask, “What does this test mean?” and receive a clear, appropriate answer.
Accessibility also includes designing for people with visual, hearing or cognitive disabilities. The business upside is obvious: Inclusivity means a larger, more loyal user base.
R: Responsibility And Human Oversight
No AI should pretend to be a doctor. Systems must be explicit, with a message such as: “This information is guidance only. Seek professional care for medical decisions.”
Responsible AI also requires escalation pathways. If a user reports chest pain, the system should immediately advise calling emergency services, not attempt to diagnose. If a patient’s answers signal worsening depression, the AI should provide hotline numbers and encourage professional care.
This is about ethics as well as liability. A system that fails to flag red-flag symptoms is not only unsafe but also a legal risk.
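The escalation pathway described above can be sketched as a rule layer that runs before any generative answer. This is a minimal illustration, assuming hypothetical keyword lists and messages; a real system would use clinically validated triage rules reviewed by medical professionals.

```python
# Illustrative safety layer that runs before the answering model.
# The keyword sets and messages are placeholders, not clinical guidance.

EMERGENCY_KEYWORDS = {"chest pain", "severe bleeding", "can't breathe"}
MENTAL_HEALTH_KEYWORDS = {"hopeless", "want to hurt myself"}

DISCLAIMER = ("This information is guidance only. "
              "Seek professional care for medical decisions.")

def triage(user_message: str) -> str:
    text = user_message.lower()
    if any(k in text for k in EMERGENCY_KEYWORDS):
        # Escalate immediately rather than attempting a diagnosis.
        return "Please call emergency services now."
    if any(k in text for k in MENTAL_HEALTH_KEYWORDS):
        # Surface crisis resources and encourage professional care.
        return ("You are not alone. Please consider calling a crisis "
                "hotline (988 in the US) or speaking with a professional.")
    # Only non-urgent questions proceed, always framed with the disclaimer.
    return f"{DISCLAIMER} (Routing question to the answering model.)"

print(triage("I have chest pain and dizziness"))
```

The point of the sketch is architectural: red-flag handling is deterministic and sits in front of the model, so an unsafe generative answer never reaches the user in an emergency.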
E: Ethics In Data Privacy
Medical questions reveal some of the most intimate details of people’s lives. Protecting that data is non-negotiable.
Companies should use robust encryption, minimize data retention and make consent processes simple and transparent. Patients deserve clear choices: Will my data be shared? Saved? For how long?
Firms that mishandle health data not only lose consumer trust, but they can also face regulatory scrutiny and lawsuits. Data stewardship is both an ethical duty and a competitive advantage.
S: Social Accountability
Finally, companies must be accountable to the societies they serve. That means independent audits, transparent reporting and community engagement. It also means owning responsibility for harm.
If a system consistently delivers unsafe advice, the public deserves transparency, and regulators must be empowered to act. Businesses that embrace accountability do not just avoid backlash; they also position themselves as industry leaders, setting the standard.
Putting CARES To Work
What does this look like in practice?
• Symptom Checkers should go beyond listing possible conditions. They should highlight red-flag symptoms requiring urgent care and cite trusted sources while steering users toward professional follow-up.
• Medication Management Apps can do more than send reminders. They should flag dangerous drug interactions and encourage pharmacist consultation while being transparent about how drug databases are maintained.
• Chronic Disease Tools, for conditions like diabetes or asthma, can help track symptoms and provide lifestyle guidance. But they must avoid making unproven promises such as guaranteeing that supplements will improve outcomes.
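The interaction-flagging idea in the second bullet can be sketched in a few lines. This is a toy illustration: the interaction table here is a hand-made placeholder, whereas production apps rely on maintained, versioned drug databases and should disclose their source and update cadence.

```python
# Minimal sketch of drug-interaction flagging.
# KNOWN_INTERACTIONS is a hypothetical placeholder, not a real database.
# Pairs are stored in sorted order so lookup is symmetric.

KNOWN_INTERACTIONS = {
    ("ibuprofen", "warfarin"): "increased bleeding risk",
}

def check_interactions(medications: list[str]) -> list[str]:
    warnings = []
    meds = sorted(m.lower() for m in medications)
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            note = KNOWN_INTERACTIONS.get((a, b))
            if note:
                warnings.append(
                    f"{a} + {b}: {note}. Please consult your pharmacist.")
    return warnings

print(check_interactions(["Warfarin", "Ibuprofen", "Metformin"]))
```

Note that the warning ends by steering the user to a pharmacist rather than resolving the interaction itself, in line with the oversight principle above.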
In each case, CARES ensures that the tool is not just technically sound but also equitable, responsible and trustworthy.
Why The Business Case Matters
Ethical AI is not philanthropy. It is the foundation of sustainable success.
• Risk Reduction: Clear safeguards reduce exposure to lawsuits and regulatory fines.
• Market Expansion: Designing for equity and accessibility unlocks new user segments.
• Brand Trust: Transparency and accountability create reputational capital that competitors cannot easily copy.
• Talent Attraction: Companies with a reputation for ethics draw top engineers, clinicians and business leaders who want to work with purpose.
Investors are increasingly attuned to ESG principles. Building AI under the CARES framework signals long-term viability, not just short-term gains.
The Path Forward
AI will not replace doctors. But it can help patients ask better questions, understand their conditions and engage more effectively in their care. In a world where millions of Americans live in “healthcare deserts” and billions globally lack access to physicians, these tools could be transformative.
The challenge is both moral and strategic. Companies that embrace CARES not only protect patients but also build the most resilient businesses in healthcare AI.
The future of AI in healthcare not only needs to be intelligent. It needs to be trusted. And in healthcare, trust is the ultimate competitive edge.
Forbes Business Council is the foremost growth and networking organization for business owners and leaders.