In recent months, conversations about artificial intelligence (AI) and how it may transform our lives have become commonplace. From computer and data scientists to philosophers and ethicists, there is no shortage of predictions, excitement and anxiety. ChatGPT has added a new and important dimension to the conversation about data, knowledge, creativity and our way of doing things.
As with many new tools, public health falls squarely within the reach of new developments in AI. In the recent past, a number of opinion pieces and stories in Pakistani newspapers have argued that AI can have a significant positive impact on public health in the country. The arguments offered in support of this thesis focus fundamentally on improving efficiency in our public health system: better diagnosis, improved patient management, enhanced delivery of health services and more efficient use of patient health data. Indeed, all of these aspects of our public health system are in desperate need of an upgrade. It is troubling, however, that nearly all of the articles, op-eds and stories focused on the potential rewards of AI fail even to mention the deep ethical questions raised by the use of AI in public health in general, let alone those particularly relevant to Pakistan's public health system.
First, we must note that medical ethics in the country are far from where they ought to be. A whole spectrum of medical malpractice is rampant, including, but not limited to, financial corruption, negligence, physical and verbal abuse of patients, overcharging for services, kickbacks and unauthorised perks from industry. Vulnerable patients, on the other end of the equation, have few rights and remain woefully unaware of the ones they do have. The legal framework, hefty fees and a dysfunctional judicial system make it impossible for patients to be compensated for the wrongs committed against them. The regulatory framework that should underpin all medical practice is either absent altogether or absent in all but name.
In this environment, a potent new tool needs to be handled with great care. Increased reliance on AI for diagnosis may sound cool to many, but these systems are far from perfect. What would happen in case of misdiagnosis? What safety nets would be in place? Who would be held responsible? Dipping into patient records to improve delivery sounds good in a perfect system, but in a country where data privacy is still in its infancy, how would we ensure that patients and their data are protected? Similarly, there are important ethical questions about wearable devices and rapid reporting systems.
Perhaps the most important consideration for us is the oldest principle of all: do no harm. It must be weighed not only at the level of the individual patient, but for society as a whole. Does reliance on AI create more haves and have-nots? Does having a smartphone become a prerequisite for good care? What about those who are too poor, and getting poorer by the minute given the state of the economy? Does a set of new tools make it harder for them to get the care they desperately need?
Before we jump on (yet another) bandwagon because everyone else is doing it, let us first make sure that we have the safety nets in place. We have plenty of examples to learn from when it comes to technology and the vulnerable. For example, when a seemingly cool technology (an iris scanner) was used to improve the efficiency of aid in a refugee camp in Jordan, the entire system did more harm than good. Lack of data privacy has made many Rohingya, and their families, more vulnerable and unable to return to their homes. There are plenty of other examples where fascination with a technology, in the absence of an ethical framework, did more harm than good. We absolutely need to improve our healthcare system and increase its efficiency so that it works for everyone. That requires an effective ethical framework more than it requires the newest gadget.
Published in The Express Tribune, March 28th, 2023.