Ethics of artificial intelligence in supportive care in cancer

Supportive care in cancer involves preventing or managing the symptoms of cancer and the side effects of treatment, and encompasses physical, psychosocial and spiritual adverse effects. Supportive care in cancer aims to improve the patient’s quality of life from diagnosis through treatment and survivorship care.1 Applying artificial intelligence (AI) to supportive care in cancer involves using AI platforms that combine cancer knowledge bases, precision medicine libraries and guidelines with patient data, including genomic profiles, laboratory tests, and medications.

The use of AI can provide decision support to patients and clinicians in delivering personalised supportive care in cancer.2 This includes, for example, optimising anticancer drug dosing to avoid toxicity and selecting appropriate supportive care drugs and doses based on a patient’s pharmacogenomic profile.3 AI enables more accurate prediction of toxicities such as emesis by combining patient‐related factors with the emetogenicity of the cancer treatment. Moreover, AI can monitor patients to detect early signs of toxicity.4 In addition, natural language processing — an application of AI — has been used to extract patient‐reported adverse events such as social isolation, which is not captured routinely or encoded in electronic health records but may be mentioned in clinical notes.5 Otherwise, such adverse events would need to be captured separately in patient surveys or questionnaires.6
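
The natural language processing step described above can be illustrated with a deliberately simple sketch. Everything below (the keyword patterns, the example notes and the flag_social_isolation helper) is invented for illustration; the cited work uses trained language models on real clinical notes rather than keyword matching.

```python
import re

# Invented example patterns; published approaches use trained NLP models,
# not simple keyword matching.
ISOLATION_PATTERNS = [
    r"\bsocial(ly)? isolat(ed|ion)\b",
    r"\blives alone\b",
    r"\bfeel(s|ing)? lonely\b",
    r"\bno (family|social) support\b",
]

def flag_social_isolation(note: str) -> bool:
    """Return True if any isolation-related pattern appears in the free text."""
    text = note.lower()
    return any(re.search(pattern, text) for pattern in ISOLATION_PATTERNS)

notes = [  # invented clinical notes
    "Nausea settling on current antiemetics; reports feeling lonely since treatment began.",
    "Tolerating chemotherapy well, good family support at home.",
]

for note in notes:
    print(flag_social_isolation(note), "-", note)
```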

AI also involves training computers on large datasets, using algorithms that find patterns in the data; with artificial neural networks, these models can continue to learn, weighting their parameters to provide a summarised interpretation.7,8
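
As a rough illustration of the training process described above, the sketch below fits a small neural network to entirely synthetic data for an invented emesis‐risk task. The features, labels and model settings are assumptions made for illustration only, not a clinical model; the learned weight matrices are what the text refers to as weighting parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Invented features: age, prior emesis (0/1), emetogenicity score (1-4)
X = np.column_stack([
    rng.integers(30, 85, n),
    rng.integers(0, 2, n),
    rng.integers(1, 5, n),
])
# Synthetic label: prior emesis and higher emetogenicity raise the risk
y = (0.2 * X[:, 1] + 0.15 * X[:, 2] + rng.normal(0, 0.2, n) > 0.6).astype(int)

# A small neural network adjusts ("weights") its parameters during training
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)

mlp = model.named_steps["mlpclassifier"]
print("learned weight matrix shapes:", [w.shape for w in mlp.coefs_])
print("predicted risk for one new patient:", model.predict_proba([[62, 1, 4]])[0, 1])
```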

Addressing the ethical concerns relevant to using AI can reassure patients, as its uptake will partly rest on public perceptions. These perceptions will depend on whether patients trust its accuracy, the transparency of how their data are used, the privacy of their data, and their ability to make informed choices about their health information.

The first consideration for trust in AI is non‐maleficence. This is particularly important in supportive care in cancer, which aims to reduce symptoms, so the adverse effects of supportive care treatments themselves must be minimal. Non‐maleficence is an underlying concept in patient‐centred care and serves to reassure patients about the advice they are given. If AI is used for clinical decision support, it could cause harm by providing inaccurate or inconsistent results.9 An algorithm should not be considered value neutral; it will reflect any biases in the training set used. Common sources of bias are race, sex, and social or economic factors, which means that an algorithm may not be transferable to a dataset with characteristics different from those of the dataset on which it was derived.10 Moreover, in recognising patterns, AI tools do not provide the meaning or context of the outcome: AI decision making is based on features of the input data, whereas human decision making encompasses knowledge, beliefs and values.10
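
The transferability problem can be made concrete with a minimal, fully synthetic sketch: a model derived in one cohort is applied to a second cohort in which the feature–outcome relationship differs (a stand‐in for the demographic and socio‐economic differences described above), and its accuracy drops. This illustrates the principle only; a real bias audit would use richer data and fairness metrics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_cohort(n, effect):
    """Synthetic cohort; `effect` controls the feature-outcome relationship."""
    X = rng.normal(0, 1, (n, 2))
    y = (effect * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# The model is derived from cohort A only (e.g. one demographic group) ...
X_a, y_a = make_cohort(1000, effect=1.5)
model = LogisticRegression().fit(X_a, y_a)

# ... and then applied to cohort B, where the relationship differs
X_b, y_b = make_cohort(1000, effect=-1.5)
print("accuracy on the derivation-like cohort:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy on the different cohort:      ", accuracy_score(y_b, model.predict(X_b)))
```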

One recommendation for improving patient information and acceptance of AI is to seek informed consent for its use. Before patients provide that consent, they will need to know how likely the use of AI is to improve their outcomes, whether by predicting an adverse effect or by providing management advice. However, to date, much of the evaluation of AI tools has been based on their accuracy compared with human clinical decision making, whereas what patients want to know is the impact on outcomes such as their quality of life.9,11 For example, in supportive care in cancer, even if AI is better at predicting vomiting with chemotherapy, will that translate into better control of vomiting? Moreover, as part of trusting their outputs, patients may expect that AI tools have undergone a full evaluation. However, a recent review of whether clinical studies of AI were comprehensive enough to support a full health technology assessment found that most studies had limitations, and suggested that assessment procedures be modified to specifically evaluate AI before clinical implementation.12

A further recommendation is to establish a global governance and regulation framework for AI. This framework would cover the governance of data, including consent and data protection, how governments can share data and benefits with the private sector, and data ownership. The World Health Organization has a working group examining how to regulate AI while promoting rather than stifling innovation.13 Other European and United States agencies, such as the Council of Europe and the White House Office of Management and Budget, are also beginning to address governance frameworks.14 The development of these frameworks should include community consultation. Nationally, the Australian Alliance for Artificial Intelligence in Healthcare is updating its roadmap after community consultation to assist with developing policy options and to support companies investing in AI in health care.15

As publicly available AI chatbots are being used by patients to access information on supportive care in cancer, it is important to ensure that the AI output is accurate, and that will depend on the algorithm and the training set. General chatbots often do not provide the source of the information, which makes it difficult to check for accuracy, and quoting it could raise the issue of plagiarism.16 A chatbot giving a single answer, as opposed to making multiple suggestions, may give an impression of false objectivity.

There is also the question of ownership of the data. Patients should be asked to give consent both to the use of their data in training sets (which has not always been the case) and to the use of AI in clinical decision making.16,17 This requires transparency and the ability to inform patients about AI and its limitations and performance gaps.18 The problem is that with deep learning algorithms, which continue to learn from data without further human direction, patients will have to accept reduced transparency of the process, which becomes a “black box” in terms of how the decision was reached.

The privacy and secure storage of an individual’s data are also potential concerns with digitised data. Even with large de‐identified datasets, the potential for re‐identification must be communicated.19,20 There is always a balance between maintaining the privacy of health data and making the data available for research and policy generation. One solution is distributed learning, where, instead of sharing and centralising individual data, clinicians share the metadata and the algorithms analyse the separate databases. This can produce the same solutions as if the data were centralised: questions and answers are shared without the individual data needing to be shared.21
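
A minimal sketch of the distributed learning idea is shown below, under simplifying assumptions: each invented site fits a simple model to data that never leave the site, and only the fitted parameters are pooled. Real distributed learning systems use iterative, privacy‐hardened protocols rather than this one‐shot average.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.8, -0.4])  # underlying relationship shared by all sites

def local_site_data(n):
    """Synthetic local dataset; in distributed learning it never leaves the site."""
    X = rng.normal(0, 1, (n, 2))
    y = X @ true_w + rng.normal(0, 0.1, n)
    return X, y

def local_fit(X, y):
    """Each site fits its own model and shares only the fitted parameters."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Only the model parameters (the "answers"), not patient records, are pooled
site_weights = [local_fit(*local_site_data(200)) for _ in range(3)]
global_weights = np.mean(site_weights, axis=0)
print("aggregated model parameters:", global_weights)
```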

Some patients may be uncomfortable with the idea of a computer making a decision about their treatment instead of a clinician, but would accept AI‐based input into a clinician’s decision‐making process. Shared decision making then has three components: doctor, patient and AI. However, with the increasing use of algorithms in which the use of patient data is more opaque, patients may not be able to exercise autonomous input over an AI‐derived decision.22 The use of AI could free clinician time for engagement in better supportive care in cancer, such as providing psychosocial support, thereby enhancing the doctor–patient relationship. Alternatively, trust in the accuracy of AI could erode trust in the clinician, and patients could access chatbots independently of clinician input.16

Given that adverse outcomes are particularly problematic in supportive care in cancer, another major ethical concern with AI is traceability.17 With the increasingly complex interactions of humans and AI, to whom can moral or ethical responsibility be traced for an adverse outcome from an AI‐based decision? So many people are involved in developing the algorithms and training sets, marketing the tools, analysing the output and applying it to a clinical situation that transparent allocation of responsibility and accountability for an adverse outcome is very difficult.

Moreover, there are also legal implications. For example, if a patient had an adverse outcome through the use of AI, liability would be unclear because tort law is based on human performance or hardware defects, not on defects in autonomous software. Suggested solutions are to confer personhood on the AI tool, to have everyone involved share common enterprise liability, or simply to apply a standard of care to the implementation and evaluation of the AI tool.23 Harm from an unrecognised AI error could be grounds for negligence but, in future, it may be negligent not to rely on AI when a vast amount of omics and other data become part of the decision‐making process. If humans start to favour AI‐generated decisions (known as “automation bias”), this may lead to errors of omission, where AI errors are not recognised or are disregarded, or to errors of commission, where the AI decision is accepted despite other evidence to the contrary.17

A more global ethical issue is the just allocation of resources. Are AI‐based tools only going to be available in higher income countries?13 If they do become available in countries where there is a shortage of human clinical resources, will an over‐reliance on AI lead to the pitfalls of automation bias? Even in higher income countries, there could be disparity between availability in the private and the public sectors.16

In conclusion, AI tools have enormous potential in supportive care in cancer as clinical decision support that can analyse vast quantities of data and deliver personalised solutions, as well as providing information and support to patients through chatbots. However, patient acceptance will depend on addressing the ethical challenges; thus, there is a need for global standards for governance and for assessing the impact of the use of AI on patient‐related outcomes.

 
