What if Artificial Intelligence Replaces Human Therapists?

Artificial intelligence (AI) technologies have hit us hard and fast. We love new technology, and the more it appeals to our own vanity, the better. Contemplating the hazards and the benefits, though, I am reminded of the myth of Narcissus. So great was his pulchritude that Narcissus, cursed by the god Nemesis never to be loved back by one he loves, encounters his own image reflected in a pool of water and despairs: he realizes that he is seeing his own reflection, and not that of another. In some versions of the myth, he starves to death; in some, he turns into a flower of unsurpassed beauty; in others still, he dies by his own hand.

Many consider AI an existential threat, one of the likeliest ways, some of our smartest minds fear, that we could bring about our own extinction, essentially by our own hand. Others see AI as our salvation. We don’t know what will happen; AI introduces massive uncertainty. We don’t pause and reflect much when we invent something new and exciting; rather, we rush to adopt it. We’ve seen this with computers and social media. Introducing AI the way we have may be akin to throwing gasoline on a fire. There are legitimate concerns that by the time we realize what’s happening, it will be too late.

Therefore, I was glad to see new work on the ethical issues surrounding the potential wholesale adoption of AI in therapy. In this interview with Nir Eisikovits, a professor of philosophy and founding director of the Applied Ethics Center at the University of Massachusetts, Boston, about his paper, The Ethics of Automating Therapy (Institute for Ethics and Emerging Technologies, 2024), we cover some of the most pressing issues. Eisikovits’s research focuses on the ethics of technology and the ethics of war. The Applied Ethics Center at UMass Boston, in partnership with the Institute for Ethics and Emerging Technologies, is leading a multiyear project on the ethics of AI.

GHB: What is the need for—and what are the potential benefits of—AI therapy?

NE: We are hearing alarming reports of an escalating mental health and loneliness crisis in the aftermath of Covid, and of challenges fueled by unchecked social media use. This crisis highlights the gap between therapeutic demand and supply: there’s just not enough affordable, effective mental health help on offer to meet the need. Some entrepreneurs have entered this space and tried to leverage the remarkable abilities of conversational chatbots to solve this problem by creating AI therapists. As for the potential benefits, right now I am optimistic about the technology’s ability to serve in an assistive capacity: Chatbots can be good at—they are already starting to prove good at—helping with intake, scheduling, follow-up on therapy plans, check-ins, etc. The caveat about all of this is that it’s still early days, and the amount of empirical research on how the chatbots are doing is still limited.

GHB: What are the limits and hazards of AI therapy, and what makes human-based psychotherapy unique?

NE: Even in these assistive capacities, we must make sure that the apps deployed take privacy very seriously, are trained on valuable, reliable, and diversified data, have good guardrails, and deploy professional, human quality control. All of that is expensive. Will companies cut corners on these requirements? More significantly, what about using chatbots as therapists rather than in these auxiliary capacities? Can a chatbot become your actual therapist? I would be very wary. Therapy depends on developing a therapeutic alliance between caregiver and patient—an actual relationship of mutual recognition in which both parties decide together on the goals of their interaction and in which they can care about each other, within boundaries.

In that relationship, important psychological processes can play out (depending on the modality of therapy), such as transference and countertransference. But chatbots are not conscious; they can’t actually feel empathy, only mimic it, and, in short, they can’t have a relationship. Is it enough for a patient to feel like someone—something—cares about them? I would argue that, in the long term, it does more harm than good to a patient’s understanding of, and ability to function in, a relationship.

GHB: Could sufficiently advanced AI ever be better than a human therapist, in selected cases or in general?

NE: AI can be more helpful in CBT (cognitive behavioral therapy) protocols, where it focuses on giving practical guidance. Even in these cases, it must be very carefully guardrailed to make sure that it dispenses competent, empirically based guidance. There’s a well-known “hallucination” problem with earlier versions of all chatbots [GHB: in machine learning, “hallucinations” refer to a model generating false, potentially dangerous, or misleading output], but it’s getting better. Even with CBT, though, the relationship of trust between patient and therapist is crucial for clients’ adherence and motivation. And sometimes you just need to like someone in order to listen to them. So we need to ask ourselves whether we can trust or like a chatbot. Maybe we can. Maybe we just think, incorrectly, that we can, because of our tendency to anthropomorphize technology.

GHB: What do you recommend to ensure that we proceed wisely?

NE: To recap some of what I said before, I think we should focus on (and the profession should feel less suspicious of) adjunctive uses of AI—treating it as a capable administrative assistant. I think any use of the tech to replace the actual human relationship at the heart of psychotherapy should be viewed with heightened scrutiny, not because of the guild interests of therapists, but because there is still something about human relationships that is impossible to replicate technologically, even if some of the people interacting with chatbots feel more satisfied by those interactions than by their real-life ones. The solution to that is not necessarily to celebrate the technology that makes them feel that way but to help people improve their capacities for intimacy and relating. That, of course, requires a structural investment in the affordability of mental healthcare, and, at least in the United States, that’s a tall order. So, we might be left with the question of whether chatbot therapy is better than no therapy. Your readers will have to make up their own minds about that.

