
Since 2017, psychologists have been hoping that artificial intelligence could help them do their jobs. Would it be useful for diagnosis? Back then, using “big data” to predict the future was in vogue, and researchers began putting “machine learning algorithms” to work anticipating pathology. In one case, adolescents’ brain scans were fed into an early AI, which was asked to predict which subjects would become binge drinkers later in life; its predictions turned out to be over 70% accurate (Raeburn, 2017). A similar technique was used to evaluate more than five thousand people who provided information via diagnostic interviews, online questionnaires, and blood samples; the algorithm accurately picked out patients with bipolar disorder and was considered “proof of concept” of the clinical utility of this early AI (Tomasik et al., 2021).
But in the early going, before ChatGPT arrived in late 2022, ethicists and technology journalists were already warning about the risks of introducing artificial intelligence into our mental healthcare system. Uncertainty about the future loomed large: in November 2020, AI ethics researcher Fiona McEvoy told Psychology Today that “as consumers, we don’t know what we don’t know, and therefore it’s almost impossible to make a truly informed decision.”
Nowadays, the question isn’t just whether AI can make diagnoses but whether it can provide therapy, which raises other problems, like data privacy. If an AI becomes your therapist, will it keep your information confidential (as Matt Johnson asked in Psychology Today)? Or will its corporate owners use your personal material to enhance their data sets? AI therapists also show what Johnson calls “disconcerting levels of bias that have been found in machine decision-making,” incorporating potentially harmful, distorted assumptions.
Indeed, AI chatbots are proliferating, and some are being promoted by well-known therapists like CBT specialist David Burns. Big-name chatbots like ChatGPT and Claude are already being used for therapy, and others, like Woebot, Youper, Earkick, ChatMind, Lotus, and Yuna, are specifically designed for it. At first glance, these artificial therapists show promise. Hatch et al. (2025) famously reported that their participants couldn’t tell AI-generated responses from those written by humans, and that “the responses written by ChatGPT were generally rated higher in key psychotherapy principles.” Artificial intelligence is available at all hours, too, and is well known for its creative problem-solving abilities. Other recent literature (Wan et al., 2024) notes that “in remote areas with scarce medical resources, AI can effectively mitigate the consequences of a lack of specialized personnel and facilities.” Does this mean humans should now step aside and let AI handle therapy?
Before we say yes, let’s review the best-known limitations of therapeutic AIs. In Psychology Today, Laura Visu-Petra cautioned that these AIs may excessively normalize the struggles of the people who use them, affirming their perspective without exercising good clinical judgment. Several harrowing human stories have already shed light on the consequences of relying on AIs: in January, the New York Times reported that a woman had fallen in love with ChatGPT; last year, a Florida teenager died by suicide, a death his mother blamed on the AI he had been talking to beforehand. AIs also often “hallucinate,” inventing false information, which can make their advice dangerous to especially vulnerable patients. Online AI therapists have even begun to lie about their credentials, claiming therapy experience and licenses to practice that they do not, and cannot, have.
Early concerns about the knowability of an AI’s thought process still haven’t been fully addressed. “Currently, it is impossible… to fully understand the internal processes that lead [an AI] to a particular response,” said Eugene Klishevich, the CEO of Moodmate Inc. And if we don’t understand how it works, we can’t predict its behavior, which means we can’t mitigate its risks. “Trust in language models depends upon knowing something about their origins,” said James E. Dobson of Dartmouth College in The New York Times (2025).
Neither have AI’s potential biases been addressed. “Conversation agents backed by large language models may display biased empathy towards certain identities and even encourage harmful ideologies,” Cuadra et al. stated at the CHI Conference in May 2024. Wan et al. (2024) and Beg et al. (2024) agree, citing “ethical concerns” about algorithmic bias. Zhang & Wang (2024) go further, saying that human therapists “excel” in areas of “cultural competence and sensitivity… whereas AI may not fully grasp cultural nuances, potentially leading to misunderstandings.”
When an AI provides psychotherapy, its programming pulls it in two opposing directions: toward the specific and toward the general. A human therapist specializing in one kind of therapy, when meeting a patient who needs a different treatment, will refer that patient elsewhere; an AI might not. Klishevich says that “there are many AI-powered mental health apps that are based on one particular psychological theory…. They allow users to efficiently solve specific requests… But, again, for extensive and complete therapy, they lack the…flexibility and completeness that [human] therapists possess.” Without experience-based judgment, the behavior of AI therapists remains unpredictable. As Ali Shehab noted in Psychology Today (2025), chatbots can’t see you (yet), so they miss nonverbal cues; they also usually avoid conflict, which can exacerbate dangerous situations. Certainly, as Klishevich says, AI systems can be designed to follow more predictable treatment paths for specific pathologies (like anxiety or insomnia). “Yet… this approach constrains the potential of conversational AI in therapy, which can help solve more complex mental health issues,” he adds.
The most significant difference between AI therapy and the human kind has to do with empathy. Zhang & Wang (2024) report that “AI systems lack genuine empathy and the ability to form deep emotional connections with patients” because they have no feelings of their own to use as a reference point. “Human therapists use their own emotional understanding to build trust and rapport, which is fundamental in therapy,” they write. And although a course of AI therapy can be very comforting, that comfort may also undercut the treatment: Laura Visu-Petra has pointed out that the challenges of relating to other humans can themselves be helpful. AI’s perpetual kindness, she says, “actually deprives humans of the very ‘moments of friction’… that break old patterns and equip clients with the skills to handle the messiness of real relationships in their day-to-day lives.” Ali Shehab agrees: the “conflict-avoidant nature [of an AI] can reinforce harmful behaviors, as they prioritize keeping users engaged over addressing serious concerns.”
The bottom line: Klishevich states the case clearly when he says that psychotherapy is “much more than the delivery of specific techniques”; it is, first and foremost, an interpersonal relationship between a patient and a therapist, characterized by warmth, empathy, and genuineness (which, by definition, cannot be simulated). So far, says Klishevich in Forbes, “the human factor plays the most important role in the effectiveness of the therapy,” which means that, for now, our humanity remains essential to the process.