Written by guest author and SelfWorks therapist, Ran Xu
If you’ve seen the animated movie WALL-E, you might remember a scene from a future in which humans are floating on hoverchairs, completely reliant on robots and other machines. Although the world may not quite reach the level of dependency depicted in the Pixar movie, a future involving heavy reliance on artificial intelligence appears more probable than ever as AI becomes more entangled with our daily lives.
The implementation of AI to assist humans in professional fields has increased rapidly over the last few years. Most notable is the language-model chatbot ChatGPT. Developed by OpenAI, ChatGPT is an AI program that uses machine learning algorithms to generate responses to questions and prompts. The program can solve math problems, summarize text, create outlines, and even write stories and essays.
Although it is certainly a powerful tool, ChatGPT also raises many concerns for professionals. Among teachers and students, academic dishonesty is a growing worry. Relying solely on AI can also hinder the development of research and critical thinking skills. Beyond the classroom, there have been instances of AI chatbots spreading misinformation, using discriminatory language, and harassing minors and other users (Abrams, 2023).
Furthermore, professionals in several fields have expressed concern over AI's potential to take over their jobs. Creative industries are witnessing a rise in AI-generated music and art. AI-powered image generators have even begun to seep into the realm of social media content creation, as the followings of AI influencers continue to grow despite the lack of a real person behind them. Ambivalence continues to surround the ethics and safety of AI and machine learning technology.
So what does this mean for the mental health field? Researchers and clinicians alike have explored the intersection of technology and mental health. Although AI lacks empathy and conscious awareness, qualities that are integral to therapy, it has plenty to offer clinical practice. It can aid clinicians with administrative tasks, such as structuring sessions, highlighting themes and potential risks, taking notes on patient symptoms, and analyzing assessments. Natural language tracking has proven highly accurate in detecting and classifying mental health concerns such as depression, stress, low energy, and sleep problems (Minerva & Giubilini, 2023). Some researchers have developed an AI program to help clinicians cultivate their skills, detecting various facets of therapy quality from recorded transcripts and providing feedback to therapists (Allen, 2022).
Chatbots may also serve as therapeutic tools for patients. The COVID-19 pandemic left a global mental health crisis in its wake, along with a dire need for more mental health resources. As the demand for mental health care outgrows the supply of practitioners, AI can help close the gap, making mental health support more accessible and affordable for patients.
AI can aid in the delivery of structured interventions, such as cognitive behavioral therapy, for patients who struggle with concerns such as anxiety, sleep problems, or chronic pain (Minerva & Giubilini, 2023). Users have sought advice from chatbots, as well as coping skills and exercises to deal with stress, panic attacks, and other adverse situations (Marr, 2023). AI may also benefit potential patients who experience social anxiety or feelings of shame at the idea of speaking to a practitioner in traditional face-to-face therapy. Indeed, the online disinhibition effect suggests that people are more likely to self-disclose behind a screen than they would be in person (Suler, 2004).
Some chatbots have been developed specifically to deliver mental health treatment. Rather than allowing the AI to generate its own responses, these programs may draw on clinician-approved statements to help users. They can also be integrated into real-life treatment, allowing clinicians to monitor patients' progress. The Trevor Project, a mental health organization serving the LGBTQ+ population, has likewise incorporated AI into its mission: its digital AI services aim to identify high-risk contacts and to train crisis counselors through simulations (Merritt, 2022).
As the field continues to integrate AI into clinical practice, several areas need to be addressed. Cultural competency is imperative in psychotherapeutic treatment, and some researchers worry that AI will not be as inclusive as a clinician experienced in working with diverse populations (Abrams, 2023). There are also concerns surrounding informed consent and the privacy of patients' data. Because AI-generated responses vary considerably, it is difficult to establish clinical validity. This is especially worrisome given that inconsistent responses may harm higher-risk users who struggle with suicidal ideation or self-harming behaviors.
AI has the potential to transform the mental health field. Psychologists regard it not as a replacement for real, face-to-face therapy, but as an additional source of support in a world where artificial intelligence grows ever more pervasive and mental health resources are increasingly sought after.