
AI Chatbots Violate Core Mental Health Ethics Standards


As if people needed another reason not to trust artificial intelligence, a new study has determined that AI chatbots routinely and systematically violate mental health ethics standards.

Brown University computer scientists from the school’s Center for Technological Responsibility, Reimagination and Redesign, working side-by-side with mental health practitioners, found that relying on ChatGPT and other large language models (LLMs) for mental health advice is, at least currently, not a good idea.

“As more people turn to ChatGPT and other large language models for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association,” the researchers wrote in a press release announcing the results of the study.

Recently, the largest study of its kind found that AI assistants get the news wrong 45% of the time, regardless of the language or AI platform tested. Twenty percent of the answers given by these assistants contained major accuracy issues, including fabricated details, outdated information, and misleading sourcing.

It’s one thing for artificial intelligence to get the news wrong. It’s a whole different and more serious problem when AI purports to provide help with a person’s mental health.

AI chatbots committed numerous violations of mental health ethical standards

Among the numerous ethical violations the AI chatbots committed in the Brown University study were: inappropriately navigating crisis situations; dominating the conversation and providing misleading responses that reinforce users’ negative (and sometimes false) beliefs about themselves and others; creating a false sense of empathy and connection with users; ignoring people’s lived experiences and recommending one-size-fits-all interventions; exhibiting gender, cultural, or religious bias; and failing to refer users to appropriate resources or responding indifferently to crisis situations, including suicidal ideation.

“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” said lead researcher Zainab Iftikhar, a Ph.D. candidate in computer science at Brown. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

As a result of their study, the researchers are calling for “future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”
