Examining the Possibilities of Using Artificial Intelligence in the Counseling Setting: Part 2

Posted By: Hayley Twyman Brack | Clinical Practice

[Image: a person stands with hands turned upright as a screen with graphics and the letters "AI" appears]

Part 2: Concerns and Limitations

As explored in the previous article outlining the history and possible clinical utility of artificial intelligence in therapy, AI technology has had a role in mental health care and research since the 1960s. However, ethical and evidential concerns remain about its use in therapy. Decision-making capabilities, combined with potential data bias, may lead to ethical pitfalls when AI is used in mental healthcare.

In order for AI technology to formulate an answer or solve a problem, it must have a catalogue of data to compare, contrast, and compute the answer to the question it has been given. If an AI program is tasked with making a diagnostic decision or formulating a therapeutic response, it must have "learned" what a proper decision or response is by having access to a database of accurate decisions or responses. According to the United Nations Educational, Scientific and Cultural Organization (UNESCO)'s Recommendation on the Ethics of Artificial Intelligence, a significant component of using AI ethically is knowing where the AI program is sourcing its data.

The book Ethical Machines by Dr. Reid Blackman explores ethical considerations for utilizing AI. According to the book, because AI programs are created by humans, and all humans have their own biases, it is neither unlikely nor uncommon for AI programs to have been created with bias. For example, in 2017 the Department of Veterans Affairs launched REACH VET, an AI program that aimed to identify veterans' risk for suicidal behaviors. Though psychiatric hospital admissions decreased by 8% after its launch, independent researchers found that the program significantly underestimated the severity of illness in Black patients.

Barriers to receiving mental healthcare, along with the increased likelihood that both the mental and physical health concerns of African Americans will be dismissed by healthcare providers, may lead to insufficient or inaccurate data on the symptoms, severity, and indications of mental health problems. If an AI program makes decisions based on insufficient or inaccurate data, it is more likely to make an inaccurate assessment or response.

Along with potential biases in the data, another ethical concern of utilizing AI is black-box decision-making. The "black box problem" refers to the inability to determine how an AI program arrived at a decision or answer once it has been made. For example, if a human therapist makes a diagnosis, they can usually outline their rationale for the decision (e.g., referencing the DSM-5 or specialized training). However, if an AI system is programmed to make a diagnosis, once the diagnosis is given there is no way to trace how it reached that conclusion. Though AI can be used as a tool in decision-making, UNESCO's Recommendation on the Ethics of Artificial Intelligence discourages using AI as the sole decision-maker without human oversight and recommends that a human always be held accountable for decisions made by AI.

Though preliminary research suggests AI may be helpful in some circumstances related to facilitating treatment or assisting in diagnosis, there have unfortunately been instances of AI programs being unhelpful, and possibly even harmful, in their responses to mental health concerns. In 2022, the National Eating Disorders Association (NEDA) launched Tessa, an AI chatbot that replaced the human operators of the NEDA National Eating Disorder Hotline and was programmed to offer support and education to those experiencing problems related to disordered eating. NEDA disabled the chatbot in June 2023 after the program began giving dieting advice that could exacerbate the symptoms of disordered eating. It is unknown what led the AI program to give such advice, and as of the publication of this article, neither the NEDA hotline nor Tessa appears to be operational.

There have been many promising leads, as well as potential ethical concerns, regarding the use of AI in treatment. However, many questions and uncertainties about its utilization remain unanswered. The final article in this series will explore the research deficit, the lack of regulations, and mixed perceptions of trust when it comes to using AI in mental healthcare.
