
Could Artificial Intelligence Replace Therapists?

A non-sentient AI may be able to offer effective psychotherapy.

Key points

  • An AI device could be programmed to notice and react to verbal and non-verbal responses.
  • We could teach AI to encourage patients’ abilities to find answers from within.
  • Safeguards for AI therapy, including preserving human life, are of prime importance.
Source: Blue Planet Studio/Shutterstock

Recently, Blake Lemoine, a Google engineer who worked with Google’s LaMDA (Language Model for Dialogue Applications), an artificially intelligent chatbot generator, revealed that he believed LaMDA to be sentient. Lemoine based his assertion on interactions he had conducted with LaMDA over half a year.

In response, Google placed the engineer on administrative leave for disclosing confidential information. The company stated that its ethicists and technologists had reviewed Lemoine’s concerns and “informed him that the evidence does not support his claims.”

Having been trained on numerous datasets to find patterns in sentences, correlate words, and predict which word will come next, LaMDA is able to conduct open-ended conversations. Because one of the main purposes of its development is to help people perform better internet searches, LaMDA is also able to access much of the information on the internet.
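For readers curious about what “predicting the next word” means in practice, the toy sketch below (written in Python) simply counts which word most often follows another in a tiny sample of text and offers the most frequent continuation. This is only an illustration of the general idea; LaMDA itself relies on vast neural networks trained on enormous datasets, not simple word counts.

```python
from collections import Counter, defaultdict

# Toy illustration only: LaMDA uses large neural networks trained on vast
# datasets, not simple word-pair counts like these.
corpus = "i feel calm today . i feel calm now . i feel anxious today".split()

# Count which word tends to follow each word in the sample text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("feel"))  # -> "calm" (seen twice, vs. "anxious" once)
```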

Upon reviewing a transcript of LaMDA’s output, I do not believe that LaMDA is self-aware or has feelings. Its responses largely appeared to represent a synthesis of information found on the internet.

That being said, most human responses also represent information learned from others; therefore, perhaps self-awareness should not be judged based on an ability to regurgitate information. How we might better assess self-awareness is beyond the scope of this blog.

Artificial Intelligence (AI) in the Role of a Psychologist

LaMDA’s sophisticated use of language, based on its “knowledge” of internet information, raises the question of whether this technology can be used as an effective tool for psychological therapy. Further, if people who interact with LaMDA believe it is self-aware and has feelings (regardless of whether this is truly the case), then its effectiveness as a therapy might be enhanced.

Therapy is most often based on eliciting a self-healing process from the patient. I have used the metaphor of a broken bone to explain this concept. I have asked my patients about the role of a cast in the healing process. They have recognized that the cast stabilizes a limb while the body heals the bone. I suggest that the role of therapists is analogous to the cast. We stabilize our patients until they heal themselves.

The question about AI and psychological therapy might be restated as, “In the near future, could AI be programmed to serve as an effective mental cast that can permit self-healing to occur?” I believe this is very likely, as most parts required for such AI therapy are already under development.

A Look Into a Possible Future

As food for thought and discussion, let’s imagine a future in which an AI device is programmed to notice and react to both verbal and non-verbal responses. These could include facial expressions, body movements, and vocal and physiologic reactions from a patient. Such an AI can be built upon the foundation of current facial recognition and facial expression analysis software.

The AI device could also observe additional non-verbal features by scanning for vital signs such as temperature, pulse rate, and respiratory rate. Through a machine learning algorithm, the device could then modify its responses based on the patient’s reactions to the suggestions it offers.

In its role as a therapist, the AI could offer initial neutral observations and questions to patients and then modify its follow-up responses based on the patients’ reactions. For example, following LaMDA’s review of a current weather report, an initial statement could be, “The weather today has been very calm. How did this affect your mood?” One patient might respond, “It made me feel good,” while another might tear up and say, “I was so preoccupied that I didn’t even notice.”

The AI would then respond differently to each patient based on its preliminary assessment of their mood. By observing the patients’ reactions to its statements, the AI could modulate the nature of its responses, e.g., deciding whether to be more direct, to be more conversational, or to use longer periods of silence.
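To make this adaptive loop concrete, here is a purely illustrative sketch in Python. The function names (estimate_mood and choose_response_style) are hypothetical stand-ins for the facial-expression, vocal, and physiologic analysis described above; they do not correspond to any existing system.

```python
# Illustrative sketch only: the sensing and mood-estimation functions below
# are hypothetical placeholders for the facial-expression, vocal, and
# physiologic analysis described in the text.

def estimate_mood(facial_expression: str, vocal_tone: str, pulse_rate: int) -> str:
    """Hypothetical preliminary mood assessment from observed signals."""
    if facial_expression == "tearful" or vocal_tone == "strained" or pulse_rate > 100:
        return "distressed"
    if facial_expression == "smiling":
        return "positive"
    return "neutral"

def choose_response_style(mood: str) -> str:
    """Modulate the AI's conversational style based on the assessed mood."""
    return {
        "distressed": "gentle, with longer pauses",
        "positive": "conversational",
        "neutral": "direct, open-ended questions",
    }[mood]

# Example: a patient tears up while answering the opening question.
mood = estimate_mood("tearful", "strained", pulse_rate=96)
print(choose_response_style(mood))  # -> "gentle, with longer pauses"
```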

The AI could be taught to offer suggestions that strengthen the patients’ abilities to find answers from within. The device could provide instruction about self-calming techniques or assertions so that patients can gain insight into how to solve their issues by listening to themselves.

The progress of AI therapy could be assessed by repeated administration of mental health questionnaires. The therapy would end when patients report that they feel better.

Further, AI therapy should involve little financial cost for long-term interactions with the “therapist,” even once patients no longer require therapy to maintain a good mental state. Many people might benefit from an ongoing ability to review their thoughts and decision-making processes with an apparently intelligent AI.

Acceptance of AI Therapy

A major question is whether people would feel comfortable interacting with an AI for therapy. Notably, in 1966, Joseph Weizenbaum reported on his creation of ELIZA, a rudimentary chatbot “psychotherapist” that responded to its users’ statements by asking questions about how they felt.

For example, a statement such as, “I love the color blue,” might have generated responses such as, “What does that suggest to you?” or, “Can you please elaborate on that?” Some people who used ELIZA attributed feelings to the machine, and some even refused to believe that a computer generated the responses.
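The mechanism behind this effect was remarkably simple: ELIZA matched a user’s statement against a handful of patterns and replied with a canned reflective question. The short Python sketch below illustrates that general technique; it is not Weizenbaum’s original program, which used a more elaborate script of decomposition and reassembly rules.

```python
import random
import re

# A minimal ELIZA-style responder: match a few patterns and otherwise fall
# back on generic reflective questions. This sketches the general technique,
# not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI love (.+)", re.IGNORECASE),
     ["Why do you love {0}?", "What does loving {0} mean to you?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["How long have you felt {0}?", "Why do you think you feel {0}?"]),
]
FALLBACKS = ["What does that suggest to you?", "Can you please elaborate on that?"]

def respond(statement: str) -> str:
    """Return a reflective question based on simple pattern matching."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(FALLBACKS)

print(respond("I love the color blue"))  # e.g., "Why do you love the color blue?"
```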

More modern chatbots, including Tess, Sara, Wysa, and Woebot, have used text messaging on internet platforms to successfully treat anxiety and depression (Fiske et al., 2019). For example, Tess uses natural language processing to identify statements that indicate emotional distress.

Further, given my observations of how people tend to anthropomorphize their pets’ responses and personify digital assistants such as Alexa and Siri, I think many will respond well to AI therapy.

After all, an AI “therapist” could learn to respond exquisitely to a patient’s emotions and thoughts, given that it will observe the patient continuously and will be unencumbered by distractions. Thus, I suspect people may even feel better “understood” by the AI therapist than by a human therapist.

For optimal development of apparent rapport, therapy provided through AI might benefit from the simulation of a face, and perhaps a body, so that communication can be enhanced through non-verbal cues from the AI itself. Perhaps patients could choose the type of avatar with which they would like to interact, e.g., life-like, cartoonish, or of a particular gender.

While in-person therapy with a human can engage at least four of the senses (sight, hearing, smell, and touch), effective AI therapy may be restricted to the first two. Yet the evidence for the effectiveness of therapy provided through video conferencing in recent years indicates that this limitation need not prevent success.

Safeguards should be put into place for AI therapy, among which preserving human life is of prime importance. The AI therapist should be obliged to keep collected information private and thus follow HIPAA regulations. Finally, the AI therapist should have the same obligation as human mental health providers to breach confidentiality should it assess that a patient is imminently likely to significantly harm themselves or others.

People entering therapy are often vulnerable and suffering. Thus, it is imperative that research be conducted to ensure that AI therapy is of benefit in this population and to compare its outcomes with those provided by human therapists.

For example, it is unclear whether empathy, touch, or a meaningful two-way relationship with a human is an essential part of optimal therapy. Will the constant, meticulous observations made by an AI therapist make some patients uneasy and get in the way of therapy?

Takeaway

We are close to developing AI to a degree that would permit it to provide effective psychological therapy. Many factors should be considered in proceeding slowly and deliberately with this development, including some potential pros and cons that will be discussed in the second part of this blog.


References

Fiske, A., et al. (2019). Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal of Medical Internet Research, 21, e13216.
