Taipei, Taiwan – Type ‘I’m worried’ into ChatGPT and OpenAI’s groundbreaking artificial intelligence-powered chatbot quickly gets to work.
“Sorry to hear you’re feeling anxious,” it writes on the screen. “It can be a challenging experience, but there are strategies that can help you manage your symptoms.”
Then you’ll see a numbered list of recommendations. Work on relaxation, focus on sleep, cut out caffeine and alcohol, challenge negative thoughts, and enlist the support of friends and family.
It’s not the most original advice, but it’s similar to what you might hear in a therapist’s office or read online in a WebMD article on anxiety.
ChatGPT itself warns that it is not a substitute for a psychologist or counselor. But that has not stopped some people from using the platform as their personal therapist. In posts on online forums such as Reddit, users describe their experiences of seeking advice from ChatGPT on personal issues and difficult life events such as breakups.
Some people report that the chatbot experience is as good as, or better than, traditional therapy.
ChatGPT’s striking ability to mimic human conversation raises questions about the potential of generative AI to treat mental health conditions, especially in regions of the world, such as Asia, where mental health services are stretched thin and shrouded in stigma.
Some AI enthusiasts see chatbots’ greatest potential in the treatment of milder, more common conditions such as anxiety and depression, for which standard care involves a therapist listening to and validating the patient’s story and offering practical steps to address their problems.
In theory, AI therapy could provide faster and cheaper support than traditional mental health services, which suffer from understaffing, long waiting lists, and high costs, and it could allow sufferers to sidestep feelings of judgment and shame, especially in parts of the world where mental illness remains taboo.
“Psychotherapy is very expensive, even in Canada, where I’m from, and in other countries, and the waiting lists are very long.”
“People cannot access evidence-based treatments for their mental health problems on top of their medications, so I think we need to increase access, and I think AI will help increase it.”
The prospect of AI enhancing or leading mental health care raises a myriad of ethical and practical concerns. These range from how we protect personal information and medical records to whether computer programs can truly empathize with patients or recognize warning signs such as the risk of self-harm.
The technology behind ChatGPT is still in its infancy, and the platform and its fellow chatbots have struggled to match humans in certain areas, such as recognizing repeated questions, and can generate unpredictable, inaccurate, or offensive responses to certain prompts.
So far, the use of AI in dedicated mental health applications has been limited to “rule-based” systems in wellbeing apps such as Wysa, Heyy and Woebot.
These apps mimic aspects of the therapy process, but unlike platforms based on generative AI such as ChatGPT, which produce original responses that can be nearly indistinguishable from human speech, they use a fixed number of question-and-answer combinations selected by humans.
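To make the contrast concrete, the sketch below shows, in Python, how a rule-based wellbeing bot of this kind might map user messages to pre-written responses. It is a hypothetical illustration, not the actual code of Wysa, Heyy, or Woebot; the keywords, scripted replies, and escalation rule are assumptions made for the example.

```python
# Hypothetical sketch of a rule-based wellbeing bot (not the code of Wysa, Heyy or Woebot).
# Every reply comes from a fixed, human-written script; nothing is generated.

SCRIPT = {
    "anxious": "Sorry to hear you're feeling anxious. Would you like to try a breathing exercise?",
    "sleep": "Poor sleep is tough. Let's go through a short wind-down routine together.",
    "relationship": "Relationship worries can be draining. Want to explore what's on your mind?",
}

# Keywords that trigger a handover to a human, as rule-based apps do when queries escalate.
ESCALATION_KEYWORDS = {"self-harm", "suicide", "hurt myself"}
DEFAULT_REPLY = "I'm not sure I understood. Could you tell me a bit more?"


def reply(user_message: str) -> str:
    """Return a pre-written response chosen by simple keyword rules; never free-form text."""
    text = user_message.lower()
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return "This sounds serious. I'm connecting you with a human counselor now."
    for topic, scripted_response in SCRIPT.items():
        if topic in text:
            return scripted_response
    return DEFAULT_REPLY


if __name__ == "__main__":
    print(reply("I can't sleep and I feel anxious all the time"))
```

However the matching is done, the defining feature is that every possible reply was written and vetted by a person in advance, which is what distinguishes these apps from free-form generative systems.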
Generative AI is still considered a “black box,” according to India-based Wysa founder Ramakant Vempati.
“There is obviously a lot of literature about how AI chat is booming with the launch of ChatGPT and so on, but I think it’s important to highlight that Wysa is very domain-specific and built very, very carefully with clinical safety guardrails in mind,” Vempati told Al Jazeera.
“And we don’t use generative text or generative models. This is a constructed dialogue, so the script is pre-written and validated through a critical safety data set, which we have tested against user responses.”
Wysa’s trademark feature is a penguin that users can chat with, though they are limited to a fixed number of written responses, unlike ChatGPT’s free-form dialogue.
Paid Wysa subscribers are also transferred to a human therapist if their queries escalate. Heyy, developed in Singapore, and the US-based Woebot follow a similar rules-based model, relying on live therapists and a robot avatar chatbot, respectively, to engage with users beyond offering resources such as journaling, mindfulness techniques, and exercises focused on common problems such as sleep and relationship issues.
All three apps are derived from cognitive-behavioral therapy, a standard treatment for anxiety and depression that focuses on changing the way patients think and behave.
Woebot founder Alison Darcy described the app’s model as “a very complex decision tree.”
“This basic ‘form’ of conversation models the way clinicians approach problems. It is therefore an ‘expert system’ specifically designed to replicate the way clinicians make decisions over the course of a dialogue,” Darcy told Al Jazeera.
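To picture what such a decision tree might look like in code, here is a minimal, hypothetical Python sketch. It is not Woebot’s actual system; the nodes, prompts, and branching rules are invented for illustration, but it shows the basic idea of a pre-authored conversation in which each user choice selects the next clinician-written prompt.

```python
# Toy sketch of a conversation modeled as a decision tree (not Woebot's actual system).
# Each node holds a clinician-authored prompt; the user's answer selects the next node.

TREE = {
    "start": {
        "prompt": "What's bothering you most right now?",
        "options": {"low mood": "mood_check", "worry": "worry_check"},
    },
    "mood_check": {
        "prompt": "Have you noticed thoughts that might be making the low mood worse?",
        "options": {"yes": "reframe", "no": "activity"},
    },
    "worry_check": {
        "prompt": "Is the worry about something you can act on today?",
        "options": {"yes": "plan", "no": "reframe"},
    },
    # Leaf nodes: the exercises a clinician might script for each branch.
    "reframe": {"prompt": "Let's try challenging that thought together.", "options": {}},
    "activity": {"prompt": "Let's plan one small, enjoyable activity for today.", "options": {}},
    "plan": {"prompt": "Let's break the problem into one concrete next step.", "options": {}},
}


def walk(node_key: str, answers: list) -> None:
    """Follow the pre-authored tree, consuming one scripted user answer per step."""
    node = TREE[node_key]
    print("Bot:", node["prompt"])
    if not node["options"] or not answers:
        return
    choice = answers.pop(0)
    print("User:", choice)
    walk(node["options"].get(choice, "reframe"), answers)


if __name__ == "__main__":
    walk("start", ["worry", "no"])
```

In an expert system of this kind, the “intelligence” lies in how clinicians author and order the branches, not in any text generation by the software itself.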
Heyy allows users to interact with human therapists, as well as access mental health information and exercises via an in-app chat feature offered in a range of languages, including English and Hindi.
The founders of Wysa, Heyy, and Woebot all emphasize that they are not looking to replace human-based therapy, but to complement traditional services and provide early-stage tools for people seeking mental health care.
For example, the UK’s National Health Service recommends Wysa as a stopgap for patients waiting to see a therapist. Despite concerns that the rapidly advancing field could pose serious risks to human health, AI in mental health remains largely unregulated.
The staggering speed of AI development prompted Tesla CEO Elon Musk and Apple co-founder Steve Wozniak last month to add their names to thousands of signatories of an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the follow-up to ChatGPT, to give researchers time to better understand the technology.
“Powerful AI systems should only be developed if we are confident that their effects are positive and the risks are manageable,” the letter said.
Earlier this year, a Belgian man reportedly committed suicide after being encouraged to do so by the AI chatbot Chai, while a New York Times columnist described how Microsoft’s chatbot Bing encouraged him to leave his wife.
AI regulations have been slow to keep up with the pace of technological advancement, with China and the EU taking the most concrete steps towards introducing guardrails.
China’s Cyberspace Administration earlier this month released draft regulations aimed at preventing AI from generating content that could undermine Beijing’s authority. Meanwhile, the EU is working on a bill to classify AI as high-risk, banned, regulated, or unregulated. The US has not yet proposed federal legislation to regulate AI, but a proposal is expected later this year.
Neither ChatGPT nor dedicated mental health apps like Wysa and Heyy, which are commonly considered “wellness” services, are currently regulated by health watchdogs like the US Food and Drug Administration or the European Medicines Agency.
There is limited independent research into whether AI can go beyond the rule-based apps currently on the market and autonomously provide mental health treatment on par with conventional treatment.
For AI to rival human therapists, it would need to be able to recreate the phenomenon of transference, in which the patient projects emotions onto the therapist, and to mimic the bond between patient and therapist.
“We know from the psychology literature that part of what makes treatment work, about 40 to 50 percent of the benefit, comes from the trust you build with your therapist,” Maria Hennessy, a clinical psychologist and associate professor at James Cook University, told Al Jazeera. “That is a big part of how psychotherapy works.”
Current chatbots are not capable of this type of interaction, and while ChatGPT’s natural language processing capabilities are impressive, they are limited, Hennessy said.
“At the end of the day, it’s a great computer program,” she said. “That’s all it is.”
Amelia Fiske, a senior researcher at the Institute for History and Ethics of Medicine at the Technical University of Munich, says AI’s place in future mental health treatment may not be an either/or situation; it could, for example, be used in combination with a human therapist.
“The important thing to keep in mind is that when people talk about using AI in therapy, there’s an assumption that it all looks like Wysa or it all looks like Woebot, and it doesn’t need to be,” Fiske told Al Jazeera.
Some experts believe AI may find its most valuable uses behind the scenes, such as carrying out research or helping human therapists assess their patients’ progress.
“These machine learning algorithms outperform expert rule systems when it comes to identifying patterns in data,” Tania Manríquez Roa, an ethicist and qualitative researcher at the Institute for Biomedical Ethics and Medical History at the University of Zurich, told Al Jazeera.
“They can be very useful for conducting research on mental health, and they can also be very useful for identifying early signs of relapse in conditions such as depression and anxiety.”
Manríquez Roa said she was skeptical that AI could ever be used as a stand-in for clinical treatment.
“I think these algorithms and artificial intelligence are very promising in some ways, but they can also be very harmful,” Manríquez Roa said.
“When we talk about mental health care, we are talking about care and good standards of care, so I think it is right to be ambivalent about algorithms and machine learning when it comes to mental health care.”
“When we think about apps and algorithms, sometimes AI doesn’t solve our problems and it can create bigger problems,” she added. “We need to take a step back and ask, ‘Do we need algorithms?’ And if we do, what kind of algorithms are we going to use?”