Analysis: Chatbots for Mental Health Care Are Booming, But There’s Little Evidence They Help




KFF Health News

Over the past few years, 10,000 to 20,000 apps have flooded the mental health arena, trying to “disrupt” conventional treatments. The frenzy around AI innovations like ChatGPT is driving claims that chatbots can provide mental health care.

The numbers explain why. The stress of the pandemic has left millions more Americans seeking treatment. At the same time, the United States has long had a shortage of mental health professionals. More than half of all counties lack psychiatrists. Given the Affordable Care Act mandate for insurers to provide equal mental and physical health coverage, there is a wide gap between supply and demand.

For entrepreneurs, that translates into a huge market opportunity. At the South by Southwest conference in March, where medical startups showcased their products, apps and machines were on offer that promised to diagnose and treat all kinds of illnesses in place of doctors and nurses, and a near-religious conviction spread that AI could rebuild health care.

Unfortunately, in the mental health field, evidence of effectiveness is lacking. Few of the many apps on the market have independent outcomes research showing that they help, and most have not been scrutinized by the FDA. Though marketed to treat conditions such as anxiety, attention-deficit/hyperactivity disorder, and depression, or to predict suicidal tendencies, many warn users (in small print) that they are “not intended to be medical, behavioral health, or other health care services” or “not an FDA-cleared product.”

There are good reasons to be cautious in the face of all this marketing.

A few decades ago, Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology who is considered one of the fathers of artificial intelligence, predicted that while AI might never make a great therapist, it could be made to sound like one. In fact, his original AI program, created in the 1960s, was a psychotherapist named ELIZA, which used word and pattern recognition combined with natural language programming to sound like a therapist.

Woman: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Woman: He says I’m depressed much of the time.
ELIZA: I am sorry to hear that you are depressed.
Woman: It’s true. I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
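
Weizenbaum’s point is easier to appreciate once you see how little machinery an exchange like this requires. The sketch below, in Python, is purely illustrative and not Weizenbaum’s actual program: the rules and pronoun reflections are invented for this example, but they show the kind of word and pattern recognition that can mirror a patient’s statement back as a question.

```python
import random
import re

# A toy, illustrative sketch of ELIZA-style pattern matching (not Weizenbaum's
# actual code): each rule pairs a regular expression with canned response
# templates, and matched fragments are mirrored back at the user.
RULES = [
    (re.compile(r"my (.*) made me (.*)", re.I), ["Your {0} made you {1}?"]),
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"i feel (.*)", re.I), ["I am sorry to hear that you feel {0}."]),
]

# Swap first and second person so the reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    statement = statement.strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            fragments = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*fragments)
    return "Please tell me more."  # fallback when no pattern matches

print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here?
print(respond("I feel depressed much of the time."))
# -> I am sorry to hear that you feel depressed much of the time.
```

Feeding the woman’s opening line into respond() produces the same kind of mirrored reply ELIZA gave: substitution into a template, not understanding.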

Although hailed as a triumph for AI, ELIZA’s “success” terrified Weizenbaum, whom I once interviewed. He said students would interact with the machine as if ELIZA were an actual therapist, even though what he had created was, in his words, a “party trick.”

He foresaw the evolution of much more sophisticated programs like ChatGPT. But “the experience a computer can have under those circumstances is not a human experience,” he told me. “Computers, for example, do not experience loneliness in any sense that we understand it.”

The same applies to anxiety and ecstasy, emotions so neurologically complex that scientists have not been able to pinpoint their neural origins. Can a chatbot achieve transference, the flow of empathy between patient and doctor that is central to many types of therapy?

“The core tenet of medicine is that it’s about human relationships, and AI can’t do that,” said Bon Ku, director of the Health Design Lab at Thomas Jefferson University and a pioneer in medical innovation. “I have a human therapist, and that will never be replaced by AI.”

Ku said he would rather see AI used to ease practitioners’ tasks such as record keeping and data entry, “giving more time for humans to be connected.”

While some mental health apps may ultimately prove valuable, there is also evidence that some can be harmful. One researcher noted that some users faulted these apps for their “scripted nature and lack of adaptability beyond textbook cases of mild anxiety and depression.”

It may be tempting for insurers to offer apps and chatbots to meet mental health parity requirements. After all, that would be a cheap and easy solution compared with the difficulty of assembling a panel of human therapists, especially since many therapists do not accept insurance because they consider insurers’ payments too low.

Seeing AI flood the market, the Labor Department announced last year that it would step up its efforts to make sure insurers comply with mental health parity requirements.

Similarly, the FDA said late last year that it “intended to exercise enforcement discretion” over a range of mental health apps, which it reviews as medical devices. So far, none have been approved. And only a few have earned the agency’s breakthrough device designation, which fast-tracks review and study of devices that show potential.

These apps primarily offer what therapists call structured therapy: the patient has a specific problem, and the app responds with a workbook-like approach. For example, Woebot combines exercises for mindfulness and self-care (with answers written by a team of therapists) for postpartum depression. Wysa, another app that has earned a breakthrough device designation, offers cognitive behavioral therapy for anxiety, depression, and chronic pain.

But gathering reliable scientific data on how well app-based treatments work will take time. “The problem is that there is very little evidence at this point for agencies to come to any conclusions,” said Kedar Mate, head of the Boston-based Institute for Healthcare Improvement.

Until we have that research, we won’t know whether app-based mental health care is better than Weizenbaum’s ELIZA. AI may well advance in the years to come, but at this point it is woefully premature for insurers to claim that providing access to an app comes anywhere close to meeting mental health parity requirements.


