Best New Ideas in Health Care: How AI ‘therapists’ and machine learning could be the answer to affordable mental health care

“Alexa, I’m depressed.” The idea that a tabletop virtual assistant such as Alexa or Siri knows or cares how you’re feeling sounds straight out of some neurotic comedy. Today, at least.

Mental health is a brave new frontier for artificial-intelligence and machine-learning algorithms driven by “big data.” Before long, if some forward-looking psychologists, doctors and venture-capital investors have their way, your therapist could be a virtual human able to listen, counsel and even bill for that 50-minute hour.

“AI will be a game-changer,” says James Lake, a California psychiatrist and author of a series of self-help e-books that show individuals how to build a broad-based, integrative plan for their mental health. AI tools, he adds, will allow mental-health providers to “optimize patient care on the basis of each individual’s unique history, symptoms, needs, financial constraints and preferences.”

What if you need an antidepressant or another psychiatric medication? AI can help a psychiatrist pinpoint the drug or drug class your body is most likely to respond to, shortening or even eliminating the trial and error (and its side effects) that frustrates both patients and doctors. Algorithms can also estimate, based on someone’s age, gender, responses to questions and other factors, whether that person is at risk of attempting suicide; Facebook, for example, uses an algorithm that flags a post if it contains words suggesting suicidal thoughts or self-harm.
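
Facebook has not published how its system works; purely to make the idea concrete, here is a hypothetical, bare-bones keyword flagger in Python. Production classifiers are trained on large datasets and weigh context and behavior, not just a phrase list.

```python
# Hypothetical illustration only -- not Facebook's actual system.
# A minimal flagger that checks a post against known risk phrases.

RISK_PHRASES = {"want to die", "end it all", "kill myself", "no reason to live"}

def flag_post(text: str) -> bool:
    """Return True if the post contains language associated with self-harm."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

for post in ["Great hike today!", "Some days I just want to end it all."]:
    if flag_post(post):
        print(f"Flagged for human review: {post!r}")
```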

As mental disorders rise (the Lancet Commission projects a $16 trillion cost to the global economy over the next decade), caring for patients with precision is a Holy Grail for mental-health professionals. Current diagnostic and treatment methods, however skilled and insightful the clinicians applying them, cannot fully capture the unique needs and complexity of every patient, at least not without time, money and a willingness that many people simply do not have. AI-based therapies have the potential to be faster, cheaper and therefore more effective, which in turn can encourage patients to continue their counseling.

Data-based precision mental health also appeals to cost-conscious employers and insurance plans. Startups with traction in this area include Quartet Health, backed by GV (formerly Google Ventures), a unit of Alphabet; Quartet has partnered with health-care systems and health plans in several U.S. states, with a particular focus on underserved Medicaid patients. Another startup, Lyra Health, matches employees to health professionals using big data to diagnose mental conditions, and counts eBay and Amgen among its customers.

“We can predict whether someone would recover if they took a specific treatment,” says Adam Chekroud, a clinical psychologist and co-founder of Spring Health, a startup whose predictive models detect mental states and recommend appropriate treatment.

Some large companies, including Gap and Amazon’s Whole Foods, use Spring Health’s technology for their employees. After answering questions about personal problems and behaviors, employees are directed to an in-network provider, who is given the specific treatments that Spring Health determines are most likely to help that patient. Adds Chekroud: “When people did what was predicted [for them], they were twice as likely to recover.”
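
Spring Health’s models are proprietary, and the article reports only their general shape. As a hedged illustration of that shape (train one predictive model per candidate treatment, then recommend whichever predicts the highest recovery probability), a sketch on entirely synthetic data might look like this; the features, outcomes and treatment names below are invented:

```python
# Illustrative sketch on synthetic data -- not Spring Health's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented intake features (e.g. symptom severity, sleep quality, stress).
X = rng.normal(size=(200, 3))

# One model per candidate treatment, trained on invented outcomes
# where 1 means "this patient recovered on this treatment".
models = {}
for treatment in ["CBT", "medication", "combined"]:
    y = rng.integers(0, 2, size=200)
    models[treatment] = LogisticRegression().fit(X, y)

def recommend(patient):
    """Recommend the treatment with the highest predicted recovery probability."""
    probs = {name: model.predict_proba([patient])[0, 1]
             for name, model in models.items()}
    return max(probs, key=probs.get), probs

best, probs = recommend([0.5, -1.0, 0.2])
print(best, {name: round(p, 2) for name, p in probs.items()})
```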

Encouraging outcomes are also apparent from smartphone-based “chatbots” that use AI to deliver cognitive behavioral therapy, or CBT, which can help emotionally troubled people build life skills and self-compassion. CBT is a proven way to treat depression, and CBT-based chatbots, or “conversational agents” — which simulate human conversation through voice and text — have been shown to reduce depressive episodes in users after just two weeks of daily interaction.

For example, Woebot, a chatbot that Stanford University clinicians originally developed for college students, is now a downloadable app with venture funding. Woebot introduces itself as an “emotional assistant” that is “like a wise little person you can consult with during difficult times, and not-so-difficult times.” Its chatty interface is friendly and colloquial, gently probing about feelings and habits. “This allows me to find patterns that are sometimes hard for humans to see,” Woebot explains.
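
Woebot’s clinical scripts and language models are not public. The miniature, hypothetical sketch below only suggests the mechanics of a scripted agent: match a message against all-or-nothing words, a classic CBT target, and answer with a reframing prompt. A real conversational agent layers natural-language understanding and clinician-written dialogue on top of anything this simple.

```python
# Hypothetical sketch of a scripted, CBT-flavored reply -- not Woebot's code.

DISTORTION_PROMPTS = {
    "always": "That sounds like all-or-nothing thinking. Can you recall one exception?",
    "never": "'Never' is a strong word. Has it ever gone differently, even once?",
    "worthless": "That's a harsh label. What would you say to a friend who felt this way?",
}

def reply(message: str) -> str:
    """Match the message against distortion triggers; fall back to an open prompt."""
    words = message.lower().split()
    for trigger, prompt in DISTORTION_PROMPTS.items():
        if trigger in words:
            return prompt
    return "Tell me more about how that felt."

print(reply("I always mess things up"))  # prints the all-or-nothing reframing prompt
```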

Chatbots like Woebot aim to tap into the root of psychotherapy — a therapeutic relationship of trust, connection, and a patient’s belief that a provider understands and cares about their feelings, thoughts and experiences. Chatbots aren’t yet so sophisticated, but Woebot reminds you that it will check in every day, and a session ends with the app offering an element of positive psychology, such as practicing gratitude.

Clearly, AI has the potential to reshape mental-health care in powerful and meaningful ways, if people choose to get help or are able to find it. One of every five adults in the U.S. experienced mental illness in 2018, according to the U.S. Department of Health and Human Services, but fewer than half of them received treatment. One reason is that psychiatrists and psychologists in the U.S. are concentrated in urban areas, mostly in the Northeast and on the West Coast. More than three times as many psychologists practice in New England, for example, as in the Gulf states. Many rural counties have no mental-health professionals at all.

Perhaps the biggest obstacle to treatment is an age-old cultural stigma around mental illness and therapy. The irony is that while mental disorders are now more out in the open, many patients and their families privately carry shame and embarrassment about them. Add financial constraints and, for some, a lack of health insurance, and it’s clear why many people who need psychological counseling go untreated. The gap in care is global: the World Health Organization reports that one in four people worldwide will suffer from psychological distress at some point in life, yet two-thirds of them will never seek help.

“Stigma is something we can do a great deal about,” says Bandy Lee, a psychiatrist on the faculty of the Yale School of Medicine’s Law and Psychiatry Division. “It takes very little to acquire an attitude of openness and educate the public in ways that could change the way we view mental illness. This has happened with cancer and AIDS — we don’t immediately prejudge the person because of the illness and we don’t consider the illness to define the person.”

Using AI-based tools to destigmatize mental illness, lower treatment costs and extend care to people who have limited access to it would be a giant leap toward mainstreaming mental health. Virtual therapists may also reach therapy-resistant people: just as patients seem more willing to open up to a chatbot, studies show that people reveal more personal, intimate information “face-to-face” with a humanoid machine than they do to a live human. The AI in the virtual human, in turn, is designed to read a person’s intonation, movements and gestures for clues to their mental state.

Could machines that look, act and sound human replace psychologists and psychiatrists? Probably not, at least while the underlying technology and infrastructure remain immature, but many clinicians fear this future nonetheless. Virtual therapists, after all, are available anytime, anywhere. “They’re never tired, never fatigued; they can build a giant database of what you say and how you say it,” says Skip Rizzo, director for medical virtual reality at the University of Southern California’s Institute for Creative Technologies.

Rizzo and his colleagues are leaders in the research and development of virtual humans for mental health treatment, but he insists that the technology exists solely to alleviate the shortage of providers. “We’re not making a ‘doc in a box,’ ” Rizzo says. “We’re helping a person to put a toe in the water in a safe, anonymous place where they can explore their issues.”

If all this sounds like science fiction, it isn’t. Yet, as in a dystopian science-fiction story, the known rewards of AI carry unknowable risks. AI poses sobering ethical questions that psychologists and psychiatrists, along with the data scientists and companies creating the technology, are just beginning to confront. Who owns your mental health? Who has access to the data? What happens if the data is hacked? Might your record be used against you by employers, or by governments?

David Luxton, a clinical psychologist and an authority on the ethics of artificial intelligence in behavioral- and mental-health care, is concerned about these questions and more. “Who is controlling the technology?” he says. “I would be reluctant to provide private information about my mental state on a mobile app or the internet. How do you know what the company is going to be doing with that information?”

More chillingly, machine-learning algorithms can be biased. Algorithms look for patterns; that’s how Amazon and other retailers can suggest what to buy, given what you’ve purchased or shown interest in. But algorithmic patterns can be harmful, producing systematic errors that, for instance, favor one ethnic or cultural group over another, or that judge your emotional state from incomplete data and inaccurate assumptions. Before long the pattern becomes self-reinforcing, a kind of automated confirmation bias, leading to unfair and unfortunate results.
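
That feedback loop can be made concrete with a deliberately stylized numerical toy (this models no real product; the starting rates and the update rule are invented purely to show the dynamics). Suppose a screening model initially flags group A more often than group B even though true need is equal, and each retraining round weights groups by the labeled data they produced:

```python
# Stylized toy of a self-reinforcing training loop -- models no real system.
rate = {"A": 0.6, "B": 0.4}  # initial flag rates; assume true need is equal

for step in range(5):
    # New labeled examples arrive in proportion to each group's flag rate,
    # and the retrained model also inherits the old rate as a prior, so
    # each group's weight compounds (rate squared, then renormalized).
    weighted = {group: r * r for group, r in rate.items()}
    total = sum(weighted.values())
    rate = {group: w / total for group, w in weighted.items()}
    print(step, {group: round(r, 3) for group, r in rate.items()})

# The 0.6/0.4 starting gap drifts toward nearly 1.0/0.0 within a few rounds.
```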

Given these dangers, the time is now, Luxton says, to revise and update codes and practices to ensure that AI-based mental health tools are used ethically, with particular attention to privacy and transparency rules and laws. “We’ve got this stuff in our hands,” Luxton adds. “Where are we going to be in the next 10 years?”

If you or someone you know is experiencing a mental-health crisis, the National Suicide Prevention Lifeline is available at any time at this toll-free telephone number: 1-800-273-8255.

Jonathan Burton is an editor and reporter at MarketWatch.
