How AI can improve mental health

November 7, 2024

Clinicians and researchers are putting the technology to use to improve training, identify at-risk individuals, and potentially save lives

For years, Crisis Text Line, a nonprofit that provides free, text-based mental health support and crisis intervention, has seen people start training to become volunteers but not formally complete the program. Its team suspected many would-be volunteers dropped out because they didn’t feel confident in their ability to help people in crisis.

Using scenarios written by its clinical team, Crisis Text Line developed an AI conversation simulator that lets volunteers practice conversations about bullying, arguments with family, or suicidal intentions.

“One of the nice things about this is that crisis counselors can experience realistic conversation content and timing,” said Elizabeth Olson, a research scientist at Crisis Text Line, during a recent event at the Johns Hopkins University Bloomberg Center.
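The article does not describe how the simulator is built. As a rough illustration only, the sketch below shows how such a role-play trainer could be assembled around a general-purpose chat model; the OpenAI Python client, the model name, and the scenario text are all stand-ins, not Crisis Text Line’s actual system.

```python
# Minimal sketch of a crisis-conversation training simulator.
# Assumptions (not from the article): the OpenAI Python client as a
# stand-in LLM backend and an invented example scenario.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A scenario a clinical team might author: persona plus situation.
SCENARIO = (
    "You are role-playing a texter seeking help. Persona: a teenager "
    "being bullied at school who is reluctant to open up. Stay in "
    "character, reply in short text-message style, and never break role."
)

def run_training_session() -> None:
    """Loop: the trainee types a counselor message, the model replies in character."""
    messages = [{"role": "system", "content": SCENARIO}]
    print("Training session started. Type 'quit' to end.\n")
    while True:
        counselor = input("Counselor: ")
        if counselor.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": counselor})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=messages,
        )
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        print(f"Texter: {text}\n")

if __name__ == "__main__":
    run_training_session()
```

Keeping the full message history in the loop is what gives the practice conversation the realistic back-and-forth timing Olson describes; the authored system prompt is where a clinical team’s scenario design would live.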

Identifying those in need

Emily Haroz, an associate professor at the Johns Hopkins Bloomberg School of Public Health, thinks AI has significant potential to enhance mental health care.

“It has the ability to increase the quality of the care we provide and to help us better identify where priority populations are in a faster way,” said Haroz, who organized the “AI for Hope” convening.

Haroz recently partnered with the Indian Health Service to develop an AI model capable of identifying people at risk of suicide. The model gives clinicians an extra nudge to ask whether a patient may be considering suicide.
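The piece does not detail how the model works. As a hedged sketch only: one common pattern is a classifier over health-record features whose predicted probability, above some threshold, triggers a prompt in the clinical workflow. Every feature, training value, and threshold below is invented for illustration.

```python
# Hedged sketch of how a risk model might surface a "nudge" to a
# clinician. The features, training data, and threshold are invented;
# the article does not describe the real model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: rows are patient visits, columns are illustrative
# features (e.g., prior ER visits, a depression screening score).
X_train = rng.random((500, 2))
y_train = (X_train.sum(axis=1) + rng.normal(0, 0.3, 500) > 1.2).astype(int)

model = LogisticRegression().fit(X_train, y_train)

NUDGE_THRESHOLD = 0.30  # invented; real thresholds are set clinically

def screen_visit(features: list[float]) -> str:
    """Return a prompt for the clinician if predicted risk is elevated."""
    risk = model.predict_proba([features])[0, 1]
    if risk >= NUDGE_THRESHOLD:
        return f"Elevated risk ({risk:.0%}): consider asking about suicidal ideation."
    return f"No flag (risk {risk:.0%})."

print(screen_visit([0.9, 0.8]))
print(screen_visit([0.1, 0.1]))
```

The design point the nudge reflects: the model does not diagnose or decide anything on its own; it only prompts a clinician to ask a question they might otherwise skip.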

Mitigating bias

But Haroz and other speakers at the event said addressing potential bias in data sets is critical if AI is to reach its full potential in health care settings.

Kadija Ferryman, an assistant professor at the Johns Hopkins Berman Institute of Bioethics, for example, examined data that could be fed into an algorithm that would assign a “score” for a patient’s risk of overdose. One possible variable was drug arrests rather than convictions.

“There’s research showing that rate of arrests doesn’t really align with criminality in certain communities,” she said. “There are racial disparities in arrests.” Including data that reflects biased policing, then, could make it appear that people of color are more likely to overdose.
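Her point can be made concrete with a small synthetic example: two populations with identical drug use, different policing intensity, and a naive score that treats arrests as evidence of risk. All numbers below are invented.

```python
# Hedged, synthetic illustration of Ferryman's point: if a risk score
# uses drug *arrests* as a proxy for drug use, and arrests reflect
# policing intensity rather than behavior, the score inflates risk for
# the more heavily policed group.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)           # stand-in for two communities

# True drug use is identical across groups...
uses_drugs = rng.random(n) < 0.10

# ...but among people who use drugs, group 1 is arrested 3x as often.
p_arrest = np.where(group == 1, 0.30, 0.10)
arrested = uses_drugs & (rng.random(n) < p_arrest)

# A naive score that treats an arrest as evidence of overdose risk:
score = arrested.astype(float)

for g in (0, 1):
    print(
        f"group {g}: true use rate = {uses_drugs[group == g].mean():.3f}, "
        f"mean 'risk score' = {score[group == g].mean():.3f}"
    )
# Output shows equal true use (~0.10) in both groups, but a roughly 3x
# higher mean score for group 1: the arrest feature imports policing
# bias into the model.
```

Using convictions, or a measure of use itself, would remove this particular channel of bias, which is why the arrests-versus-convictions choice Ferryman flags matters.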

A Rubik’s Cube of regulation

For Steven Posnack, a principal deputy assistant secretary at the Department of Health and Human Services, there might soon be a time when not using AI tools would be considered below the standard of care.

Posnack and his colleagues are now trying to think through how to regulate the use of AI in health technology. He compared the challenge to solving a Rubik’s Cube.

“It’s a three-dimensional problem. How many people is it going to reach? How autonomous is the AI in a particular workflow?” he said. “All of those are different factors we’re going to need to consider how to approach from a regulatory policy standpoint.”