Jan. 2, 2020 -- Artificial intelligence (AI) drives most of your interactions with social media. It determines which posts appear at the top of your feed and which ones don’t make the cut. It serves up ads and suggests events, too.

But AI also has a more serious job.

Computers behind social media sites, including Instagram and its owner Facebook, continually scan posts for signs of suicide risk or self-harm. (When asked, representatives of Twitter would neither confirm nor deny the use of AI for these purposes.) The systems then flag the posts for a trained professional to review.

This matters because as more social interaction takes place in the virtual world, those telltale physical signs of sadness or distress become harder to see. And, make no mistake, we are online more than ever. Each year, Americans spend more time online than the year before. But “you can’t see the slumped shoulders, the dragging feet, the breadcrumb trail that people leave,” says Andrew Reece, PhD, a behavioral data scientist who, while at Harvard University, developed algorithms that predict the likelihood that an Instagram or Twitter user is depressed or has PTSD. “So what would those breadcrumbs be?”

That’s where AI comes in. AI algorithms enable computers to identify warning signs of suicide that a real person might never see—or only see too late. Crisis responders all over the world are wielding these tools to save lives.

How can computers detect signs of trouble that people may not? Here’s how the computers at Facebook do it, according to a statement from the social media giant. First, scientists fed computers scads of posts until the machines learned to tell the difference between “I have so much homework I could kill myself” and true threats of suicide. Now every post that the AI system flags goes to a real person for review. The next step depends on the severity of the post.
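
Facebook has not published the details of its system, but what it describes, training a model on labeled posts so it can score new ones and route anything worrying to a human reviewer, is standard supervised text classification. The short Python sketch below illustrates that general technique with scikit-learn; the example posts, labels, and threshold are hypothetical and are not Facebook's actual data or model.

    # Minimal sketch of supervised text classification, the general technique
    # described above. The posts, labels, and threshold are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training examples: 1 = flag for human review, 0 = likely hyperbole.
    posts = [
        "I have so much homework I could kill myself",    # figure of speech
        "this traffic is killing me lol",                 # figure of speech
        "I can't do this anymore, I want to end it all",  # concerning
        "I've been thinking about how I would do it",     # concerning
    ]
    labels = [0, 0, 1, 1]

    # TF-IDF features plus logistic regression: a common baseline text classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(posts, labels)

    # A new post gets a risk score; anything above a threshold is routed to a
    # trained human reviewer rather than acted on automatically.
    score = model.predict_proba(["I don't see the point of going on"])[0][1]
    if score > 0.5:  # placeholder threshold
        print(f"Flag for human review (score={score:.2f})")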

For concerning posts that don’t pose an immediate threat of suicide, the platform sends the person resources, including the option to connect with a crisis hotline directly through Facebook Messenger. These hotlines keep callers or texters talking to get them through the crisis.

“Through Facebook, we’re able to provide our services to a group of people we might not otherwise reach,” says Ashley Womble, head of communications at the New York-based Crisis Text Line, an international crisis hotline and one of Facebook’s partners in suicide prevention. “About 65% of people who contact us share something they’ve never told anyone else. So we know we are a first point of contact for many people.”

Facebook reports urgent posts directly to local police in the user’s area. If possible, first responders find the person, sometimes through cell phone pings, and check on them. To pinpoint which posts warrant an emergency response, AI analyzes comments. “I’m here for you,” for example, is less concerning than “Tell me where you are” or “Has anyone seen him/her?” Facebook said in a statement.
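
Facebook describes this comment triage only at a high level. As a rough illustration of the idea, a toy heuristic might weight phrases that signal an emergency more heavily than general expressions of support, as in the hypothetical Python snippet below; the phrases and weights are assumptions, not Facebook's actual scoring.

    # Toy comment-triage heuristic. The phrases and weights below are
    # illustrative assumptions, not how Facebook actually scores comments.
    URGENT_PHRASES = {
        "tell me where you are": 3,
        "has anyone seen": 3,
        "please answer": 2,
        "i'm here for you": 1,  # supportive, but less suggestive of an emergency
    }

    def comment_urgency(comments):
        """Return a rough urgency score for the comments under a flagged post."""
        score = 0
        for comment in comments:
            text = comment.lower()
            for phrase, weight in URGENT_PHRASES.items():
                if phrase in text:
                    score += weight
        return score

    # Higher scores would prompt faster escalation to emergency responders.
    print(comment_urgency(["I'm here for you", "Tell me where you are", "Has anyone seen her today?"]))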

In 2017, the first year that Facebook used AI for this purpose, emergency responders made 1,000 in-person checks as a result.

The proactive AI approach may address a crucial gap in mental health care for youth. “We were seeing massive amounts of disclosure online about what young people were feeling,” says Andrew Sutherland, manager of the Online Crisis Intervention program at Zeal, a youth-centered non-profit in New Zealand, which uses AI to help young people in crisis around the world. “They didn’t seem to be getting the support they need. We wondered if we could take it to them.”

Before Facebook and Instagram started using AI for suicide prevention in 2017, Sutherland and Zeal’s general manager Elliot Taylor started combing Instagram for red flags themselves. No AI—they just opened the app and searched for hashtags such as #depressed and #suicidal. “We’d see posts by people in distress and ask them if they wanted to talk,” says Taylor. They reached out only to public accounts posting publicly available hashtags. The format worked because it’s just not that unusual for teenagers to meet and talk to strangers online. In the U.S., nearly 60% of teens make new friends online. “They say, ‘Thank you so much for reaching out’ or ‘Yes, I would like to talk.’ They need some help, but the traditional means of accessing it [that is, seeking out help and getting it yourself] haven’t been effective,” says Taylor.

Soon after they started, Zeal adopted AI tools to increase efficiency. AI brought in more posts, which meant Zeal needed more responders. The responders’ job, like at any crisis hotline, is to keep kids messaging through their crisis. Today, the program relies on about 40 trained volunteers and aims to grow to 200, working remotely around the world, in the next year.

AI’s potential benefits extend beyond suicide prevention. Reece, the behavioral data scientist, trained computers to identify trends in the Instagram photos of people with depression. He and colleague Christopher Danforth fed the computer 43,950 Instagram photos from 166 users—71 of whom had a diagnosis of depression. Many photos were posted months before the diagnosis.

“We found that people who are developing depression post photographs, even without a filter, that are bluer, darker, and grayer than those of healthy people,” Reece says. When they add filters, people with depression tend to prefer “inkwell,” the one that renders photos virtually black and white.
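
The color cues Reece describes can be captured with very simple image statistics, such as the average hue, saturation, and brightness of each photo. The standalone Python sketch below, using the Pillow library and a hypothetical file path, shows how such features could be computed; it illustrates the kind of measurement involved, not Reece and Danforth's actual pipeline.

    # Compute average hue, saturation, and brightness (value) for one photo.
    # Illustrative only; the file path is hypothetical.
    import colorsys
    from PIL import Image  # pip install pillow

    def mean_hsv(path):
        """Average hue, saturation, and value of an image, each in the range [0, 1]."""
        img = Image.open(path).convert("RGB").resize((64, 64))  # downsample for speed
        pixels = list(img.getdata())
        h_sum = s_sum = v_sum = 0.0
        for r, g, b in pixels:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            h_sum += h
            s_sum += s
            v_sum += v
        n = len(pixels)
        return h_sum / n, s_sum / n, v_sum / n

    # Lower saturation (grayer) and lower brightness (darker) were the kinds of
    # shifts associated with depression in the study.
    hue, sat, val = mean_hsv("example_photo.jpg")
    print(f"hue={hue:.2f} saturation={sat:.2f} brightness={val:.2f}")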

Pictures from people with depression also contain fewer people than those of their peers. In a similar study, Reece and his colleagues showed that AI could detect changes in the language of tweets that could predict development of depression or PTSD.

Instagram and Twitter do not use Reece’s algorithms. He created them as part of data science research to prove that the concept would work. Reece notes that this use of AI isn’t ready for prime time. “There’s a huge privacy question that we need to think about,” he says. “Just because I can tell by looking at your tweets that you’re depressed, it doesn’t mean I should.” He emphasizes conflicts that would arise if doctors, health plans, or employers analyzed social media in this way.

Still, the research reflects AI’s potential to address countless health concerns. “It’s like an echocardiogram for your social media,” Reece says.

Computers can detect online signs of suicide risk, but friends and loved ones can see them in real life. If you or someone you know shows any of these signs, call the National Suicide Prevention Lifeline at 1-800-273-TALK or text “Home” to the Crisis Text Line at 741741.

  • Talks about wanting to die or kill him/herself
  • Talks about feeling hopeless or having no reason to live
  • Talks about being a burden to others
  • Increases use of alcohol or drugs
  • Acts anxious or agitated, behaves recklessly
  • Sleeps too little or too much
  • Withdraws or isolates him/herself

Sources

Andrew Reece, PhD, behavioral data scientist, Harvard University, Cambridge, MA.

Facebook, “How Facebook AI Helps Suicide Prevention.”

Ashley Womble, head of communications, Crisis Text Line, New York.

Andrew Sutherland, manager, Online Crisis Intervention program at Zeal, New Zealand.

Elliot Taylor, general manager, Zeal, New Zealand.

Pew Research Center, “Teens, Technology and Friendships.”

EPJ Data Science, “Instagram photos reveal predictive markers of depression.”

American Council on Science and Health, “Algorithm Predicts If Twitter Users Are Becoming Mentally Ill.”

© 2020 WebMD, LLC. All rights reserved.