When a person dies by suicide, those in their life often wonder what they could’ve done to prevent it.

Social media users may even regret seeing something troubling the person posted and not doing anything about it.

In an attempt to help, Facebook has announced it’s expanding its use of artificial intelligence (AI) tools to identify when someone is expressing thoughts about suicide or self-injury on the social media website.

Prior to this month, Facebook only used the tools for some users in the United States. Now, they’re available to most of the site’s 2 billion users, except those in the European Union, which has stricter privacy and internet laws.

Mark Zuckerberg, the chief executive officer of Facebook, says this use of AI is a positive development.

He recently posted on his Facebook timeline that, “In the last month alone, these AI tools have helped us connect with first responders quickly more than 100 times.”

How exactly do the tools do that?

Facebook isn’t revealing in-depth details, but the tool appears to work by scanning posts and live videos and flagging them when it picks up on words and images that may indicate a person is at risk for suicide.
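Facebook hasn’t shared specifics, but at its simplest this kind of flagging can be pictured as matching a post’s text against known risk phrases. The sketch below is a hypothetical illustration only: the phrase list and the flag_post function are invented here, and the real system reportedly also weighs images, videos, and comments.

```python
# Purely illustrative: Facebook has not published its detection model.
# The phrase list and function name below are invented for this sketch.

RISK_PHRASES = [
    "i want to die",
    "i can't go on",
    "goodbye forever",
]

def flag_post(post_text: str) -> bool:
    """Return True if the post text contains any of the illustrative risk phrases."""
    text = post_text.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

if __name__ == "__main__":
    print(flag_post("I can't go on like this anymore."))  # True -> routed for review
```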

Facebook already uses AI in a similar manner to scan and remove posts that present child pornography and other objectionable content.

A Facebook representative told Healthline that the suicide prevention tools help detect content more quickly. The AI also helps prioritize the reports, indicating which cases are more serious.
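In practice, “prioritizing the reports” can be thought of as giving each flagged item a risk score and working the queue from the highest score down. The sketch below is an assumption rather than Facebook’s actual pipeline; the Report structure and the scores are made up for illustration.

```python
# Hypothetical sketch of prioritizing flagged reports for human review.
# The Report structure and risk scores are invented for illustration.

from dataclasses import dataclass

@dataclass
class Report:
    post_id: int
    risk_score: float  # e.g., a model's estimate of how serious the case is

def prioritize(reports: list[Report]) -> list[Report]:
    """Order reports so the most serious cases reach reviewers first."""
    return sorted(reports, key=lambda r: r.risk_score, reverse=True)

queue = prioritize([
    Report(post_id=1, risk_score=0.35),
    Report(post_id=2, risk_score=0.92),  # reviewed first
    Report(post_id=3, risk_score=0.10),
])
print([r.post_id for r in queue])  # [2, 1, 3]
```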

Then, trained members of Facebook's community operations team review the content and determine the type of help to provide to the user.

These members work around the world, around the clock, and review reports from both the AI tools and concerned Facebook users.

One way the AI tool detects suicide risk is by picking up on comments from concerned friends, such as, “Are you OK?” “Can I help?” and “Do you need help?”

Facebook’s community operations team is tasked with reviewing content reported as being violent or troubling.

In May, Facebook announced it would add 3,000 more workers to the operations team, which had 4,500 employees at the time.

According to the Facebook spokesperson, the technology helps detect concerning posts and videos, often more quickly than a friend or family member can report the material.

When such content is detected, the Facebook user is put in touch with live chat support from crisis support organizations through Messenger, where they can talk in real time.

Suicide awareness advocates on board

In creating AI for suicide prevention, Facebook works with mental health organizations, including Save.org, the National Suicide Prevention Lifeline (1-800-273-TALK [8255]), and Forefront Suicide Prevention.

Daniel J. Reidenberg, PsyD, executive director of Save.org, says he’s thrilled that Facebook is taking strides to help advance suicide prevention efforts in ways that haven’t been done before.

“If we look over the last 50 or 60 years — whether you’re talking about advances in medication or treatment for suicide and mental health — we haven’t seen reductions or seen suicide drop because of those things, so the idea that maybe technology can help is the best opportunity that we have right now to try to save lives,” Reidenberg told Healthline.

While he notes that the AI tools may not be fully developed and may produce false positives, flagging people who aren’t truly at risk, he says it’s a cutting-edge intervention for suicide prevention whose effectiveness may take time to understand.

“Before AI came along, there were false positives from people who were reporting things to Facebook who thought a friend might be suicidal. AI is only speeding up the process to help eliminate some of those false positives and really pick up on those who are truly at risk,” said Reidenberg.

He adds that people do show signs of suicidal tendencies on social media, and that this is neither a good nor a bad thing.

“Social media is just where people are living out their lives today. Years ago, they lived it out in the park or at recess or wrote notes to each other, maybe shared over the phone. As more and more people do live their lives on social media, they share both the happy moments and the challenges they face,” he said.

The change, he adds, allows people to reach hundreds and hundreds of people at a time.

Reidenberg says if you notice someone on social media who may be depressed or at risk for self-harm, reach out to them with a message, text, or phone call if you’re close friends. Facebook even offers pre-populated texts to make it easier to start a conversation.

If you don’t feel comfortable with that approach, Reidenberg suggests using the reporting function on Facebook.

“It’s an easy and quick thing to do. The technology can’t do this alone. We need people to be involved. Not doing something is the worst possible thing that can happen,” he said.

What about privacy issues?

Good intentions aside, it’s hard not to consider the potential invasion of privacy.

Charles Lee Mudd Jr., a privacy attorney and principal at Mudd Law, says that Facebook scanning for keywords shouldn’t be considered a privacy violation if it’s been disclosed ahead of time.

“As long as Facebook discloses it reviews the content, I see no real privacy concerns,” Mudd told Healthline. “One should understand that anything published anywhere on the internet, including through email — private or not — or social media, may find its way to unintended recipients. At least if Facebook lets us know it has robots that read our mail — or at least scan for keywords or phrases — we can adjust our behavior should it be necessary to do so.”

While legally Facebook may be in the clear, whether it’s acting ethically is up for debate.

Keshav Malani, co-founder of Powr of You, a company that helps people make money off of their digital presence, says no matter the intentions of Facebook, every person should be free to decide how their personal data is used.

“Or else it’s a slippery slope on what is considered ‘good’ vs. ‘bad’ use of the personal information we share on platforms such as Facebook. Also, intentions aren’t enough, because biases in data can result in invalid or harmful claims from even just basic historical correlation analysis,” Malani told Healthline.

He adds that AI is only as good as the data it receives as input.

“Individual platforms such as Facebook trying to assume they know you well enough to draw conclusions about your well-being would be naive. Facebook, or any other media outlet for that matter, only cover a small part of our life, and often paint a picture we choose to share, so drawing conclusions from such a limited and possibly biased data source should be done with extreme caution,” he said.

Still, Reidenberg says people shouldn’t be afraid of Facebook using AI.

“This is not Facebook stalking people or getting into people’s business,” he said. “It’s using technology and people to try to save people’s lives. Trust me, if you have a loved one in crisis, you want everything to be done for them, whether you’re in an emergency room or online.”

In fact, he hopes more technology can intervene with people in crisis.

“When someone is in a crisis, options and alternatives go away from them. They become very focused on what’s happening in that moment and they don’t have the tools necessary to get them through,” he said.

Reidenberg says that anytime technology can give people more options, they will be less deep in crisis. He’d like to see technology create more ways to identify people at risk before they’re even at risk for, say, depression.

For example, he says that if we know that as we become more depressed we interact less, isolate more, withdraw more, have less energy, and talk and write differently, then programming technology to notice these changes could be beneficial.

“Let’s say you’re a regular poster on Facebook, but then you’re getting more depressed in life and your activity is dropping off slowly. Then you begin posting pictures on Instagram of someone very sad or a gloomy day outside. If we can get technology to pick up on what’s happening to you in your life based on your behavior online, we could start giving you things like resources or support, and maybe we can turn it around,” said Reidenberg.
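To make that concrete, one very simple version of such a signal is comparing someone’s recent posting frequency against an earlier baseline. The sketch below is a hypothetical illustration of that idea, not anything Facebook has described; the threshold, time window, and function name are assumptions, and a real system would draw on far richer signals than post counts.

```python
# Hypothetical sketch of the behavioral signal Reidenberg describes: noticing
# when someone's posting activity drops off over time. The threshold, window,
# and function name are invented for illustration.

def activity_dropped(weekly_post_counts: list[int], drop_ratio: float = 0.5) -> bool:
    """Return True if recent posting activity fell below `drop_ratio` of the earlier baseline."""
    if len(weekly_post_counts) < 4:
        return False  # not enough history to compare
    half = len(weekly_post_counts) // 2
    earlier = sum(weekly_post_counts[:half]) / half
    recent = sum(weekly_post_counts[half:]) / (len(weekly_post_counts) - half)
    return earlier > 0 and recent < drop_ratio * earlier

# Posts per week over two months: steady at first, then tapering off.
print(activity_dropped([10, 9, 11, 10, 5, 3, 2, 1]))  # True
```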

Zuckerberg shared a similar sentiment in his post, alluding to future plans to use AI in other ways.

“There’s a lot more we can do to improve this further,” he wrote. “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”