Even as the debate around machines becoming smarter with artificial intelligence (AI) rages on, Facebook intends to step up its work on making the technology efficient enough to help save lives. CEO Mark Zuckerberg has announced how AI can be used to help prevent suicides.
In his post, Zuckerberg said Facebook is upgrading its AI tools to identify when someone is expressing suicidal thoughts in a post, even before anyone else reports it. This should enable Facebook to reach out to those in need of support more quickly.
Identifying possibly suicidal thoughts from posts seems extremely complex. However, the upgraded AI tools will use pattern recognition to identify signals. For example, if a friend posts a status and you comment asking if everything is okay, the system will report it to Facebook's team of moderators, who are available 24×7 and have been trained to provide help. Facebook is also working with around 80 partners, including Save.org, the National Suicide Prevention Lifeline and Forefront, which provide resources to assist at-risk users and their networks.
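To make the idea of pattern recognition concrete, here is a minimal sketch of how signals from a post and its comments could accumulate into a flag for human review. Facebook has not published its model; the patterns, function names, and threshold below are purely illustrative (a real system would use a trained classifier, not keyword rules).

```python
import re

# Hypothetical risk signals; a production system would use a
# trained machine-learning model, not a hand-written list.
RISK_PATTERNS = [
    r"\bare you ok(ay)?\b",   # concerned replies from friends
    r"\bi can't go on\b",
    r"\bend it all\b",
]

def risk_score(post_text: str, comments: list[str]) -> int:
    """Count pattern matches across a post and its comments."""
    texts = [post_text] + comments
    return sum(
        1
        for text in texts
        for pattern in RISK_PATTERNS
        if re.search(pattern, text.lower())
    )

def should_flag(post_text: str, comments: list[str], threshold: int = 2) -> bool:
    """Escalate to human moderators once enough signals accumulate."""
    return risk_score(post_text, comments) >= threshold
```

The point of the sketch is that no single signal triggers a report; it is the combination of the post's own wording and worried comments from friends that pushes the score over the review threshold.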
Now, if someone expresses thoughts of suicide in any type of Facebook post, the AI tools will proactively detect it and flag it to the team of moderators, and also make reporting options more accessible for viewers. According to TechCrunch, when a particular post is reported, the system can highlight the part of the post or video that matches suicide-risk patterns. This saves the time human moderators would otherwise spend sifting through the content.
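The highlighting step described above can be sketched as returning the character offsets of matched spans, so a reviewer's attention lands directly on the risky passage. Again, this is an assumption-laden illustration; the pattern and function name are invented for the example.

```python
import re

# Illustrative only: surface the matched spans of a post to reviewers
# so they need not read the whole thing. Patterns are hypothetical.
def highlight_matches(text: str, patterns: list[str]) -> list[tuple[int, int]]:
    """Return sorted (start, end) character offsets of each pattern match."""
    spans = []
    for pattern in patterns:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            spans.append(match.span())
    return sorted(spans)
```

A UI could then render the text with those spans visually emphasized, which is the time-saving behavior TechCrunch describes.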
The AI prioritizes user safety over other content-policy violations such as depictions of violence or nudity. The tools then bring up local-language resources from the partners, including helpline numbers and nearby authorities. The human moderators can then use these resources to attempt to send assistance to the individual's location, suggest mental health resources to the user, or connect them with friends who can talk to them.
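The prioritization described above amounts to ordering a moderation queue so that safety reports are reviewed before other policy violations. A minimal sketch, assuming made-up report categories and priority values (Facebook's actual categories and ordering are not public):

```python
# Hypothetical priority ranks; lower values are reviewed sooner.
PRIORITY = {"self_harm": 0, "violence": 1, "nudity": 2}

def build_queue(reports: list[tuple[str, str]]) -> list[str]:
    """Order (post_id, category) reports so safety cases come first.

    Ties within a category keep their original report order.
    """
    ranked = sorted(
        enumerate(reports),
        key=lambda item: (PRIORITY[item[1][1]], item[0]),
    )
    return [post_id for _, (post_id, _) in ranked]

queue = build_queue([
    ("p1", "nudity"),
    ("p2", "self_harm"),
    ("p3", "violence"),
])
# The self-harm report ("p2") reaches moderators first.
```

The design choice here is simply a stable sort on a category rank, which is enough to capture "safety before everything else" without discarding any reports.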
The use of AI tools to, in a way, scan through users' posts and comments without their knowledge could reignite the privacy debate. However, Facebook's chief security officer Alex Stamos addressed the concerns in a tweet, suggesting that the misuse of AI will always be a risk, and that it is therefore important to set good norms today.
Zuckerberg says that going forward, the AI system will be able to trace subtle nuances of language and identify issues beyond suicide, such as online bullying and hate speech. Suicide prevention is one of the newest areas where Facebook is incorporating AI into the mix; another area where it is actively using the technology is spotting fake news.