In a first, Facebook has made public its guidelines on the removal of abusive content. The company published 25 pages of detailed criteria, complete with examples, which its moderators use when removing violence, spam, harassment, self-harm, terrorism, intellectual property theft, and hate speech from the platform. This comes as Facebook is still answering pending questions about how it tracks and accesses user data.
Since the controversial removal of the historical ‘Napalm Girl’ photo, Facebook has come a long way in refining its rules for taking down content deemed unsuitable for the platform. One of the most important changes the company has introduced is that it no longer denies minorities protection from hate speech simply because an unprotected characteristic like “children” is appended to a protected characteristic like “black”, TechCrunch reports.
Notably, Facebook's policies themselves have not changed; this is the first time the company is making its methods public. The guidelines will also be translated into over 40 languages for the public to understand. The company uses a mix of AI and human moderators to scan content related to terrorism or extremism. However, understanding the context of the content is important, something current AI systems have yet to perfect. For this, Facebook has 7,500 content reviewers, a 40 percent increase from a year ago.
These human moderators are exposed to disturbing content on a daily basis. The abusive content ranges from child pornography to beheading videos and racism. While the moderators are trained to deal with this and have access to counseling, they can ask not to review certain kinds of content they are sensitive to. Bickert did not reveal whether Facebook imposes a limit on how much of such content moderators review per day. In comparison, YouTube recently implemented a four-hour daily limit.
Under the Community Standards section, Facebook categorizes objectionable content under Hate Speech, Graphic Violence, Adult Nudity and Sexual Activity, and Cruel and Insensitive. Currently, Facebook allows users to request a review of a decision to remove their profile, page, or group. With the revised guidelines, Facebook will also notify users when their content is removed and allow them to request a review by simply hitting a button. The review is said to happen within 24 hours.
To make its platform free of hate speech and abuse, Facebook will also hold Facebook Forums: Community Standards events in regions including Germany, France, the UK, India, Singapore, and the US, giving its biggest communities a closer look at how the social network’s policies work.
Another noticeable change in the guidelines is that groups such as “black children”, and not just “white people”, are now protected from hate speech. Facebook’s VP of Global Product Management Monika Bickert says, “Black children — that would be protected. White men — that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion . . . If someone says ‘this country is evil’, that’s something that we allow. Saying ‘members of this religion is evil’ is not.”
The stress on transparency arises out of the recent controversy involving data analytics firm Cambridge Analytica, which allegedly accessed user data wrongfully to build political targeting tools. It is also a result of the long-running battle against fake news and hate speech, with activists often accusing Facebook of allowing misinformation on its platform. Bickert admits there are concerns that terrorists could spread propaganda through the platform or that hate groups could find new ways to evade the moderators, “but the benefits of being more open about what’s happening behind the scenes outweighs that.”