Facebook has been taking multiple steps to check the spread of fake news on its platform. The latest is a ‘Context’ button, which it is reportedly testing to let users see more information about the source of a news story. The feature would also help users judge whether a source is credible, the social networking giant said. This comes shortly after the Las Vegas shooting, about which Facebook allegedly surfaced misinformation.
“We are testing a button that people can tap to easily access additional information without needing to go elsewhere. The additional contextual information is pulled from across Facebook and other sources, such as information from the publisher’s Wikipedia entry,” Facebook product managers Andrew Anker, Sara Su and Jeff Smith said in a blog post. In cases where the information is not readily available, Facebook “will let users know, which can also be helpful context”.
Recently, as news circulated that Facebook had allowed Russia-linked pages to advertise divisive messages and influence the 2016 US Presidential election, Senators demanded that officials from the company, along with those from Google and Twitter, testify before Congress. The ‘Context’ button is being seen as a response to that pressure. Lawmakers still want the three most powerful social news platforms to explain how they plan to curb misinformation and news manipulation going forward.
Facebook announced earlier this week that it would hire 1,000 more people to manually review harmful or divisive content on its platform, after its algorithms failed to do so reliably. It would also invest more in machine learning to “better understand when to flag and take down ads”, and tweak its advertising policies so that content with even “subtle expressions of violence” is removed from its platform. “We constantly update our systems and monitor for malicious activity and we have been forthcoming in what we’ve found,” an earlier Facebook statement said.