In a bid to stem hate speech, Facebook has revealed that its Artificial Intelligence (AI) systems now spot more offensive photos on its platform than human reviewers do. According to a TechCrunch report, nearly 25 percent of Facebook's engineers regularly use its internal AI platform to build features and run the business, and one of its most effective applications is detecting offensive photos.
“One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human,” Joaquin Candela, Facebook's director of engineering for applied machine learning, was quoted as saying.
The AI system also helps rank News Feed stories, reads aloud the content of photos to the visually impaired, and automatically writes closed captions for video ads, which increase view time by 12 percent, he said.
Facebook, along with Twitter, YouTube and Microsoft, has also agreed to a new European hate speech code that requires the companies to review “the majority of” hateful online content within 24 hours of being notified, and to remove it if necessary.
The new rules, announced by the European Commission, also oblige the tech companies to identify and promote “independent counter-narratives” to hate speech and propaganda published online. According to the Verge, hate speech and propaganda have become a major concern for European governments following terrorist attacks in Brussels and Paris and amid the ongoing refugee crisis.
“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” Vera Jourová, the EU commissioner for justice, consumers, and gender equality, said in a statement.
“Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and to spread violence and hatred,” she added. Not everyone welcomed the agreement, however. Critics of the code argued in a statement: “In short, the ‘code of conduct’ downgrades the law to a second-class status, behind the ‘leading role’ of private companies that are being asked to arbitrarily implement their terms of service.”
“This process, established outside an accountable democratic framework, exploits unclear liability rules for companies. It also creates serious risks for freedom of expression, as legal but controversial content may well be deleted as a result of this voluntary and unaccountable takedown mechanism,” the same statement added.