Alongside fake news, online extremism is one of the most pressing issues internet companies are dealing with. YouTube, one of the largest global platforms for sharing video content, is not immune to the spread of violence, abuse, and extremism. The company, however, is taking steps to purge such content, and it recently updated the platform to better check the spread of dangerous videos.
YouTube has a four-step plan to combat terrorism spread through its platform, and a mix of machine learning and human moderators forms the core of the ambitious plan. Machine learning has become the buzzword of the year: much as a child is taught to make sense of content using a guide, this branch of artificial intelligence uses algorithms to decode complex information from pre-fed, labeled data.
However, for a sensitive issue like extremism, it is crucial to gauge how much the machines can be trusted. With machine learning, YouTube is able to classify uploaded content automatically and delete videos that contain extremist and terrorism-related content. The company announced that machine learning technology is responsible for over 75 percent of all videos taken down from the platform.
While machine learning has accelerated the process of identifying and removing such content, one must take into account that these decisions are based on previously fed training data, such as images or video clips labeled as 'violence,' and so they are not always accurate. Therefore, similar to Facebook's systems, YouTube also relies on a human workforce.
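The division of labor described above, where a model's automated judgment handles clear-cut cases while uncertain ones go to people, can be sketched as a simple confidence-threshold router. This is an illustrative sketch only: YouTube's actual system is proprietary, and the function name, score scale, and threshold values here are invented for the example.

```python
# Illustrative sketch of a machine-plus-human moderation pipeline.
# A classifier (not shown) scores each upload from 0.0 (benign) to
# 1.0 (extremist); the thresholds below are invented examples.

def route_upload(score, auto_remove_threshold=0.95, review_threshold=0.5):
    """Decide what happens to a video given the model's extremism score."""
    if score >= auto_remove_threshold:
        return "auto_remove"       # model is confident: take it down
    if score >= review_threshold:
        return "human_review"      # uncertain: send to human flaggers
    return "publish"               # model sees no problem

print(route_upload(0.97))  # auto_remove
print(route_upload(0.60))  # human_review
print(route_upload(0.10))  # publish
```

The key design point is the middle band: rather than trusting the model everywhere, anything it is unsure about is escalated to reviewers, which is exactly the role the trusted flaggers play.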
The company has enlisted users in its 'trusted flagger' program to help identify and flag content deemed unsuitable. These users are considered three times more accurate than machine learning alone at finding offensive content that actually violates platform norms. YouTube has further extended the program to include advice from NGOs and institutions such as the Anti-Defamation League, who help make more nuanced decisions about violent propaganda and religious or newsworthy speech, mic.com reports.
It is difficult to maintain the free culture of the web if there are more restrictions than openness. To address gray-area videos, those that contain hateful or supremacist language without clearly violating its policies, YouTube makes them harder to find: such videos are barred from being monetized, recommended, or opened to comments and other user engagement.
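The restricted treatment of gray-area videos amounts to stripping a fixed set of features while leaving the video online. A minimal sketch, assuming a hypothetical video record with boolean feature flags (these field names are not YouTube API fields):

```python
# Illustrative sketch of the restricted treatment for gray-area videos:
# the video stays up, but engagement features are switched off.
# The dictionary keys are hypothetical, not real YouTube API fields.

def apply_restrictions(video):
    """Disable monetization, recommendations, and comments on a borderline video."""
    video["monetized"] = False
    video["recommendable"] = False
    video["comments_enabled"] = False
    # The video itself remains reachable; only its amplification is cut.
    video["online"] = True
    return video

upload = {"id": "abc123", "monetized": True, "recommendable": True,
          "comments_enabled": True, "online": True}
print(apply_restrictions(upload))
```

The design choice worth noting is that removal and restriction are different levers: restriction preserves openness while denying the content reach and revenue.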
Lastly, YouTube aims to spread awareness of the larger harm extremism does to humanity. To keep people from falling prey to such content, YouTube has decided to redirect users who look for specific extremist content to playlists of 'curated' videos that confront and debunk violent extremist messages. To spread the word, YouTube has enlisted its budding community of influencers.
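The redirection step can be pictured as a lookup table from flagged search queries to counter-narrative playlists. This is a sketch only: the query terms and playlist identifier below are invented placeholders, not real YouTube data.

```python
# Illustrative sketch of search redirection: queries known to seek
# extremist content return a curated counter-narrative playlist
# instead of ordinary results. All entries here are invented.

CURATED_PLAYLISTS = {
    "recruitment propaganda": "PL_counter_narratives_01",  # hypothetical ID
}

def resolve_search(query, default="regular_search_results"):
    """Return a curated playlist for flagged queries, else normal results."""
    return CURATED_PLAYLISTS.get(query.lower().strip(), default)

print(resolve_search("Recruitment Propaganda"))  # PL_counter_narratives_01
print(resolve_search("cooking tutorials"))       # regular_search_results
```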