Following in the footsteps of Facebook, Google has announced the steps it is taking to keep its products free of extremist content. The company outlined four new measures it has undertaken to fight online terror. Google and its video platform YouTube are working closely with governments, law-enforcement agencies, and civil society groups to tackle the global problem of terrorism spreading through the internet.
In a detailed blog post, Google's General Counsel Kent Walker described the technologies put in place to identify and remove extremist content. In addition to the image-matching technology it already uses to catch re-uploads of videos deemed offensive, YouTube is increasing its use of machine-learning identification. Context matters here: a news report may contain the same footage as a video glorifying violence. To better differentiate between the two, the company has used video analysis models to find and assess more than 50 percent of the terrorism-related content removed in the past six months, and is using that material to teach its 'content classifier' systems to identify such content more accurately.
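The image-matching approach mentioned above generally works by reducing each video frame to a compact fingerprint and comparing fingerprints rather than raw pixels, so slightly altered re-uploads still match. Google has not published its actual system, so the sketch below is purely illustrative: a toy average-hash over a small grayscale grid, with Hamming distance as the similarity measure. All names and values are hypothetical.

```python
# Illustrative sketch of fingerprint-based image matching (a toy "average
# hash"). A real system would operate on full video frames at scale; this
# only demonstrates the principle of matching near-duplicate content.

def average_hash(pixels):
    """Hash a 2D grid of grayscale pixel values.

    Each bit is 1 if the pixel is brighter than the grid's mean, else 0,
    so small brightness or compression changes leave the hash unchanged.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicates."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical data: a known flagged frame, a slightly re-encoded
# re-upload of it, and an unrelated frame.
flagged   = [[10, 200], [220, 30]]
reupload  = [[12, 190], [210, 35]]
unrelated = [[200, 10], [30, 220]]

print(hamming_distance(average_hash(flagged), average_hash(reupload)))   # 0
print(hamming_distance(average_hash(flagged), average_hash(unrelated)))  # 4
```

The re-upload hashes identically to the flagged frame despite its pixel values differing, while the unrelated frame is maximally distant, which is why hash comparison can catch re-uploads that byte-for-byte comparison would miss.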
Beyond using machine learning to train its systems, the company is also investing in human experts who can make nuanced decisions. It has added more independent experts to the YouTube Trusted Flagger programme. These experts can differentiate between violent propaganda and religious or newsworthy speech, ensuring that the right content is removed. Scaling up this effort, Google is adding 50 expert NGOs, bringing the total number of organizations in the programme to 113. It will also work with counter-extremist groups to help identify content intended to radicalize or recruit extremists.
Given YouTube's reach and scale, the platform has also become a source of income for many independent creators and hobbyists, and with extremist content circulating on it, that monetization is open to misuse. To keep such content from earning ad revenue, Google will now place videos with extremist content behind an interstitial warning and will not monetize or recommend them, or make them eligible for comments or user endorsements. This will sharply reduce their visibility and engagement.
Lastly, to counter extremist content, YouTube will amplify voices that speak out against online radicalization. The platform will build on its Creators for Change programme, leveraging online advertising to reach potential ISIS recruits and redirect them to anti-terrorist videos that could change their minds about joining the group.
Google is among several technology companies that have promised to join hands in the fight against terrorism spreading through the internet. These conscious steps towards thwarting the global threat come in the wake of the London attacks, after which British Prime Minister Theresa May called upon technology giants to ensure the internet does not become a safe haven breeding extremist culture.
Last week, Facebook also disclosed how it is ensuring that extremist content is not shared on its social networking platform. However, while revealing how it uses artificial intelligence and human moderators to sift through and flag potentially harmful content, it emerged that a bug in the company's system had inadvertently exposed the names of thousands of those human moderators, along with certain other details, on the pages of potentially extremist groups and profiles. Although the leak left moderators fearing for their safety, and one employee quit his role over it, the company has assured that proper measures have been taken to protect them.