Fake content is an epidemic that is damaging the reputation of industry giants and costing them advertising business. Not only does it call the authenticity of the medium into question, it is also eroding the faith users place in the web in general. With the advent of technology, the internet has become the go-to source of information for education, news, research, and more. Information today can reach millions in milliseconds, and the power of social media accelerates the process. In such an environment, even a single piece of incorrect or false information, when shared at mass scale, can do catastrophic damage to the collective pool of data available online.
In a bid to make the internet a more accurate source of information, companies such as Facebook, Google, and Wikipedia are working on tools that not only detect false information circulated through their channels but also encourage users to proactively report such instances. Once a place to connect with people, Facebook is now synonymous with a 'news feed' where users share content discovered, and at times generated, on the platform. However, between discovering, liking, and sharing content, one important step is skipped: verifying the source.
Facebook has repeatedly drawn flak for the widespread issue of fake news on its platform, and CEO Mark Zuckerberg has admitted that much remains to be done to combat the problem. Now Google, the world's online library, has announced its first major attempt to combat the circulation of fake news on its search engine. At a time when others have already been working to improve their tools and algorithms, Google's measure may seem like a delayed effort, but it is nonetheless a crucial one. A search for even a single keyword returns millions of web pages of information. However, given the open nature of the internet, anyone with a basic knowledge of creating web pages is free to publish information that may or may not be factually accurate. To filter such pages out of its results, Google is working on 'Project Owl'. The project aims to tackle the spam, false information, piracy, and poor-quality content that surface for popular search terms.
Google labels such queries 'problematic searches': those involving rumors, conspiracies, myths, and biased or offensive content. Addressing these 'problematic searches', Google today announced that it has improved the 'autocomplete' feature in its search engine. For example, if you type something like 'how to kill' into the search box, the autocomplete suggestions will refrain from completing the phrase in potentially life-threatening ways. The purpose of these prompted suggestions is to speed up search by surfacing the most popular completions of the phrase being typed.
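The idea described above can be pictured as a filter sitting between a user's typed prefix and the suggestion list. The following is a minimal toy sketch of such a filter; the suggestion list, blocked terms, and function names are all invented for illustration and have nothing to do with Google's actual (far more sophisticated) system:

```python
# Toy sketch of an autocomplete filter: match suggestions against the
# typed prefix, then drop any suggestion containing a blocked term.
# The data and policy here are hypothetical examples only.

BLOCKED_TERMS = {"kill", "harm"}  # hypothetical policy-violating terms

SUGGESTIONS = [
    "how to knit a scarf",
    "how to kill weeds",        # caught by the naive term check below
    "how to keep plants alive",
]

def autocomplete(prefix, suggestions=SUGGESTIONS, blocked=BLOCKED_TERMS):
    """Return suggestions starting with the prefix, minus blocked ones."""
    matches = [s for s in suggestions if s.startswith(prefix.lower())]
    return [s for s in matches if not any(w in blocked for w in s.split())]

print(autocomplete("how to k"))
# prints ['how to knit a scarf', 'how to keep plants alive']
```

As the 'kill weeds' example shows, a crude term blocklist suppresses harmless queries along with harmful ones, which is part of why real systems lean on human evaluators and feedback loops rather than simple word lists.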
With the new changes, Google now offers a feedback form for search suggestions, along with formal policies explaining why certain suggestions may be removed. A huge pool of data brings a dire need to constantly monitor what circulates on the internet. There have been instances in the past where Google has been asked to pull down results that were false, accusatory, or downright demeaning. As part of improving its search results, the company has also updated the guidelines that help its human evaluators distinguish low-quality from high-quality content and flag it appropriately.
Furthermore, Google has added direct feedback tools for both 'Autocomplete' and 'Featured Snippets'. These allow users to tell Google if a particular autocomplete suggestion or snippet was misleading, inaccurate, or offensive. Using this user-generated data, Google aims to improve its search algorithm and reduce instances of fake content.
The issue of fake news is not an easy one to tackle. At the core of platforms such as Google, Wikipedia, and Facebook is a large user base, which makes it difficult to contain what is circulated without filters. Combating this worldwide phenomenon requires greater public awareness and stronger checks and measures that can quickly sift the inaccurate from the merely opinionated. To better train their filtering systems, these companies are putting artificial intelligence and machine learning to use. However, these systems too are trained by human editors; essentially, there is still a human telling a machine what is false and what is legitimate opinion.
Although there is still huge scope for improvement on subjective content, for facts and figures these artificially intelligent systems are, with better training mechanisms, less likely to get things wrong. For example, a particular subject of interest may have one source of information that is factually correct, coming from a reliable and authentic source. Alternatively, someone else might hold a contradictory opinion on the subject.
And lastly, there could be a source that mixes the two and produces a factually inaccurate, biased piece of information. The job of the search algorithm and anti-fake-news tools is not just to distinguish between these three types of content but also to ensure that free speech is not hampered. To sum up, for democratic mediums of information, a perfect mechanism is an ambitious goal; a near-perfect algorithm, however, is what these companies are striving for in order to maintain a fair and free pool of data.