A community of more than a billion users, with thousands of posts shared every minute; the core idea behind Facebook's very existence is to be a community where people connect with each other. In this virtual extension of real-world connections, people share their everyday selfies, boring rants, interesting cat videos, and some serious news. While the selfies, the rants, and the undeniably addictive weekly fluff videos have troubled us enough, it is the news-related content that has caused Facebook a lot of trouble of late. A huge user base has its share of pros and cons, and for Mark Zuckerberg's company, the downside of facilitating a global community is having to curb the spread of misinformation.
Facebook has repeatedly come under the scanner for allowing false information and fake news to spread on its platform. The company has acknowledged instances, some with serious political repercussions, centred on misinformation, and it has implemented tools to curb their spread. Critics, however, argue that Facebook is actually willing to let such content be shared through its platform for the sheer number of hits, or ‘clicks’, it garners. Zuckerberg dismissed the allegations, saying, “No one in our community wants fake information. We are also victims of this and we do not want it on our service.”
Who did it?
The whole fake news controversy came to global notice during the US elections, when dubious posts claimed to have influenced votes were shared widely. In the run-up to the election, articles with misleading headlines were displayed alongside genuine reports about the candidates. Following the global backlash, Facebook did its research and tweaked its algorithm, yet four months later it still stands accused of spreading rumors and false news. The question is: why is Facebook bearing the brunt for something which we, as end users, are doing?
When Facebook opened its service beyond Harvard 13 years ago, it started off as a platform for people who knew each other ‘offline’ to connect ‘online’. Over the years, it moved from home-like ‘Walls’ to a news-like ‘Feed’, adding pictures, videos, and live streams into the mix. Along the way, thousands of brands and publishers joined the bandwagon, gaining the potential to reach billions of people simultaneously, and then came the notion of advertised or paid content. At the core of Facebook's formative years were the users who facilitated it all. Be it a friend's wedding video that went viral or a road-rage incident captured on a smartphone that made it to the headlines on a prominent news channel, it has always been the end user who has initiated and facilitated the spread of information.
In such a scenario, holding Facebook alone accountable is a bit unfair; as a global platform it has its checks in place, and it is up to the individual user to discern whether a piece of information is false or accurate.
Consider Wikipedia: the world's go-to source of information, its most-used online research tool, often Google's first result, and a medium for cross-checking facts, is primarily crowd-sourced. This repository of information on any subject under the sun is fully editable by humans like you and me. Millions of corrections are made and tons of information added every day to keep it as open and democratic as possible. Yet when it comes to producing accurate content for mass consumption, researchers and publishers never depend on Wikipedia alone. Why, then, is Facebook, which is essentially a social networking platform, held accountable for spreading misinformation it never created and is constantly trying to get rid of?
But is Facebook doing enough?
Currently, Facebook has tools in place that allow users to flag content deemed inappropriate, offensive, false, or misinformed. If an article or post is in circulation on the platform, the primary responsibility for flagging it lies with the user. The Internet is home to any and every kind of content; some of it is factually incorrect, while some is merely a different take on a particular subject. One post or news article might hold opinion A about a subject while another publication holds opinion B on the same. This does not necessarily mean one of the two is false, because in the end they are opinions and therefore subjective. As Zuckerberg puts it, “It’s not always clear what is fake and what isn’t. A lot of what people are calling fake news are just opinions that people disagree with.”
Nonetheless, Facebook announced policy changes that prevent adverts from showing “misleading or illegal” content. The company also formed a human task force to look into allegations that false stories shared through its platform coloured the US elections in favour of Donald Trump. Furthermore, it recently tweaked its ‘Trending’ feature to surface stories covered by multiple publishers rather than simply the most-shared content.
For users or publishers, it takes a second to hit the share or like/react button. But that content has the potential to reach millions of other users, perhaps even affecting the lives of people who may not even be related to the subject. In this situation, Facebook's reach has become both a boon and a bane.
At a time when social media has unofficially become a source of news, users too bear a responsibility to ensure dubious content is not channelled from their end. While authentic news websites largely follow the ethics of news reporting and social media sharing, fake websites are unaccountable, and it is because of these websites that both genuine publishers and social media platforms are facing the heat. Recently, Google announced a change in its search result guidelines aimed at helping its human editors teach the search algorithm to spot false results more efficiently. If a machine-based algorithm still needs human judgment to get better at spotting false news, why can't humans cross-check a piece of information before sharing it with others on a platform as large as Facebook, and help curb the spread?