Software robots masquerading as human beings are influencing the political discourse on social media as never before and could threaten the very integrity of the 2016 US presidential election, researchers have warned. In the study, the researchers analysed 20 million election-related tweets created between September 16 and October 21, 2016.
They found that a high percentage of the political discussion taking place on Twitter was created by software robots, or social bots, supporting either Donald Trump, the Republican Party nominee, or Hillary Clinton, the Democratic Party nominee, with the express purpose of distorting the online discussion about the election. Robots, rather than people, produced 3.8 million of those tweets, or 19 per cent.
Further, social bots accounted for 400,000 of the 2.8 million individual users, or nearly 15 per cent of users, on the microblogging website, the study said. The presence of these bots can affect the dynamics of the political discussion in three tangible ways, the researchers observed. First, influence can be redistributed across suspicious accounts that may be operated with malicious intent. Second, the political conversation can become further polarised.
Third, the spread of misinformation and unverified information can be amplified. “As a result, the integrity of the 2016 US presidential election could be possibly endangered,” said Emilio Ferrara, Assistant Professor at the University of Southern California. In addition, the researchers found that the robot-produced tweets about Trump were almost uniformly positive, boosting the candidate’s popularity.
By contrast, only half of the bot-produced tweets about Clinton were positive, with the other half criticising the nominee. The sophistication of these social bots often makes it impossible to determine who created them, although political parties, local, national and foreign governments, and “even single individuals with adequate resources could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation,” the study noted.
The “master puppeteers” behind influence bots often create fake Twitter and Facebook profiles, Ferrara added. They do so by stealing online pictures, giving the profiles fictitious names, and cloning biographical information from existing accounts. These bots have become so sophisticated that they can tweet, retweet, share content, comment on posts, “like” candidates, grow their social influence by following legitimate human accounts and even engage in human-like conversations, Ferrara said, in the study appearing in the journal First Monday.