Twitter has said it is cracking down on mean, hateful or menacing tweets that cross the red line from free speech into abuse. Twitter is overhauling its safety policy and beefing up the team responsible for enforcing it, along with investing “heavily” in ways to detect and limit the reach of abusive content, general counsel Vijaya Gadde said in a column published by the Washington Post.
“We need to do a better job combating abuse without chilling or silencing speech,” Gadde said. Twitter last month modified its rules to ban ‘revenge porn’ — the tweeting of intimate or revealing pictures or videos of people without their permission.
The San Francisco-based micro-blogging site is also taking steps to curtail the use of anonymously created Twitter accounts to intimidate or silence targeted people. “We are changing our approach to this problem, in some ways that won’t be readily apparent and in others that will be,” Gadde said.
Twitter has tripled the size of the team responsible for protecting users of the service, resulting in a fivefold increase in the speed of response to complaints, according to the general counsel.
“We are also overhauling our safety policies to give our teams a better framework from which to protect vulnerable users,” Gadde said.
The changes include expanding the definition of banned “abuse” to cover indirect threats of violence.
“As some of our users have unfortunately experienced firsthand, certain types of abuse on our platform have gone unchecked because our policies and product have not appropriately recognized the scope and extent of harm inflicted by abusive behavior,” Gadde said.
“Even when we have recognized that harassment is taking place, our response times have been inexcusably slow and the substance of our responses too meager. This is, to put it mildly, not good enough.”
Facebook last month updated its “community standards” guidelines, giving users more clarity on acceptable posts relating to nudity, violence, hate speech and other contentious topics. Facebook-owned smartphone photo and video sharing service Instagram followed suit today with a similar overhaul of its rules about what is deemed unacceptable.