
Can AI really save Facebook?

Facebook has problems, but it cannot simply leave it to AI to fix them.


Highlights

  • Facebook is mired in problems ranging from fake news to privacy violations

  • It believes that AI tools can fix some of its immediate problems

  • The question is: Can AI really fix them?

2018 has seen a new Mark Zuckerberg. The 33-year-old CEO and co-founder of Facebook, the world’s largest social media platform, decided to own up to his failure to keep a leash on Facebook. In late March, Zuckerberg announced that he would testify before Congress in the wake of the Cambridge Analytica scandal, which exposed the data of 87 million Facebook users. His agreement to face the US Congress also followed the revelation that Russia’s Internet Research Agency had used Facebook to plant misinformation among American citizens ahead of the 2016 presidential election.

In its 14-year history, Facebook has gone from being the poster child of technology startups, to a community as populous as India and China combined, to a platform accused of dismantling democracies. The fallout has, of course, led to an outcry and forced regulators around the world to take note. It is becoming clearer by the day that Facebook can no longer be seen as just a social community. It has the potential to make or break peace. And taking control isn’t easy.

During the two-day Congressional hearings, which stretched across ten hours, Zuckerberg transformed himself from a young software developer into the business professional the legislators wanted to meet. He took a page from the playbook of the 44th US President, Barack Obama, and dressed in a navy suit, white shirt and light blue tie. He also seemed calm and apologetic about what had happened on the platform.

“Facebook is an idealistic and optimistic company. For most of our existence, we focused on all of the good that connecting people can do. And, as Facebook has grown, people everywhere have gotten a powerful new tool for staying connected to the people they love, for making their voices heard and for building communities and businesses,” Zuckerberg said as he spoke for the first time at the US Senate. “But it’s clear now that we didn’t do enough to prevent these tools from being used for harm, as well. And that goes for fake news, for foreign interference in elections, and hate speech, as well as developers and data privacy,” he added.

During the two-day testimony in front of the US Senate and House committees, Zuckerberg reiterated his passion for Facebook and his hope of making the world a better place by connecting people. Zuckerberg also spoke at length about how Facebook works and the source of its $40 billion in revenue. There were multiple takeaways from the two days in which Zuckerberg swapped his traditional attire for a suit and faced the people with the power to regulate his platform:

1. These US Senators and Congressmen don’t really know how Facebook works
2. The five-minute window to quiz Zuckerberg was not enough, even for legislators who support Facebook
3. Regulating Facebook and other internet companies was on the cards, but it did not become a major theme
4. Zuckerberg could not say whether Facebook has any real competitor
5. Mark Zuckerberg, as CEO of Facebook, could not state the company’s total shareholder equity
6. Last but not least, Zuckerberg believes that AI can save the platform

During the first day of testimony before the Senate, AI was referenced 25 times, while it came up nine times when Zuckerberg testified before the House Committee on April 11. Zuckerberg all but implied that AI is the answer to all the woes that have plagued the platform in the past year. This belief, however, did not come with supporting evidence, and Facebook itself has shed little light on how and where AI is deployed, apart from the fact that it is being actively used to take down hate speech and terror content. That leads us to a really big question: Can AI really save Facebook?

Before we find an answer to that question, it is important to go through some of the exchanges between Zuckerberg and Senators during the two-day testimony.

Senator John Thune: As we discussed in my office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, and especially when you’re relying on artificial intelligence and other technologies for the initial discovery. Can you discuss what steps that Facebook currently takes when making these evaluations, the challenges that you face and any examples of where you may draw the line between what is and what is not hate speech?

Zuckerberg: Yes, Mr. Chairman. I’ll speak to hate speech, and then I’ll talk about enforcing our content policies more broadly. So — actually, maybe, if — if you’re okay with it, I’ll go in the other order. So, from the beginning of the company in 2004 — I started in my dorm room; it was me and my roommate. We didn’t have AI technology that could look at the content that people were sharing. So — so we basically had to enforce our content policies reactively. People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they’d flag it for us, and we’d look at it reactively. Now, increasingly, we’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook.

Some problems lend themselves more easily to AI solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right? Contrast that, for example, with an area like finding terrorist propaganda, which we’ve actually been very successful at deploying AI tools on already. Hate speech — I am optimistic that, over a 5 to 10-year period, we will have AI tools that can get into some of the nuances — the linguistic nuances of different types of content to be more accurate in flagging things for our systems.

Senator Patrick Leahy: You know, six months ago, I asked your general counsel about Facebook’s role as a breeding ground for hate speech against Rohingya refugees. Recently, UN investigators blamed Facebook for playing a role in inciting possible genocide in Myanmar. And there has been genocide there. You say you use AI to find this. This is the type of content I’m referring to. It calls for the death of a Muslim journalist. Now, that threat went straight through your detection systems, it spread very quickly, and then it took attempt after attempt after attempt, and the involvement of civil society groups, to get you to remove it. Why couldn’t it be removed within 24 hours?

Zuckerberg: Yes. We’re working on this. And there are three specific things that we’re doing. One is we’re hiring dozens of more Burmese-language content reviewers, because hate speech is very language-specific. It’s hard to do it without people who speak the local language, and we need to ramp up our effort there dramatically. Second is we’re working with civil society in Myanmar to identify specific hate figures so we can take down their accounts, rather than specific pieces of content. And third is we’re standing up a product team to do specific product changes in Myanmar and other countries that may have similar issues in the future to prevent this from happening.

Senator John Cornyn: Thank you, Mr. Zuckerberg, for being here. I know in — up until 2014, a mantra or motto of Facebook was move fast and break things. Is that correct?

Zuckerberg: I don’t know when we changed it, but the mantra is currently move fast with stable infrastructure, which is a much less sexy mantra.

Senator Christopher Coons: My core question is isn’t it Facebook’s job to better protect its users? And why do you shift the burden to users to flag inappropriate content and make sure it’s taken down?

Zuckerberg: Senator, there are a number of important points in there. And I think it’s clear that this is an area, content policy enforcement, that we need to do a lot better on over time. The history of how we got here is we started off in my dorm room with not a lot of resources and not having the AI technology to be able to proactively identify a lot of this stuff. So just because of the sheer volume of content, the main way that this works today is that people report things to us and then we have our team review that. And as I said before, by the end of this year, we’re going to have more than 20,000 people at the company working on security and content review, because this is important. Over time, we’re going to shift increasingly to a method where more of this content is flagged up front by AI tools that we develop.

Senator Jeff Flake: There are obviously limits, you know, native speakers that you can hire or people that have eyes on the page. Artificial intelligence is going to have to take the bulk of this. How — how much are you investing in working on — on that tool to — to do what, really, we don’t have or can’t hire enough people to do?

Zuckerberg: Senator, I think you’re absolutely right that over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content. We’re investing a lot in doing that, as well as scaling up the number of people who are doing content review. One of the things that I’ve mentioned is this year we’re — or, in the last year, we’ve basically doubled the number of people doing security and content review. We’re going to have more than 20,000 people working on security and content review by the end of this year. So it’s going to be coupling continuing to grow the people who are doing review in these places with building AI tools, which is — we’re — we’re working as quickly as we can on that, but some of this stuff is just hard. That, I think, is going to help us get to a better place on eliminating more of this harmful content.

Senator Gary Peters: You also know that artificial intelligence is not without its risk and that you have to be very transparent about how those algorithms are constructed. How do you see artificial intelligence, more specifically, dealing with the ecosystem by helping to get consumer insights, but also keeping consumer privacy safe?

Zuckerberg: Senator, I think the — the core question you’re asking about, AI transparency, is a really important one that people are just starting to very seriously study, and that’s ramping up a lot. And I think this is going to be a very central question for how we think about AI systems over the next decade and beyond. Right now, a lot of our AI systems make decisions in ways that people don’t really understand.

Senator Gary Peters: And so, is your company — you mentioned principles. Is your company developing a set of principles that are going to guide that development? And would you provide details to us as to what those principles are and how they will help deal with this issue?

Zuckerberg: Yes, senator. We have a whole AI ethics team that is working on developing basically the technology. It’s not just about philosophical principles; it’s also a technological foundation for making sure that this goes in the direction that we want.

The above exchanges, along with the follow-up answers Mark Zuckerberg offered, outlined a common theme: the company has deployed AI tools and wants to build even better ones in the future. The idea is to use these AI tools to tackle issues like the spread of fake news, terror content and even foreign influence aimed at disrupting an election. While Zuckerberg briefly spoke about the ethical use of AI, he did not elaborate on what his company is building. Also, these AI systems won’t be working to fix the most common complaint against Facebook – privacy. This leads to a number of questions, including whether Facebook is relying too much on AI. To answer that, it is important to understand the basics of AI and other underlying technologies.

What is Artificial Intelligence?

Artificial Intelligence is a technology that aims to store and process information in a manner similar to the human brain. While AI has become a buzzword in the past few years, the underlying technology has been around for several decades. AI, as a technology, is basically a combination of machine learning (ML) and neural networks. Neural networks, as the name implies, aim to mimic the neural structure of the human brain in order to learn and relearn information.

Machine learning is a broader technology used to design algorithms that process data, make predictions and help reach decisions. In a Medium post last month, Michael Jordan, Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley, wrote, “In terms of impact on the real world, ML is the real thing.”

In a way, Jordan is right. A lot of what is passed around today in the name of AI is basically machine learning. Object recognition, for example, is machine learning that uses computer algorithms to study large labeled datasets and predictive analysis to reach a conclusion. “As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data,” Jordan explains.
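To make Jordan’s distinction concrete, here is a minimal sketch of supervised machine learning: an algorithm studies a labeled dataset and then makes predictions on data it has not seen before. The dataset and model are illustrative stand-ins, not anything Facebook or Amazon actually uses.

```python
# A minimal supervised-learning sketch: fit a model on labeled examples,
# then predict labels for unseen examples. Purely illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)             # features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                   # "study" the dataset
print(model.score(X_test, y_test))            # accuracy on unseen data
```

Most of what is branded as AI in consumer products today, from recommendations to spam filters, is some elaboration of this fit-then-predict loop.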

What can AI do right now?

Artificial Intelligence, or machine learning to be specific, is already deployed by companies like Google, Microsoft, Apple and Amazon to enhance your experience. In fact, ML is at the core of Amazon’s existence, since the company relies extensively on large-scale data and serves user-specific recommendations using neural networks and sequential analysis. The most immediate form of AI sits on our mobile devices, which all carry a digital assistant of some sort.

On Android there is Google Assistant, while on Apple’s iOS there is Siri. There are also smart speakers from Amazon, which use its own Alexa. These digital assistants use computational algorithms to understand user behavior and then offer results such as the weather or the commute to work at a specific time, without the user needing to query the system. AI is also used for tasks like speech recognition and object recognition and is deployed in a wide array of industries.
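As an example of the object recognition mentioned above, here is a minimal sketch using a network pretrained on the ImageNet dataset. The filename photo.jpg is a placeholder, and this is a generic illustration rather than how any particular assistant works.

```python
# A minimal object-recognition sketch with a pretrained network.
# "photo.jpg" is a placeholder for any local image.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True)      # downloads ImageNet weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg")).unsqueeze(0)
with torch.no_grad():
    probabilities = torch.softmax(model(img), dim=1)
print(probabilities.argmax().item())          # index into the ImageNet class list
```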

Is relying entirely on AI a good strategy?

This is the question Facebook is shying away from answering in depth right now. During the Congressional testimony and Tuesday’s appearance before the European Parliament, Mark Zuckerberg often said that he would answer questions in written statements at a later stage. For the most part, Facebook does not seem very good with follow-ups, and its executives have mostly laughed off the scenario of its top boss appearing before such committees.

“AI systems and tools can be used to do a consistent first pass at identifying hate or toxic speech, and automatically suppressing what is thought to be clearly toxic. We can reach very high levels of accuracy with such systems. However, no (AI) system can be 100 percent correct,” said Siddhartha Chatterjee, Chief Technology Officer of the NSE-listed Persistent Systems.

Chatterjee adds that a lot of the policing of social media posts is currently done using manual methods such as keyword searches. Since humans are involved in the existing manual method, there is a risk of reviewers getting tired and applying rules inconsistently. Here is a possible fix to the existing mechanism (a sketch of such a pipeline follows the list):
a. Where there is doubt, one or more humans can step in and make a judgment;
b. Where a post is suppressed, there is a way for the user to get the suppression reviewed;
c. There is a continuous, virtuous feedback cycle that ensures such AI systems learn as they go and become better with time;
d. There is always a path for users to provide feedback on the processes, and for that feedback to be analyzed and acted upon.
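Here is a minimal sketch of that pipeline, not Facebook’s actual system: an AI first pass scores each post, clearly toxic posts are suppressed automatically, uncertain ones are routed to human reviewers, and reviewer decisions are fed back as training data. The thresholds and the toy scoring function are assumptions for illustration.

```python
# A human-in-the-loop moderation sketch, following Chatterjee's four points.
SUPPRESS_THRESHOLD = 0.95   # confident enough to act automatically
REVIEW_THRESHOLD = 0.60     # uncertain band routed to human reviewers

def toxicity_score(post: str) -> float:
    """Toy stand-in for a trained classifier: share of blocklisted words."""
    blocklist = {"hate", "kill"}
    words = post.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def moderate(post, review_queue):
    score = toxicity_score(post)
    if score >= SUPPRESS_THRESHOLD:
        return "suppressed"           # user can appeal (point b)
    if score >= REVIEW_THRESHOLD:
        review_queue.append(post)     # a human makes the call (point a)
        return "pending review"
    return "published"

def record_human_decision(post, label, training_data):
    # Reviewer labels become fresh training examples, so the classifier
    # improves over time (points c and d).
    training_data.append((post, label))
```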

What’s a fail-safe way to eliminate human bias?

Facebook has been constantly criticized by legislators for supporting only one set of voices. It has been accused of favoring liberal voices and shutting conservative ones out of the conversation. While this alleged bias concerns speech on the platform, the effect could be far-reaching and creep into the algorithms as well. In order to eliminate or reduce bias, Facebook and other software companies must first identify it. Chatterjee says having a diverse pool of people involved in software development will help, for any software, not just AI software.

He believes one answer is to ensure that we test the behavior of an AI program not only for correctness and completeness but also for (lack of) bias. Chatterjee explains, “The bias could creep in from data (e.g., when we miss data about some groups of people, say), or from the algorithms, or the way algorithms are applied. We have some tools, for example tools that identify ‘tone’ in written language about gender, and how best to reach a larger, diverse range of candidates. We need more such tools, and we need to make these more integrated in our tool set, so we can always apply these tools and checklists.”
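One simple version of such a test, sketched below under the assumption of data shaped as (features, label, group) triples, is to compare a model’s error rates across demographic groups rather than looking at overall accuracy alone. Sharply diverging false-positive rates would be a flag that bias has crept in from the data or the algorithm.

```python
# A minimal bias-audit sketch: per-group false-positive rates.
from collections import defaultdict

def false_positive_rate_by_group(examples, predict):
    """examples: iterable of (features, true_label, group); predict: model fn."""
    false_pos = defaultdict(int)   # harmless items wrongly flagged, per group
    negatives = defaultdict(int)   # all truly harmless items, per group
    for features, label, group in examples:
        if label == 0:
            negatives[group] += 1
            if predict(features) == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in negatives.items() if n}

# If one group's content is flagged far more often than another's for
# comparable posts, the training data or model deserves a second look.
```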

On the sidelines of Facebook’s annual developer conference, F8, in San Jose, select journalists were briefed on how the company is integrating ethics into its AI implementation. While Facebook has not detailed how ethics will be programmed into its AI algorithms, or how it defines ethics at the core of any of its products, the effort offers a glimpse into the future of all AI programs. “We need to ask ourselves not only what computers can do but what computers should do,” Microsoft CEO Satya Nadella remarked at Build 2018, the company’s annual developer conference.

“AI systems have an added ethical burden in that there may be decisions taken by AI systems that affect lives, and those will need to be tracked and kept right,” Chatterjee adds.

Can tech companies responsibly use data on social media?

Chatterjee says there is a lot of information that can be gleaned from public data and put to social good. “At an individual level: Done right, such data can be used to ensure advertisements reach the right targets, so both advertisers and consumers are happier. Coupons and discounts can be better targeted.”

Public data can also be used to surface the right news stories and customized suggestions based on user activity. Big tech companies also use volumetric analysis to locate areas hit by an epidemic or a disaster. Facebook already uses similar technology to let its users mark themselves as safe during a disaster. The data posted on social media platforms can also be put to constructive use in areas such as delivering health services. “By tracking mentions of diseases and symptoms on social media, we can track the spread of such diseases, and plan appropriate social health responses. For example, something similar was used in China during the Avian flu outbreak in 2013, using information from Weibo,” explains Chatterjee.
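In the spirit of the Weibo example, here is a minimal sketch of tracking symptom mentions over time. The posts are assumed to be (date, text) pairs already collected from a public feed, and the keyword list is an illustrative assumption.

```python
# A minimal disease-mention tracker: count symptom posts per day.
from collections import Counter
from datetime import date

KEYWORDS = {"fever", "flu", "cough"}

def daily_mentions(posts):
    """posts: iterable of (day, text); returns symptom-post counts per day."""
    counts = Counter()
    for day, text in posts:
        if set(text.lower().split()) & KEYWORDS:
            counts[day] += 1
    return counts

posts = [
    (date(2018, 5, 1), "Bad flu going around the office"),
    (date(2018, 5, 1), "Great weather today"),
    (date(2018, 5, 2), "Fever and cough so staying home"),
]
print(daily_mentions(posts))   # a sustained spike would warrant a health response
```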

Problems Aplenty

Facebook’s problems are aplenty, and its immediate focus seems to be preventing the spread of fake news on its platform from jeopardizing another election around the world. However, its heavy reliance on AI tools shows an inability to own up to some of those problems. It also raises flags about how the company is disengaging from public discourse on key topics like the ethical use of AI and limiting access to the information users share on its platform.

  • Published Date: May 25, 2018 9:27 AM IST