In a wide-ranging interview with ProMarket, media scholar Siva Vaidhyanathan explains why Facebook has become “too big to manage” and why he believes the only solution to its bigness is a global political movement aimed at breaking it up. 

Last week, as Facebook’s stock dropped 20 percent in a single day and wiped $120 billion off its market value, some saw it as the beginning of Facebook’s reckoning. Two “hellish” years filled with numerous scandals and growing concerns over its impact on American democracy and its collection and usage of user data, the thinking went, have finally dented Facebook’s growth prospects. With its aura of invincibility removed, market forces will now force the company to course correct.

Siva Vaidhyanathan, however, was buying none of it. Facebook’s user growth in North America and western Europe may have stalled of late, he wrote in a blistering Guardian piece, but that’s not where most of Facebook’s growth is coming from. With 2.2 billion users worldwide and growing, it is far from hurting, regardless of its recent PR debacles in the US and the UK. Market forces, he argued, will never be able to rein in Facebook—“only a global political movement aimed at breaking up that company and limiting what it can do with our behavioral data” can do that.

Vaidhyanathan, the Robertson Professor of Media Studies and director of the Center for Media and Citizenship at the University of Virginia, is one of the sharpest and most prominent critics of digital platforms like Facebook and Google. He is also one of the most pessimistic. His 2011 book The Googlization of Everything was among the first works to discuss the implications of digital monopolies and the proliferation of surveillance-based business models. His latest book, Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy, is the most thorough critique yet of Facebook and the dangers its growing dominance poses for competition and democracies worldwide.

We recently caught up with Vaidhyanathan for a wide-ranging interview about what’s wrong with Facebook and how its massive size negates the company’s own attempts to fix its hate-speech problem, why he believes Facebook should be broken up, and why the fledgling movement to #DeleteFacebook is unlikely to lead to substantial change.

[The following conversation has been edited and condensed for length and clarity.]


Q: What do you see as the main problems with Facebook today?

There are three main problems with Facebook. Number one: 2.2 billion people and growing use Facebook regularly. In other words, the very scale of Facebook is a problem.

Two: the fact that its algorithms favor items that generate engagement means that items that have the strongest emotional pull have the strongest influence on Facebook.

Problem number three is that the advertising system is so powerful, inexpensive, efficient and effective that two things happen: one, anybody hoping to distribute propaganda through [Facebook’s] advertising system can do it in a surreptitious way. The second fallout of the advertising system is it draws advertising money away from other businesses that depend on advertising, like newspapers and magazines and television and radio, especially at the local level.

Those three things—the scale of Facebook, the way the algorithms work, and the advertising system—combine to create all of the problems we have seen from Facebook in the last two to three years.

Q: When you say the advertising system, do you mean Facebook’s business model?

Pretty much, but it’s not just the business model, it’s [Facebook’s] very essence. The advertising system depends on a rich collection of data, a record of our behavior, both individually and collectively. The algorithm, for it to work properly and give us what it thinks we want, has to have the same collection of data, the same record of our behavior and our desires, and the very scale of Facebook makes it unmanageable.

Q: Especially since the algorithms favor negative content that’s meant to elicit a very intense emotional response?

That’s right. I don’t think that was the intent, but that is the result. I think that in Mark Zuckerberg’s dreams, the money would flow from the life-affirming interactions that we have on Facebook— and we do have those. We do keep up with our cousins’ children as they grow and we see cute pictures of puppies and we learn when we need to send condolences or congratulations.

In Mark Zuckerberg’s fantasy, that is how we relate to each other, and we do it through his service and because his service gratifies us, we reward him with our attention and our attention generates revenue that he can then fold back into the company to make it even better. So it becomes a virtuous cycle, a perpetual motion machine of goodness.

Unfortunately, Facebook was designed for a better species than ours. Real human beings don’t limit themselves to the sweet things in life. He created a system that was easily manipulable by all sorts of nefarious forces and amplifies many of our flaws, like our willingness to believe just about anything.

“Unfortunately, Facebook was designed for a better species than ours. Real human beings don’t limit themselves to the sweet things in life.”

Q: How many of Facebook’s problems are related to Facebook’s sheer size?

In some sense they’re all tied to Facebook’s size. With 2.2 billion people using Facebook, Facebook cannot possibly keep track of all of the partners that have built applications and taken data away, because there are many thousands of those application developers around the world. With 2.2 billion people, Facebook can’t create machine learning systems in every language in the world. There are almost 200 languages active on Facebook. To be able to create a machine learning system for every one of those languages and have it understand the context of human interaction in every one of those cultures is an impossible task. It’s more than daunting, it’s impossible. And having the ability to create a system with human screeners in each of these countries implies the ability to identify and hire and maintain and protect human screeners who might be working against the interests of some very nasty governments.

In Myanmar, Facebook keeps talking about how they’re scrambling to find people who can do the work in Burmese to scan for anti-Rohingya propaganda. Well, the fact is most of the Buddhist majority in Myanmar supports the genocide of Rohingya [Muslims]. So what do you do with that? Are you going to hire a bunch of dissidents who the government mistrusts to do the work on behalf of Facebook and the human rights community? That’s very unlikely, and it’s a very risky role for Facebook to play. And who’s going to code the artificial intelligence engine that would do that work, in lieu of human partners?

The very scale of Facebook means it’s operating in all of these parts of the world where Facebook staff and Facebook leadership know nothing and are incapable of handling the challenges that face them.

Q: Much of the present-day criticism of Facebook focuses on its impact on the political systems of the US and the UK. Yet the majority of Facebook’s growth comes from outside the US, and in other countries its impact is arguably more consequential.

That’s right: 2.2 billion users around the world, 220 million of them are in the United States—1 out of 10. It is important to remember that each user in the US is a stronger generator of revenue than a user in any other country. On the other hand, that 10 percent is shrinking. As Facebook grows, US users of Facebook have plateaued and might even be going down. Compare 220 million Facebook users in the US to 250 million users in India. While those 220 million in the US are about 68 percent of our population, the 250 million in India are only 25 percent of its population, so the potential for growth is significantly higher. Already, more people in India use Facebook than do in any other country. I wouldn’t be surprised if a significant portion of the next billion Facebook users come from some combination of India, Pakistan, Sri Lanka, Bangladesh, and Myanmar.

Q: One of Facebook’s main initiatives in the developing world was Free Basics, a program meant to provide residents of developing countries with free access to a number of websites, including Facebook. Free Basics, you write, was a “spectacular failure.” What can we learn from it about Facebook’s approach to its global role?

If we measure Free Basics’ success by the rate of adoption, it’s tremendous. If we measure its success as its effect on people’s lives, it’s been terrible.

Free Basics was a philanthropic or quasi-philanthropic experiment that Facebook is quite proud of: it goes into a country that has a lot of low-income and low-wealth citizens. Facebook cuts a deal with a telecommunication provider to provide a small set of applications that are Facebook-chosen. And when people use this suite of applications that’s called Free Basics, it doesn’t count against the data they purchased or their data cap in a given month. Even when you run out of data and you don’t have any cash, you can still use a core set of applications and those include Facebook and WhatsApp and Wikipedia and a few others.

That encourages people to live their lives through Facebook and treat Facebook as the Internet. In the US, where we all purchase our data in blocks upfront at the beginning of the month and most of us have the cash to do it, we don’t think of it this way. We have dozens of applications on our phones and laptops that most of the world doesn’t even have, and we have our choice of websites, so we think of our information ecosystem as being quite diverse.

In a place like Myanmar or the Philippines or Cambodia, the information ecosystem is not diverse at all. Independent journalists are often imprisoned or tortured or humiliated or driven out of business by oppressive states and by oppressive political parties in all of these countries [that] Facebook has encouraged. If you’re running an authoritarian government, you love Facebook. Facebook is the key to keeping people distracted and confused and somewhat fearful.

“If you’re running an authoritarian government, you love Facebook. Facebook is the key to keeping people distracted and confused and somewhat fearful.”

Q: Is there a specific point in time when Facebook became, as you write, “too big to manage”?

Facebook, I think, has been too big to manage since it hit 1 billion people in 2011. Now that it is 2.2 billion and climbing it is beyond manageable, and that leaves us with this very difficult situation: we have no way of responding, just as Facebook has no way of responding internally to these challenges because it’s too big. We have no way of conceiving of a way out. Even Mark Zuckerberg can’t flip a switch and turn Facebook off. That would violate his fiduciary duty to his shareholders. He can’t break Facebook up willingly because again, that would not be in the interest of the bottom line and would violate his fiduciary duty to his shareholders. He also firmly believes that the more people who use Facebook and the more time they spend on Facebook, the better their lives will be.

On the policy side, where we the citizens might have some say, we have limited tools as well. I like to think that we could refocus antitrust law on these sorts of companies that don’t treat us as consumers and therefore don’t charge us a price. Unfortunately, the Supreme Court has for the most part protected intermediaries like Facebook. In the US, antitrust is a shell of its former self and doesn’t seem to be something that our current courts want to take seriously.

All we are left with is the potential of stronger data protection regulation coming through Congress. That’s unlikely to happen until at least 2021. Even after that, it’s unlikely to happen because strong data protection not only works against the interest of Facebook and Google, it works against the interests of Verizon and Sprint and AT&T and Comcast and every newspaper and magazine in the country that depends on advertising. Advertising-based businesses love the fact that our privacy is being violated constantly and we are being abused by these systems. It’s great for them. It would be a very difficult effort to generate the political movement necessary to put in this sort of data protection that Europe enjoys, but I still see that as our only reasonable response.

“Even Mark Zuckerberg can’t flip a switch and turn Facebook off. That would violate his fiduciary duty to his shareholders. He can’t break Facebook up willingly because again, that would not be in the interest of the bottom line and would violate his fiduciary duty to his shareholders.”

Q: Facebook insists that it can fix the problems itself. It keeps touting all sorts of steps it’s taking to identify and remove malicious political actors attempting to subvert elections. You don’t believe Facebook can fix these problems on its own? 

No. Everything that Facebook has proposed in the past year has been cosmetic. They’ve talked about strengthening their filtering systems, both the human filtering systems and their algorithmic filtering system as well. That’s practically impossible to do. They can’t hire enough people to police all of us and they can’t generate artificial intelligence systems that would be subtle enough to do the job. What they’ve been trying to do, country by country, is filter out content about specific election issues and candidates, but that’s a constant game of whack-a-mole: one month they’re in Germany working on filtering out neo-Nazi propaganda; the next month they’re in the Netherlands; the next month they’re in Ireland, worrying about the abortion referendum; then they go to the United States where they’re worried about all of these nationwide Congressional elections and state legislature elections. I can’t imagine how they’re policing all of these very small races all over the country in preparation for fall of 2018. They say they’re trying.

I think they mean well, I think they really want to fix the problem. They just won’t face the reality that the problem with Facebook is Facebook.

“Facebook, I think, has been too big to manage since it hit 1 billion people in 2011. Now that it is 2.2 billion and climbing it is beyond manageable, and that leaves us with this very difficult situation: we have no way of responding, just as Facebook has no way of responding internally to these challenges because it’s too big.”

Q: You argue that Facebook should be broken up, that it should be forced to separate from WhatsApp, Instagram, and Oculus. A growing number of voices, including the New York Times editorial board, have also called for Facebook to be broken up. Why do you believe this is necessary?

Because the only way that we can limit its power and influence is to make sure that those huge pools of data about all of us are as small as possible and as limited in scope as possible. If all of our Instagram activity and all of our WhatsApp activity feed into the larger Facebook collection, then that just increases Facebook’s ability to target us and thus increases its power in the advertising market. If we want real competition in social media and in advertising, we should want Facebook and Instagram to compete against each other. They’re very different products. They do different things for us. They appeal to us in different ways. If enough of us grow tired of and annoyed by Facebook, we’ll have Instagram. Instagram is a lot more pleasant, has a lot less hate speech, is much more about the good things in life—it’s one of the reasons why young people are much more interested in Instagram than they are in Facebook.

As it is now, to move from Facebook to Instagram is not a real choice. It’s just moving from Facebook A to Facebook B. It would be a much better situation if we could express our disgust with Facebook by shifting our attention to Instagram. Secondly, advertisers might decide they don’t want to spend their dollar on Facebook when they can spend it on Instagram. Right now that doesn’t make any difference—it still makes Facebook rich. But if there were real competition in that field that might make a big difference.

Same with WhatsApp. Right now, WhatsApp is a much better messaging service than Facebook Messenger, but Facebook pushes us to use Messenger. It’s impossible to opt out of it if you are active on Facebook, and so WhatsApp is sort of crowded out. I’m fairly confident that Facebook is going to try to figure out how to fold WhatsApp into Facebook Messenger so they’re only offering one service for messaging instead of two. I think that’s clearly happening. It would be a much better world if WhatsApp were its own company, as it once was.

With Oculus Rift, again, if we take seriously the potential for virtual reality to change things like how we train surgeons and pilots and how we entertain ourselves, we don’t want all of that data to be controlled by this one big company that already knows us far too well. If those platforms are going to be used for advertising, we should want competition in that area as well. If VR grows in importance in our lives, it should detract from, not contribute to, what Facebook offers.

Q: In the book, you argue that antitrust is the best way to deal with Facebook’s concentration of power. But it seems that now you believe the more likely remedy is regulation.

Exactly. I love the thought that we need both antitrust intervention and data protection regulation. I’m just a lot less confident that antitrust can happen in my lifetime. It just seems like the forces aligned against real competition in commerce are too strong. I’m just not optimistic now. Europe has the potential to work with a stronger sense of its own competition law, but I have less and less faith in my own country’s ability to confront the problem of Facebook.

“Those of us who study this stuff, we were trying to get someone to pay attention to the fact that these companies were abusive to people, that they were easily hijacked and hackable and that their optimism and naiveté would ultimately be their undoing.”

Q: In 2011, you published The Googlization of Everything, in which you warned about the dominance of another digital platform: Google. Facebook, which was only mentioned briefly in the book, has become so much bigger and consequential since then. What else has changed?

Oh boy, so many things. That book came out in early 2011. Since that time, we saw an even stronger sense of idealism come out of Silicon Valley. There was this sort of nonsense about the Arab Spring being a social media phenomenon, as opposed to a price-of-wheat phenomenon, or revolt-against-governments-that-torture phenomenon, or an Al Jazeera phenomenon. From that, we saw the Obama administration and the Clinton State Department really champion this idea of Internet freedom as a solution to problems in the world, which could not have been a more empty set of ideas, but now you had the foreign policy establishment in Washington echoing the naive idealism of Silicon Valley. It was a terrible stereo effect, and those of us who study this stuff, we were trying to get someone to pay attention to the fact that these companies were abusive to people, that they were easily hijacked and hackable and that their optimism and naiveté would ultimately be their undoing. In The Googlization of Everything I hinted at all of that, but I didn’t see a few things coming.

First of all, I didn’t see how important YouTube would be to the propaganda efforts around the world, to the spread of neo-Nazi and other right-wing messages and the spread of Islamic terrorist messages.

So you had this rising sense of idealism and appreciation at the very moment that oppressive forces, nationalist forces, terrorist forces are learning to work these systems better than the activists who were fighting for democracy and human rights.

Then you have the fact that so many of these companies were able to make tremendous amounts of money between 2011 and 2013. Google made money early, but Facebook didn’t start making money until about 2011. Its IPO came in 2012, and then you had IPOs for dozens of other smaller Internet companies that seemed to bring all sorts of crazy cash into that system. And Amazon started making money finally—as late as 2015, I think—so you started seeing more and more money flow into the system, thus reinforcing all of the shallow beliefs that drive Silicon Valley. It got echoed from Washington, DC, so nobody was stepping up in Washington and proposing a countervailing force. It was a really weird time until 2016, when all of a sudden everyone woke up and said, “Whoa, what just happened?”

Q: It wasn’t a coincidence that Washington echoed the messages of tech platforms, though. Google essentially ran the Obama administration’s IT. Facebook was widely credited with delivering Obama his reelection victory. There are many who suggested that regulatory capture played a part here.

That’s right. It just made it very difficult to raise these questions effectively in Washington. And [the Trump] administration doesn’t take policy seriously, so there’s no way to approach these companies in a coherent, informed, and reasonable way and use policy tools to rein them in or even support them if need be. We moved from a situation where there was this naive-bordering-on-corrupt relationship between the Obama administration and Facebook and Google to this sort of weird maelstrom where we just can’t even talk about policy in any serious way.

Q: Another development is that it’s not just Google or Facebook surveilling us anymore. Every site and company today engages in the tracking and profiling of users. The entire digital economy, in that sense, has been Google-ized or Facebook-ized.

Absolutely. The best journalistic work done on the ad-tech business and surveillance capitalism has come from the Wall Street Journal, and the Wall Street Journal‘s advertising platform itself is constantly watching me. I’m a Wall Street Journal subscriber, so it knows everything about me. I’m deeply profiled by it. The Wall Street Journal only falls short of Facebook’s level of dataveillance and coercion because it is smaller than Facebook. If it could be as big and rich as Facebook and run an advertising system as big and powerful as Facebook, it would.

But if I ran the Wall Street Journal, could I in good conscience suggest a different way to run advertising? I don’t think so. That would be unilateral disarmament. It is the way that advertising is done these days. To be the one media company that says “No, I’m going to step back from this” is malpractice. You would be surrendering to the competition.

“It’s naive to think that removing oneself from Facebook hurts Facebook in any measurable way.” 

Q: So if the government is unlikely to do anything, and the platforms themselves are unlikely or unable to fix the problem, what about consumers? The Cambridge Analytica scandal led to the rise of a Twitter movement calling to #DeleteFacebook. You came out against this movement, essentially describing it as an apolitical act of moral indignation by those privileged enough to do so. Why do you think a consumer boycott of Facebook is unlikely to work?

For two reasons. One, we are not consumers. We’re merely cattle. Cattle can’t choose to boycott the slaughterhouse. If any of us sign off from Facebook, delete our accounts, we’re still traced and tracked by Facebook. By doing that, you’re not denying Facebook access to your data: it still has records of all you did with Facebook and continues to track everything you do on most other websites, and it probably tracks your phone so it has the ability to constantly track your location. Plus, Facebook has purchased consumer data about all of us from the big data vendors. It’s naive to think that removing oneself from Facebook hurts Facebook in any measurable way.

Second, we have to keep in mind how big a number 2.2 billion is and we have to keep in mind what a big number 250 million people in India is. Those two numbers should give us pause. Even if 1 million or 10 million or 20 million Americans quit Facebook tomorrow, Facebook would worry about the public relations hit, but it wouldn’t worry about its overall growth and impact in the world. Facebook expects to lose users in the United States over time. Facebook expects to gain many more users than that in Brazil and Nigeria and Kenya. Mark Zuckerberg spends hours per day trying to figure out how to crack China. He does not spend hours per day worrying about the handful of people who are motivated to quit by reading my book or articles on how Facebook is bad for them. That number of people will never reach 20 million. And even if it did, he wouldn’t care.

Also, has there ever been a political movement in this country that could summon 20 million people to do something against their own short-term interests? It’s not a common thing. The things that have generated this kind of movement have been deadly serious, like civil rights.

Q: And even the civil rights movement didn’t mobilize 20 million people.

It motivated just enough millions of people to make a difference and it changed opinions overall. But it’s fantasy to think that that could happen. We have to remember that when people call what they do boycotts, often what they really mean is “I’m just going to stop doing something for myself.” A boycott is an organized political movement, not a hashtag and a personal decision. A boycott has to be carefully targeted at a company toward some end and some goal. You have to be able to say “We will stop riding the transit service of Birmingham, Alabama, until voting rights are secured, or until the sheriff releases these activists or desegregates the transit system.” It has to be organized, it has to have clear goals and a clear membership, and it has to be visible. And it’s always best done locally, by a few hundred people attacking a particular institution or business.

To do something so global as to say we’re going to boycott Facebook, that’s just silly. That’s not political. That’s personal. That’s a statement of personal indignation. The reason I recoil at the notion that people should do that is not that I’m necessarily against them doing it, it doesn’t bother me one way or another. What really gets me is this fantasy that this is a political act, or that it’s a politically effective act, instead of just an act of self-righteousness.

Look, I’ve currently suspended my account on Facebook and I’m going to be off it for probably five to six weeks. And the reason I’ve done that is I was not enjoying it. I was barraged with many items that deeply troubled me and I could not get through my day in a decent spirit while being hooked on Facebook. So I suspended my account for personal reasons. I totally applaud and appreciate anyone else who makes the same decision for personal reasons, but I will not pretend that Facebook cares or that it’ll make a difference. I think we should be considering actual political responses. Even though those responses are unlikely to succeed or unlikely to succeed soon, they are the only ways to go.

Q: What would a political solution to Facebook look like? 

A political solution would look like widespread dissatisfaction with this level of surveillance expressed clearly to our legislators and structured through some organizational instruments. It might be some version of the ACLU, it might be the Electronic Frontier Foundation. It could be any number of public service organizations working through DC or working through Brussels or London or Ottawa and they would clearly articulate the goals of legislation. They would harness public support and keep the focus on these issues.

Here’s the thing: I am very pessimistic about all of these efforts either because I think they won’t work in many cases or I think they would work, but they’re unlikely to happen in many cases. I’m not strategically pessimistic, I’m sincerely pessimistic. But I think there’s an additional strategic advantage to my position and that is I am challenging people to prove me wrong. If a group of very smart people can show me that my diagnosis of the problem is way off, that Facebook is not actually a danger to people and to democracy, I could not be happier. If I would be able to say three to five years from now, “You know that book I wrote called Anti-Social Media? Forget about it, it’s not worth it, I was totally wrong about everything,” that would be great.

Disclaimer: The ProMarket blog is dedicated to discussing how competition tends to be subverted by special interests. The posts represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty. For more information, please visit ProMarket Blog Policy.