Mark Zuckerberg took to Facebook Wednesday to once more defend himself and his platform. Responding to a cavalierly tweeted charge of anti-Trump bias from the President of the United States, Zuckerberg again repeated his claim that Facebook was “a platform for all ideas,” and that, contrary to unfolding public opinion, his company did much more to further democracy than to stifle it. For evidence, Zuckerberg—as is his wont—turned to the data. “More people had a voice in this election than ever before,” he wrote. “There were billions of interactions discussing the issues that may never have happened offline.” He also pointed to the number of candidates that used Facebook to communicate, and the amount of money they spent publishing political advertising on his platform.
Zuckerberg has made this kind of quantitative argument before. In his first letter to investors in 2012, he wrote that “people sharing more … leads to a better understanding of the lives and perspectives of others” and “helps people get exposed to a greater number of diverse perspectives.”
These arguments rest on a simple equation: The amount of information that a population shares is directly proportional to the quality of its democracy. And, as a corollary: the more viewpoints that get exposed, the greater the collective empathy and understanding.
That math has worked out well for Facebook for most of its history, as it convinced its users to share more information in the name of community and openness. It found its ultimate expression in the Arab Spring, when protestors around the Middle East connected over Facebook to have conversations they couldn’t in public. In retaliation, some of those threatened governments shut down the internet, only proving the point: good guys spread information, and bad guys try to stop it.
But as Facebook has grown, that equation has become less certain. Today, Facebook users perform two very different functions; they are both sources and recipients of information. Zuckerberg’s formulation, that more information is always empowering, may be true when I’m sharing information—I certainly benefit from my ability to say whatever I want and transmit that information to anyone in the world. But it’s not necessarily the case when it comes to receiving information.
Future Shock author Alvin Toffler saw the problem back in 1970, when he coined the term “information overload.” “Just as the body cracks under the strain of environmental overstimulation,” he wrote, “the ‘mind’ and its decision processes behave erratically when overloaded.” Fellow futurist Ben Bagdikian expressed similar concerns, writing that “the disparity between the capacity of machines and the capacity of the human nervous system” results in “individual and social consequences that are already causing us problems, and will cause even more in the future.”
Zuckerberg’s corollary, that exposure to more viewpoints makes you more informed, doesn’t fare any better. By that logic, CNN’s shoutfest-panels, in which a half-dozen consultants yell at one another, should be the most illuminating show on television. (It isn’t.)
We are certainly hearing more from one another than ever before. Ideas that were once dismissed as fringe, from white supremacy to socialism, are getting expressed and openly shared. By Zuckerberg’s math, that should be producing a more cohesive society and a better-functioning democracy. But that isn’t happening, because of what Zuckerberg’s equation leaves out.
Zuckerberg’s stance requires him to argue that any conclusion someone reaches as a result of what they see on Facebook is by definition good for society. After the election, Zuckerberg dismissed claims that fake news had swung the vote to Trump as condescending: “Voters make decisions based on their lived experience,” he said. Twitter made a similar argument in June, when its vice president of public policy, government and philanthropy wrote that its users wouldn’t be swayed by fake news on its platform because “journalists, experts, and engaged citizens tweet side-by-side correcting and challenging public discourse in seconds.” Trusting users to discern meaning from a barrage of tweets is the informational equivalent of the mythical homo economicus, the perfectly rational consumer who always acts in his own self-interest. It’s also a familiar argument for anyone who has railed against the power that our self-designated cultural gatekeepers exercised to limit our worldviews and control the terms of our discourse.
But we are starting to see the limits of that argument. However you feel about Trump, you’d be hard pressed to conclude that the deluge of digital information that Zuckerberg celebrates has created a more cohesive and politically coherent nation. In his book Propaganda, the pioneering publicist Edward Bernays wrote that “In theory, every citizen makes up his mind on public questions and matters of private conduct. In practice, if all men had to study for themselves the abstruse economic, political, and ethical data involved in every question they would find it impossible to come to a conclusion about anything.” Bernays was a megalomaniacal jerk, but maybe he was onto something.
Companies like Facebook imagine that they are taking a great step toward civilizational progress—that by removing barriers to communication they are building a new era of human consciousness. Perhaps they are right. But progress requires other ingredients as well—like a coherent narrative. “Any large-scale human cooperation—whether a modern state, a medieval church, an ancient city or an archaic tribe—is rooted in common myths that exist only in people’s collective imagination,” writes Sapiens author Yuval Noah Harari. “Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods, or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of strangers to cooperate and work towards common goals.”
This is what’s missing from Zuckerberg’s math—the transmutation of information into common myth. We have more data than ever before, but when you put it all together, it doesn’t add up to much.
This isn’t to suggest that we need to return to the days of cigar-smoking, back-room gatekeepers. Facebook proved, thrillingly, that an algorithm can be a better judge of what someone wants to read than any human ever could. But that’s not always a good thing. People tend to read, like, and share information that confirms their own biases, or stokes their anger—not necessarily information that brings them closer to citizens of all political persuasions.
Imagine if Facebook were to ask a different question. Instead of asking what someone wants to read, it could ask what someone should read. If Facebook decided it really wanted to bring diverse people together, it could promote stories that diverse people like—stories that get high completion rates and engagement from users of all political persuasions, or all ethnic backgrounds, or who are distributed evenly around the country. That might create its own problems—favoring bland centrism over radicalism, for instance. But it may suggest a new way of creating a less top-down, more-inclusive common narrative.
Of course, it would be difficult for Facebook to make that decision. It might make for a higher-quality experience, but a less addictive one. It might even cause people to spend less time on Facebook altogether. And less time means less revenue, which means a lower stock price.
And that’s an equation that Facebook understands better than anyone.