[Screenshot caption: The buses were in Austin for a software programming conference — but Tucker’s tweet was shared more than 350,000 times on Facebook.]

Donald Trump’s victory has apparently prompted some soul-searching in Silicon Valley. In the days after November 8, Facebook employees were reported to be questioning the role they had played in the election. Facebook’s potential to trap users in a partisan bubble is well-known, but this cycle raised a new concern: fake news, which circulated widely on Facebook and other platforms during the campaign. Reports that Pope Francis had endorsed Donald Trump, or that an F.B.I. agent investigating Hillary Clinton’s emails had died in an apparent murder-suicide, were shared millions of times. In response to the spread of fake news, Facebook and Google announced this week that they would no longer allow websites peddling such falsehoods to use their advertising services. Twitter, scene of so much “alt-right” (read: white-supremacist) racism and harassment during the election, has also taken action. On November 15, the company suspended the accounts of Richard Spencer and around a dozen other white supremacists.

Facebook, Google, and Twitter represent a new kind of public actor: information gatekeepers with unrivaled dominance in their respective formats. No social-media site rivals Facebook’s 1.8 billion monthly users; no search engine can compete with Google; no other site even tries to replicate Twitter. Those three companies therefore have immense power over what information Americans — indeed, people everywhere — encounter. So it is understandable that some people are uneasy about their ventures into what seems, from a certain angle, to be censorship.

Whenever someone begins restricting the flow of information based on what’s “true” or “right,” the inevitable question becomes: Who decides what’s “true” or “right”? Historically, only governments and religious institutions had the ability to control the flow of information to large numbers of people. But with that power comes the potential for abuse or collective delusion. Governments might censor embarrassing information, or a church might persecute scientists whose theories undermine its dogma. Accordingly, liberal societies have generally declined to give anyone the power to determine truth on a society-wide scale. We do not have a Ministry of Truth in the United States. We have the opposite: the First Amendment allows different media outlets and political actors to offer their own versions of the truth, which citizens can then choose between. Ideally, the best ideas — the true ones — float to the top, and the rest fall away. But when Facebook pre-screens the information that 1.8 billion people see, there’s a risk that something true might get screened out, or that value judgments might leak into assessments of truth or falsity. What would Facebook have done with the headline, “SCANDAL that will SINK NIXON: Government SPIES break into hotel, steal documents!”?

This dilemma exists because people can, in good faith, disagree about the truth. No one has privileged access to it: when some authority claims such access, whether a church or a government or anyone else, it merely imposes its preferred truth on others, often for unsavory ends. So if someone thinks it’s true that Pope Francis endorsed Donald Trump, but Facebook thinks it’s false, why do we side with Facebook?

The answer is that the people responsible for that report did not actually believe that Pope Francis endorsed Donald Trump. The fake news stories that percolate through the internet do not represent “different points of view”; they are lies and propaganda. They reflect not a good-faith disagreement between people grappling with the truth, but an attempt by one person to manipulate and deceive others. The truth wins when people approach their disagreements in good faith. This doesn’t mean people must renounce their prior beliefs; it means only that people argue for things they actually believe. The truth wins because, over time, people converge on it, thanks to our shared faculties of sense, language and reason. Despite humans’ boundless capacity for self-delusion, contradictory evidence makes beliefs harder to hold onto. People change their minds. Social norms and scientific beliefs evolve. But sometimes people don’t approach disagreements in good faith. Sometimes people propagate lies that even they know are lies. That was a theme of this year’s election cycle, from fake news to white-supremacist Twitter. But it is not a new theme. As Jean-Paul Sartre wrote in 1946:

               “The anti-Semite has chosen hate because hate is a faith; at the outset he has chosen to devaluate words and reasons… Never believe that the anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse, for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.”

There is a distinction between good-faith disagreements and disagreements between honest people and propagandists. In the first case, it is questionable whether any informational authority should privilege one side over the other. That’s why we have the First Amendment: the government shouldn’t silence one side of an argument, no matter how wrong that side appears to be. Similar logic might apply to Facebook and Google, which have traditionally been wary of curating the content on their sites. Ideally, they shouldn’t have to curate, because the truth wins out over time. But this rationale becomes less compelling when people cease to argue in good faith. Some people don’t care for the truth: they care only for comfort and power. Their purpose, as Sartre notes, is not to persuade but to intimidate and disconcert. Intimidation may seem the greater threat, but a liar’s ability to disconcert is even more corrosive. Because when those around you seem impervious to truth, you risk losing your own grip on it. When truth appears to lose its power to persuade, how do you know it’s true?

Lies and propaganda are not just “different points of view”: they are dangerous, and do not merit the same deference and tolerance as reasoned disagreements. This seems like an obvious proposition, but it is hard to square with liberal society’s tradition of respect for differences of opinion. How can one tell the difference between good-faith disagreement and malicious falsehood? There is no objective, foolproof method. Rather, the answer lies in each one of us, in the common subjective faculties of sense, reason and speech that allow us to agree on truth in the first place. As Sartre notes, propagandists do not bother to speak carefully or reasonably, because they have no use for the common faculties of truth. Rather, hate is their faith. But if the rest of us hold ourselves accountable to words and reasons, and trust them, we can identify those who do not, and resist them.

In the words of Vaclav Havel, who led the Velvet Revolution that toppled Czechoslovakia’s Communist government: We must live within the truth. Havel laid out the philosophy behind the Velvet Revolution in an essay called “The Power of the Powerless.” By the 1980s, Czechoslovak Communism persisted only because no one would say what everyone could see: that it was failing. Party members promised the imminent victory of the working class, but they didn’t believe it, nor did they expect their subjects to. Rather, they expected only that people would go along with the charade, out of fear and collective psychological inertia. The system fell when people refused to go along with the charade any longer — when, as Havel said, they began to live within the truth, a truth that they could all see, and hear, and taste, and name. Havel tells the story of a brewery worker who lost his job when he refused a Party official’s order to adopt a new, worse recipe. The man was not fired because his taste in beer didn’t align with the Party’s preferences — just the opposite. Everyone at the brewery, the managers, the officials, even the Party official who ordered the change, could taste that the beer was worse. Yet this readily apparent sensory truth conflicted with the “truth” upon which Czechoslovak Communism was predicated: that the Party could not err. The man was such a threat because everyone knew he was right. They could taste it for themselves. And that collective knowledge of the truth — that the beer was worse, or that Czechoslovak Communism was failing — could have unified a political movement. Eventually, it did.

As long as people can tell truth from lies, a government built on lies will be unstable. A government cannot last long when its stability depends on everyone saying the beer is fine when they all know it isn’t. When people can see or hear or taste something, and put a name to it, and infer that others have seen or heard or tasted the same thing, they can organize themselves according to this shared truth, this literal common sense. Accordingly, the most sinister technique of authoritarian power is to deprive people of common sense, as in Orwell’s 1984. Oceania’s fictional government aims to destroy language and replace it with “Newspeak,” which gradually narrows the range of thoughts a speaker can express. The goal, as a colleague explains to Winston, the book’s almost-hero, is to shrink the language, year by year, until heretical thought becomes literally unthinkable. If people know the beer is bad, but have no words for “beer” or “bad,” their ability to organize around that knowledge is limited. (It’s questionable whether this could even be called knowledge.) So much for language. Yet 1984’s darkest moment comes during Winston’s torture inside the government citadel. While Winston writhes in pain, the interrogator O’Brien holds up the four fingers of one hand, thumb concealed. How many fingers am I holding up? he asks. Four, Winston replies. His pain intensifies. O’Brien asks again. And this time, for a split second, Winston sees a fifth finger. Two and two make five. Winston’s own senses betray him: he can no longer believe what he sees. He can taste the beer, but he is no longer sure whether it is good or bad. Orwell writes:

               “In the end the Party would announce that two and two made five, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all, how do we know that two and two make four? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable – what then?”

We know that two and two make four because we can see it with our own eyes. At bottom, this is how we know all the truth that we know. But when we can no longer believe our own eyes, we lose any common faculty of truth, and thus any access to shared truth, and any shot at political agency.

The genius of 1984 is that while its setting seems far removed from today’s reality, its insights about the nature of power are always relevant. Like O’Brien, Donald Trump is now asking us not to believe our own senses. He ran a campaign that consisted mainly of lies, and in the end it was inevitable that he should claim that two and two made five.

One of Trump’s more inflammatory proposals from the campaign trail was to create a database of Muslims. There is a video of Trump saying that he would implement such a database. In the video, a reporter asks Trump, “Should there be a database system that tracks Muslims here in this country?” Trump responds: “There should be a lot of systems, beyond databases.”

Trump goes on to talk about the border, and then the reporter follows up: “But that’s something your White House would like to implement?” “Oh, I would certainly implement that,” Trump responds.

The reporter presses for details. “Do you go to mosques and sign these people up?”

“Different places,” Trump replies. “You sign them up at different — but it’s all about management.”

In another clip, a different reporter asks what the difference is between a database of Muslims and the Nazi policy of requiring Jews to register. After asking the reporter which outlet he was from — presumably, Trump would not have spoken to a hostile outlet — Trump responds, “You tell me.”

The proposal garnered renewed scrutiny recently when Trump surrogate Kris Kobach suggested that the president-elect’s team was considering implementing the registry. The media pounced. But then Trump spokesman Jason Miller said in a statement that “President-elect Trump has never advocated for any registry or system that tracks individuals based on their religion, and to imply otherwise is completely false.”

Miller knows that Trump supported the registry. Trump knows that Trump supported the registry. They both know a video exists of Trump saying he supports the registry. They deny it nonetheless. This is a naked lie, in defiance of common sense, in defiance of any concept of truth. They are telling us that 2 + 2 = 5, and that we should get used to it.

Steve Bannon, the white-nationalist former Breitbart News boss, will arguably be the most influential person in Trump’s White House. He seems to be the only man with an ideological grasp of Trumpism as a whole: what it is, who its constituents are, why they support it. And he surely still wields considerable clout at Breitbart, which, as an ex-staffer put it to Forbes, “will be the closest thing to a state-owned media entity” the U.S. has ever seen. Breitbart recently published an article titled “Donald Trump won 7.5 Million Popular Vote Landslide in Heartland.” It was accompanied by a map that showed the entire heart of the country, including Chicago and St. Louis, colored red, with blue confined to the effete liberal enclaves of California and the Northeast. The article didn’t contain any blatant falsehoods: it used some bad math and a weird map to make it seem that Trump had won some sort of popular-vote victory. But it was propaganda nonetheless, designed to further a political agenda rather than reveal any actual truth. Such stories are not uncommon from Breitbart, and now that the site will function as Trump’s de facto propaganda wing, they should increase in both number and danger. How long until they stop stretching the truth and start ignoring it is anyone’s guess.

Falsehood is ascendant in Trump’s America: According to a Buzzfeed analysis, in the weeks before the election, fake news was shared more frequently on social media than real news was. And now it’s not just basement-dwelling internet trolls and Macedonian teenagers spreading misinformation: The President-elect of the United States is a shameless serial liar who recently called a meeting of media executives and personalities specifically to berate them for their accurate coverage of him. People who believe in any form of objectivity, reason or truth need to stand firm in the face of this onrushing tide of deceit. In such an environment, tolerance of purposeful falsehood is not enlightened: it is relativism.

Facebook’s fake-news ban is a good start, but it might not be enough: after all, most news on the site spreads through people’s newsfeeds, not through paid advertisements. Facebook should investigate stories that get widely shared on its site, and publicly flag those that appear to contain purposeful falsehoods. It should not ban or remove these articles: as David Frum points out, Twitter’s ejection of many white supremacists likely only fanned the flames of racial partisanship, and legitimized white supremacists’ complaints of marginalization and censorship. Rather than depriving such people of their platform, we should seek to deprive them of their audience, by making clear when they are peddling falsehoods. Ideally, reasonable people would stop and think before sharing or believing a story that Facebook had flagged as likely false; anyone intent on believing such a story is probably uninterested in the truth to begin with. Furthermore, Facebook and other sites have an incentive to be restrained and objective in deciding what content to flag: over-zealous curation would lead to claims of bias and partisanship. People will make these claims nonetheless, but Facebook and other sites should not be deterred. Rather, as we approach the inauguration of Donald Trump as President of the United States, they should have the courage to be partisans of the truth. As should we all.
