By Noah Feldman
After the El Paso shooter posted a manifesto on the anonymous message board 8chan, the network provider, Cloudflare, suspended the site’s account, taking it offline — at least for now.
Whether you applaud or oppose the action, it raises a fundamental problem for the future of free speech: Should there be some place on the internet where even the most repellent, vile discussion is allowed? Or would we be better off collectively if we hounded such speech wherever it crops up, driving it ultimately to the dark web, and attacking it even there in the hopes of eliminating it altogether?
The case of 8chan seems to provide the strongest possible argument that some speech just shouldn’t be allowed to appear on the internet. In posting his manifesto there, the El Paso shooter was following in the footsteps of the Christchurch, New Zealand, shooter, and the shooter at the Tree of Life synagogue in Pittsburgh. 8chan was founded specifically to host speech that was too extreme to appear on other message boards, including 4chan, which until 8chan came along was thought to be at the extreme end of permissive policy. Since its founding, 8chan has become home to speech that is extremist along a range of dimensions, including most of the usual suspects: racism, sexism, homophobia, paranoid conspiracy theory, and the like.
The posting of manifestos by people who go out and become active shooters is strong proof that speech isn’t always just words — it’s also a type of conduct. Seen from the perspective of a shooter, the posting of a manifesto is part of the overall performance of what can technically be called spectacular violence. The spectacle of the shooting is supposed to guide the public to the ideas contained in the manifesto. Doing the shooting without posting the manifesto would not count as a complete accomplishment of the shooter’s ends.
Traditional First Amendment doctrine prohibits the U.S. government from banning almost any speech at all — including most shooters’ manifestos. The relevant legal standards, from a 1969 case called Brandenburg v. Ohio, allow speech to be punished when the speaker intends to incite imminent violence — and the speech is actually likely to incite imminent violence. As interpreted by the courts, the imminence and likelihood standards effectively mean that the only kinds of speakers who may be stopped are those who are standing in front of an angry crowd and inciting the crowd to take violent action.
A shooter’s manifesto might conceivably incite others to commit harm — but not imminently, at least not in the traditional legal understanding of the term. The intent is there, but the probability of actually creating imminent harm in the legal sense typically is not. It might creatively be argued that when a shooter posts a manifesto, he’s inciting himself to violence. But that notion — which I have to admit I just came up with myself — doesn’t fit the ordinary meaning of incitement.
Of course First Amendment protection doesn’t mean that any private actor, including a web hosting service like Cloudflare, has to give a home to horrific speech. It only means the government can’t punish the speech. Indeed, the First Amendment as currently interpreted would protect the right of a private actor to shut down speech posted on a platform that actor controls. If the government tried to make Cloudflare restore 8chan, Cloudflare could refuse on First Amendment grounds. A corporation enjoys free speech rights under U.S. constitutional law.
So the tough question is not really one of law so much as one of where we want to put our advocacy resources. Shooters will always be able to type up their manifestos and leave them at home for the police to find. The real issue is whether to use public pressure to shut down every web-based venue where very, very bad speech flourishes.
My own instinct is that, horrible as the speech on 8chan is, there should be somewhere on the web where it can be expressed.
I’m not saying that access to web hosting services is a fundamental human right any more than there is a fundamental right to publish your ideas in Bloomberg Opinion.
And I’m not comfortable arguing that allowing vile speech serves as an outlet that lets extremism die down. Often, the expression of extremism encourages more extremism.
But we no longer communicate using flyers printed in a dark basement by a lonely pamphleteer. For better or worse, the web has become our primary forum for written communication.
It follows that if there is nowhere on the web to express certain ideas, then those ideas — bad ones, to be sure — won’t be expressed in writing at all. That in turn would lead to a narrowing of the ideas available to all humans. The core idea of the freedom of speech has always been that we allow the expression of certain ideas that we condemn and believe to be morally wrong and even dangerous — because their expression ultimately fuels the search for truth. Refuting bad ideas is part of shaping new ones, as the philosopher John Stuart Mill famously argued.
And maybe, as Justice Oliver Wendell Holmes suggested, we might even consider it a good idea to question our own most profoundly held moral certainties. It’s particularly hard to do that when we are faced with true evil. But really, that’s the time the protection of free expression most counts.
Noah Feldman is a Bloomberg Opinion columnist. He is a professor of law at Harvard University and was a clerk to U.S. Supreme Court Justice David Souter. This column does not necessarily reflect the opinion of Minnesota Lawyer, the Bloomberg editorial board or Bloomberg LP and its owners.