Content Moderation, Free Speech, and Community
Unlimited dialog makes conversations harder.
Even though I’ve been blogging for 19 years and change, I’m a piker compared to Mike Masnick, who founded Techdirt way back in 1997. He’s been struggling with how to manage a commenting community for a quarter-century now and, unlike most who run large platforms, hasn’t simply given up by either shutting down the discourse entirely or letting it run rampant.
In his latest post on the subject, “Why Moderating Content Actually Does More To Support The Principles Of Free Speech,” he rejects the absolutist principle.
Obviously over the past few years there’s been all of these debates about the content moderation practices of various websites. We’ve written about it a ton, including in our Content Moderation Case Study series (currently on hiatus, but hopefully back soon). The goal of that series was to demonstrate that content moderation is rarely (if ever) about “censoring” speech, and almost always about dealing with extremely challenging decisions that any website has to deal with if they host content from users. Some of that involves legal requirements, some of it involves trying to keep a community focused, some of it involves dealing with spam, and some of it involves just crazy difficult decisions about what kind of community you want.
And yet, there are still those who insist that any forms of content moderation are either censorship or somehow “against the principles of free speech.” That’s the line we keep hearing. Last week in the discussion regarding Elon Musk’s poll about whether or not Twitter “supported” free speech, people kept telling me that the key point was about the “principles of free speech,” rather than what the law says. This discussion also came up recently with regards to the various discussions on cancel culture.
I understand where this impulse comes from — because I had it in the past myself. Over a decade ago I was invited to give a talk to policy people running one of the large user-generated content platforms, and it was chock full of former ACLU/free speech lawyers. And I remember one of them asking me if I had thoughts on when it would be okay for them to remove content. I started to say that it should be avoided at almost all costs… when they began tossing out example after example that began to make me realize that “never” is not an answer that works here. I still recommend listening to a Radiolab episode from a few years ago that does an amazing job laying out the impossible choices when it comes to content moderation. It highlights how not only is “never” not a reasonable option, but how no matter what rules you set, you will be faced with an unfathomable number of cases where the “right” answer or the “right” way to apply a policy is not at all clear.
Honestly, absolutism has never struck me as a reasonable policy for a personal site, which OTB has always been. For platforms that essentially serve as a public accommodation, like Facebook or Twitter, there’s a better argument for it. To a lesser extent, that’s even true of large mainstream media sites that engage a huge audience. (Although, in that case, turning off the comments entirely is likely the best approach.)
OTB has had a commenting policy for a very long time, in an effort to signal to those who wish to participate what behaviors we’d prefer not to see. Because there are only two or three active bloggers at any given time and we’ve got busy day jobs, enforcement is inconsistent at best. But it’s never occurred to me that anyone who isn’t paying to host the site has any right to comment here.
After a short discourse on cancel culture that largely mirrors what Steven Taylor and I have already written here, Masnick hits on a useful analogy:
[C]ontent moderation clearly actually enables more free speech. First, let’s look at the world without any content moderation. A website that has no content moderation but allows anyone to post will fill up with spam. Even this tiny website gets thousands of spam comments a day. Most of them are (thankfully) caught by the layers upon layers of filtering tools we’ve set up.
Would anyone argue that it is “against the principles of free speech” to filter spam? I would hope not.
But once you’ve admitted that it’s okay to filter spam, you’ve already admitted that content moderation is okay — you’re just haggling over how much and where to draw the lines.
We have long employed a spam filter here that, in the last decade or so, has been incredibly good at weeding out true spam while only seldom flagging legitimate comments. And I think we’d all agree that’s a good thing for the conversation.
Frankly, I’d go further than Masnick here: some actual commenting, including intentional trolling, is worse for the community than spam. Posts about ways to make $500 a day working from home or to increase the size of one’s penis with one easy trick are annoying, but we’ve all trained ourselves to scroll past them. Few can resist engaging with trolls, though, and that quickly derails comment threads.
Masnick follows this with an even more valuable insight:
There’s increasing evidence that when you have a totally freeform venue for free speech, it makes many people hold back and not join in. For all the talk of “cancel culture” that relies on claims that people are somehow “afraid” to speak their minds, they should maybe consider that the problem might not be cancel culture, but that some people don’t want to have to constantly debate their beliefs with every rando who challenges them.
In other words, a full open forum is not all that conducive to “free speech” either, because it’s too much.
Instead, what content moderation does is create spaces where more people can feel free to talk. It creates different communities which aren’t just an open free for all, but are more focused and targeted. This actually ties back into the Section 230 debate as well. As the authors of Section 230 have explained, when they wrote that “The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity” they did not mean that every website should host all of that content itself, but rather that by enabling content moderation, distinct and diverse communities could form. As they explained:
In our view as the law’s authors, this requires that government allow a thousand flowers to bloom—not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same “neutral” point of view. This is the opposite of true diversity.
To use an obvious example, neither the Democratic National Committee nor the Republican National Committee websites would pass a political neutrality test. Government-compelled speech is not the way to ensure diverse viewpoints. Permitting websites to choose their own viewpoints is.
Section 230 is agnostic about what point of view, if any, a website chooses to adopt; but Section 230 is not the source of legal protection for platforms that wish to express a point of view. Online platforms, no less than offline publishers, have a First Amendment right to express their opinion. When a website expresses its own opinion, it is, with respect to that expression, a content creator and, under Section 230, not protected against liability for that content.
In other words, the concept of free speech should support a diversity of communities — not all speech on every community (or any particular community). And content moderation is what makes that possible.
As has often been noted, OTB is really unusual in that it began as a very Republican-leaning political site that managed, for a variety of reasons, to attract commenters from across the political spectrum. As the site hosts became more critical of—and ultimately left*—the GOP, most of the conservative commenters departed, a self-reinforcing cycle: they felt increasingly unwelcome in a comment section ever more dominated by Democrats and progressives. Even now, though, Steven and I are to the right of the median site commenter.
Steven and I would both strongly prefer more diversity in the comment sections, both in the sheer number of people posting and in the range of opinions expressed. Alas, we also want the level of discourse to remain high and, with rare exception, the Republican-leaning commenters we’ve managed to attract in recent years tend to bring in tiresome social media talking points rather than engage in useful discourse.
Regardless, Masnick is right: the value of comment moderation is the ability to maintain a community. While I’d ideally prefer the range of opinions to be broader—and for more conservative opinions to be engaged more thoughtfully—we’ve formed a community here that has very little patience for nonsense.
As Steven noted in a comment thread the other day, when a longtime former commenter re-appeared under a new pseudonym, we’ve never banned any commenter for being pro-Trump or for expressing any particular value system. I’m open to arguments for why Trump’s actions surrounding the 2020 election were actually within bounds or why Ron DeSantis would be a much better choice in 2024 than Joe Biden or Kamala Harris. Those views are sufficiently alien to the site hosts and the overwhelming majority of the commentariat that, if presented well, they would inject something useful into the conversation. But we also hold to Daniel Patrick Moynihan’s dictum that we’re entitled to our own opinions but not our own facts.
*Steven defected after the Palin nomination, voting for Obama in 2008; I held on through the Trump nomination, voting for Clinton in 2016. The late Doug Mataconis was harder to pin down, tending to be a Libertarian protest voter even though he was always accused of being in the party opposite whichever commenter was accusing him.