Disinformation and Free Speech

How far should the government be able to go?

When I saw the AP headline (“Judge’s order limits government contact with social media operators, raises disinformation questions”) and a couple of others like it yesterday, I didn’t think much of it. It seemed pretty mundane. Kevin Drum was apoplectic, though, and there is a spate of op-eds aghast at the ruling out this morning, so I decided to drill deeper.

The AP report:

An order by a federal judge in Louisiana has ignited a high-stakes legal battle over how the government is allowed to interact with social media platforms, raising broad questions about whether — and how — officials can fight what they deem misinformation on health or other matters.

U.S. District Judge Terry Doughty, a conservative nominated to the federal bench by former President Donald Trump, chose Independence Day to issue an injunction blocking multiple government agencies and administration officials. In his words, they are forbidden to meet with or contact social media companies for the purpose of “encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”

Thus far, aside from the fact that the judge is a Trump appointee, this strikes me as perfectly reasonable. While government officials have a role in coordinating against foreign agitprop and illicit speech (promoting criminal activity, including terrorism), attempts to strong-arm social media companies into removing “disinformation” coming from US citizens are worrisome. While I’m more comfortable with the Biden (or any “normal”) administration doing it than I would have been under Trump, I’m pretty close to a free speech absolutist and “suggestions” or “advice” from regulators could easily have a chilling effect.

The order also prohibits the agencies and officials from pressuring social media companies “in any manner” to try to suppress posts, raising questions about what officials could even say in public forums.

This, on the other hand, seems sloppy and overbroad.

Doughty’s order blocks the administration from taking such actions pending further arguments in his court in a lawsuit filed by Republican attorneys general in Missouri and Louisiana.

The Justice Department filed a notice of appeal and said it would also seek to stay the court’s order.

White House press secretary Karine Jean-Pierre said, “We certainly disagree with this decision.” She declined to comment further.

An administration official said there was some concern about the impact the decision would have on efforts to counter domestic extremism — deemed by the intelligence community to be a top threat to the nation — but that it would depend on how long the injunction remains in place and what steps platforms take on their own. The official was not authorized to speak publicly and spoke on the condition of anonymity.

Some activities by “domestic extremists” (criminal conspiracy, incitement to violence, harassment) are not protected speech and presumably fall outside the order. Quite a lot (mere repugnant utterances), though, is.

The lawsuit alleges that government officials used the possibility of favorable or unfavorable regulatory action to coerce social media platforms to squelch what the administration considered misinformation on a variety of topics, including COVID-19 vaccines, President Joe Biden’s son Hunter, and election integrity.

The injunction — and Doughty’s accompanying reasons saying the administration “seems to have assumed a role similar to an Orwellian ‘Ministry of Truth’” — were hailed by conservatives as a victory for free speech and a blow to censorship.

Legal experts, however, expressed surprise at the breadth of the order, and questioned whether it puts too many limits on a presidential administration.

“When we were in the midst of the pandemic, but even now, the government has significantly important public health expertise,” James Speta, a law professor and expert on internet regulation at Northwestern University, said Wednesday. “The scope of the injunction limits the ability of the government to share public health expertise.”

The implications go beyond public health.

Disinformation researchers and social media watchdogs said the ruling could make social media companies less accountable to label and remove election falsehoods.

“As the U.S. gears up for the biggest election year the internet age has seen, we should be finding methods to better coordinate between governments and social media companies to increase the integrity of election news and information,” said Nora Benavidez, senior counsel of the digital rights advocacy group Free Press.

There’s more, but you get the idea.

Absent outside analysis and the context of Trumpism, my reaction would be largely what it was from the headline, with slight modification. It seems obvious just from the AP report that the order was sloppily written and overbroad. Beyond that, my standard lament about mere district judges—who are easily jurisdiction-shopped—having the power to issue national injunctions also applies. We really should require these cases to be filed in the DC Circuit.

As noted earlier, Kevin Drum was upset by the ruling. He cited excerpts of the ruling not included in the AP report:

What is really telling is that virtually all of the free speech suppressed was “conservative” free speech. Using the 2016 election and the COVID-19 pandemic, the Government apparently engaged in a massive effort to suppress disfavored conservative speech.

….The White House Defendants made it very clear to social-media companies what they wanted suppressed and what they wanted amplified. Faced with unrelenting pressure from the most powerful office in the world, the social-media companies apparently complied.

….The VP, EIP, and Stanford Internet Observatory are not defendants in this proceeding. However, their actions are relevant because government agencies have chosen to associate, collaborate, and partner with these organizations….Flagged content was almost entirely from political figures, political organizations, alleged partisan media outlets, and social-media all-stars associated with right-wing or conservative political views.

….The Plaintiffs have outlined a federal regime of mass censorship, presented specific examples of how such censorship has harmed the States’ quasi-sovereign interests in protecting their residents’ freedom of expression.

….The evidence produced thus far depicts an almost dystopian scenario. During the COVID-19 pandemic, a period perhaps best characterized by widespread doubt and uncertainty, the United States Government seems to have assumed a role similar to an Orwellian “Ministry of Truth.” The Plaintiffs have presented substantial evidence in support of their claims that they were the victims of a far-reaching and widespread censorship campaign. [all emphases and ellipses Drum’s]

Which he dismisses as

Deep State derp all the way down.

Which is largely fair. It’s almost certainly true that pre-Musk Twitter and Facebook targeted speech from the “right” more than the “left” during the pandemic and the aftermath of the election. But that’s because there was a hell of a lot more misinformation coming from that camp. And, given that the Trump administration was in charge, I find it hard to believe they were encouraging this particular emphasis. It’s more believable that the Obama administration was trying to crack down on misinformation during the 2016 campaign but it was mostly Russian agitprop that was the object of concern.

Drum continues:

This is Deep State derp all the way down. I wonder if Doughty also wants to prevent the White House from talking to newspapers, TV reporters, talk show hosts, radio chatterers, podcasters, newsletter writers, labor leaders, CEOs, climate activists, and bloggers? It’s going to be mighty lonely in the White House press office before long.

This strikes me as unhelpfully hyperbolic. Policymakers have First Amendment rights, too. Sloppy and overbroad though it may be, there’s no reasonable way of reading the ruling in a way that prevents giving interviews, holding press conferences, and the like—unless they’re using these fora to urge the suppression of the free speech rights of American citizens.

NYT (“Disinformation Researchers Fret About Fallout From Judge’s Order”):

A federal judge’s decision this week to restrict the government’s communication with social media platforms could have broad side effects, according to researchers and groups that combat hate speech, online abuse and disinformation: It could further hamper efforts to curb harmful content.

Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said on Wednesday that the ruling could impede work meant to keep false claims about vaccines and voter fraud from spreading.

The order, she said, followed other efforts, largely from Republicans, that are “part of an organized campaign pushing back on the idea of disinformation as a whole.”

Fair enough although, again, it’s reasonable to worry about government officials deeming protected speech as “disinformation,” particularly when the speaker is an American citizen. And it’s particularly troublesome for them to urge the suppression of that speech on that basis.

Several researchers, however, said the government’s work with social media companies was not an issue as long as it didn’t coerce them to remove content. Instead, they said, the government has historically notified companies about potentially dangerous messages, like lies about election fraud or misleading information about Covid-19. Most misinformation or disinformation that violates social platforms’ policies is flagged by researchers, nonprofits, or people and software at the platforms themselves.

“That’s the really important distinction here: The government should be able to inform social media companies about things that they feel are harmful to the public,” said Miriam Metzger, a communication professor at the University of California, Santa Barbara, and an affiliate of its Center for Information Technology and Society.

While I agree in the abstract, especially when it comes to agencies with unique expertise (CDC, e.g.), some of these determinations clearly have partisan impact. Twitter, for example, banned (presumably, on its own judgment given who was in the White House at the time) spreading of the initial Hunter Biden story broken by the NY Post. It would have been highly problematic if that happened with his father sitting in the Oval Office and presiding over the regulatory agencies.

A larger concern, researchers said, is a potential chilling effect. The judge’s decision blocked certain government agencies from communicating with some research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about removing social media content. Some of those groups have already been targeted in a Republican-led legal campaign against universities and think tanks.

Their peers said such stipulations could dissuade younger scholars from pursuing disinformation research and intimidate donors who fund crucial grants.

That seems like a legitimate concern, though one stemming not from the ruling itself but from the actors who brought the suit.

Bond Benton, an associate communication professor at Montclair State University who studies disinformation, described the ruling as “a bit of a potential Trojan horse.” It is limited on paper to the government’s relationship with social media platforms, he said, but carried a message that misinformation qualifies as speech and its removal as the suppression of speech.

“Previously, platforms could simply say we don’t want to host it: ‘No shirt, no shoes, no service,’” Dr. Benton said. “This ruling will now probably make platforms a little bit more cautious about that.”

So, “misinformation” is almost certainly speech. Whether it’s protected speech presumably depends on whether it violates a recognized exception such as fraud. That said, the ruling applies to the agencies, not the platforms. While I lean toward the argument that giants like Twitter and Facebook should be treated like public utilities given that they’re the de facto virtual town square at this point, this ruling doesn’t take us there.

In recent years, platforms have relied more heavily on automated tools and algorithms to spot harmful content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations often complained that platforms were unresponsive to their concerns, said Viktorya Vilk, the director for digital safety and free expression at PEN America, a nonprofit that supports free expression.

“Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They are less comfortable ignoring the government.”

Ah-hah!

Again, I think this is a sloppy and overbroad ruling by an activist judge with a partisan agenda. But he’s not without a point.

Several disinformation researchers worried that the ruling could give cover for social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant before the 2024 election. They said it was unclear how relatively new government initiatives that had fielded researchers’ concerns and suggestions, such as the White House Task Force to Address Online Harassment and Abuse, would fare.

For Imran Ahmed, the chief executive of the Center for Countering Digital Hate, the decision on Tuesday underscored other issues: the United States’ “particularly fangless” approach to dangerous content compared with places like Australia and the European Union, and the need to update rules governing social media platforms’ liability. The ruling on Tuesday cited the center as having delivered a presentation to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”

“It’s bananas that you can’t show a nipple on the Super Bowl but Facebook can still broadcast Nazi propaganda, empower stalkers and harassers, undermine public health and facilitate extremism in the United States,” Mr. Ahmed said. “This court decision further exacerbates that feeling of impunity social media companies operate under, despite the fact that they are the primary vector for hate and disinformation in society.”

I’m less sure than I was fifteen years or so ago that the United States’ near-absolutism on free speech is better than the approaches taken by our Anglosphere and Western European brethren. The Internet has significantly ratcheted up the potential influence of various crackpots who would previously have had difficulty gaining a platform. But the ruling here is pretty consistent with the longstanding US tradition.

I’ve already spent more time on this post than I’d intended but I would commend to you three other essays.

Leah Litman and Laurence H. Tribe, Just Security (“Restricting the Government from Speaking to Tech Companies Will Spread Disinformation and Harm Democracy”). This is a very detailed legal critique but the nut ’graphs are these:

While there are, in theory, interesting questions about when and how the government can try to jawbone private entities to remove speech from their platforms, this decision doesn’t grapple with any of them. In fact from the 155-page opinion, it’s not even clear this case really raises those questions. Each step in the reasoning of the decision manages to be more outlandish than the last – from the idea that the plaintiffs have standing to the notion that the plaintiffs are entitled to an injunction at this stage of the case to the sweep of the injunction that the district court issued.

But the absurdity of different aspects of the decision in Missouri v. Biden should not obscure the bigger picture of what happened. Invoking the First Amendment, a single district court judge effectively issued a prior restraint on large swaths of speech, cutting short an essential dialogue between the government and social media companies about online speech and potentially lethal misinformation. Compounding that error, the district court crafted its injunction to apply to myriad high-ranking officials in the Biden administration, raising grave separation of powers concerns. And equally troubling is how the court’s order, which prevents the government from even speaking with tech companies about their content moderation policies, deals a huge blow to vital government efforts to harden U.S. democracy against threats of misinformation.

WaPo Editorial Board (“How far can government go to suppress speech on social media?”). The key ’graphs:

Deep-state conspiracy theories aside, both sides have a point. The government shouldn’t be allowed to sidestep the First Amendment by cajoling social media sites into stamping out speech that the constitution prohibits the government itself from outlawing. These platforms have immense power over what people can and can’t say, and elected officials have immense power over the platforms — to force a breakup or approve a new merger, say, or, as politicians from both parties have repeatedly threatened, to remove the liability shield provided by the legal provision known as Section 230.

On the other hand, the government has speech rights, too. Just as a member of Congress may, during a hearing, decry Twitter’s attempts to curtail hate speech as part of a liberal or partisan plot, the White House press secretary should be able to declare her boss’s dissatisfaction with Meta’s enforcement of its rules against covid disinformation. Further complicating the question is the fact that, in some areas, there are legitimate reasons for executive agencies and social media sites to operate in sync. For instance, they have been working for years to collaborate more effectively against criminal activity, from terrorism to sex trafficking to election interference.

The injunction itself shows how difficult the issue is to slice: The judge writes that the government can’t urge platforms to remove protected speech, but at the same time he writes that the government may communicate with platforms about “threats [to] the public safety or security of the United States,” and even more vaguely, “other threats.” Where do conversations about medical misinformation during a public health emergency fit in?

Clearer rules about how officials can and can’t try to influence platform policy toward constitutionally protected speech, regardless of message or content, are needed. At the core of the struggle is distinguishing between persuasion and coercion or intimidation. This is easy enough when an official issues an explicit threat that it will use the privileges of the state to punish a platform for disobeying a request to remove legal speech, but it’s harder when the threat is implicit — and harder still when, as with election interference and terrorist material alike, legal and illegal speech can blur together.


FILED UNDER: Law and the Courts
James Joyner
About James Joyner
James Joyner is Professor and Department Head of Security Studies at Marine Corps University's Command and Staff College. He's a former Army officer and Desert Storm veteran. Views expressed here are his own. Follow James on Twitter @DrJJoyner.


  1. Stormy Dragon says:

    It’s almost certainly true that pre-Musk Twitter and Facebook targeted speech from the “right” more than the “left” during the pandemic and the aftermath of the election.

    Not only is this not almost certainly true, it’s been repeatedly shown that the exact opposite was true: social media companies went out of their way to favor right wing speech.

  2. Modulo Myself says:

    Well, the government can give access and take it away all with a few ‘suggestions’ to a reporter. By the logic of this ruling, a government official can’t call up a reporter after hearing about a story in order to clarify something, which in fact could be read as polite coercion and might actually be coercion.

    But the Trump whackos live in a 24/7 chamber of Orwell/Fauci issuing mind control over the masses. They aren’t concerned about real reporters. That doesn’t even register. Real reporters don’t leak videos of Hunter Biden smoking crack just for kicks, and they don’t publish bullshit stories about the lab leak or a trans clinic in St Louis handing out puberty blockers to anybody who comes in the door.

  3. Just Another Ex-Republican says:

    Free Speech is vital to a functional democracy. Propaganda is poisonous to a functional democracy. Resolving that contradiction in this age of social and mass media and the endless bubbles of unreality they create is the central issue that will determine if democracy even survives in the US.

    Of course, as usual, we assume no one else has ever grappled with this and refuse to look elsewhere to see how it might be done, hiding behind righteous platitudes and philosophy about free speech absolutism (and total hypocrisy on the subject if you are Elon Musk) while the country drifts more and more towards collapse.

  4. James Joyner says:

    @Stormy Dragon: This is a case where two things can be true. Facebook in particular was very friendly to rightish content. But the stuff that was banned, shadowbanned, caused accounts to be suspended, etc. was mostly rightwing conspiracy theories and the like—mostly because they were so prolific.

  5. Matt says:

    @Stormy Dragon: The right really knows how to work the “refs”.

    It’s like you know maybe just maybe the reason twitter/facebook ended up targeting the right more than the left is because the right peddles in a whole lot more lies?

    Legit I cannot even have much of a conversation anymore with members of the GOP’s base simply because every time they will outright deny physics/reality itself…

  6. Modulo Myself says:

    Just adding that Hunter Biden is/was a private citizen. Nobody has the right to publish private stuff from your computer on twitter. Doesn’t matter if your account was hacked or you were smoking tons of crack and left your laptop at a repair place. Twitter hedged on the laptop for about a day or so because everyone involved was a piece of shit, and they couldn’t be trusted to behave decently. And they didn’t–which is why all of those images of him are out there.

  7. drj says:

    So what is the actual injury that this court’s injunction seeks to prevent? Whose rights have been violated in the past?

    Slippery slopes and potential chilling effects don’t count. Certainly, injunctions of this magnitude need a very strong likelihood of actual harm to justify them. Where is it?

    If we are not supposed to dismiss this ruling out of hand as some crazy Trumpist judicial overreach, we need more than mere theoretical threats to the First Amendment.

    Also, this:

    The judge writes that the government can’t urge platforms to remove protected speech

    …is prima facie nonsense. Don’t these platforms have a First Amendment right to not want to contribute to the spread of harmful misinformation?

    Unfortunately, this Trumpist activist says no. Social media platforms can’t accept government guidance to stop the spread of harmful misinformation if that goes against the cult’s pet theory of the day.

    Practically speaking (as opposed to some purely theoretical threat), this means that according to this judge even more people should have died from Covid-19, or perhaps ivermectin overdoses.

  8. gVOR10 says:

    Conservatism is largely a game of make believe. Let’s pretend tax cuts reduce the deficit. Let’s pretend you can’t consider a SCOTUS nominee in the last year of a term until you do. Let’s pretend Donald Trump isn’t a sociopath. Let’s pretend there’s no more racism in the country. Let’s pretend the 2020 election was stolen. Let’s pretend COVID isn’t a real problem. And so on and so on.

    This is a just bunch of pandering red state AGs and a RW judge pretending a lot of BS on the internet is real.

  9. Gustopher says:

    I’m less sure than I was fifteen years or so ago that the United States’ near-absolutism on free speech is better than the approaches taken by our Anglosphere and Western European brethren.

    It’s worth considering the Russian social media campaign in favor of Brexit. The stronger British and EU laws didn’t prevent Russia from getting their preferred outcome there.

    It’s also very hard to say how much of an impact the Russians had, and how much was just homegrown English sentiments. (Especially since a large chunk of the Russian campaign was identifying and promoting the homegrown English sentiments, manipulating social media algorithms into boosting them and showing them more.)

    I’m not a free speech absolutist, but am wary and practical enough that I would ask whether any proposed restrictions were successful elsewhere before considering them. I don’t think the broad swath of European restrictions has protected them from the broadest attacks.

    That said, Germany’s anti-Nazi laws seem to have been pretty effective at keeping their Nazis in check.

    Overall though we (western democracies) do not have a good defense against outside actors using our free speech to undermine and hurt our countries with misinformation.

    In the US, we have a major political party embracing the conspiracy theories.

    The Internet has significantly ratcheted up the potential influence of various crackpots who would previously have had difficulty gaining a platform.

    There’s usually someone pushing the crackpots to make them politically important.

    Sure, Furries are finding it easier to meet other Furries, but they aren’t getting any greater political influence, because it doesn’t serve anyone’s interest to promote their Furry agenda. (I suppose our troops would have a harder time fighting in full fur suits)

    I think the effective solution wouldn’t be to go after the speech itself*, but to figure out what to do about the artificial promotion of conspiracy theories and misinformation.

    *: Except for Nazis. Just persecute them, and make them spend their days in court rooms defending their Hitler shit. Even if they win, and they probably should, that means less time to do Nazi shit in public.

  10. Scott says:

    I wonder when social media companies and their executives will refuse to appear at House hearings, or even to be contacted by Republican House staffers and Congressmen, because that would be considered government interference with their free speech rights.

  11. Modulo Myself says:

    What’s insane is that these red-pilled freaks all believe that social media is grooming kids to be trans. Meanwhile, they are taking medical advice from Joe Rogan MD and screaming about how unoppressed they are because of a 24/7 ingestion of memes, all while possessing the thinnest skins known to mankind. And it’s at the ‘elite’ level–Elon Musk, Sam Alito, Ron DeSantis, these guys will fall for anything and the opposite of anything.

  12. CSK says:

    The Gateway Pundit is loudly claiming to have been the lead plaintiff in this case and celebrating it as a victory for free speech.

  13. CSK says:

    Marjorie Taylor Greene was kicked out of the House Freedom Caucus, apparently for being too left-wing.

  14. Gustopher says:

    Fair enough although, again, it’s reasonable to worry about government officials deeming protected speech as “disinformation,” particularly when the speaker is an American citizen. And it’s particularly troublesome for them to urge the suppression of that speech on that basis.

    I’m inclined to agree that it raises the possibility of abuse. I would feel more comfortable if transcripts of these meetings were made available.

    There would be some concern that this would highlight the misinformation — purveyors could point and say “the government doesn’t want you to know this!” — but I think it’s a necessary safeguard, since sometimes the government is Donald Trump.

    If we learn that the Biden administration is trying to keep Hunter Biden’s dick pics off social media, we should know that. That’s more of a campaign thing, and entirely inappropriate for the government to be addressing.

    (I would also want transcripts of social media company meetings with the political parties and campaigns, as they are close-enough to government in that they may well become the government, and there’s that same corrosive effect.)

  15. gVOR10 says:

    @CSK: That alone should have been enough to convince any fair commentator that this case is bull shit.

  16. Gavin says:

    As always, the answer is “until a right wing incel crank complains” — because as we can all agree, the US government exists solely as a safe space for libertarians and dominionists to control everyone else. Their feelings are more important than your actual harm.

  17. CSK says:


    I was thinking that same thing.

  18. Raoul says:

    I agree with KD. The opinion reads like it was written in the deepest swamps of rightwingdom. If it was someone else maybe I would pay more attention to the matter. The fact is that we were in the middle of a pandemic which killed more than a million Americans and everybody, including the government, was feeling its way around. It is a sad statement that hundreds of thousands of individuals died because of the disinformation campaign, partly aided and abetted by Russia, and my question is why we didn’t do more to prevent this calamity.

  19. Andy says:

    I pretty much agree with your take James. I’m also against nationwide, single-judge injunctions as a matter of principle.

    My view is that government agencies coordinating with private organizations to try to get third parties to regulate the non-criminal speech of Americans should only be allowed for exceptional and clearly defined circumstances and should be done transparently and subject to Congressional and judicial oversight. The mere claim that something is “misinformation” is not sufficient, especially since misinformation has no objective standard.

  20. Modulo Myself says:


    The fact is that we were in the middle of a pandemic which killed more than a million Americans and everybody, including the government, was feeling its way around.

    And it wasn’t like anti-vaxxers or anti-maskers were shut down at all. My god, they flourished everywhere. The government didn’t do anything except make vague gestures in the direction of public health. There was no suppression, no censorship, nothing. Just angry white morons blathering online 24/7 and alienating their friends and appearing ugly and awful and then thinking the government did something to make them this regrettable as humans.

  21. Gustopher says:


    I think the effective solution wouldn’t be to go after the speech itself, but to figure out what to do about the artificial promotion of conspiracy theories and misinformation.

    I just want to note that this is very consistent with my belief that Section 230 should protect the platform hosting speech (not an editorial decision), but should not protect the platform’s algorithmic (or human chosen) promotion of specific content, as that is editorial in nature.

    If someone can make a case that Twitter or Facebook promoting misinformation on vaccines, masks and other Covid health matters has led to greater death and/or costs, then they should be able to have their day in court. (I’m thinking something similar to the states suing cigarette manufacturers.)

    If this has a chilling effect on certain forms of speech, maybe they should be chilled.

    ETA: never a good sign when you’re replying to yourself hours later.

  22. Kathy says:


    but should not protect the platform’s algorithmic (or human chosen) promotion of specific content, as that is editorial in nature.

    Hell, yes!

    ETA: never a good sign when you’re replying to yourself hours later.

    I only talk to myself in real time.

  23. Just nutha ignint cracker says:

    @Kathy: I never call it “talking to myself.” I call it “verbalizing to hear what I’m really saying.”

    ETA: Sometimes, I call it “thinking out loud.”

  24. Kurtz says:

    This kind of illustrates how difficult this issue is.

    President Donald Trump on Thursday signed an executive order cracking down on “censorship” by social media sites, a move widely seen by critics as retaliation against Twitter’s decision to slap fact-checking labels on the president’s tweets.

    On the one hand, @Gustopher makes this point with which I think I agree:

    I just want to note that this is very consistent with my belief that Section 230 should protect the platform hosting speech (not an editorial decision), but should not protect the platform’s algorithmic (or human chosen) promotion of specific content, as that is editorial in nature.

    Twitter made an editorial choice here by appending a label to a serial liar’s posts.

    On the other hand, toothless as the EO was, it was still a threat to revisit Section 230. And it wasn’t over suppression of speech. Twitter didn’t suspend or ban or make Trump’s tweets less visible. All of those things would be a form of censorship. They added a suggestion to seek out credible sources rather than rely on Trump’s word.

    But Trump still claimed he was being censored. He still made a threat to a social media company.

    From EFF:

    The records show how Trump planned to leverage more than $117 million worth of government online advertising to stop platforms from fact-checking or otherwise moderating his speech.

    This sounds a lot like the accusations against Biden made in the Missouri case.

    Reminder that in one of Taibbi’s Twitter Files reports, he said that the Trump White House regularly asked for removal of content. I don’t know if it was that article or not, but I have a recollection of a quote from a Twitter employee that the Trump Admin regularly called the moderation hotline to request tweets they didn’t like be taken down. (apologies, I really don’t feel like looking for it right now.) I’ll post in the open thread tomorrow if I can dig it up.

    Either way, I’m uneasy about heavy-handed government interference with speech and land pretty close to James on this. But I am not any more comfortable with placing that power in the hands of techbros and opaque algorithms that aren’t even fully understood by those who design them.

  25. Andy says:


    I think it’s important, when evaluating anything the government might do, to imagine how it or similar moves might be used by political appointees.

  26. Ken_L says:

    There are extraordinary misconceptions in the online discussion of the issues in this case, energetically stoked by right-wing propagandists constantly lying about “censorship” and “coercion”. I haven’t the energy to describe the detailed interactions between various government agents and the social media companies that occurred, but recommend this analysis to anyone who’s not familiar with it all. https://www.techdirt.com/2022/12/20/no-the-fbi-is-not-paying-twitter-to-censor/

    Bottom line – the government used the social media companies’ own systems to advise them that certain posts/accounts seemed to be in breach of the companies’ own terms of service. Sometimes the companies took action in response; sometimes they didn’t. No doubt the government got more attention than similar advice from Joe Nobody, but it was not doing anything that any private citizen couldn’t do.

    And requests by the Biden campaign pre-January 2021 were, by definition, not made by “the government”, although right-wing propagandists have been happy to imply they were.

  27. Gustopher says:


    And requests by the Biden campaign pre-January 2021 were, by definition, not made by “the government”, although right-wing propagandists have been happy to imply they were.

    From mid to late November through most of January, it was the incoming government wearing a different hat. Any pressure the companies would feel from such a request is going to be similar to pressure from the sitting government.

    And, during the campaign it was people who have a very good chance of becoming the government.

    There is a definite risk of corrupt intent and improper pressure. And absolutely the appearance of undue pressure.

    I think the answer is transparency, at least until we discover a problem.

  28. Rick DeMent says:

    @James Joyner:

    I’m getting a little more than tired of right-wing actors deliberately flooding the zone with BS and then complaining about the government and tech companies rightly calling it BS.

    If they don’t want to be called out on BS they should stop BSing.

    The fact that you can excavate a “fair point” from all of this is charming. Go into Truth Social (or any right-wing platform) and try to tell the truth about the election and you will get bounced faster than you can say 1st amendment.

    There is an asymmetrical problem here where right-wing “voices” can lie, work in bad faith, and flat-out game the system to a fare-thee-well, and still keep anything that approaches left-wing (or even mildly centrist) from ever seeing the light of day in their “information” ecosystems.

    The whole framing of this issue by the right wing has been nothing but an exercise in bad-faith arguments. How many times must we sit back and treat these bad-faith arguments as anything other than what they are: lies and manipulations?

  29. Ken_L says:


    There is a definite risk of corrupt intent and improper pressure. And absolutely the appearance of undue pressure.

    A great example of the way people lose sight of the issues in the heat of this discussion. The 1st amendment doesn’t prohibit “corrupt intent and improper pressure”, or “the appearance of undue pressure” by political campaigns, even if that had been what occurred here (it wasn’t). The preliminary injunction was a classic example of thoroughly improper legislation from the bench.

  30. James Joyner says:

    @Rick DeMent: My point is a narrow one: government actors should not be in the business of pressuring private businesses to censor the free speech rights of American citizens. It’s fine for politicians, up to the President, to speak out against ideas they disagree with, including calling them lies.

    That Truth Social, a private company with essentially no reach, limits the viewpoints that can be expressed on their platform is a completely different issue. Really, though, it’s not much different than my right to ban trolls here.

  31. Gustopher says:

    @Ken_L: No, I just disagree with your contention that an incoming President is more like a random person off the street than the current President.

  32. Ken_L says:

    @Gustopher: Irrelevant to the discussion. Nobody has suggested political campaigns are interfering with someone’s 1st amendment rights if they press hard to have social media content deleted.

  33. David M says:

    It’s mildly amusing that the Biden Administration is being mentioned here when many or most of the complaints appear to be about activities from the prior Administration.

    And really, no one should be under any illusion that a lawsuit brought by the Gateway Pundit has any more merit than his other conspiracy theories.