Disinformation and Free Speech
How far should the government be able to go?
When I saw the AP headline (“Judge’s order limits government contact with social media operators, raises disinformation questions”) and a couple of others like it yesterday, I didn’t think much of it. It seemed pretty mundane. Kevin Drum was apoplectic, though, and there is a spate of op-eds aghast at the ruling out this morning, so I decided to drill deeper.
The AP report:
An order by a federal judge in Louisiana has ignited a high-stakes legal battle over how the government is allowed to interact with social media platforms, raising broad questions about whether — and how — officials can fight what they deem misinformation on health or other matters.
U.S. District Judge Terry Doughty, a conservative nominated to the federal bench by former President Donald Trump, chose Independence Day to issue an injunction blocking multiple government agencies and administration officials. In his words, they are forbidden to meet with or contact social media companies for the purpose of “encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”
Thus far, aside from the fact that the judge is a Trump appointee, this strikes me as perfectly reasonable. While government officials have a role in coordinating against foreign agitprop and illicit speech (promoting criminal activity, including terrorism), attempts to strong-arm social media companies into removing “disinformation” coming from US citizens are worrisome. While I’m more comfortable with the Biden (or any “normal”) administration doing it than I would have been under Trump, I’m pretty close to a free speech absolutist, and “suggestions” or “advice” from regulators could easily have a chilling effect.
The order also prohibits the agencies and officials from pressuring social media companies “in any manner” to try to suppress posts, raising questions about what officials could even say in public forums.
This, on the other hand, seems sloppy and overbroad.
Doughty’s order blocks the administration from taking such actions pending further arguments in his court in a lawsuit filed by Republican attorneys general in Missouri and Louisiana.
The Justice Department filed a notice of appeal and said it would also seek to stay the court’s order.
White House press secretary Karine Jean-Pierre said, “We certainly disagree with this decision.” She declined to comment further.
An administration official said there was some concern about the impact the decision would have on efforts to counter domestic extremism — deemed by the intelligence community to be a top threat to the nation — but that it would depend on how long the injunction remains in place and what steps platforms take on their own. The official was not authorized to speak publicly and spoke on the condition of anonymity.
Some activities by “domestic extremists” (criminal conspiracy, incitement to violence, harassment) are not protected speech and presumably fall outside the order. Quite a lot (merely repugnant utterances), though, is.
The lawsuit alleges that government officials used the possibility of favorable or unfavorable regulatory action to coerce social media platforms to squelch what the administration considered misinformation on a variety of topics, including COVID-19 vaccines, President Joe Biden’s son Hunter, and election integrity.
The injunction — and Doughty’s accompanying reasons saying the administration “seems to have assumed a role similar to an Orwellian ‘Ministry of Truth’” — were hailed by conservatives as a victory for free speech and a blow to censorship.
Legal experts, however, expressed surprise at the breadth of the order, and questioned whether it puts too many limits on a presidential administration.
“When we were in the midst of the pandemic, but even now, the government has significantly important public health expertise,” James Speta, a law professor and expert on internet regulation at Northwestern University, said Wednesday. “The scope of the injunction limits the ability of the government to share public health expertise.”
The implications go beyond public health.
Disinformation researchers and social media watchdogs said the ruling could make social media companies less accountable to label and remove election falsehoods.
“As the U.S. gears up for the biggest election year the internet age has seen, we should be finding methods to better coordinate between governments and social media companies to increase the integrity of election news and information,” said Nora Benavidez, senior counsel of the digital rights advocacy group Free Press.
There’s more, but you get the idea.
Absent outside analysis and the context of Trumpism, my reaction would largely be what it was on seeing the headline, with slight modification. It seems obvious just from the AP report that the order was sloppily written and overbroad. Beyond that, my standard lament about mere district judges—who are easily jurisdiction-shopped—having the power to issue national injunctions also applies. We really should require these cases to be filed in the DC Circuit.
As noted earlier, Kevin Drum was upset by the ruling. He cited excerpts of the ruling not included in the AP report:
What is really telling is that virtually all of the free speech suppressed was “conservative” free speech. Using the 2016 election and the COVID-19 pandemic, the Government apparently engaged in a massive effort to suppress disfavored conservative speech.
….The White House Defendants made it very clear to social-media companies what they wanted suppressed and what they wanted amplified. Faced with unrelenting pressure from the most powerful office in the world, the social-media companies apparently complied.
….The VP, EIP, and Stanford Internet Observatory are not defendants in this proceeding. However, their actions are relevant because government agencies have chosen to associate, collaborate, and partner with these organizations….Flagged content was almost entirely from political figures, political organizations, alleged partisan media outlets, and social-media all-stars associated with right-wing or conservative political views.
….The Plaintiffs have outlined a federal regime of mass censorship, presented specific examples of how such censorship has harmed the States’ quasi-sovereign interests in protecting their residents’ freedom of expression.
….The evidence produced thus far depicts an almost dystopian scenario. During the COVID-19 pandemic, a period perhaps best characterized by widespread doubt and uncertainty, the United States Government seems to have assumed a role similar to an Orwellian “Ministry of Truth.” The Plaintiffs have presented substantial evidence in support of their claims that they were the victims of a far-reaching and widespread censorship campaign. [all emphases and ellipses Drum’s]
Which he dismisses as
Deep State derp all the way down.
Which is largely fair. It’s almost certainly true that pre-Musk Twitter and Facebook targeted speech from the “right” more than the “left” during the pandemic and the aftermath of the election. But that’s because there was a hell of a lot more misinformation coming from that camp. And, given that the Trump administration was in charge, I find it hard to believe they were encouraging this particular emphasis. It’s more believable that the Obama administration was trying to crack down on misinformation during the 2016 campaign but it was mostly Russian agitprop that was the object of concern.
This is Deep State derp all the way down. I wonder if Doughty also wants to prevent the White House from talking to newspapers, TV reporters, talk show hosts, radio chatterers, podcasters, newsletter writers, labor leaders, CEOs, climate activists, and bloggers? It’s going to be mighty lonely in the White House press office before long.
This strikes me as unhelpfully hyperbolic. Policymakers have First Amendment rights, too. Sloppy and overbroad though it may be, there’s no reasonable way of reading the ruling in a way that prevents giving interviews, holding press conferences, and the like—unless they’re using these fora to urge the suppression of the free speech rights of American citizens.
NYT (“Disinformation Researchers Fret About Fallout From Judge’s Order”):
A federal judge’s decision this week to restrict the government’s communication with social media platforms could have broad side effects, according to researchers and groups that combat hate speech, online abuse and disinformation: It could further hamper efforts to curb harmful content.
Alice E. Marwick, a researcher at the University of North Carolina at Chapel Hill, was one of several disinformation experts who said on Wednesday that the ruling could impede work meant to keep false claims about vaccines and voter fraud from spreading.
The order, she said, followed other efforts, largely from Republicans, that are “part of an organized campaign pushing back on the idea of disinformation as a whole.”
Fair enough, although, again, it’s reasonable to worry about government officials deeming protected speech “disinformation,” particularly when the speaker is an American citizen. And it’s particularly troublesome for them to urge the suppression of that speech on that basis.
Several researchers, however, said the government’s work with social media companies was not an issue as long as it didn’t coerce them to remove content. Instead, they said, the government has historically notified companies about potentially dangerous messages, like lies about election fraud or misleading information about Covid-19. Most misinformation or disinformation that violates social platforms’ policies is flagged by researchers, nonprofits, or people and software at the platforms themselves.
“That’s the really important distinction here: The government should be able to inform social media companies about things that they feel are harmful to the public,” said Miriam Metzger, a communication professor at the University of California, Santa Barbara, and an affiliate of its Center for Information Technology and Society.
While I agree in the abstract, especially when it comes to agencies with unique expertise (the CDC, for example), some of these determinations clearly have partisan impact. Twitter, for example, banned the spreading of the initial Hunter Biden story broken by the NY Post (presumably on its own judgment, given who was in the White House at the time). It would have been highly problematic had that happened with his father sitting in the Oval Office and presiding over the regulatory agencies.
A larger concern, researchers said, is a potential chilling effect. The judge’s decision blocked certain government agencies from communicating with some research organizations, such as the Stanford Internet Observatory and the Election Integrity Partnership, about removing social media content. Some of those groups have already been targeted in a Republican-led legal campaign against universities and think tanks.
Their peers said such stipulations could dissuade younger scholars from pursuing disinformation research and intimidate donors who fund crucial grants.
That seems like a legitimate concern, but one stemming not from the ruling itself but from the actors who brought the suit.
Bond Benton, an associate communication professor at Montclair State University who studies disinformation, described the ruling as “a bit of a potential Trojan horse.” It is limited on paper to the government’s relationship with social media platforms, he said, but carried a message that misinformation qualifies as speech and its removal as the suppression of speech.
“Previously, platforms could simply say we don’t want to host it: ‘No shirt, no shoes, no service,’” Dr. Benton said. “This ruling will now probably make platforms a little bit more cautious about that.”
So, “misinformation” is almost certainly speech. Whether it’s protected speech presumably depends on whether it violates a recognized exception such as fraud. That said, the ruling applies to the agencies, not the platforms. While I lean toward the argument that giants like Twitter and Facebook should be treated like public utilities given that they’re the de facto virtual town square at this point, this ruling doesn’t take us there.
In recent years, platforms have relied more heavily on automated tools and algorithms to spot harmful content, limiting the effectiveness of complaints from people outside the companies. Academics and anti-disinformation organizations often complained that platforms were unresponsive to their concerns, said Viktorya Vilk, the director for digital safety and free expression at PEN America, a nonprofit that supports free expression.
“Platforms are very good at ignoring civil society organizations and our requests for help or requests for information or escalation of individual cases,” she said. “They are less comfortable ignoring the government.”
Again, I think this is a sloppy and overbroad ruling by an activist judge with a partisan agenda. But he’s not without a point.
Several disinformation researchers worried that the ruling could give cover for social media platforms, some of which have already scaled back their efforts to curb misinformation, to be even less vigilant before the 2024 election. They said it was unclear how relatively new government initiatives that had fielded researchers’ concerns and suggestions, such as the White House Task Force to Address Online Harassment and Abuse, would fare.
For Imran Ahmed, the chief executive of the Center for Countering Digital Hate, the decision on Tuesday underscored other issues: the United States’ “particularly fangless” approach to dangerous content compared with places like Australia and the European Union, and the need to update rules governing social media platforms’ liability. The ruling on Tuesday cited the center as having delivered a presentation to the surgeon general’s office about its 2021 report on online anti-vaccine activists, “The Disinformation Dozen.”
“It’s bananas that you can’t show a nipple on the Super Bowl but Facebook can still broadcast Nazi propaganda, empower stalkers and harassers, undermine public health and facilitate extremism in the United States,” Mr. Ahmed said. “This court decision further exacerbates that feeling of impunity social media companies operate under, despite the fact that they are the primary vector for hate and disinformation in society.”
I’m less sure than I was fifteen years or so ago that the United States’ near-absolutism on free speech is better than the approaches taken by our Anglosphere and Western European brethren. The Internet has significantly ratcheted up the potential influence of various crackpots who would previously have had difficulty gaining a platform. But the ruling here is pretty consistent with the longstanding US tradition.
I’ve already spent more time on this post than I’d intended but I would commend to you three other essays.
Leah Litman and Laurence H. Tribe, Just Security (“Restricting the Government from Speaking to Tech Companies Will Spread Disinformation and Harm Democracy”). This is a very detailed legal critique, but the nut ‘graphs are these:
While there are, in theory, interesting questions about when and how the government can try to jawbone private entities to remove speech from their platforms, this decision doesn’t grapple with any of them. In fact from the 155-page opinion, it’s not even clear this case really raises those questions. Each step in the reasoning of the decision manages to be more outlandish than the last – from the idea that the plaintiffs have standing to the notion that the plaintiffs are entitled to an injunction at this stage of the case to the sweep of the injunction that the district court issued.
But the absurdity of different aspects of the decision in Missouri v. Biden should not obscure the bigger picture of what happened. Invoking the First Amendment, a single district court judge effectively issued a prior restraint on large swaths of speech, cutting short an essential dialogue between the government and social media companies about online speech and potentially lethal misinformation. Compounding that error, the district court crafted its injunction to apply to myriad high-ranking officials in the Biden administration, raising grave separation of powers concerns. And equally troubling is how the court’s order, which prevents the government from even speaking with tech companies about their content moderation policies, deals a huge blow to vital government efforts to harden U.S. democracy against threats of misinformation.
WaPo Editorial Board (“How far can government go to suppress speech on social media?”). The key ‘graphs:
Deep-state conspiracy theories aside, both sides have a point. The government shouldn’t be allowed to sidestep the First Amendment by cajoling social media sites into stamping out speech that the constitution prohibits the government itself from outlawing. These platforms have immense power over what people can and can’t say, and elected officials have immense power over the platforms — to force a breakup or approve a new merger, say, or, as politicians from both parties have repeatedly threatened, to remove the liability shield provided by the legal provision known as Section 230.
On the other hand, the government has speech rights, too. Just as a member of Congress may, during a hearing, decry Twitter’s attempts to curtail hate speech as part of a liberal or partisan plot, the White House press secretary should be able to declare her boss’s dissatisfaction with Meta’s enforcement of its rules against covid disinformation. Further complicating the question is the fact that, in some areas, there are legitimate reasons for executive agencies and social media sites to operate in sync. For instance, they have been working for years to collaborate more effectively against criminal activity, from terrorism to sex trafficking to election interference.
The injunction itself shows how difficult the issue is to slice: The judge writes that the government can’t urge platforms to remove protected speech, but at the same time he writes that the government may communicate with platforms about “threats [to] the public safety or security of the United States,” and even more vaguely, “other threats.” Where do conversations about medical misinformation during a public health emergency fit in?
Clearer rules about how officials can and can’t try to influence platform policy toward constitutionally protected speech, regardless of message or content, are needed. At the core of the struggle is distinguishing between persuasion and coercion or intimidation. This is easy enough when an official issues an explicit threat that it will use the privileges of the state to punish a platform for disobeying a request to remove legal speech, but it’s harder when the threat is implicit — and harder still when, as with election interference and terrorist material alike, legal and illegal speech can blur together.