After Murthy v. Missouri, Diffuse Jawboning Remains Murky

The Court acknowledges governments’ increasing interests in regulating online speech but provides little guidance.

Among the high-profile First Amendment cases heard by the Supreme Court this past term were NRA v. Vullo, which yielded a 9-0 opinion against jawboning—the "use of official speech to inappropriately compel private action"—and the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, which made clear that social media sites are First Amendment speakers. Nestled between these was Murthy v. Missouri, in which a six-justice majority found that the plaintiffs lacked standing to challenge alleged government efforts to pressure social media platforms to suppress their speech.

Unlike Vullo and the NetChoice cases, Murthy provided no substantive guidance as to the relevant First Amendment concern—namely, when government pressure on online platforms to regulate their users’ speech crosses the line into impermissible suppression. The core question coming out of this opinion is whether the holding effectively bars any plaintiff from successfully challenging government jawboning of online platforms. Both Vullo and the NetChoice cases help us to understand this question.

Most major social media platforms have content-moderation policies allowing them to take down user-uploaded content or reduce its visibility. Following the outbreak of the COVID-19 pandemic, several major platforms announced that they would apply these policies to what they deemed to be COVID-19-related misinformation. Around the same time, various government officials and offices—including the White House, the Centers for Disease Control and Prevention, the Federal Bureau of Investigation, and the Cybersecurity and Infrastructure Security Agency—began reaching out to the platforms to express their own concerns about what they viewed as COVID-19-related misinformation shared by the platforms' users.

The plaintiffs in Murthy, all subjected to platforms' adverse content-moderation determinations, argued that government actors coerced the platforms into making those determinations. The sanctions the plaintiffs alleged included threats of antitrust scrutiny and of revoking the protections granted to the platforms by Section 230 of the Communications Act. It would violate the First Amendment for the government to suppress such speech directly, and the plaintiffs argued that "a government official cannot do indirectly what she is barred from doing directly," for instance, by using state power to coerce a third party into suppressing protected speech.

The U.S. Court of Appeals for the Fifth Circuit agreed with these concerns and issued a broad preliminary injunction prohibiting government actors from taking action to "coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech." The Supreme Court's review focused on this preliminary injunction. Justice Amy Coney Barrett, writing for six justices, reversed the Fifth Circuit's order and remanded on standing grounds, finding that the case must "begin—and end—with standing."

The plaintiffs' case faced three important headwinds. First, because theirs was a challenge to a preliminary injunction, each plaintiff faced a heightened burden to make a "clear showing" that she was "likely" to establish each element of standing—a burden made all the more substantial by the case having gone through discovery in the district court. Second, because the injunction's purpose was to prevent ongoing future harm, the plaintiffs needed to demonstrate "a real and immediate threat of repeated injury"; it was insufficient to show past coercive conduct on the government's part. And third, they needed to demonstrate that government coercion caused the platforms' adverse content-moderation determinations, and that those determinations were not merely decisions the platforms might have made independent of any government pressure.

The first of these requirements marks an important difference between Murthy and Vullo. Vullo came to the Court on review of a motion to dismiss. All facts alleged by the plaintiff therefore had to be taken as true, and dismissal was proper only if, even on those facts, there was no viable claim. In Murthy, by contrast, the facts remained contested and, even taken as true, they had to add up to a likelihood, not a mere possibility, of success on the merits.

As discussed immediately below, the Court was skeptical that the platforms' adverse determinations resulted from government pressure. This skepticism made the second headwind nearly impossible to overcome: it is quite difficult to establish a likelihood of future harms to be enjoined if a plaintiff fails to demonstrate that any past harms occurred. As Justice Barrett explained, "if a plaintiff cannot trace her past injury to one of the defendants, it will be much harder" to show "a continued risk of future restriction."

The Court's greatest skepticism, however, was evident in the third headwind. Even where there was evidence that the government had played some role in some of the platforms' content-moderation determinations, the Court found that "the platforms had independent incentives to moderate content and often exercised their own judgment." This included determinations made before any reported communications between government officials and the platforms.

The Court was critical of the Fifth Circuit for having treated all of the plaintiffs, and all of their claims, as targets of a single collective government effort. It was instead necessary to demonstrate a specific coercive effort by a government official that actually caused a specific First Amendment injury to a specific plaintiff. The Court found that all of the plaintiffs failed to make this showing, especially under the standard of likelihood required for a preliminary injunction.

Here, the Court's conclusion in the NetChoice cases that online platforms making content-moderation decisions are First Amendment speakers bears heavily on the analysis. The platforms have every right to reject—and, in fact, had demonstrated an interest in rejecting—content they deemed to be misinformation. That right includes taking heed of government input in identifying such content. And the platforms' independent interest in staying in the government's good graces—even in obtaining benefits in exchange for their determinations—is itself protected by the First Amendment.

The platforms' interests might have been aligned with the government's interest in suppressing protected speech. But that alignment does not deputize the platforms as government agents obligated to protect their users' First Amendment rights.

The concern coming out of Murthy, raised both by the dissent and in subsequent commentary, is that it renders a wide range of government efforts to suppress protected speech nonjusticiable. A diffuse, widespread—but concerted—whole-of-government pressure campaign could well cause platforms to make decisions that encumber protected speech. But, in such a setting, it would be difficult to point to specific statements that amount to coercion. Instead, it could appear that the platforms were making their own determinations even where those determinations would not have been made but for the government's influence. The counterfactual cannot be disproven. As Justice Samuel Alito argued in dissent, such a campaign would "stand as an attractive model for future officials who want to control what the people say, hear, and think."

The result of Murthy is that—absent specific evidence demonstrating the design and effects of such a campaign—resolution of these concerns is left to the political process. In the context of COVID-19, the Biden Administration's views, and its efforts to work with social media platforms, were no secret, even if specific details might have been.

In any event, this term’s social media cases will continue to be part of both the political and legal discussion. Following both Murthy and the NetChoice cases, one can be confident that the federal and state governments will continue to take an interest in the regulation of online speech, and that many of those efforts—whatever direction they may take—will provide continuing opportunity for the courts to develop these areas of law.

Gus Hurwitz

Justin (Gus) Hurwitz is a Senior Fellow and Academic Director of the University of Pennsylvania Carey Law School’s Center for Technology, Innovation, and Competition.

This essay is part of a series, titled The Supreme Court’s 2023-2024 Regulatory Term.