Facebook Free Speech



Written By: Glen Allen, Esq.

As an advocate for a robust First Amendment on free speech issues, I approach many (not all) of the U. S. Supreme Court's major free speech cases with something like religious reverence.  Amid a welter of social and political pressures and competing values, the Court, it seems to me, has created with fair clarity some principled guardrails against coerced conformity to majoritarian views. I'm thinking of cases such as Abrams v. United States (Holmes dissent) (1919), Terminiello v. City of Chicago (1949), Brandenburg v. Ohio (1969), and Snyder v. Phelps (2011).

But sometimes the Court gets it wrong.  In its recent (June 28, 2024) decision in Murthy, et al.  v. Missouri, et al., the Court lapsed badly from its long traditions of protecting dissident speech.

As I noted in an earlier article on the Free Expression Foundation website (February 5, 2023) about the Murthy v. Missouri lawsuit, the case is extraordinary not only for the importance of the issues it presents but also because it involved opposing parties roughly equal in legal resources, i.e., the plaintiffs included the Attorneys General of two states (Missouri and Louisiana) and the defendants were officials of the federal government. This contrasts strongly with the usual cases involving dissident speech, in which the proponent of the dissident speech is often indigent while the opponents are governments or well-funded private organizations conducting lawfare campaigns.

As to the gravity of the issues the case presented — Justice Alito in his dissent (joined by Justices Thomas and Gorsuch) aptly wrote that “this is one of the most important free speech cases to reach this Court in years.”  A little context puts these momentous issues into stark relief.

A fundamental principle of First Amendment jurisprudence is that the First Amendment restrains only state actors – i.e., governmental entities – and not private actors, such as social media companies, e.g., Facebook (which was at the center of the issues in the Missouri case).  Thus Facebook and other social media companies, despite their enormous power over the boundaries and content of public debate, have always successfully argued that they are free to censor and limit content as they see fit.  But what if a governmental entity – the Biden administration in the Missouri case – bullied and threatened the private entity – Facebook in the Missouri case – with the aim and successful result of coercing the private entity to comply with the administration's censorship demands?  This was the central question presented in the Missouri case.

Justice Alito in his dissent described numerous concrete instances of the Biden administration's tactics, which he convincingly argued crossed the line from mere permissible "bully pulpit" advocacy by the President and his staff into unconstitutional threats and coercion. First, however, Justice Alito explained why the administration had the power to, as he expressed it, coerce Facebook into the role of "a subservient entity determined to stay in the good graces of a powerful taskmaster."  Justice Alito wrote:

[I]nternet platforms, although rich and powerful, are at the same time far more vulnerable to Government pressure than other news sources. If a President dislikes a particular newspaper, he (fortunately) lacks the ability to put the paper out of business.  But for Facebook and many other social media platforms, the situation is fundamentally different. They are critically dependent on the protection provided by §230 of the Communications Decency Act of 1996 . . . which shields them from civil liability for content they spread.  They are vulnerable to antitrust actions; indeed, Facebook CEO Mark Zuckerberg has described a potential antitrust lawsuit as an “existential” threat to his company. And because their substantial overseas operations may be subjected to tough regulation in the European Union and other foreign jurisdictions, they rely on the Federal Government’s diplomatic efforts to protect their interests. For these and other reasons, internet platforms have a powerful incentive to please important federal officials, and the record in this case shows that high-ranking officials skillfully exploited Facebook’s vulnerability.

Justice Alito then spelled out, among many other instances in the administration’s “far-reaching . . .  censorship campaign,” the following conduct by high-ranking government officials, gleaned from the extensive discovery taken in the case:

  • In March 2021, Rob Flaherty, the White House Director of Digital Strategy, emailed Facebook about a report in the Washington Post that Facebook’s rules permitted some content questioning COVID-19’s severity and the efficacy of vaccines to circulate. Flaherty noted that the White House was “gravely concerned that [Facebook] is one of the top drivers of vaccine hesitancy,” and demanded to know how Facebook was trying to solve the problem. In his words, “we want to know that you’re trying, we want to know how we can help, and we want to know that you’re not playing a shell game with us when we ask you what is going on.” Facebook responded apologetically to this and other missives. It acknowledged that “[w]e obviously have work to do to gain your trust.”
  • In April 2021, Flaherty again demanded information on the “actions and changes” Facebook was taking “to ensure you’re not making our country’s vaccine hesitancy problem worse.” To emphasize his urgency, Flaherty likened COVID–19 misinformation to misinformation that led to the January 6 attack on the Capitol. Facebook, he charged, had helped to “increase skepticism” of the 2020 election, and he claimed that “an insurrection . . . was plotted, in large part, on your platform.”  He added: “I want some assurances, based in data, that you are not doing the same thing again here.” Facebook was surprised by these remarks because it “thought we were doing a better job” communicating with the White House, but it promised to “more clearly respon[d]” in the future.
  • A few weeks later, the White House Press Secretary Jen Psaki was asked at a press conference about Facebook’s decision to keep former President Donald Trump off the platform. Psaki deflected that question but took the opportunity to call on platforms like Facebook to “‘stop amplifying untrustworthy content . . . , especially related to COVID–19, vaccinations, and elections.’”   In the same breath, Psaki reminded the platforms that President Biden “‘supports . . . a robust anti-trust program.’”
  • About this time, Flaherty also forwarded to Facebook a “COVID–19 Vaccine Misinformation Brief ” that had been drafted by outside researchers and was “informing thinking” in the White House on what Facebook’s policies should be. This document recommended that Facebook strengthen its efforts against misinformation by adoption of “progressively severe penalties” for accounts that repeatedly posted misinformation, and it proposed that Facebook make it harder for users to find “anti-vaccine or vaccine-hesitant propaganda” from other users. Facebook declined to adopt some of these suggestions immediately, but it did “se[t] up more dedicated monitoring for [COVID] vaccine content” and adopted a policy of “stronger demotions [for] a broader set of content.”
  • The White House responded with more questions. Acknowledging that he sounded “like a broken record,” Flaherty interrogated Facebook about “how much content is being demoted, and how effective [Facebook was] at mitigating reach, and how quickly.” Later, Flaherty chastised Facebook for failing to prevent some vaccine-hesitant content from showing up through the platform’s search function. “‘[R]emoving bad information from search’ is one of the easy, low-bar things you guys do to make people like me think you’re taking action,” he said. “If you’re not getting that right, it raises even more questions about the higher bar stuff.” A few weeks after this latest round of haranguing, Facebook expanded penalties for individual Facebook accounts that repeatedly shared content that fact-checkers deemed misinformation; henceforth, all of those individuals’ posts would show up less frequently in their friends’ news feeds.
  • Facebook subsequently told the press it had partnered with the White House to counter misinformation and had “removed accounts that repeatedly break the rules” and “more than 18 million pieces of COVID misinformation.” But at another press briefing the next day, Psaki said these efforts were “[c]learly not” sufficient and expressed confidence that Facebook would “make decisions about additional steps they can take.”  That same day, President Biden told reporters that social media platforms were “‘killing people’” by allowing COVID-related misinformation to circulate.  A day later, Psaki said the White House was “reviewing” whether Section 230 should be amended to remove the social media platforms’ immunity from civil suits.

Justice Alito and his fellow dissenters thus made a compelling case that the Biden administration had been strong-arming Facebook into censoring disfavored views on vaccination and other issues in accordance with the White House’s dictates, a set of facts that clearly implicates First Amendment issues. The other six justices, however, in an opinion written by Justice Barrett, sidestepped these uncomfortable facts by invoking the doctrine of standing.  Standing is a rather esoteric doctrine that courts invoke when they conclude, for various reasons, that the plaintiffs who have brought a suit are not the proper plaintiffs to litigate it.  One of the required elements of standing is “traceability,” i.e., the plaintiffs must show that they incurred concrete and redressable harm traceable to the defendants’ conduct.  In the Missouri case, the majority, through Justice Barrett, held there was a break in the chain of traceability because Facebook, so they asserted, made its censorship decisions independently, and consequently the White House could not be held responsible for them. For the reasons previously discussed, however, this assertion seems deeply flawed, given that Facebook had to worry about losing its Section 230 immunity and facing antitrust suits if it became too uncooperative with the White House. But on this basis the Court’s majority ruled in favor of the Biden administration.

So where do we stand now? A few points:

  1. Despite its victory in this litigation, I suspect the Biden administration is a little chastened by having its clandestine coercions brought into public view.  Perhaps it will back off a little.  But probably not much and not for long.
  2.  We should keep in mind that although the Missouri case focused on disfavored views about COVID vaccines, precisely the same concerns apply as to other disfavored topics, such as “hate speech,” Israel’s and America’s conduct with regard to Gaza, the prosecution of the January 6 Defendants, and a host of others. We should have no doubt that on these topics as well the White House has pressured and continues to pressure social media to suppress disfavored opinions.
  3.  What would be the effect of the reelection of Donald Trump in 2024? Certainly Trump, with his repeated “fake news” accusations, should not expect much cooperation from the likes of Facebook and other social media.  On the other hand, Trump is no stranger to making threats and demanding conformity to his views.
  4.  One hopes that Elon Musk and X are not as obsequious to government demands as Facebook has proved to be.

In summary, the Supreme Court in Murthy v. Missouri missed a once-in-a-generation opportunity to vindicate, dramatically and with far-reaching consequences, the Court’s long tradition of protecting disfavored speech from an overreaching government.  Had Justice Alito’s incisive dissent become the majority opinion, it would have been a watershed victory for civil liberties. It was not to be.  Let me nonetheless end on a positive note. The vast document discovery in the Missouri case stripped away the camouflage surrounding the connections between social media and the government to reveal what many of us had long assumed – that high-ranking government officials have been aggressively pressuring social media to censor dissident viewpoints.  The plaintiffs’ theory of their case in the Missouri litigation was based on the First Amendment; regrettably, this theory failed.  But the government’s threatening communications with social media may have done more than violate the First Amendment; if false, they may have been defamatory or tortious.  A plaintiff injured by government-engineered censorship on social media, accordingly, may have greater success with common law tort actions than with the First Amendment.