The legal answer to that question depends on how the courts treat the status of social media providers. The political answer depends on who and what you want to ban. The fragile Democratic control of Congress faces a steep challenge in passing legislation to answer these questions. And Congress must get the courts to accept its solution as not infringing on First Amendment rights.
Let’s look at regulating free speech on social media from the perspectives of the courts and Congress. The former is concerned with legal precedents, the latter with the politics of passing legislation. But both are about determining who will exercise political power in defining what free speech is allowed on the internet.
The Courts’ Perspective
Two years ago, in March 2019, the Congressional Research Service issued an analysis of Free Speech and the Regulation of Social Media Content. Quite simply, social media sites provide platforms for content originally generated by users. According to the CRS review of court decisions, social media has been treated “like news editors, who generally receive the full protections of the First Amendment when making editorial decisions.” In effect, these private companies can remove or alter the user’s content and determine how content is presented: who sees it, when, and where.
For instance, the major social media players, Facebook, Twitter, and YouTube, banned or suspended Trump’s accounts because they determined his posts increased the risk of violence after he incited protesters to march on the Capitol. Data would seem to back up that concern.
Before Trump was banned, research by Avaaz, a global human rights group, and The New York Times found that during the week of November 3, there were roughly 3.5 million interactions (including likes, comments, and shares) on public posts referencing “Stop the Steal.” Eric Trump and two right-wing bloggers accounted for 200,000 of those interactions. After that period and before January 6, Trump was the top poster of the 20 most-engaged Facebook posts containing the word “election,” according to Crowdtangle. All of his claims were found to be false or misleading by independent fact-checkers.
Facebook has also banned many other accounts. One of the largest groupings consists of anti-vaccination sites that post a wide range of baseless or misleading claims about vaccines and covid-19. Facebook removed more than 12 million pieces of content, including false claims that covid-19 is less deadly than the flu and that it is part of a population-control plot by philanthropist Bill Gates. To date, no social media user posting this misinformation has succeeded in forcing the media services to carry their anti-vaccine messaging.
Most recently, the Supreme Court of the United States (SCOTUS) unanimously vacated a lower court ruling that found former President Trump violated the First Amendment when he blocked people who had criticized him in the comment threads linked to his @realDonaldTrump Twitter handle. However, Justice Clarence Thomas voiced his concern in a 12-page opinion, saying, “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.” Conservative columnist George Will seconded Thomas’s concerns, without identifying a solution. Both seem to imply that conservatives are not getting a fair deal on these platforms.
Conservatives’ concerns about being discriminated against could be addressed by treating these social media giants, and perhaps other providers, as common carriers, like licensed broadcast companies. Based on this designation’s past application, providers could be at legal risk if they refuse to post a user’s material, such as misinformation or hate speech.
A more restrictive classification would result if they were treated as state actors. That would occur if they served as an open public forum that mimics a government-like function. According to the CRS analysis, under this designation, such an entity would have to protect its users’ free speech rights before making any editorial changes. In other words, users of the platforms would have a First Amendment constitutional guarantee of free speech, leaving providers little wiggle room for denying a user access to the public.
However, if the providers remain private companies acting as editors publishing others’ works, it is harder to argue that the First Amendment applies to their users. This is because constitutional guarantees generally apply only against government action, not private action.
As social media sites continue to ban or suspend users who post misinformation that endangers public health or incites violence toward others, such as hate speech, the Supreme Court is more likely to be drawn into that discussion. It will have the last word in determining how much the government can regulate social media without violating the First Amendment.
Aside from what SCOTUS may do, Congress is already in the process of drawing up legislation to address the many non-constitutional user claims that the courts reject because of Section 230 of the Communications Decency Act. That law provides immunity to providers as long as they act “in good faith” in restricting access to “objectionable” material.
The Political Perspective
At the crux of any congressional action is Section 230, which says that content creators, referred to as users, are liable for the content they post online. Hosts, such as Facebook, Twitter, Google, and other major social media platforms, are therefore not liable. There are exceptions for copyright violations, sex work-related material, and federal criminal law violations, but no one is contesting these exemptions.
The Electronic Frontier Foundation calls this section “the most important law protecting internet speech.” Because the courts treat these private companies as editors, they can create rules to restrict speech on their websites. For instance, Facebook and Twitter have banned hate speech, even though hate speech is protected under the First Amendment.
Section 230 garnered the attention of both former President Trump and now President Biden. In April 2018, Trump signed the FOSTA bill, which was intended to fight sex trafficking by reducing legal protections for online platforms. However, no evidence has surfaced that the law has diminished online sex trafficking. Two years later, following a kerfuffle with Twitter, Trump issued an executive order in May 2020 asking regulators to redefine Section 230 more narrowly, bypassing the authority of Congress and the courts. Trump also encouraged federal agencies to collect the political bias complaints that conservative groups had been making. The agencies’ findings could justify revoking a site’s legal protections.
After Biden was elected, Trump pushed for a complete abolition of Section 230, even threatening to veto the National Defense Authorization Act unless it included a repeal of the law. Biden is also not a fan of Section 230. As a candidate, he favored revoking it completely, saying in January 2020 that Facebook and other social media sites are “propagating falsehoods they know to be false.” As of April 11, 2021, Biden has not proposed any legislation.
Congress has not been sitting on the sidelines. While Presidents Trump and Biden suggested revoking Section 230, lawmakers instead aim to eliminate protections for specific kinds of content. They also question how social media algorithms have been used to attract more eyes to a platform without concern for the misinformation and the hostile political environment those algorithms help create.
The chief executives of Facebook, Google, and Twitter appeared before Congress during the Trump administration and did so again in March 2021, during the second full month of Biden’s administration. In the past, congressional members were interested in antitrust issues, child sex abuse, and prostitution ads.
This time it was different. Facebook Inc’s Mark Zuckerberg, Sundar Pichai of Alphabet Inc, and Twitter Inc’s Jack Dorsey were aggressively questioned by Democrats on how they handled misinformation and online extremism. Republicans continued to accuse the companies of censoring conservative voices. Strangely, very little was said about Trump being banned from their sites. Republicans also demanded that the tech companies protect children and teens from cyberbullying and social media addiction.
Rep. Mike Doyle (D-Pennsylvania) attacked the social media giants for using algorithms that promote attention-grabbing disinformation. He said, “You are picking engagement and profit over the health and safety of users. Your algorithms make it possible to supercharge these kinds of opinions.” A Next TV reporter wrote that a former Facebook executive told House members at a hearing last September that the site, at least in the past, was designed to promote content that drives engagement, even if that content was misinformation, conspiracy theories, and fake news.
Other Democrats also focused on reducing the platforms’ incentives for promoting attention-grabbing content, including disinformation and misinformation.
At March’s hearing, Rep. Anna G. Eshoo (D-Calif.) discussed her bill, the Protecting Americans from Dangerous Algorithms Act. It would amend Section 230 to remove tech companies’ protections from lawsuits when their algorithms amplify content that leads to offline violence. As written, the restriction would apply only to platforms with 50 million or more users. The Parler website, which had only 20 million users as of January 2021 and hosts a significant base of conspiracy theorists and far-right extremists, would be excluded. While this legislation has over a dozen Democratic co-sponsors, as of March 23 there were no Republican co-sponsors listed.
However, two significant pieces of legislation with bipartisan support are pending in the Senate Committee on Commerce, Science, and Transportation.
The Platform Accountability and Consumer Transparency (PACT) Act is co-sponsored by Sens. Brian Schatz (D-Hawaii) and John Thune (R-South Dakota).
The PACT Act imposes new obligations on platforms based on their revenue and size. It requires them to maintain a complaint system and a phone line and to produce a transparency report. It also requires users to make complaints in good faith, so providers would be permitted to filter out spam, trolling, and abusive complaints. And providers would have to review and remove illegal or policy-violating content promptly to receive Section 230 protections.
The other pending legislation is the See Something, Say Something Online Act of 2021, co-sponsored by Sen. Joe Manchin (D-West Virginia) and Sen. John Cornyn (R-Texas). It would require interactive computer services to report to the Department of Justice any suspicious transmissions they detect that show individuals or groups planning, committing, promoting, or facilitating terrorism, serious drug offenses, or violent crimes. Providers would have to take “reasonable steps” to prevent and address such suspicious transmissions. Failure to report a suspicious transmission would bar a provider from using Section 230 as a defense against liability for publishing it.
There may well be more legislation introduced, given the bipartisan sentiment to tighten regulations, particularly on the social media platforms that appear to monopolize that medium. But Republicans and Democrats differ in their priorities. Republicans have emphasized fighting sexual exploitation and various addictions on social media while taking less interest in stopping political misinformation concerning elections, covid-19, and vaccinations. Democrats reverse those priorities.
I expect that Republicans will use former U.S. Attorney General William Barr’s letter to Congress in September 2020 to guide what changes to pursue in Section 230. Barr acknowledges that this section enabled innovations and new business models for online social media platforms. He suggests several adjustments, some of which are reasonable given that, as he notes, “many of today’s online platforms are no longer nascent companies but have become titans of industry.” The largest digital platforms dominate their markets; Facebook has roughly 3 billion users, and Google controls about 90 percent of the market in its field.
Barr captures the fundamental political tension in regulating social media’s ability to select what to post. He writes: “Platforms can use this power for good to promote free speech and the exchange of ideas, or platforms can abuse this power by censoring lawful speech and promoting certain ideas over others.” This last condition captures the Republicans’ belief that social media has discriminated against conservative ideas.
However, a recent poll shows that while majorities in both parties think political censorship is likely occurring on social media, this belief is especially widespread among Republicans. Ninety percent of Republicans and independents who lean toward the Republican Party agree with this view. And 69 percent of this group say major technology companies generally support the views of liberals over conservatives, compared with 25 percent of Democrats and Democratic leaners who believe the industry is biased in favor of conservatives.
Researchers have found no evidence to support these conservative grievances. “I know of no academic research that concludes there is a systemic bias – liberal or conservative – in either the content moderation policies or the prioritization of content by algorithms by major social media platforms,” said Steven Johnson, an information technology professor at the University of Virginia McIntire School of Commerce.
Moving Forward
Some adjustments in moderating content are needed and are supported by liberals and conservatives, Republicans and Democrats alike. But as I have shown above, the two sides do not agree on what type of bias needs to be addressed. Section 230 will most likely be amended, not discarded. Without some liability protections, our significant social media infrastructure on the web would be in chaos. But maintaining the current situation will only fuel the spread of conspiracy theories and political violence.
The bipartisan legislation introduced so far will make some minor adjustments, clarifying the responsibilities of both the hosts and the users on the platforms. However, the bills should go further by setting up a process, or establishing a nonpartisan body, to expedite the adjudication of disagreements over the veracity of a user’s material.
These types of legislative solutions would lessen the need for SCOTUS to enter the fray. The court’s intervention would be the least desirable path in this era. Given its ideological composition, its decision would most likely be attacked as biased, resulting in a more divisive political climate and fueling the growth of conspiracy theories.
Nick Licata is the author of Becoming A Citizen Activist, served five terms on the Seattle City Council, was named progressive municipal official of the year by The Nation, and is founding board chair of Local Progress, a national network of 1,000 progressive municipal officials.
Subscribe to Licata’s newsletter Citizenship Politics